
Hch/buildsystem fixes #1

Closed
wants to merge 5 commits into from

Conversation

@ghost ghost commented Jan 7, 2013

No description provided.

Christoph Hellwig added 5 commits January 7, 2013 16:37
This allows building on a deb-based system that doesn't have an rpm
binary installed.
If we already have a clean tree, xargs won't find any object files.
Add the -f argument to rm so that it doesn't complain about the lack
of arguments.
Debian and Ubuntu use a different path for 64-bit libraries than Red Hat
does; try to autodetect it.
@ghost ghost closed this Jan 8, 2013
@nyh nyh mentioned this pull request Jun 23, 2013
penberg pushed a commit that referenced this pull request Aug 15, 2013
Building OSv with debug memory allocator enabled:

  $ make -j mode=debug conf-preempt=0 conf-debug_memory=1

Causes the guest to enter a busy loop right after the JVM starts up:

  $ ./scripts/run.py -d

  [...]

  OpenJDK 64-Bit Server VM warning: Can't detect initial thread stack location - find_vma failed

GDB explains:

  #0  0x00000000003b5c54 in
boost::intrusive::rbtree_impl<boost::intrusive::setopt<boost::intrusive::detail::member_hook_traits<memory::page_range,
boost::intrusive::set_member_hook<boost::intrusive::none,
boost::intrusive::none, boost::intrusive::none, boost::intrusive::none>,
&memory::page_range::member_hook>, memory::addr_cmp, unsigned long, true>
>::private_erase (this=0x1d2f8c8 <memory::free_page_ranges+8>, b=..., e=...,
n=@0x3b40e9: 6179885759521391432) at
../../external/misc.bin/usr/include/boost/intrusive/rbtree.hpp:1417
  #1  0x00000000003b552e in
boost::intrusive::rbtree_impl<boost::intrusive::setopt<boost::intrusive::detail::member_hook_traits<memory::page_range,
boost::intrusive::set_member_hook<boost::intrusive::none,
boost::intrusive::none, boost::intrusive::none, boost::intrusive::none>,
&memory::page_range::member_hook>, memory::addr_cmp, unsigned long, true>
>::erase<memory::page_range, memory::addr_cmp>(memory::page_range const&,
memory::addr_cmp,
boost::intrusive::detail::enable_if_c<!boost::intrusive::detail::is_convertible<memory::addr_cmp,
boost::intrusive::tree_iterator<boost::intrusive::rbtree_impl<boost::intrusive::setopt<boost::intrusive::detail::member_hook_traits<memory::page_range,
boost::intrusive::set_member_hook<boost::intrusive::none,
boost::intrusive::none, boost::intrusive::none, boost::intrusive::none>,
&memory::page_range::member_hook>, memory::addr_cmp, unsigned long, true> >,
true> >::value, void>::type*) (this=0x1d2f8c0 <memory::free_page_ranges>,
key=..., comp=...) at
../../external/misc.bin/usr/include/boost/intrusive/rbtree.hpp:878
  #2  0x00000000003b4c4e in
boost::intrusive::rbtree_impl<boost::intrusive::setopt<boost::intrusive::detail::member_hook_traits<memory::page_range,
boost::intrusive::set_member_hook<boost::intrusive::none,
boost::intrusive::none, boost::intrusive::none, boost::intrusive::none>,
&memory::page_range::member_hook>, memory::addr_cmp, unsigned long, true>
>::erase (this=0x1d2f8c0 <memory::free_page_ranges>, value=...) at
../../external/misc.bin/usr/include/boost/intrusive/rbtree.hpp:856
  #3  0x00000000003b4145 in
boost::intrusive::set_impl<boost::intrusive::setopt<boost::intrusive::detail::member_hook_traits<memory::page_range,
boost::intrusive::set_member_hook<boost::intrusive::none,
boost::intrusive::none, boost::intrusive::none, boost::intrusive::none>,
&memory::page_range::member_hook>, memory::addr_cmp, unsigned long, true>
>::erase (this=0x1d2f8c0 <memory::free_page_ranges>, value=...) at
../../external/misc.bin/usr/include/boost/intrusive/set.hpp:601
  #4  0x00000000003b0130 in memory::refill_page_buffer () at ../../core/mempool.cc:487
  #5  0x00000000003b05f8 in memory::untracked_alloc_page () at ../../core/mempool.cc:569
  #6  0x00000000003b0631 in memory::alloc_page () at ../../core/mempool.cc:577
  #7  0x0000000000367a7c in mmu::populate::small_page (this=0x2000001fd460, ptep=..., offset=0) at ../../core/mmu.cc:456
  #8  0x0000000000365b00 in mmu::page_range_operation::operate_page
(this=0x2000001fd460, huge=false, addr=0xffffe0004ec9b000, offset=0) at
../../core/mmu.cc:438
  #9  0x0000000000365790 in mmu::page_range_operation::operate
(this=0x2000001fd460, start=0xffffe0004ec9b000, size=4096) at
../../core/mmu.cc:387
  #10 0x0000000000366148 in mmu::vpopulate (addr=0xffffe0004ec9b000, size=4096) at ../../core/mmu.cc:657
  #11 0x00000000003b0d8d in dbg::malloc (size=16) at ../../core/mempool.cc:818
  #12 0x00000000003b0f32 in malloc (size=16) at ../../core/mempool.cc:854

Fix the problem by checking if free_page_ranges is empty in
refill_page_buffer(). This fixes the busy loop issue and shows what's
really happening:

  OpenJDK 64-Bit Server VM warning: Can't detect initial thread stack location - find_vma failed
  alloc_page(): out of memory
  Aborted
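The shape of the fix can be sketched as follows. This is an illustrative
stand-in, not the actual OSv code: a `std::set` plays the role of the
intrusive rbtree `free_page_ranges`, and `refill_page_buffer()` is reduced
to the one behavior the commit message describes — check for emptiness and
fail loudly instead of looping on a corrupt erase.

```cpp
#include <cassert>
#include <cstdio>
#include <set>

// Toy stand-in for the boost::intrusive rbtree of free page ranges.
static std::set<long> free_page_ranges;

// Sketch of the guard: an empty free list is reported as out-of-memory
// instead of being erased from, which is what produced the busy loop.
static bool refill_page_buffer()
{
    if (free_page_ranges.empty()) {
        std::printf("alloc_page(): out of memory\n");
        return false;           // caller aborts instead of busy-looping
    }
    free_page_ranges.erase(free_page_ranges.begin());
    return true;
}
```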
penberg pushed a commit that referenced this pull request Aug 16, 2013
Starting up Cassandra with debug memory allocator GPFs as follows:

  Breakpoint 1, abort () at ../../runtime.cc:85
  85	{
  (gdb) bt
  #0  abort () at ../../runtime.cc:85
  #1  0x0000000000375812 in osv::generate_signal (siginfo=..., ef=ef@entry=0xffffc0003ffe3008) at ../../libc/signal.cc:40
  #2  0x000000000037587c in osv::handle_segmentation_fault (addr=addr@entry=18446708889768681440, ef=ef@entry=0xffffc0003ffe3008)
      at ../../libc/signal.cc:55
  #3  0x00000000002fba02 in page_fault (ef=0xffffc0003ffe3008) at ../../core/mmu.cc:876
  #4  <signal handler called>
  #5  dbg::realloc (v=v@entry=0xffffe00019b3e000, size=size@entry=16) at ../../core/mempool.cc:846
  #6  0x000000000032654c in realloc (obj=0xffffe00019b3e000, size=16) at ../../core/mempool.cc:870
  #7  0x0000100000627743 in ?? ()
  #8  0x00002000001fe770 in ?? ()
  #9  0x00002000001fe780 in ?? ()
  #10 0x00002000001fe710 in ?? ()
  #11 0x00002000001fe700 in ?? ()
  #12 0xffffe000170e8000 in ?? ()
  #13 0x0000000200000001 in ?? ()
  #14 0x0000000000000020 in ?? ()
  #15 0x00002000001ffe70 in ?? ()
  #16 0xffffe000170e0004 in ?? ()
  #17 0x000000000036f361 in strcpy (dest=0x100001087420 "", src=<optimized out>) at ../../libc/string/strcpy.c:8
  #18 0x0000100000629b53 in ?? ()
  #19 0xffffe00019b22000 in ?? ()
  #20 0x0000000000000001 in ?? ()
  #21 0x0000000000000000 in ?? ()

The problem was introduced in commit 1ea5672 ("memory: let the debug
allocator mimic the standard allocator more closely") which forgot
to convert realloc() to use 'pad_before'.
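The 'pad_before' convention can be illustrated with a minimal sketch
(assumed layout, not the real dbg allocator): the allocator keeps a size
header `pad_before` bytes in front of the pointer handed to the user, so
realloc() must step back by `pad_before` to find the header. The bug was
operating on the user pointer directly.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Assumed padding; the real allocator's layout may differ.
static const size_t pad_before = 16;

static void* dbg_malloc(size_t size)
{
    char* base = static_cast<char*>(std::malloc(pad_before + size));
    std::memcpy(base, &size, sizeof(size));   // stash size header up front
    return base + pad_before;                 // user sees the padded pointer
}

static void* dbg_realloc(void* v, size_t size)
{
    if (!v)
        return dbg_malloc(size);
    // The step the buggy code skipped: walk back to the real allocation.
    char* base = static_cast<char*>(v) - pad_before;
    size_t old_size;
    std::memcpy(&old_size, base, sizeof(old_size));
    void* p = dbg_malloc(size);
    std::memcpy(p, v, old_size < size ? old_size : size);
    std::free(base);
    return p;
}
```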
penberg pushed a commit that referenced this pull request Aug 26, 2013
If a crashed OSv guest is restarted, ZFS mount causes a GPF in early
startup:

  VFS: mounting zfs at /usr
  zfs: mounting osv/usr from device /dev/vblk1
  Aborted

The GDB backtrace points the finger at zfs_rmnode():

  #0  processor::halt_no_interrupts () at ../../arch/x64/processor.hh:212
  #1  0x00000000003e7f2a in osv::halt () at ../../core/power.cc:20
  #2  0x000000000021cdd4 in abort (msg=0x636df0 "Aborted\n") at ../../runtime.cc:95
  #3  0x000000000021cda2 in abort () at ../../runtime.cc:86
  #4  0x000000000044c149 in osv::generate_signal (siginfo=..., ef=0xffffc0003ffe7008) at ../../libc/signal.cc:44
  #5  0x000000000044c220 in osv::handle_segmentation_fault (addr=72, ef=0xffffc0003ffe7008) at ../../libc/signal.cc:55
  #6  0x0000000000366df3 in page_fault (ef=0xffffc0003ffe7008) at ../../core/mmu.cc:876
  #7  <signal handler called>
  #8  0x0000000000345eaa in zfs_rmnode (zp=0xffffc0003d1de400)
      at ../../bsd/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:611
  #9  0x000000000035650c in zfs_zinactive (zp=0xffffc0003d1de400)
      at ../../bsd/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_znode.c:1355
  #10 0x0000000000345be1 in zfs_unlinked_drain (zfsvfs=0xffffc0003ddfe000)
      at ../../bsd/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_dir.c:523
  #11 0x000000000034f45c in zfsvfs_setup (zfsvfs=0xffffc0003ddfe000, mounting=true)
      at ../../bsd/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c:881
  #12 0x000000000034f7a4 in zfs_domount (vfsp=0xffffc0003de02000, osname=0x6b14cb "osv/usr")
      at ../../bsd/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c:1016
  #13 0x000000000034f98c in zfs_mount (mp=0xffffc0003de02000, dev=0x6b14d7 "/dev/vblk1", flags=0, data=0x6b14cb)
      at ../../bsd/sys/cddl/contrib/opensolaris/uts/common/fs/zfs/zfs_vfsops.c:1415
  #14 0x0000000000406852 in sys_mount (dev=0x6b14d7 "/dev/vblk1", dir=0x6b14a3 "/usr", fsname=0x6b14d3 "zfs", flags=0, data=0x6b14cb)
      at ../../fs/vfs/vfs_mount.c:171
  #15 0x00000000003eff97 in mount_usr () at ../../fs/vfs/main.cc:1415
  #16 0x0000000000203a89 in do_main_thread (_args=0xffffc0003fe9ced0) at ../../loader.cc:215
  #17 0x0000000000448575 in pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, pthread_private::thread_attr const*)::{lambda()#1}::operator()() const () at ../../libc/pthread.cc:59
  #18 0x00000000004499d3 in std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::__lambda0>::_M_invoke(const std::_Any_data &) (__functor=...)
      at ../../external/gcc.bin/usr/include/c++/4.8.1/functional:2071
  #19 0x000000000037e602 in std::function<void ()>::operator()() const (this=0xffffc0003e170038)
      at ../../external/gcc.bin/usr/include/c++/4.8.1/functional:2468
  #20 0x00000000003bae3e in sched::thread::main (this=0xffffc0003e170010) at ../../core/sched.cc:581
  #21 0x00000000003b8c92 in sched::thread_main_c (t=0xffffc0003e170010) at ../../arch/x64/arch-switch.hh:133
  #22 0x0000000000399c8e in thread_main () at ../../arch/x64/entry.S:101

The problem is that ZFS tries to check whether the znode is an attribute
directory and trips over zp->z_vnode being NULL.  However, as explained
in commit b7ee91e ("zfs: port vop_lookup"), we don't even support
extended attributes, so drop the check completely for OSv.
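A toy illustration of the crash shape, with assumed names rather than the
real ZFS structures: the old check dereferenced zp->z_vnode, which is NULL
on the unlinked-drain path after a crash, and the fix removes the check
rather than guarding it, since OSv has no extended attributes.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical stand-ins for the ZFS vnode/znode pair.
struct toy_vnode { unsigned flag; };
struct toy_znode { toy_vnode* z_vnode; };

// Before: returned a flag test on zp->z_vnode, faulting when it is NULL.
// After: attribute directories don't exist on OSv, so the answer is no.
static bool is_attr_dir(const toy_znode* zp)
{
    (void)zp;       // deliberately unused: the check is dropped entirely
    return false;
}
```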
penberg pushed a commit that referenced this pull request Aug 26, 2013
Fix mincore() to deal with unmapped addresses like msync() does.

This fixes a SIGSEGV in libunwind's access_mem() when the leak detector
is enabled:

   (gdb) bt
  #0  page_fault (ef=0xffffc0003ffe7008) at ../../core/mmu.cc:871
  #1  <signal handler called>
  #2  ContiguousSpace::block_start_const (this=<optimized out>, p=0x77d2f3968)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/oops/oop.inline.hpp:411
  #3  0x00001000008ae16c in GenerationBlockStartClosure::do_space (this=0x2000001f9100, s=<optimized out>)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/memory/generation.cpp:242
  #4  0x00001000007f097c in DefNewGeneration::space_iterate (this=0xffffc0003fb68c00, blk=0x2000001f9100, usedOnly=<optimized out>)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/memory/defNewGeneration.cpp:480
  #5  0x00001000008aca0e in Generation::block_start (this=<optimized out>, p=<optimized out>)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/memory/generation.cpp:251
  #6  0x0000100000b06d2f in os::print_location (st=st@entry=0x2000001f9560, x=32165017960, verbose=verbose@entry=false)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/runtime/os.cpp:868
  #7  0x0000100000b11b5b in os::print_register_info (st=0x2000001f9560, context=0x2000001f9740)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:839
  #8  0x0000100000c6cde8 in VMError::report (this=0x2000001f9610, st=st@entry=0x2000001f9560)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:551
  #9  0x0000100000c6da3b in VMError::report_and_die (this=this@entry=0x2000001f9610)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/utilities/vmError.cpp:984
  #10 0x0000100000b1109f in JVM_handle_linux_signal (sig=11, info=0x2000001f9bb8, ucVoid=0x2000001f9740,
      abort_if_unrecognized=<optimized out>)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/os_cpu/linux_x86/vm/os_linux_x86.cpp:528
  #11 0x000000000039f242 in call_signal_handler (frame=0x2000001f9b10) at ../../arch/x64/signal.cc:69
  #12 <signal handler called>
  #13 0x000000000057d721 in access_mem ()
  #14 0x000000000057cb1d in dwarf_get ()
  #15 0x000000000057ce51 in _ULx86_64_step ()
  #16 0x00000000004315fd in backtrace (buffer=0x1ff9d80 <memory::alloc_tracker::remember(void*, int)::bt>, size=20)
      at ../../libc/misc/backtrace.cc:16
  #17 0x00000000003b8d99 in memory::alloc_tracker::remember (this=0x1777ae0 <memory::tracker>, addr=0xffffc0004508de00, size=54)
      at ../../core/alloctracker.cc:59
  #18 0x00000000003b0504 in memory::tracker_remember (addr=0xffffc0004508de00, size=54) at ../../core/mempool.cc:43
  #19 0x00000000003b2152 in std_malloc (size=54) at ../../core/mempool.cc:723
  #20 0x00000000003b259c in malloc (size=54) at ../../core/mempool.cc:856
  #21 0x0000100001615e4c in JNU_GetStringPlatformChars (env=env@entry=0xffffc0003a4dc1d8, jstr=jstr@entry=0xffffc0004591b800,
      isCopy=isCopy@entry=0x0) at ../../../src/share/native/common/jni_util.c:801
  #22 0x000010000161ada6 in Java_java_io_UnixFileSystem_getBooleanAttributes0 (env=0xffffc0003a4dc1d8, this=<optimized out>,
      file=<optimized out>) at ../../../src/solaris/native/java/io/UnixFileSystem_md.c:111
  #23 0x000020000021ed8e in ?? ()
  #24 0x00002000001faa58 in ?? ()
  #25 0x00002000001faac0 in ?? ()
  #26 0x00002000001faa50 in ?? ()
  #27 0x0000000000000000 in ?? ()

Spotted by Avi Kivity.
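The described behavior can be sketched over a toy mapping table (assumed
helpers, not the OSv mmu code): mincore() reports residency only for pages
that are actually mapped, and returns -ENOMEM for an unmapped range instead
of touching it, mirroring what msync() already did.

```cpp
#include <cassert>
#include <cerrno>
#include <set>

static const unsigned long page_size = 4096;
static std::set<unsigned long> mapped_pages;   // toy stand-in for the VMA list

// Sketch: fail with -ENOMEM on unmapped pages rather than faulting on them.
static int toy_mincore(unsigned long addr, unsigned long len, unsigned char* vec)
{
    for (unsigned long off = 0, i = 0; off < len; off += page_size, ++i) {
        unsigned long page = (addr + off) / page_size;
        if (!mapped_pages.count(page))
            return -ENOMEM;                    // unmapped: report, don't touch
        vec[i] = 1;                            // mapped (treated as resident)
    }
    return 0;
}
```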
penberg pushed a commit that referenced this pull request Sep 3, 2013
If the "-javaagent" option is enabled, the JVM page faults:

  (gdb) bt
  #0  0x000000000033d432 in halt_no_interrupts () at ../../arch/x64/processor.hh:232
  #1  osv::halt () at ../../core/power.cc:20
  #2  0x0000000000214d82 in abort (msg=msg@entry=0x57ea90 "page fault outside application") at ../../runtime.cc:98
  #3  0x00000000002fd4d8 in page_fault (ef=0xffffc0003ffe6008) at ../../core/mmu.cc:943
  #4  <signal handler called>
  #5  0x0000000000373169 in __stpncpy (d=d@entry=0x2000001fe7f0 "dlsym: symbol JVM_begin_signal_setting not found", s=0x0, n=1023)
      at ../../libc/string/stpncpy.c:19
  #6  0x0000000000373ad1 in strncpy (d=0x2000001fe7f0 "dlsym: symbol JVM_begin_signal_setting not found", s=<optimized out>,
      n=<optimized out>) at ../../libc/string/strncpy.c:7
  #7  0x0000100000b0ab67 in os::dll_load (filename=filename@entry=0x2000001febf0 "libinstrument.so",
      ebuf=ebuf@entry=0x2000001fe7f0 "dlsym: symbol JVM_begin_signal_setting not found", ebuflen=ebuflen@entry=1024)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/os/linux/vm/os_linux.cpp:1841
  #8  0x0000100000c247eb in lookup_on_load (agent=agent@entry=0xffffc0003976ccc0,
      on_load_symbols=on_load_symbols@entry=0x2000001ffd40, num_symbol_entries=1)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/runtime/thread.cpp:3585
  #9  0x0000100000c2a64f in lookup_agent_on_load (agent=0xffffc0003976ccc0)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/runtime/thread.cpp:3617
  #10 create_vm_init_agents ()
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/runtime/thread.cpp:3656
  #11 Threads::create_vm (args=<optimized out>, canTryAgain=canTryAgain@entry=0x2000001ffdb0)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/runtime/thread.cpp:3177
  #12 0x000010000094d7b0 in JNI_CreateJavaVM (vm=0x2000001ffe58, penv=0x2000001ffe60, args=<optimized out>)
      at /usr/src/debug/java-1.7.0-openjdk-1.7.0.25-2.3.12.3.fc19.x86_64/openjdk/hotspot/src/share/vm/prims/jni.cpp:5127
  #13 0x0000100000007b3b in main (argc=<optimized out>, argv=0xffffc0003fff2008) at ../../java/java.cc:73
  #14 0x0000000000208ac8 in run_main (prog=<optimized out>, args=args@entry=0xffffc0003fe9bfa0) at ../../loader.cc:196
  #15 0x0000000000208c68 in do_main_thread (_args=0xffffc0003fe9bfa0) at ../../loader.cc:217
  #16 0x0000000000376d36 in operator() (__closure=0xffffc0003b5e2a00) at ../../libc/pthread.cc:59
  #17 std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::__lambda0>::_M_invoke(const std::_Any_data &) (__functor=...) at ../../external/gcc.bin/usr/include/c++/4.8.1/functional:2071
  #18 0x000000000032ed6b in main (this=0xffffc0003a807010) at ../../core/sched.cc:536
  #19 sched::thread_main_c (t=0xffffc0003a807010) at ../../arch/x64/arch-switch.hh:133
  #20 0x000000000031873e in thread_main () at ../../arch/x64/entry.S:101
  Backtrace stopped: frame did not save the PC

This is caused by the JVM expecting dlerror() to return an error string
if dlopen() fails. Fix that up.
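The pattern the JVM relies on (visible in os::dll_load in the backtrace)
is the standard dlfcn contract: after a failed dlopen(), dlerror() must
return a non-NULL message string, which the JVM then copies into its error
buffer. A minimal check of that contract, with a deliberately bogus library
name:

```cpp
#include <cassert>
#include <dlfcn.h>

// Returns the dlerror() message if the library fails to load, or nullptr
// on success. The bug was dlerror() yielding NULL after a failed dlopen(),
// which the JVM then passed to strncpy(), faulting.
static const char* load_or_error(const char* name)
{
    void* h = dlopen(name, RTLD_LAZY);
    if (!h)
        return dlerror();      // must be a message, never NULL
    dlclose(h);
    return nullptr;
}
```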
penberg pushed a commit that referenced this pull request Dec 23, 2013
I saw this Abort:

35.159 Mb/s
50.230 Mb/s
46.648 Mb/s
68.850 Mb/s
Wrote 613.418 MB in 10.00 s
Aborted

The backtrace says:

  (gdb) bt
  #0  0x000000000035bb82 in halt_no_interrupts () at
  /home/asias/src/cloudius-systems/osv/arch/x64/processor.hh:241
  #1  osv::halt () at
  /home/asias/src/cloudius-systems/osv/core/power.cc:28
  #2  0x0000000000218142 in abort (msg=msg@entry=0x55197f "Aborted\n") at
  /home/asias/src/cloudius-systems/osv/runtime.cc:89
  #3  0x000000000021816e in abort () at
  /home/asias/src/cloudius-systems/osv/runtime.cc:79
  #4  0x000000000039eaa2 in osv::generate_signal (siginfo=...,
  ef=0xffffc0003eb56008) at
  /home/asias/src/cloudius-systems/osv/libc/signal.cc:58
  #5  0x000000000039eb0c in osv::handle_segmentation_fault
  (addr=<optimized out>, ef=<optimized out>) at
  /home/asias/src/cloudius-systems/osv/libc/signal.cc:73
  #6  0x000000000030b45c in mmu::vm_sigsegv
  (addr=addr@entry=17592186060800, ef=ef@entry=0xffffc0003eb56008) at
  /home/asias/src/cloudius-systems/osv/core/mmu.cc:763
  #7  0x000000000030b54b in mmu::vm_fault (addr=<optimized out>,
  addr@entry=17592186061840, ef=ef@entry=0xffffc0003eb56008)
      at /home/asias/src/cloudius-systems/osv/core/mmu.cc:773
  #8  0x000000000032bff5 in page_fault (ef=0xffffc0003eb56008) at
  /home/asias/src/cloudius-systems/osv/arch/x64/mmu.cc:35
  #9  <signal handler called>
  #10 0x0000100000004410 in ?? ()
  #11 0x000000000031e5fd in virtio::blk::req_done
  (this=0xffffc0003eddb800) at
  /home/asias/src/cloudius-systems/osv/drivers/virtio-blk.

Wait until all the bios are done to fix this use-after-free.

This patch also makes the test measure completed writes instead of
submitted writes.

Reviewed-by: Tomasz Grabiec <[email protected]>
Signed-off-by: Asias He <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
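The idea of the fix can be sketched as a completion barrier (assumed
structure, not the OSv test code): count in-flight bios and wait for the
count to reach zero before freeing the buffers they point into. Draining
on completions is also why the benchmark now measures completed rather
than submitted writes.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>

// Tracks in-flight bios; drain() blocks until every submitted bio has
// completed, so buffers can only be freed after the device is done.
struct bio_tracker {
    std::mutex m;
    std::condition_variable cv;
    int inflight = 0;

    void submitted() { std::lock_guard<std::mutex> l(m); ++inflight; }
    void completed() { std::lock_guard<std::mutex> l(m); --inflight; cv.notify_all(); }
    void drain()
    {
        std::unique_lock<std::mutex> l(m);
        cv.wait(l, [&] { return inflight == 0; });
    }
};
```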
penberg pushed a commit that referenced this pull request Jan 13, 2014
See

  scripts/trace.py prof-wait -h

The command uses the sched_wait and sched_wait_ret tracepoints to
calculate the amount of time a thread was waiting. Samples are
collected and presented in the form of a call graph tree.
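The pairing logic this implies can be sketched as follows (illustrative
names, not the trace.py internals, and in C++ rather than the tool's
Python): each sched_wait sample opens an interval for its thread, and the
matching sched_wait_ret closes it; the durations are then accumulated.

```cpp
#include <cassert>
#include <map>

static std::map<unsigned long, double> wait_start;  // thread -> open wait ts
static std::map<unsigned long, double> total_wait;  // thread -> accumulated s

static void on_sched_wait(unsigned long thread, double ts)
{
    wait_start[thread] = ts;                 // interval opens here
}

static void on_sched_wait_ret(unsigned long thread, double ts)
{
    auto it = wait_start.find(thread);
    if (it == wait_start.end())
        return;                              // unmatched return sample: skip
    total_wait[thread] += ts - it->second;   // interval closes here
    wait_start.erase(it);
}
```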

By default callees are closer to the root. To inverse the order pass
-r|--caller-oriented.

If there is too much output, it can be narrowed down using the
--max-levels and --min-duration options.

The presented time range can be narrowed down using the --since and
--until options, which accept timestamps.

Example:

  scripts/trace.py prof-wait --max-levels 3 trace-file

=== Thread 0xffffc0003eaeb010 ===

12.43 s (100.00%, #7696) All
 |-- 12.43 s (99.99%, #7658) sched::thread::do_wait_until
 |    |-- 10.47 s (84.22%, #6417) condvar::wait(lockfree::mutex*, unsigned long)
 |    |    condvar_wait
 |    |    |-- 6.47 s (52.08%, #6250) cv_timedwait
 |    |    |    txg_delay
 |    |    |    dsl_pool_tempreserve_space
 |    |    |    dsl_dir_tempreserve_space
 |    |    |    dmu_tx_try_assign
 |    |    |    dmu_tx_assign
 |    |    |
 |    |    |-- 2.37 s (19.06%, #24) arc_read_nolock
 |    |    |    arc_read
 |    |    |    dsl_read
 |    |    |    traverse_visitbp
 |    |    |
 |    |    |-- 911.75 ms (7.33%, #3) txg_wait_open
 |    |    |    dmu_tx_wait
 |    |    |    zfs_write
 |    |    |    vfs_file::write(uio*, int)
 |    |    |    sys_write
 |    |    |    pwritev
 |    |    |    writev
 |    |    |    __stdio_write
 |    |    |    __fwritex
 |    |    |    fwrite
 |    |    |    0x100000005a5f
 |    |    |    osv::run(std::string, int, char**, int*)

By default every thread has a separate tree, because a duration is best
interpreted in the context of a particular thread. There is however an
option to merge samples from all threads into one tree:
-m|--merge-threads. It may be useful if you want to inspect all paths
going in/out to/from a particular function. The direction can be changed
with the -r|--caller-oriented option. The function name is passed via
the --function parameter.

Example: check where zfs_write() blocks:

  scripts/trace.py prof-wait -rm --function=zfs_write trace-file

7.46 s (100.00%, #7314) All
 zfs_write
 |-- 6.48 s (86.85%, #6371) dmu_tx_assign
 |    |-- 6.47 s (86.75%, #6273) dmu_tx_try_assign
 |    |    dsl_dir_tempreserve_space
 |    |    |-- 6.47 s (86.75%, #6248) dsl_pool_tempreserve_space
 |    |    |    txg_delay
 |    |    |    cv_timedwait
 |    |    |    condvar_wait
 |    |    |    condvar::wait(lockfree::mutex*, unsigned long)
 |    |    |    sched::thread::do_wait_until
 |    |    |
 |    |    |-- 87.87 us (0.00%, #24) mutex_lock
 |    |    |    sched::thread::do_wait_until
 |    |    |
 |    |    \-- 6.40 us (0.00%, #1) dsl_dir_tempreserve_impl
 |    |         mutex_lock
 |    |         sched::thread::do_wait_until
 |    |
 |    \-- 7.32 ms (0.10%, #98) mutex_lock
 |         sched::thread::do_wait_until
 |
 |-- 911.75 ms (12.22%, #3) dmu_tx_wait
 |    txg_wait_open
 |    condvar_wait
 |    condvar::wait(lockfree::mutex*, unsigned long)
 |    sched::thread::do_wait_until

Signed-off-by: Tomasz Grabiec <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
@asias asias mentioned this pull request May 6, 2014
@asias asias mentioned this pull request May 23, 2014
asias pushed a commit to asias/osv that referenced this pull request May 26, 2014
asias pushed a commit to asias/osv that referenced this pull request May 26, 2014
cassandra: Increase stack size to fix startup
penberg added a commit that referenced this pull request Jul 3, 2014
Httpserver migration fixes
penberg pushed a commit that referenced this pull request Jul 17, 2014
This reverts commit 902535a.

Raphael reports the following breakage in tst-poll.so:

  OSv
  ---

    TEST tst-nway-merger.so        OK  (0.782 s)
    TEST tst-openat.so             OK  (0.731 s)
    TEST tst-pipe.so               OK  (6.840 s)
    TEST tst-poll.so              OSv v0.10-319-g1bdd4a2
  eth0: 192.168.122.15
  Running 3 test cases...
  Assertion failed: tp->t_flags & 0x1000000
  (/data/osv/bsd/sys/netinet/tcp_timer.cc: tcp_timer_tso_flush: 438)

  [backtrace]
  0x0000000000223e98 <__assert_fail+24>
  0x0000000000279de4 <???+2596324>
  0x00000000003dbd9d <async::timer_task::fire(async::percpu_timer_task&)+125>
  0x00000000003dcb13 <async::async_worker::run()+1587>
  0x00000000003be11a <thread_main_c+26>
  0x0000000000372525 <???+3613989>

  GDB
  ---

  (gdb) bt
  #0  0x00000000003d2a72 in cli_hlt () at /data/osv/arch/x64/processor.hh:242
  #1  halt_no_interrupts () at /data/osv/arch/x64/arch.hh:48
  #2  osv::halt () at /data/osv/core/power.cc:34
  #3  0x0000000000223df0 in abort (fmt=fmt@entry=0x5c9980 "Assertion
  failed: %s (%s: %s: %d)\n") at /data/osv/runtime.cc:139
  #4  0x0000000000223e99 in __assert_fail (expr=<optimized out>,
  file=<optimized out>, line=<optimized out>, func=<optimized out>)
      at /data/osv/runtime.cc:145
  #5  0x0000000000279de5 in tcp_timer_tso_flush (timer=...,
  tp=0xffffa0003db51800) at /data/osv/bsd/sys/netinet/tcp_timer.cc:438
  #6  0x00000000003dbd9e in async::timer_task::fire
  (this=this@entry=0xffffa0003d6b9a10, task=...) at
  /data/osv/core/async.cc:362
  #7  0x00000000003dcb14 in fire (task=..., this=0xffff80003fa84030) at
  /data/osv/core/async.cc:227
  #8  async::async_worker::run (this=0xffff80003fa84030) at
  /data/osv/core/async.cc:175
  #9  0x00000000003be11b in main (this=0xffff80003fa84730) at
  /data/osv/core/sched.cc:915
  #10 sched::thread_main_c (t=0xffff80003fa84730) at
  /data/osv/arch/x64/arch-switch.hh:137
  #11 0x0000000000372526 in thread_main () at /data/osv/arch/x64/entry.S:113
  (gdb) print tp->t_flags
  No symbol "tp" in current context.
  (gdb) frame 5
  #5  0x0000000000279de5 in tcp_timer_tso_flush (timer=...,
  tp=0xffffa0003db51800) at /data/osv/bsd/sys/netinet/tcp_timer.cc:438
  438     KASSERT(tp->t_flags & TF_TSO, "tcp_timer_tso_flush: TSO disabled");
  (gdb) print tp->t_flags
  $1 = 992

Signed-off-by: Pekka Enberg <[email protected]>
asias pushed a commit to asias/osv that referenced this pull request Jul 22, 2014
The first patch replaces the calls to the BSD functions in three places
by calls to a new function, route_cache::lookup(). lookup() is a new
function which still does the slow, lock-full lookup, but the point of
this patch is twofold: 1. To verify that, indeed, these three places are
all the lookups done in the fast path of the UDP memcached and Tomcat
benchmarks (other benchmarks may involve more code paths and can be
fixed later).
2. Now lookup() takes care of the locking, and there is no need to
unlock a route after calling it - and the first patch verifies that this
movement of the unlock code indeed works as expected.

Note that this patch declares an invalidate() function but nothing
ever defines it, nor uses it. Once we're confident about the direction
of these two patches, I think it should be pretty straightforward to
implement and call invalidate() on route changes.

Signed-off-by: Nadav Har'El <[email protected]>
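The interface the patch describes might look like this (assumed shapes,
not the OSv code): lookup() hides the lock-full slow path and releases the
route itself, so callers no longer unlock anything, and invalidate() is
declared for future route-change handling but left unimplemented here too.

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <string>

class route_cache {
public:
    // Slow, lock-full lookup; the lock never escapes to the caller.
    std::string lookup(const std::string& dst)
    {
        std::lock_guard<std::mutex> l(_lock);
        auto it = _routes.find(dst);
        return it == _routes.end() ? std::string() : it->second;
    }

    void invalidate() { /* to be implemented on route changes */ }

    // Test hook standing in for the routing table being populated.
    void add(const std::string& dst, const std::string& gw)
    {
        std::lock_guard<std::mutex> l(_lock);
        _routes[dst] = gw;
    }

private:
    std::mutex _lock;
    std::map<std::string, std::string> _routes;
};
```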
asias pushed a commit to asias/osv that referenced this pull request Jul 23, 2014
The first patch replaces the calls to the BSD functions in three places
by calls to a new function route_cache::lookup(). lookup() is a new
function which still does the slow lock-full lookup, but the point of
this patch is twofold: 1. To verify that indeed, these three places are
all the lookups done in the fast-path of the UDP memcached and tomcat
benchmarks (other benchmarks may involve more code paths, can be fixed
later).
2. Now lookup() takes care of the locking, and there is no need to
unlock a route after calling it - and the first patch verifies this
movement of the unlock code indeed works as a expected.

Note that this patch defines an invalidate() function but nothing
ever defines it, nor uses it. Once we're confident about the direction
of these two patches, I think it should be pretty straighforward to
implement and call invalidate() on route changes.

Signed-off-by: Nadav Har'El <[email protected]>
asias pushed a commit to asias/osv that referenced this pull request Jul 24, 2014
The first patch replaces the calls to the BSD functions in three places
by calls to a new function route_cache::lookup(). lookup() is a new
function which still does the slow lock-full lookup, but the point of
this patch is twofold: 1. To verify that indeed, these three places are
all the lookups done in the fast-path of the UDP memcached and tomcat
benchmarks (other benchmarks may involve more code paths, can be fixed
later).
2. Now lookup() takes care of the locking, and there is no need to
unlock a route after calling it - and the first patch verifies this
movement of the unlock code indeed works as a expected.

Note that this patch defines an invalidate() function but nothing
ever defines it, nor uses it. Once we're confident about the direction
of these two patches, I think it should be pretty straighforward to
implement and call invalidate() on route changes.

Signed-off-by: Nadav Har'El <[email protected]>
asias pushed a commit to asias/osv that referenced this pull request Jul 28, 2014
The first patch replaces the calls to the BSD functions in three places
by calls to a new function route_cache::lookup(). lookup() is a new
function which still does the slow lock-full lookup, but the point of
this patch is twofold: 1. To verify that indeed, these three places are
all the lookups done in the fast-path of the UDP memcached and tomcat
benchmarks (other benchmarks may involve more code paths, can be fixed
later).
2. Now lookup() takes care of the locking, and there is no need to
unlock a route after calling it - and the first patch verifies this
movement of the unlock code indeed works as expected.

Note that this patch declares an invalidate() function but nothing
ever implements it, nor uses it. Once we're confident about the direction
of these two patches, I think it should be pretty straightforward to
implement and call invalidate() on route changes.

Signed-off-by: Nadav Har'El <[email protected]>
penberg pushed a commit that referenced this pull request Jul 28, 2014
The first patch replaces the calls to the BSD functions in three places
by calls to a new function route_cache::lookup(). lookup() is a new
function which still does the slow lock-full lookup, but the point of
this patch is twofold: 1. To verify that indeed, these three places are
all the lookups done in the fast-path of the UDP memcached and tomcat
benchmarks (other benchmarks may involve more code paths, can be fixed
later).
2. Now lookup() takes care of the locking, and there is no need to
unlock a route after calling it - and the first patch verifies this
movement of the unlock code indeed works as expected.

Note that this patch declares an invalidate() function but nothing
ever implements it, nor uses it. Once we're confident about the direction
of these two patches, I think it should be pretty straightforward to
implement and call invalidate() on route changes.

Signed-off-by: Nadav Har'El <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
asias added a commit to asias/osv that referenced this pull request Aug 28, 2014
   $ sudo scripts/run.py -c1 -nv -m4g  -e "/tests/tst-epoll.so"
   ...

   PASS: epoll_wait
   PASS: write
   PASS: epoll_wait
   PASS: epoll_ctl MOD
   PASS: epoll_wait
   PASS: epoll_ctl ADD
   PASS: errno == EEXIST
   Assertion failed: i != f_epolls->end() (/home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc: epoll_del: 249)

   [backtrace]
   0x0000000000223648 <__assert_fail+24>
   0x000000000041743a <file::epoll_del(epoll_ptr)+186>
   0x00000000003f7682 <epoll_ctl+786>
   0x0000100000c032db <???+12595931>
   0x000000000040510a <osv::application::run_main(std::string, int, char**)+266>
   0x0000000000405847 <osv::application::run_main()+551>
   0x000000000020bbd1 <osv::application::main()+97>
   0x0000000000406698 <???+4220568>
   0x0000000000433745 <???+4405061>
   0x00000000003e38aa <thread_main_c+26>
   0x0000000000389c75 <???+3710069>

   (gdb) bt
   #0  0x00000000003f8e92 in cli_hlt () at /home/asias/src/cloudius-systems/osv/arch/x64/processor.hh:242
   #1  halt_no_interrupts () at /home/asias/src/cloudius-systems/osv/arch/x64/arch.hh:49
   #2  osv::halt () at /home/asias/src/cloudius-systems/osv/core/power.cc:34
   #3  0x000000000022360f in abort (fmt=fmt@entry=0x603b60 "Assertion failed: %s (%s: %s: %d)\n") at /home/asias/src/cloudius-systems/osv/runtime.cc:150
   #4  0x0000000000223649 in __assert_fail (expr=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>)
       at /home/asias/src/cloudius-systems/osv/runtime.cc:156
   #5  0x000000000041743b in file::epoll_del (this=<optimized out>, ep=...) at /home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc:249
   #6  0x00000000003f7683 in del (key=..., this=0xffff800001746040) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:96
   #7  epoll_ctl (epfd=<optimized out>, op=<optimized out>, fd=<optimized out>, event=0x2000002f9d50) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:283
   #8  0x0000100000c032dc in test_epolloneshot () at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:78
   #9  main (ac=<optimized out>, av=<optimized out>) at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:231
   #10 0x000000000040510b in osv::application::run_main (this=this@entry=0xffffa00001561f18, path="/tests/tst-epoll.so", argc=argc@entry=1,
       argv=argv@entry=0xffffa00002994760) at /home/asias/src/cloudius-systems/osv/core/app.cc:182
   #11 0x0000000000405848 in osv::application::run_main (this=this@entry=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:197
   #12 0x000000000020bbd2 in osv::application::main (this=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:144
   #13 0x0000000000406699 in operator() (__closure=0x0, app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:104
   #14 osv::application::__lambda3::_FUN (app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:106
   #15 0x0000000000433746 in operator() (__closure=0xffffa00003320600) at /home/asias/src/cloudius-systems/osv/libc/pthread.cc:97
   #16 std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::__lambda7>::_M_invoke(const std::_Any_data &) (__functor=...) at /home/asias/src/cloudius-systems/osv/external/x64/gcc.bin/usr/include/c++/4.8.2/functional:2071
   #17 0x00000000003e38ab in main (this=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/core/sched.cc:940
   #18 sched::thread_main_c (t=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/arch/x64/arch-switch.hh:137
   #19 0x0000000000389c76 in thread_main () at /home/asias/src/cloudius-systems/osv/arch/x64/entry.S:113
asias added a commit to asias/osv that referenced this pull request Aug 28, 2014
After:
    Revert "Revert "epoll: fix up lock ordering issues""
    This reverts commit 408a982.

I saw the following without this patch:

   $ sudo scripts/run.py -c1 -nv -m4g  -e "/tests/tst-epoll.so"
   ...

   PASS: epoll_wait
   PASS: write
   PASS: epoll_wait
   PASS: epoll_ctl MOD
   PASS: epoll_wait
   PASS: epoll_ctl ADD
   PASS: errno == EEXIST
   Assertion failed: i != f_epolls->end() (/home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc: epoll_del: 249)

   [backtrace]
   0x0000000000223648 <__assert_fail+24>
   0x000000000041743a <file::epoll_del(epoll_ptr)+186>
   0x00000000003f7682 <epoll_ctl+786>
   0x0000100000c032db <???+12595931>
   0x000000000040510a <osv::application::run_main(std::string, int, char**)+266>
   0x0000000000405847 <osv::application::run_main()+551>
   0x000000000020bbd1 <osv::application::main()+97>
   0x0000000000406698 <???+4220568>
   0x0000000000433745 <???+4405061>
   0x00000000003e38aa <thread_main_c+26>
   0x0000000000389c75 <???+3710069>

   (gdb) bt
   #0  0x00000000003f8e92 in cli_hlt () at /home/asias/src/cloudius-systems/osv/arch/x64/processor.hh:242
   #1  halt_no_interrupts () at /home/asias/src/cloudius-systems/osv/arch/x64/arch.hh:49
   #2  osv::halt () at /home/asias/src/cloudius-systems/osv/core/power.cc:34
   #3  0x000000000022360f in abort (fmt=fmt@entry=0x603b60 "Assertion failed: %s (%s: %s: %d)\n") at /home/asias/src/cloudius-systems/osv/runtime.cc:150
   #4  0x0000000000223649 in __assert_fail (expr=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>)
       at /home/asias/src/cloudius-systems/osv/runtime.cc:156
   #5  0x000000000041743b in file::epoll_del (this=<optimized out>, ep=...) at /home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc:249
   #6  0x00000000003f7683 in del (key=..., this=0xffff800001746040) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:96
   #7  epoll_ctl (epfd=<optimized out>, op=<optimized out>, fd=<optimized out>, event=0x2000002f9d50) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:283
   #8  0x0000100000c032dc in test_epolloneshot () at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:78
   #9  main (ac=<optimized out>, av=<optimized out>) at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:231
   #10 0x000000000040510b in osv::application::run_main (this=this@entry=0xffffa00001561f18, path="/tests/tst-epoll.so", argc=argc@entry=1,
       argv=argv@entry=0xffffa00002994760) at /home/asias/src/cloudius-systems/osv/core/app.cc:182
   #11 0x0000000000405848 in osv::application::run_main (this=this@entry=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:197
   #12 0x000000000020bbd2 in osv::application::main (this=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:144
   #13 0x0000000000406699 in operator() (__closure=0x0, app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:104
   #14 osv::application::__lambda3::_FUN (app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:106
   #15 0x0000000000433746 in operator() (__closure=0xffffa00003320600) at /home/asias/src/cloudius-systems/osv/libc/pthread.cc:97
   #16 std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::__lambda7>::_M_invoke(const std::_Any_data &) (__functor=...) at /home/asias/src/cloudius-systems/osv/external/x64/gcc.bin/usr/include/c++/4.8.2/functional:2071
   #17 0x00000000003e38ab in main (this=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/core/sched.cc:940
   #18 sched::thread_main_c (t=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/arch/x64/arch-switch.hh:137
   #19 0x0000000000389c76 in thread_main () at /home/asias/src/cloudius-systems/osv/arch/x64/entry.S:113
asias added a commit to asias/osv that referenced this pull request Aug 28, 2014
The problem is that when epoll_wait removes the key from the _file map, it
does not remove the key from the _epoll map, so when epoll_ctl(DEL) tries
to remove it from the _file map again, we hit the assert.

This fixes the following.

After:
    Revert "Revert "epoll: fix up lock ordering issues""

I saw:

   $ sudo scripts/run.py -c1 -nv -m4g  -e "/tests/tst-epoll.so"
   ...

   PASS: epoll_wait
   PASS: write
   PASS: epoll_wait
   PASS: epoll_ctl MOD
   PASS: epoll_wait
   PASS: epoll_ctl ADD
   PASS: errno == EEXIST
   Assertion failed: i != f_epolls->end() (/home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc: epoll_del: 249)

   [backtrace]
   0x0000000000223648 <__assert_fail+24>
   0x000000000041743a <file::epoll_del(epoll_ptr)+186>
   0x00000000003f7682 <epoll_ctl+786>
   0x0000100000c032db <???+12595931>
   0x000000000040510a <osv::application::run_main(std::string, int, char**)+266>
   0x0000000000405847 <osv::application::run_main()+551>
   0x000000000020bbd1 <osv::application::main()+97>
   0x0000000000406698 <???+4220568>
   0x0000000000433745 <???+4405061>
   0x00000000003e38aa <thread_main_c+26>
   0x0000000000389c75 <???+3710069>

   (gdb) bt
   #0  0x00000000003f8e92 in cli_hlt () at /home/asias/src/cloudius-systems/osv/arch/x64/processor.hh:242
   #1  halt_no_interrupts () at /home/asias/src/cloudius-systems/osv/arch/x64/arch.hh:49
   #2  osv::halt () at /home/asias/src/cloudius-systems/osv/core/power.cc:34
   #3  0x000000000022360f in abort (fmt=fmt@entry=0x603b60 "Assertion failed: %s (%s: %s: %d)\n") at /home/asias/src/cloudius-systems/osv/runtime.cc:150
   #4  0x0000000000223649 in __assert_fail (expr=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>)
       at /home/asias/src/cloudius-systems/osv/runtime.cc:156
   #5  0x000000000041743b in file::epoll_del (this=<optimized out>, ep=...) at /home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc:249
   #6  0x00000000003f7683 in del (key=..., this=0xffff800001746040) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:96
   #7  epoll_ctl (epfd=<optimized out>, op=<optimized out>, fd=<optimized out>, event=0x2000002f9d50) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:283
   #8  0x0000100000c032dc in test_epolloneshot () at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:78
   #9  main (ac=<optimized out>, av=<optimized out>) at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:231
   #10 0x000000000040510b in osv::application::run_main (this=this@entry=0xffffa00001561f18, path="/tests/tst-epoll.so", argc=argc@entry=1,
       argv=argv@entry=0xffffa00002994760) at /home/asias/src/cloudius-systems/osv/core/app.cc:182
   #11 0x0000000000405848 in osv::application::run_main (this=this@entry=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:197
   #12 0x000000000020bbd2 in osv::application::main (this=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:144
   #13 0x0000000000406699 in operator() (__closure=0x0, app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:104
   #14 osv::application::__lambda3::_FUN (app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:106
   #15 0x0000000000433746 in operator() (__closure=0xffffa00003320600) at /home/asias/src/cloudius-systems/osv/libc/pthread.cc:97
   #16 std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::__lambda7>::_M_invoke(const std::_Any_data &) (__functor=...) at /home/asias/src/cloudius-systems/osv/external/x64/gcc.bin/usr/include/c++/4.8.2/functional:2071
   #17 0x00000000003e38ab in main (this=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/core/sched.cc:940
   #18 sched::thread_main_c (t=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/arch/x64/arch-switch.hh:137
   #19 0x0000000000389c76 in thread_main () at /home/asias/src/cloudius-systems/osv/arch/x64/entry.S:113

Signed-off-by: Asias He <[email protected]>
avikivity pushed a commit that referenced this pull request Aug 28, 2014
The problem is that when epoll_wait removes the key from the _file map, it
does not remove the key from the _epoll map, so when epoll_ctl(DEL) tries
to remove it from the _file map again, we hit the assert.

This fixes the following.

After:
    Revert "Revert "epoll: fix up lock ordering issues""

I saw:

   $ sudo scripts/run.py -c1 -nv -m4g  -e "/tests/tst-epoll.so"
   ...

   PASS: epoll_wait
   PASS: write
   PASS: epoll_wait
   PASS: epoll_ctl MOD
   PASS: epoll_wait
   PASS: epoll_ctl ADD
   PASS: errno == EEXIST
   Assertion failed: i != f_epolls->end() (/home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc: epoll_del: 249)

   [backtrace]
   0x0000000000223648 <__assert_fail+24>
   0x000000000041743a <file::epoll_del(epoll_ptr)+186>
   0x00000000003f7682 <epoll_ctl+786>
   0x0000100000c032db <???+12595931>
   0x000000000040510a <osv::application::run_main(std::string, int, char**)+266>
   0x0000000000405847 <osv::application::run_main()+551>
   0x000000000020bbd1 <osv::application::main()+97>
   0x0000000000406698 <???+4220568>
   0x0000000000433745 <???+4405061>
   0x00000000003e38aa <thread_main_c+26>
   0x0000000000389c75 <???+3710069>

   (gdb) bt
   #0  0x00000000003f8e92 in cli_hlt () at /home/asias/src/cloudius-systems/osv/arch/x64/processor.hh:242
   #1  halt_no_interrupts () at /home/asias/src/cloudius-systems/osv/arch/x64/arch.hh:49
   #2  osv::halt () at /home/asias/src/cloudius-systems/osv/core/power.cc:34
   #3  0x000000000022360f in abort (fmt=fmt@entry=0x603b60 "Assertion failed: %s (%s: %s: %d)\n") at /home/asias/src/cloudius-systems/osv/runtime.cc:150
   #4  0x0000000000223649 in __assert_fail (expr=<optimized out>, file=<optimized out>, line=<optimized out>, func=<optimized out>)
       at /home/asias/src/cloudius-systems/osv/runtime.cc:156
   #5  0x000000000041743b in file::epoll_del (this=<optimized out>, ep=...) at /home/asias/src/cloudius-systems/osv/fs/vfs/kern_descrip.cc:249
   #6  0x00000000003f7683 in del (key=..., this=0xffff800001746040) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:96
   #7  epoll_ctl (epfd=<optimized out>, op=<optimized out>, fd=<optimized out>, event=0x2000002f9d50) at /home/asias/src/cloudius-systems/osv/core/epoll.cc:283
   #8  0x0000100000c032dc in test_epolloneshot () at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:78
   #9  main (ac=<optimized out>, av=<optimized out>) at /home/asias/src/cloudius-systems/osv/tests/tst-epoll.cc:231
   #10 0x000000000040510b in osv::application::run_main (this=this@entry=0xffffa00001561f18, path="/tests/tst-epoll.so", argc=argc@entry=1,
       argv=argv@entry=0xffffa00002994760) at /home/asias/src/cloudius-systems/osv/core/app.cc:182
   #11 0x0000000000405848 in osv::application::run_main (this=this@entry=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:197
   #12 0x000000000020bbd2 in osv::application::main (this=0xffffa00001561f18) at /home/asias/src/cloudius-systems/osv/core/app.cc:144
   #13 0x0000000000406699 in operator() (__closure=0x0, app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:104
   #14 osv::application::__lambda3::_FUN (app=<optimized out>) at /home/asias/src/cloudius-systems/osv/core/app.cc:106
   #15 0x0000000000433746 in operator() (__closure=0xffffa00003320600) at /home/asias/src/cloudius-systems/osv/libc/pthread.cc:97
   #16 std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::__lambda7>::_M_invoke(const std::_Any_data &) (__functor=...) at /home/asias/src/cloudius-systems/osv/external/x64/gcc.bin/usr/include/c++/4.8.2/functional:2071
   #17 0x00000000003e38ab in main (this=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/core/sched.cc:940
   #18 sched::thread_main_c (t=0xffff80000354d050) at /home/asias/src/cloudius-systems/osv/arch/x64/arch-switch.hh:137
   #19 0x0000000000389c76 in thread_main () at /home/asias/src/cloudius-systems/osv/arch/x64/entry.S:113

Signed-off-by: Asias He <[email protected]>
Signed-off-by: Avi Kivity <[email protected]>
penberg pushed a commit that referenced this pull request Apr 22, 2015
libstdc++ comes with a python script to help gdb pretty-print some useful
C++ types (unique_ptr, vector, etc.). Our current gdb script takes this
script from external/, but unfortunately the script we have in external/
is from Fedora 20, written for Python 2, while some systems' gdb have
Python 3 so this script doesn't work.

The more sensible thing to do - given that we're already using (by default)
the build machine's libstdc++, and its gdb, is to run the script installed
on the build machine. This is what this patch does, unfortunately in an
ugly way and not a very robust way (namely, if two versions of
libstdc++ are installed, it picks one arbitrarily). But I can't figure
out a cleaner way to do this.

Fixes cloudius/gcc-bin/#1

Signed-off-by: Nadav Har'El <[email protected]>
Signed-off-by: Pekka Enberg <[email protected]>
jdroot added a commit to SEI-TAS/osv that referenced this pull request Jan 5, 2016
…the build

system and VFS to allow using the new file system as the root file system.

There are a few reasons why this is an important addition to OSv:
    1. A read-only root file system allows the user to force the hypervisor
       to not allow writing to disk. This means that you can boot multiple
       instances off of a single image file. A use case for this feature is
       something like Redis, allowing you to only have to maintain
       a single image file.

    2. An immutable image will always have the same hash signature. This will
       allow the user of the image to verify that the image file has not been
       tampered with, even after the image has been run.

    3. From Nadav Har'El: "it allows building an OSv image *without* running
       it, which solves all sorts of nasty corner cases in OSv's current
       scripts/build, when it is not desired to *run* the image during build.
       For example, cross-compilation, building on a VM without efficient
       nested virtualization, etc."

The file system itself is titled MFS (originally standing for My File System,
but the name is not very important). It has a limited set of features, but
the reduced feature set allows the implementation and the layout of the blocks
to be very simple. Currently MFS supports regular files/directories and
symbolic links. File and directory names are limited to 63 characters and link
paths are limited to 511 characters.

The layout of the file system is as follows:
    Block 0:       The superblock. This stores identifying information about the
                   instance of MFS as well as the Block ID for Inode #1.

    Block 1 - N:   The next set of blocks store files, directory entries, and
                   links. *Everything* in MFS is stored sequentially, meaning
                   that there will be no fragmentation. Also, files can be
                   referenced by knowing their starting block id as well as
                   their length.

    Block N - End: The last set of blocks stores the inodes, in sequential
                   order. Due to being ordered, it is possible to know the
                   location of any inode given only its number.

Based on conversations with Nadav, I went back and forth about implementing a
caching mechanism in the MFS driver. After some testing, I opted to not include
a caching mechanism for the following reasons:

    1. The block I/O system (fs/vfs/vfs_bio.h) already has a cache built in
       when using the bread function.
    2. Through testing with the redis-memonly image, I found that over 96%
       (1132/1162) of file reads are first time reads of a block, meaning that
       it could not have been cached (without a much more complex pre-caching
       system). Of blocks that were read multiple times, the block i/o system
       had an 80% cache hit rate (31/39). This seems to be sufficient for the
       time being. Tests were run with MFS as the root file system and multiple
       runs of redis-benchmark against a single running image.

The build system and OSv modifications were extremely minimal. An option was
added to scripts/build titled rootfs that allows specifying mfs or zfs to be
used (with zfs as the default). Once OSv is running, it will attempt to mount
the root file system as MFS every time. Upon failure (in the case of ZFS being
the root file system), it will fall back to trying to mount the file system as
ZFS. Currently, I do not believe it is possible to have both ZFS and MFS
running in the same image (zpool is not included in MFS images) but this should
be implemented in the future.
wkozaczuk added a commit to wkozaczuk/osv that referenced this pull request Nov 17, 2017
…the build

system and VFS to allow using the new file system as the root file system.

There are a few reasons why this is an important addition to OSv:
    1. A read-only root file system allows the user to force the hypervisor
       to not allow writing to disk. This means that you can boot multiple
       instances off of a single image file. A use case for this feature is
       something like Redis, allowing you to only have to maintain
       a single image file.

    2. An immutable image will always have the same hash signature. This will
       allow the user of the image to verify that the image file has not been
       tampered with, even after the image has been run.

    3. From Nadav Har'El: "it allows building an OSv image *without* running
       it, which solves all sorts of nasty corner cases in OSv's current
       scripts/build, when it is not desired to *run* the image during build.
       For example, cross-compilation, building on a VM without efficient
       nested virtualization, etc."

The file system itself is titled MFS (originally standing for My File System,
but the name is not very important). It has a limited set of features, but
the reduced feature set allows the implementation and the layout of the blocks
to be very simple. Currently MFS supports regular files/directories and
symbolic links. File and directory names are limited to 63 characters and link
paths are limited to 511 characters.

The layout of the file system is as follows:
    Block 0:       The superblock. This stores identifying information about the
                   instance of MFS as well as the Block ID for Inode #1.

    Block 1 - N:   The next set of blocks store files, directory entries, and
                   links. *Everything* in MFS is stored sequentially, meaning
                   that there will be no fragmentation. Also, files can be
                   referenced by knowing their starting block id as well as
                   their length.

    Block N - End: The last set of blocks stores the inodes, in sequential
                   order. Due to being ordered, it is possible to know the
                   location of any inode given only its number.

Based on conversations with Nadav, I went back and forth about implementing a
caching mechanism in the MFS driver. After some testing, I opted to not include
a caching mechanism for the following reasons:

    1. The block I/O system (fs/vfs/vfs_bio.h) already has a cache built in
       when using the bread function.
    2. Through testing with the redis-memonly image, I found that over 96%
       (1132/1162) of file reads are first time reads of a block, meaning that
       it could not have been cached (without a much more complex pre-caching
       system). Of blocks that were read multiple times, the block i/o system
       had an 80% cache hit rate (31/39). This seems to be sufficient for the
       time being. Tests were run with MFS as the root file system and multiple
       runs of redis-benchmark against a single running image.

The build system and OSv modifications were extremely minimal. An option was
added to scripts/build titled rootfs that allows specifying mfs or zfs to be
used (with zfs as the default). Once OSv is running, it will attempt to mount
the root file system as MFS every time. Upon failure (in the case of ZFS being
the root file system), it will fall back to trying to mount the file system as
ZFS. Currently, I do not believe it is possible to have both ZFS and MFS
running in the same image (zpool is not included in MFS images) but this should
be implemented in the future.

Signed-off-by: James Root <[email protected]>
nyh pushed a commit that referenced this pull request Dec 9, 2019
New versions of boost like 1.69.0 have more complicated
dependencies and cause issues with some of the unit tests.

```
tst-rcu-list.so
  libboost_unit_test_framework.so.1.69.0
    libboost_timer.so.1.69.0
      libboost_chrono.so.1.69.0
        libboost_system.so.1.69.0
        libgcc_s.so.1
      libboost_system.so.1.69.0
      libgcc_s.so.1
    libboost_system.so.1.69.0
    libgcc_s.so.1
  libboost_filesystem.so.1.69.0
    libboost_system.so.1.69.0
    libgcc_s.so.1
  libgcc_s.so.1
```

More specifically unit tests like tst-rcu-list and tst-rcu-hashtable
crash when executing FINI functions while unloading boost libraries
with a stack trace like this:

```
#0  0x00000000403a3cf2 in processor::cli_hlt () at arch/x64/processor.hh:247
#1  arch::halt_no_interrupts () at arch/x64/arch.hh:48
#2  osv::halt () at arch/x64/power.cc:26
#3  0x0000000040239dae in abort (fmt=fmt@entry=0x4064155f "Aborted\n") at runtime.cc:132
#4  0x00000000402028ab in abort () at runtime.cc:98
#5  0x000000004021981e in osv::generate_signal (siginfo=..., ef=0xffff80000123c068) at libc/signal.cc:124
#6  0x0000000040463f7f in osv::handle_mmap_fault (addr=<optimized out>, sig=<optimized out>, ef=<optimized out>) at libc/signal.cc:139
#7  0x000000004033da62 in mmu::vm_fault (addr=17592187015168, addr@entry=17592187015648, ef=ef@entry=0xffff80000123c068) at core/mmu.cc:1337
#8  0x000000004039dc30 in page_fault (ef=0xffff80000123c068) at arch/x64/mmu.cc:42
#9  <signal handler called>
#10 0x00001000000ed1e0 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::~basic_string() ()
#11 0x000000004023a357 in __cxxabiv1::__cxa_finalize (dso=<optimized out>) at runtime.cc:183
#12 0x000010000041f5f7 in __do_global_dtors_aux ()
#13 0x00002000001008f0 in ?? ()
#14 0x0000000040352f34 in elf::object::run_fini_funcs (this=0xffffa000015d0a00) at core/elf.cc:1105
```

The order in which FINI functions are executed currently follows
the order in which objects are unloaded. It is not exactly clear why,
but there seems to be some dependency between child
objects in the DT_NEEDED list of a particular object. For example,
if in tst-rcu-list.so libboost_unit_test_framework.so.1.69.0
is unloaded first and libboost_filesystem.so.1.69.0 is unloaded later,
the std::string destructor called by the libboost_filesystem.so.1.69.0 FINI
function causes the page fault shown above.

The commit #4d24b90aa38d90bdcf5ebecf7a5badeca3b657eb from 6 years ago
made the process of unloading objects symmetrical to the loading one,
but not quite exactly, which seems to be the culprit.

This patch refines the `elf::unload_needed()` method by changing the order
in which the dependent object references are released.

First we release references held by the `_used_by_resolve_plt_got` set,
mostly because they are added after DT_NEEDED is processed and `_needed` is
populated, and we want unloading to be symmetrical.

Secondly we release references from the `_needed` vector
by iterating and removing elements starting at the end. This
also makes the unloading behavior symmetrical to the order
in which the objects from DT_NEEDED were loaded in the first place.

In conclusion, with this patch all the unit tests now
pass when linked against boost 1.69.0 from a Fedora 30 host.

Signed-off-by: Waldemar Kozaczuk <[email protected]>
Message-Id: <[email protected]>
nyh pushed a commit that referenced this pull request Mar 29, 2021
This patch improves the debugging experience by making gdb
show correct stack traces that look similar to what one can
see in the x86_64 port. It also solves sporadic errors when
iterating over the list of threads using 'osv info threads',
where gdb would break like so:

`Python Exception <class 'gdb.error'> Attempt to assign to an unmodifiable value.:
 Error occurred in Python: Attempt to assign to an unmodifiable value.'

So in essence this patch adds a new function implemented in assembly - 'thread_main' -
which calls 'thread_main_c'. The address of this new function is also what we
set the 'pc' field of the thread _state to, instead of 'thread_main_c'. The main
benefit of adding this extra redirection is that we can add proper CFI
(Call Frame Information) directive - '.cfi_undefined x30` - to force gdb not
to look up the stack for call frames beyond the 'thread_main' one. This
is similar to what we do in x86_64 port.

So in the end, besides fixing sporadic gdb errors when iterating over
threads, we also get much better, correct-looking stack traces
that end with (or, read from the bottom, start with):

Before the patch:
-----------------
#0  reschedule_from_interrupt () at arch/aarch64/sched.S:54
#1  0x00000000402e36d4 in sched::cpu::schedule () at core/sched.cc:229
#2  0x00000000402e79a0 in sched::thread::wait (this=<optimized out>) at core/sched.cc:1273
#3  sched::thread::do_wait_until<sched::noninterruptible, sched::thread::dummy_lock, sched::cpu::load_balance()::<lambda()> > (mtx=<synthetic pointer>..., pred=...)
    at include/osv/sched.hh:1101
#4  sched::thread::wait_until<sched::cpu::load_balance()::<lambda()> > (pred=...) at include/osv/sched.hh:1112
#5  sched::cpu::load_balance (
    this=0x40700960 <construction vtable for std::ostream-in-std::__cxx11::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >+24>)
    at core/sched.cc:730
#6  0x00000000402e72b4 in sched::thread::main (this=0xffffa000408a39a0) at core/sched.cc:1267
#7  sched::thread_main_c (t=0xffffa000408a39a0) at arch/aarch64/arch-switch.hh:161
#8  0x00000000402c04e0 in reschedule_from_interrupt () at arch/aarch64/sched.S:50
#9  0x0000000000000000 in ?? ()

After the patch:
----------------
#0  reschedule_from_interrupt () at arch/aarch64/sched.S:54
#1  0x00000000402e36d4 in sched::cpu::schedule () at core/sched.cc:229
#2  0x00000000402e79a0 in sched::thread::wait (this=<optimized out>) at core/sched.cc:1273
#3  sched::thread::do_wait_until<sched::noninterruptible, sched::thread::dummy_lock, sched::cpu::load_balance()::<lambda()> > (mtx=<synthetic pointer>..., pred=...)
    at include/osv/sched.hh:1101
#4  sched::thread::wait_until<sched::cpu::load_balance()::<lambda()> > (pred=...) at include/osv/sched.hh:1112
#5  sched::cpu::load_balance (
    this=0x407009b8 <construction vtable for std::ostream-in-std::__cxx11::basic_ostringstream<char, std::char_traits<char>, std::allocator<char> >+24>)
    at core/sched.cc:730
#6  0x00000000402e72b4 in sched::thread::main (this=0xffffa000408a39a0) at core/sched.cc:1267
#7  sched::thread_main_c (t=0xffffa000408a39a0) at arch/aarch64/arch-switch.hh:162
#8  0x000000004020b788 in thread_main () at arch/aarch64/entry.S:112

Refs #1128

Signed-off-by: Waldemar Kozaczuk <[email protected]>
Message-Id: <[email protected]>
nyh pushed a commit that referenced this pull request Mar 29, 2021
…ssembly

This patch adds the necessary CFI (Call Frame Information) directives
to the assembly code in entry.S and macros.S to make stack traces
for page faults and the interrupt handler look correct. Without them,
debugging scenarios that involve those handlers become very painful.

Before the patch:
-----------------
#0  mmu::file_vma::fault (this=0xffffa00040c9b280, addr=17592186171392, ef=0x200000100110) at core/mmu.cc:1708
#1  0x00000000401da05c in mmu::vm_fault (addr=17592186171392, ef=0x200000100110) at core/mmu.cc:1354
#2  0x000000004020ba9c in page_fault (ef=0x200000100110) at arch/aarch64/mmu.cc:50
#3  0x000000004020b82c in handle_mem_abort () at arch/aarch64/entry.S:161
Backtrace stopped: previous frame identical to this frame (corrupt stack?)

After the patch:
----------------
#0  mmu::file_vma::fault (this=0xffffa00040c9b280, addr=17592186171392, ef=0x200000100110) at core/mmu.cc:1708
#1  0x00000000401da05c in mmu::vm_fault (addr=17592186171392, ef=0x200000100110) at core/mmu.cc:1354
#2  0x000000004020ba9c in page_fault (ef=0x200000100110) at arch/aarch64/mmu.cc:50
#3  <signal handler called>
#4  elf::object::_dynamic_tag (tag=29, this=0xffffa00040dcf800) at core/elf.cc:649
#5  elf::object::dynamic_exists (tag=29, this=0xffffa00040dcf800) at core/elf.cc:641
#6  elf::object::load_needed (this=0xffffa00040dcf800, loaded_objects=std::vector of length 1, capacity 1 = {...}) at core/elf.cc:1068
#7  0x00000000401f1bd4 in elf::program::load_object (this=this@entry=0xffffa00040897ba0, name="/libvdso.so", extra_path=std::vector of length 0, capacity 0,
    loaded_objects=std::vector of length 1, capacity 1 = {...}) at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/shared_ptr_base.h:1321
#8  0x00000000401f24fc in elf::program::get_library (this=0xffffa00040897ba0, name="libvdso.so", extra_path=std::vector of length 0, capacity 0, delay_init=false)
    at core/elf.cc:1459
#9  0x0000000040314ea0 in osv::application::prepare_argv (this=this@entry=0xffffa00040dcf610, program=program@entry=0xffffa00040897ba0) at core/app.cc:361
#10 0x0000000040315694 in osv::application::application(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&, bool, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > > const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) (this=0xffffa00040dcf610, command="/tests/tst-hello.so", args=..., new_program=false, env=0x0, main_function_name="main", post_main=...)
    at core/app.cc:182
...

Before the patch:
-----------------
#0  virtio::blk::ack_irq (this=0xffffa00040a91c00) at drivers/virtio-blk.cc:103
#1  operator() (__closure=0xffffa00040950e40) at drivers/virtio-blk.cc:143
#2  std::__invoke_impl<bool, virtio::blk::blk(virtio::virtio_device&)::<lambda(pci::device&)>::<lambda()>&> (__f=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/invoke.h:60
#3  std::__invoke_r<bool, virtio::blk::blk(virtio::virtio_device&)::<lambda(pci::device&)>::<lambda()>&> (__fn=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/invoke.h:141
#4  std::_Function_handler<bool(), virtio::blk::blk(virtio::virtio_device&)::<lambda(pci::device&)>::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/std_function.h:291
#5  0x000000004020c784 in std::function<bool ()>::operator()() const (this=<optimized out>)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/std_function.h:622
#6  interrupt_table::invoke_interrupt (this=this@entry=0x40780968 <idt>, id=id@entry=36) at arch/aarch64/exceptions.cc:135
#7  0x000000004020c95c in interrupt (frame=<optimized out>) at arch/aarch64/exceptions.cc:167
#8  0x000000004020b930 in entry_irq () at arch/aarch64/entry.S:175
#9  0x000000004020b930 in entry_irq () at arch/aarch64/entry.S:175
#10 0x000000004020b930 in entry_irq () at arch/aarch64/entry.S:175
#11 0x000000004020b930 in entry_irq () at arch/aarch64/entry.S:175
#12 0x000000004020b930 in entry_irq () at arch/aarch64/entry.S:175
#13 0x000000004020b930 in entry_irq () at arch/aarch64/entry.S:175
... (infinite number of frames)

After the patch:
----------------
#0  virtio::blk::ack_irq (this=0xffffa00040a91c00) at drivers/virtio-blk.cc:103
#1  operator() (__closure=0xffffa00040950e40) at drivers/virtio-blk.cc:143
#2  std::__invoke_impl<bool, virtio::blk::blk(virtio::virtio_device&)::<lambda(pci::device&)>::<lambda()>&> (__f=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/invoke.h:60
#3  std::__invoke_r<bool, virtio::blk::blk(virtio::virtio_device&)::<lambda(pci::device&)>::<lambda()>&> (__fn=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/invoke.h:141
#4  std::_Function_handler<bool(), virtio::blk::blk(virtio::virtio_device&)::<lambda(pci::device&)>::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/std_function.h:291
#5  0x000000004020c784 in std::function<bool ()>::operator()() const (this=<optimized out>)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/std_function.h:622
#6  interrupt_table::invoke_interrupt (this=this@entry=0x40780968 <idt>, id=id@entry=36) at arch/aarch64/exceptions.cc:135
#7  0x000000004020c95c in interrupt (frame=<optimized out>) at arch/aarch64/exceptions.cc:167
#8  <signal handler called>
#9  sched::cpu::do_idle (this=this@entry=0xffff8000407fe040) at core/sched.cc:454
#10 0x00000000402e6b78 in sched::cpu::idle (this=0xffff8000407fe040) at core/sched.cc:472
#11 0x00000000402e6ba0 in operator() (__closure=<optimized out>, __closure=<optimized out>) at core/sched.cc:165
#12 std::__invoke_impl<void, sched::cpu::init_idle_thread()::<lambda()>&> (__f=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/invoke.h:60
#13 std::__invoke_r<void, sched::cpu::init_idle_thread()::<lambda()>&> (__fn=...) at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/invoke.h:153
#14 std::_Function_handler<void(), sched::cpu::init_idle_thread()::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...)
    at build/downloaded_packages/aarch64/gcc/install/usr/include/c++/10/bits/std_function.h:291
#15 0x00000000402e72b4 in sched::thread::main (this=0xffffa000408eea48) at core/sched.cc:1267
#16 sched::thread_main_c (t=0xffffa000408eea48) at arch/aarch64/arch-switch.hh:162
#17 0x000000004020b788 in thread_main () at arch/aarch64/entry.S:114

Refs #1128

Signed-off-by: Waldemar Kozaczuk <[email protected]>
Message-Id: <[email protected]>
nyh pushed a commit that referenced this pull request Nov 14, 2021
After 4 years of asking on the mailing list, this patch adds basic
support for generating flame graph "folded" intermediate-format files.
These files can then be fed to flamegraph.pl to generate the SVG
files to visualize the captured trace file.

The recipe to generate the flame graphs for OSv:

1. In one terminal, run an OSv image with the option '--sampler', like so:
  './scripts/run.py --sampler 999'. This enables the
"trace_sampler_tick" tracepoint, which periodically captures the
backtraces on each CPU at the specified frequency (in the example, 999 times
a second).

2. In another terminal, run the stress test and extract the trace
file using './scripts/trace.py extract'.

3. Run './scripts/trace.py prof-flame > /path/to/the/flame/file' to
convert the trace file into the folded flame format.

4. Finally, run flamegraph.pl /path/to/the/flame/file >
/path/to/the/svg/file to generate the SVG file.
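
The "folded" format consumed by flamegraph.pl is simply one line per unique
stack: frame names joined with ';' from outermost to innermost, followed by a
sample count. A minimal sketch of that conversion (illustrative only, not the
actual scripts/trace.py code):

```python
from collections import Counter

def fold_samples(samples):
    # samples: a list of backtraces, each a list of frame names ordered
    # from outermost frame (e.g. the thread's main) to innermost.
    counts = Counter(';'.join(stack) for stack in samples)
    # flamegraph.pl accepts "frame1;frame2;... count" lines.
    return '\n'.join(f'{stack} {n}' for stack, n in sorted(counts.items()))
```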

Please note that at this moment the trace script is not able to
resolve the application symbols, so the generated flame graph is not as
helpful as it could be:

sched::thread::main();__invoke_r<void, virtio::net::net(virtio::virtio_device&)::<lambda()>&>;__invoke_impl<void, virtio::net::net(virtio::virtio_device&)::<lambda()>&>;std::_Function_handler<void (), virtio::net::net(virtio::virtio_device&)::{lambda()#1}>::_M_invoke(std::_Any_data const&);virtio::net::receiver() 342
17592192727762;17592191756991;17592192722413;17592192722076 118
18446744069951476397 130
sched::thread::main();__invoke_r<void, virtio::net::net(virtio::virtio_device&)::<lambda()>&>;__invoke_impl<void, virtio::net::net(virtio::virtio_device&)::<lambda()>&>;std::_Function_handler<void (), virtio::net::net(virtio::virtio_device&)::{lambda()#1}>::_M_invoke(std::_Any_data const&);virtio::net::receiver();classifier::post_packet(mbuf*) 134
536930194 117
536871396 179

Signed-off-by: Waldemar Kozaczuk <[email protected]>
Message-Id: <[email protected]>
nyh pushed a commit that referenced this pull request May 11, 2022
The manifest skeleton files, like usr.manifest.skel, have an entry
referencing libgcc_s_dir that looks like this:

/usr/lib/libgcc_s.so.1: %(libgcc_s_dir)s/libgcc_s.so.1

This placeholder actually gets expanded quite late in the build process,
by upload_manifest.py. The unfortunate consequence of this is that
loader.py, which reads usr.manifest during debugging, does not have
any logic to load libgcc_s.so.1. And this makes stack traces
look useless in those cases that involve libgcc_s.so.1.
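
The %(libgcc_s_dir)s placeholder is plain Python printf-style mapping
interpolation, so expanding it early when writing usr.manifest amounts to a
single '%' operation. A sketch under that assumption (the real logic lives in
scripts/module.py and upload_manifest.py; the host path below is hypothetical):

```python
def expand_manifest_line(line, params):
    # "%(libgcc_s_dir)s" is a printf-style mapping key, so applying a
    # dict with the detected directory yields a concrete path.
    return line % params

line = "/usr/lib/libgcc_s.so.1: %(libgcc_s_dir)s/libgcc_s.so.1"
# Hypothetical host path; the build autodetects the real one.
print(expand_manifest_line(line, {'libgcc_s_dir': '/usr/lib64'}))
```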

So this patch slightly changes scripts/build and scripts/module.py
to expand %(libgcc_s_dir)s when writing to build/release/usr.manifest.
As a result, the stack trace of the crash I have been working
on looks much more reasonable:

(gdb) bt
#0  0x00000000403047c2 in processor::cli_hlt () at arch/x64/processor.hh:247
#1  arch::halt_no_interrupts () at arch/x64/arch.hh:48
#2  osv::halt () at arch/x64/power.cc:29
#3  0x0000000040239504 in abort (fmt=fmt@entry=0x405b1e93 "Aborted\n") at runtime.cc:142
#4  0x0000000040202e80 in abort () at runtime.cc:106
#5  0x000010000002b6b6 in ?? ()
#6  0x000010000003f5cb in _Unwind_Resume ()
#7  0x0000100000062daa in ?? ()
#8  0x0000100000075b5c in boost::execution_monitor::vexecute(boost::function<void ()> const&) ()
#9  0x000010000007f0a9 in boost::unit_test::framework::init(bool (*)(), int, char**) ()
#10 0x000010000009254d in boost::unit_test::unit_test_main(bool (*)(), int, char**) ()
#11 0x000000004039d021 in osv::application::run_main (this=0xffffa00000bd8c10) at core/app.cc:416
#12 0x000000004039d22d in operator() (app=<optimized out>, __closure=0x0) at core/app.cc:236
#13 _FUN () at core/app.cc:238
#14 0x00000000403d089a in operator() (__closure=0xffffa00000d57800) at libc/pthread.cc:116
#15 std::__invoke_impl<void, pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::<lambda()>&> (__f=...) at /usr/include/c++/11/bits/invoke.h:61
#16 std::__invoke_r<void, pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::<lambda()>&> (__fn=...) at /usr/include/c++/11/bits/invoke.h:154
#17 std::_Function_handler<void(), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::<lambda()> >::_M_invoke(const std::_Any_data &) (__functor=...) at /usr/include/c++/11/bits/std_function.h:290
#18 0x000000004036b5ae in sched::thread::main (this=0xffff800000f6a040) at core/sched.cc:1267
#19 sched::thread_main_c (t=0xffff800000f6a040) at arch/x64/arch-switch.hh:325
#20 0x00000000402fda43 in thread_main () at arch/x64/entry.S:116

Signed-off-by: Waldemar Kozaczuk <[email protected]>
Message-Id: <[email protected]>
wkozaczuk added a commit that referenced this pull request Oct 16, 2022
This last patch of the series adds a new tracepoint - mmu_vm_stack_fault -
which, when enabled, allows one to see how a particular app triggers
stack page faults. The tracepoint captures the stack fault address,
the thread id, and the number of the page (0 being the 1st page).

Please note this does not capture the 1st page of the stack (page_no 0), as this one
is pre-faulted by the parent thread that creates the new one.

./scripts/run.py -e /tests/tst-pipe.so --trace=mmu_vm_stack_fault --trace-backtrace -H
./scripts/trace.py extract && ./scripts/trace.py list -bl

0xffff8000016b7040 >init            0         0.002215401 mmu_vm_stack_fault   thread=32, addr=0x00002000000ff9d0, page_no=1
  mmu::vm_fault(unsigned long, exception_frame*)
  page_fault
  ex_pf
  std_malloc(unsigned long, unsigned long)
  malloc
  operator new(unsigned long)
  do_main_thread(void*)
  std::_Function_handler<void (), pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, pthread_private::thread_attr const*)::{lambda()#1}>::_M_invoke(std::_Any_data const&) __invoke_impl<void, pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::<lambda()>&>__invoke_r<void, pthread_private::pthread::pthread(void* (*)(void*), void*, sigset_t, const pthread_private::thread_attr*)::<lambda()>&> _M_invoke
  sched::thread::main()
  thread_main_c

...

0xffff8000016b7040 >init            0         0.084799151 mmu_vm_stack_fault   thread=32, addr=0x00002000000f8440, page_no=8
  mmu::vm_fault(unsigned long, exception_frame*)
  page_fault
  ex_pf
  memory::page_pool::l1::alloc_page()
  untracked_alloc_page
  memory::alloc_page()
  std_malloc(unsigned long, unsigned long)
  malloc
  operator new(unsigned long)
  lookup
  sys_lstat
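
A quick way to see how deep an app's stacks actually get is to aggregate the
page_no values per thread from the 'trace.py list' output shown above. A small
helper along those lines (hypothetical; it only assumes the line format of the
listing above):

```python
import re
from collections import defaultdict

# Matches the tracepoint fields in lines like:
# "... mmu_vm_stack_fault   thread=32, addr=0x00002000000f8440, page_no=8"
LINE_RE = re.compile(r'mmu_vm_stack_fault\s+thread=(\d+),.*page_no=(\d+)')

def max_stack_page_per_thread(lines):
    # Map thread id -> deepest stack page observed faulting.
    deepest = defaultdict(int)
    for line in lines:
        m = LINE_RE.search(line)
        if m:
            tid, page_no = int(m.group(1)), int(m.group(2))
            deepest[tid] = max(deepest[tid], page_no)
    return dict(deepest)
```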

Signed-off-by: Waldemar Kozaczuk <[email protected]>
This pull request was closed.