ksmbd.control shutdown is hung and causing 100% cpu indefinitely #318

Open
putnam opened this issue May 20, 2024 · 14 comments
Comments

putnam commented May 20, 2024

I was just testing out ksmbd and have had a lot of trouble getting Mac clients to work reliably. On one of my restarts (systemctl restart ksmbd) it hung in such a way that ksmbd.control is now using 80-100% CPU and the ksmbd kernel thread shows as blocked in iowait. No clients are connected right now. I tried attaching to ksmbd.control with strace, but nothing is printed to stdout and strace itself becomes unresponsive and has to be SIGKILL'ed.
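
For reference, the attach attempt was roughly the following (using the PID from the ps output below); it printed nothing and had to be killed:

strace -f -p "$(pidof ksmbd.control)"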

# ps aux|grep ksmbd
root     3993336  0.0  0.0      0     0 ?        I    May19   0:00 [kworker/22:3-ksmbd-io]
root     4001544  0.0  0.0      0     0 ?        I    May19   0:00 [kworker/18:0-ksmbd-io]
root     4001591  0.0  0.0      0     0 ?        D    May19   0:02 [ksmbd:58379]
root     4011752  0.0  0.0      0     0 ?        I    00:13   0:00 [kworker/3:1-ksmbd-io]
root     4013966 84.1  0.0   5584  1920 ?        Rs   00:20   7:58 /usr/sbin/ksmbd.control --shutdown
root     4017007  0.0  0.0   6352  1536 pts/5    S+   00:30   0:00 grep ksmbd

I am on Linux kernel 6.7.12 on Debian testing.

I have kernel debug logging set to all (see the command sketch after the log excerpt), and nothing has been printed since this group of messages from immediately after the restart:

[Mon May 20 00:20:31 2024] ksmbd: kill command received
[Mon May 20 00:20:31 2024] ksmbd: releasing socket
[Mon May 20 00:20:31 2024] ksmbd: releasing socket
[Mon May 20 00:20:31 2024] ksmbd: releasing socket
[Mon May 20 00:20:31 2024] ksmbd: releasing socket
[Mon May 20 00:20:31 2024] ksmbd: releasing socket
[Mon May 20 00:20:31 2024] ksmbd: releasing socket
[Mon May 20 00:20:35 2024] ksmbd: Unable to close RPC pipe 1
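
For reference, ksmbd's debug classes are toggled through sysfs; this is roughly how I enabled everything (class names may vary a little by kernel version):

# enable all ksmbd debug classes
echo all > /sys/class/ksmbd-control/debug
# reading the attribute back should list which classes are currently enabled
cat /sys/class/ksmbd-control/debug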
namjaejeon (Member) commented:

Okay. Is this problem reproducible, or is it one that cannot be reproduced reliably?

namjaejeon (Member) commented:

I would like to reproduce it on my side so I can check it in detail.

putnam (Author) commented May 20, 2024

I'm not sure yet -- it happened after just a few restart attempts, and the only clients that had accessed it while it was up were a current macOS client and a Windows 11 client (both over TCP). I still have the repro active right now, if you have any suggestions for gathering more debug info.

namjaejeon (Member) commented:

Could you please tell me how you restart ksmbd?

putnam (Author) commented May 20, 2024

Each time, I used systemctl restart ksmbd on my Debian install. It looks like it runs /usr/sbin/ksmbd.control --shutdown, which is what is using 100% CPU right now. The ksmbd kernel thread is in the D (uninterruptible sleep / iowait) state.

I can't attach to ksmbd.control with gdb or strace; both just hang.
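
In case it helps, the kernel-side stack of the stuck process can usually still be read through procfs even when ptrace hangs (sketch, using the PID from the ps output above; needs root):

cat /proc/4013966/stack   # kernel stack of the stuck ksmbd.control
cat /proc/4013966/wchan   # kernel function it is currently sleeping in, if any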

putnam (Author) commented May 20, 2024

This may be interesting -- I looked at the journal for the ksmbd service across my couple of restarts, and here is the state:

May 20 00:20:40 server systemd[1]: Stopping ksmbd.service - ksmbd userspace daemon...
May 20 00:20:40 server ksmbd[4001570]: [ksmbd.mountd(worker)/4001570]: INFO: Terminated
May 20 00:20:40 server ksmbd[4001569]: [ksmbd.mountd(manager)/4001569]: INFO: Terminated
May 20 00:22:10 server systemd[1]: ksmbd.service: Stopping timed out. Terminating.
May 20 00:23:40 server systemd[1]: ksmbd.service: State 'stop-sigterm' timed out. Killing.
May 20 00:23:40 server systemd[1]: ksmbd.service: Killing process 4013966 (ksmbd.control) with signal SIGKILL.
May 20 00:25:11 server systemd[1]: ksmbd.service: Processes still around after SIGKILL. Ignoring.
May 20 00:26:41 server systemd[1]: ksmbd.service: State 'final-sigterm' timed out. Killing.
May 20 00:26:41 server systemd[1]: ksmbd.service: Killing process 4013966 (ksmbd.control) with signal SIGKILL.
May 20 00:28:11 server systemd[1]: ksmbd.service: Processes still around after final SIGKILL. Entering failed mode.
May 20 00:28:11 server systemd[1]: ksmbd.service: Failed with result 'timeout'.
May 20 00:28:11 server systemd[1]: ksmbd.service: Unit process 4013966 (ksmbd.control) remains running after unit stopped.
May 20 00:28:11 server systemd[1]: Stopped ksmbd.service - ksmbd userspace daemon.
May 20 00:28:11 server systemd[1]: ksmbd.service: Consumed 6min 757ms CPU time, 1.5M memory peak, 0B memory swap peak.
May 20 00:28:11 server systemd[1]: ksmbd.service: Found left-over process 4013966 (ksmbd.control) in control group while starting unit. Ignoring.
May 20 00:28:11 server systemd[1]: ksmbd.service: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
May 20 00:28:11 server systemd[1]: Starting ksmbd.service - ksmbd userspace daemon...
May 20 00:28:11 server ksmbd[4016225]: [ksmbd.mountd/4016225]: INFO: Wrote `/run/ksmbd.lock'
May 20 00:28:11 server ksmbd[4016225]: [ksmbd.mountd(manager)/4016225]: INFO: Started worker
May 20 00:28:11 server ksmbd.mountd[4016224]: [ksmbd.mountd/4016224]: INFO: Started manager
May 20 00:28:11 server systemd[1]: Started ksmbd.service - ksmbd userspace daemon.
May 20 00:28:11 server ksmbd[4016226]: [ksmbd.mountd(worker)/4016226]: ERROR: nl_recv() failed, check dmesg, error: Invalid input data or parameter
May 20 00:28:11 server ksmbd[4016226]: [ksmbd.mountd(worker)/4016226]: INFO: Terminated
May 20 00:28:11 server ksmbd[4016225]: [ksmbd.mountd(manager)/4016225]: INFO: Terminated
May 20 00:28:11 server systemd[1]: ksmbd.service: Main process exited, code=exited, status=1/FAILURE
May 20 00:29:41 server systemd[1]: ksmbd.service: State 'stop-sigterm' timed out. Killing.
May 20 00:29:41 server systemd[1]: ksmbd.service: Killing process 4013966 (ksmbd.control) with signal SIGKILL.
May 20 00:31:11 server systemd[1]: ksmbd.service: Processes still around after SIGKILL. Ignoring.
May 20 00:32:42 server systemd[1]: ksmbd.service: State 'final-sigterm' timed out. Killing.
May 20 00:32:42 server systemd[1]: ksmbd.service: Killing process 4013966 (ksmbd.control) with signal SIGKILL.
May 20 00:34:12 server systemd[1]: ksmbd.service: Processes still around after final SIGKILL. Entering failed mode.
May 20 00:34:12 server systemd[1]: ksmbd.service: Failed with result 'exit-code'.
May 20 00:34:12 server systemd[1]: ksmbd.service: Unit process 4013966 (ksmbd.control) remains running after unit stopped.
May 20 00:34:12 server systemd[1]: ksmbd.service: Consumed 6min 946ms CPU time, 1.7M memory peak, 0B memory swap peak.

Note that PID 4013966 is the instance still in the task list.

It seems the systemd service file is written such that a restart will:

  1. Stop ksmbd by running ExecStop (/usr/sbin/ksmbd.control --shutdown).
  2. Wait the default stop timeout, then try to kill the hung ksmbd.control with SIGTERM and SIGKILL.
  3. After several SIGKILL attempts it gives up, but then tries starting the unit again anyway.
  4. The new start runs ksmbd.mountd.

The systemd service:

[Unit]
Description=ksmbd userspace daemon
[email protected]
Wants=network-online.target
[email protected] network.target network-online.target

[Service]
Type=forking
PIDFile=/run/ksmbd.lock
ExecStart=/usr/sbin/ksmbd.mountd
ExecReload=/usr/sbin/ksmbd.control --reload
ExecStop=/usr/sbin/ksmbd.control --shutdown

[Install]
WantedBy=multi-user.target
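
As an aside, the long stop sequence above can be bounded with a drop-in (e.g. TimeoutStopSec). This does not fix the underlying hang -- the stuck ksmbd.control cannot be killed while it is blocked in the kernel -- it only makes systemd give up sooner. A minimal sketch:

# create a drop-in that shortens the stop timeout for ksmbd.service
sudo mkdir -p /etc/systemd/system/ksmbd.service.d
printf '[Service]\nTimeoutStopSec=30s\n' | sudo tee /etc/systemd/system/ksmbd.service.d/stop-timeout.conf
sudo systemctl daemon-reload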

Based on timestamps in dmesg and systemctl, it seems the following error occurred right around the time shutdown was called:

[Mon May 20 00:20:35 2024] ksmbd: Unable to close RPC pipe 1

I don't know what state it was in, but perhaps there is a code path here that leaves it stuck in iowait and blocks the shutdown?

putnam (Author) commented May 20, 2024

Looking at lsof:

(screenshot of lsof output for ksmbd.control)

You can see the sysfs path for "kill_server" is still open.

That suggests we're in this code path, before the fd is closed:
https://github.com/cifsd-team/ksmbd-tools/blob/master/control/control.c#L80

I'm not sure why the CPU would be spinning at 100%, but that is what's still happening.
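
For context, that control.c path appears to boil down to opening /sys/class/ksmbd-control/kill_server, writing to it, and closing the fd; in shell terms it is roughly equivalent to the following (the exact string written is an assumption from my reading of control.c, I believe "hard"):

# assumption from my reading of control.c in ksmbd-tools; do not run this casually,
# it asks the kernel module to shut the server down
echo hard > /sys/class/ksmbd-control/kill_server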

putnam (Author) commented May 20, 2024

I got a stack trace from sysrq.
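
For reference, this was collected via the standard sysrq interface, roughly as follows (needs root; output lands in the kernel log):

echo 1 > /proc/sys/kernel/sysrq    # make sure sysrq is fully enabled
echo t > /proc/sysrq-trigger       # dump the stacks of all tasks
echo w > /proc/sysrq-trigger       # dump uninterruptible (blocked) tasks only
journalctl -k | tail               # or dmesg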

ksmbd.control:

May 20 04:02:46 server kernel: task:ksmbd.control   state:S stack:0     pid:4013966 tgid:4013966 ppid:1      flags:0x00004006
May 20 04:02:46 server kernel: Call Trace:
May 20 04:02:46 server kernel:  <TASK>
May 20 04:02:46 server kernel:  ? __schedule+0x98c/0xb20
May 20 04:02:46 server kernel:  ? lock_timer_base+0x61/0x80
May 20 04:02:46 server kernel:  ? __mod_timer+0x11f/0x370
May 20 04:02:46 server kernel:  ? schedule+0x32/0xd0
May 20 04:02:46 server kernel:  ? schedule_timeout+0xa2/0x160
May 20 04:02:46 server kernel:  ? __pfx_process_timeout+0x10/0x10
May 20 04:02:46 server kernel:  ? ksmbd_conn_transport_destroy+0xae/0xd0 [ksmbd]
May 20 04:02:46 server kernel:  ? kill_server_store+0x5c/0x110 [ksmbd]
May 20 04:02:46 server kernel:  ? kernfs_fop_write_iter+0x136/0x1d0
May 20 04:02:46 server kernel:  ? vfs_write+0x23d/0x400
May 20 04:02:46 server kernel:  ? ksys_write+0x6f/0xf0
May 20 04:02:46 server kernel:  ? do_syscall_64+0x64/0x120
May 20 04:02:46 server kernel:  ? kmem_cache_free+0x22/0x380
May 20 04:02:46 server kernel:  ? srso_return_thunk+0x5/0x5f
May 20 04:02:46 server kernel:  ? do_sys_openat2+0x97/0xe0
May 20 04:02:46 server kernel:  ? count_memcg_events.constprop.0+0x1a/0x30
May 20 04:02:46 server kernel:  ? srso_return_thunk+0x5/0x5f
May 20 04:02:46 server kernel:  ? exit_to_user_mode_prepare+0x40/0x1e0
May 20 04:02:46 server kernel:  ? srso_return_thunk+0x5/0x5f
May 20 04:02:46 server kernel:  ? syscall_exit_to_user_mode+0x22/0x40
May 20 04:02:46 server kernel:  ? srso_return_thunk+0x5/0x5f
May 20 04:02:46 server kernel:  ? do_syscall_64+0x70/0x120
May 20 04:02:46 server kernel:  ? srso_return_thunk+0x5/0x5f
May 20 04:02:46 server kernel:  ? exit_to_user_mode_prepare+0x40/0x1e0
May 20 04:02:46 server kernel:  ? entry_SYSCALL_64_after_hwframe+0x73/0x7b
May 20 04:02:46 server kernel:  </TASK>

And ksmbd:

May 20 04:02:46 server kernel: task:ksmbd:58379     state:D stack:0     pid:4001591 tgid:4001591 ppid:2      flags:0x00004000
May 20 04:02:46 server kernel: Call Trace:
May 20 04:02:46 server kernel:  <TASK>
May 20 04:02:46 server kernel:  __schedule+0x3da/0xb20
May 20 04:02:46 server kernel:  ? __pfx_ksmbd_conn_handler_loop+0x10/0x10 [ksmbd]
May 20 04:02:46 server kernel:  schedule+0x32/0xd0
May 20 04:02:46 server kernel:  ksmbd_conn_handler_loop+0x2ea/0x380 [ksmbd]
May 20 04:02:46 server kernel:  ? __pfx_autoremove_wake_function+0x10/0x10
May 20 04:02:46 server kernel:  kthread+0xe8/0x120
May 20 04:02:46 server kernel:  ? __pfx_kthread+0x10/0x10
May 20 04:02:46 server kernel:  ret_from_fork+0x34/0x50
May 20 04:02:46 server kernel:  ? __pfx_kthread+0x10/0x10
May 20 04:02:46 server kernel:  ret_from_fork_asm+0x1b/0x30
May 20 04:02:46 server kernel:  </TASK>

Busy work queues:

May 20 04:02:53 server kernel: Showing busy workqueues and worker pools:
May 20 04:02:53 server kernel: workqueue events: flags=0x0
May 20 04:02:53 server kernel:   pwq 70: cpus=35 node=0 flags=0x0 nice=0 active=7 refcnt=8
May 20 04:02:53 server kernel:     pending: drm_fb_helper_damage_work [drm_kms_helper], 4*iova_depot_work_func, psi_avgs_work, iova_depot_work_func
May 20 04:02:53 server kernel: workqueue events_power_efficient: flags=0x80
May 20 04:02:53 server kernel:   pwq 70: cpus=35 node=0 flags=0x0 nice=0 active=1 refcnt=2
May 20 04:02:53 server kernel:     pending: fb_flashcursor
May 20 04:02:53 server kernel: workqueue ksmbd-io: flags=0x0
May 20 04:02:53 server kernel:   pwq 8: cpus=4 node=0 flags=0x0 nice=0 active=1 refcnt=2
May 20 04:02:53 server kernel:     in-flight: 4086040:handle_ksmbd_work [ksmbd]
May 20 04:02:53 server kernel:   pwq 4: cpus=2 node=0 flags=0x0 nice=0 active=1 refcnt=2
May 20 04:02:53 server kernel:     in-flight: 4090716:handle_ksmbd_work [ksmbd]
May 20 04:02:53 server kernel: pool 4: cpus=2 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 4073865 4086039
May 20 04:02:53 server kernel: pool 8: cpus=4 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 4087357 4087359

namjaejeon (Member) commented:

@putnam Thanks for the hints. There are some suspicious points, but I would need to reproduce it in my environment to definitively fix this issue. Can this only be reproduced with a macOS client?

namjaejeon (Member) commented:

@putnam Can you run ksmbd (https://github.com/namjaejeon/ksmbd) in your environment? I want to add debug code to find a clue.

namjaejeon (Member) commented:

@putnam Please share the debug output from the following ksmbd branch:

git clone https://github.com/namjaejeon/ksmbd --branch=ksmbd-debug

putnam (Author) commented May 22, 2024

Working on it. I have some issues compiling from source on Debian trixie:

make -C /lib/modules/6.7.12-amd64/build M=/home/putnam/src/ksmbd modules
make[1]: Entering directory '/usr/src/linux-headers-6.7.12-amd64'
  CC [M]  /home/putnam/src/ksmbd/unicode.o
  CC [M]  /home/putnam/src/ksmbd/auth.o
  CC [M]  /home/putnam/src/ksmbd/vfs.o
  CC [M]  /home/putnam/src/ksmbd/vfs_cache.o
  CC [M]  /home/putnam/src/ksmbd/connection.o
  CC [M]  /home/putnam/src/ksmbd/crypto_ctx.o
  CC [M]  /home/putnam/src/ksmbd/server.o
  CC [M]  /home/putnam/src/ksmbd/misc.o
  CC [M]  /home/putnam/src/ksmbd/oplock.o
  CC [M]  /home/putnam/src/ksmbd/ksmbd_work.o
  CC [M]  /home/putnam/src/ksmbd/smbacl.o
  CC [M]  /home/putnam/src/ksmbd/ndr.o
  CC [M]  /home/putnam/src/ksmbd/mgmt/ksmbd_ida.o
  CC [M]  /home/putnam/src/ksmbd/mgmt/user_config.o
  CC [M]  /home/putnam/src/ksmbd/mgmt/share_config.o
  CC [M]  /home/putnam/src/ksmbd/mgmt/tree_connect.o
  CC [M]  /home/putnam/src/ksmbd/mgmt/user_session.o
  CC [M]  /home/putnam/src/ksmbd/smb_common.o
  CC [M]  /home/putnam/src/ksmbd/transport_tcp.o
  CC [M]  /home/putnam/src/ksmbd/transport_ipc.o
  CC [M]  /home/putnam/src/ksmbd/smb2pdu.o
  CC [M]  /home/putnam/src/ksmbd/smb2ops.o
  CC [M]  /home/putnam/src/ksmbd/smb2misc.o
make[3]: *** No rule to make target '/home/putnam/src/ksmbd/ksmbd_spnego_negtokeninit.asn1.c', needed by '/home/putnam/src/ksmbd/ksmbd_spnego_negtokeninit.asn1.o'.  Stop.
make[2]: *** [/usr/src/linux-headers-6.7.12-common/Makefile:1936: /home/putnam/src/ksmbd] Error 2
make[1]: *** [/usr/src/linux-headers-6.7.12-common/Makefile:246: __sub-make] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-6.7.12-amd64'
make: *** [Makefile:47: all] Error 2

It looks like the Makefile is missing the preprocessor step that generates the .c/.h files from the .asn1 sources. I tried looking through the repo history but could not find anything...

I also noticed the Debian kernel did not set SMB_SERVER_SMBDIRECT to yes. I want SMBDIRECT! Is it OK to build this as a module and replace the module in my mainline kernel? I would prefer not to run a custom kernel.

namjaejeon (Member) commented:

Hmm, strange... I can build it on Ubuntu.

Can you check whether the following are enabled in your kernel config?
CONFIG_ASN1=y
CONFIG_OID_REGISTRY=y
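
A quick way to check on Debian/Ubuntu, assuming the packaged kernel (which ships its config under /boot):

grep -E 'CONFIG_ASN1|CONFIG_OID_REGISTRY|CONFIG_SMB_SERVER' /boot/config-$(uname -r)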

> I also noticed the Debian kernel did not set SMB_SERVER_SMBDIRECT to yes. I want SMBDIRECT! Is it OK to build this as a module and replace the module in my mainline kernel? I would prefer not to run a custom kernel.

I'm not sure that is possible, because other modules (RDMA) would also need to be enabled in your kernel. Can you install a custom kernel for testing? Otherwise we cannot find a way to build the out-of-tree ksmbd (GitHub) with SMBDIRECT.

namjaejeon (Member) commented:

@putnam Sorry for the late response. Can you try to reproduce it with the following next branch? I have added debug prints and more updates, so you will need to turn debugging on (e.g. echo conn > /sys/class/ksmbd-control/debug).

git clone https://github.com/namjaejeon/ksmbd --branch=next
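
After cloning, roughly the following should work, assuming the usual out-of-tree build flow for that repo (please check its README for the exact steps):

cd ksmbd
make                  # builds ksmbd.ko against the running kernel's headers
sudo make install     # assumption: the repo provides an install target
# reload the module and enable connection debugging before reproducing:
sudo modprobe -r ksmbd
sudo modprobe ksmbd
echo conn | sudo tee /sys/class/ksmbd-control/debug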
