`#[should_panic(expected = ...)]` non-functional. #5054
Comments
This seems like another example of the recent panicking global task pool issue we encountered in our CI.
The panic message will be different depending on whether the panic happens on the main thread or on one of the other threads.
@mockersf is that intentionally the case, and if so, is there any way to change it? I could be wrong, but it seems that at a bare minimum there should be some way to lock things down to always run on the main thread for testing. I'd probably go further and say that having the same error randomly show different messages depending on the thread setup is a really unergonomic corner, probably worth addressing even outside of test runtime.
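(As a standalone illustration of the behaviour described above, independent of Bevy and with made-up names: when the panic originates on a spawned thread, the test's main thread only observes a secondary panic whose message differs from the original, so `expected = ...` no longer matches.)

```rust
// Plain std illustration: the worker panics with "boom", but the panic that
// unwinds the test thread comes from `unwrap()` on the join error, so its
// message does not contain "boom" and this test fails.
#[test]
#[should_panic(expected = "boom")]
fn expected_message_lost_when_panic_is_on_another_thread() {
    std::thread::spawn(|| panic!("boom"))
        .join()
        .unwrap();
}
```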
I think the thread pool should catch panics using `catch_unwind`.
If this is on 0.7 and not main, this seems to be more of an issue with our general panic handling story, though the global task pool changes definitely exacerbate it. I agree with @bjorn3 here that we should definitely use catch/resume unwind, though the behavior, or at least the default behavior, likely needs some more discussion and thought. If we catch the panic, we need to forward it to the main thread to resume it, but the main thread runs of its own accord and needs to explicitly call into the TaskPool to retrieve these panics. In the meantime, it's unclear what we should do with the panicking thread.
It may be required to either stem all panics and never forward them to the main thread, force local tasks to be Send so they can be properly migrated in the case of a panic, or have the main thread operate as a worker thread in the pool and regularly yield to allow handling these panics.
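A rough sketch of the catch-and-forward idea being discussed, using only std primitives; the channel, names, and structure here are hypothetical and not Bevy's actual `TaskPool` API:

```rust
use std::any::Any;
use std::panic::{catch_unwind, resume_unwind};
use std::sync::mpsc;
use std::thread;

fn main() {
    // Channel over which worker threads forward caught panic payloads.
    let (panic_tx, panic_rx) = mpsc::channel::<Box<dyn Any + Send>>();

    let worker = thread::spawn(move || {
        // Catch the unwind on the worker so the thread (and the pool) survives...
        if let Err(payload) = catch_unwind(|| {
            panic!("task failed"); // stand-in for a panicking task
        }) {
            // ...and forward the original payload to the main thread.
            let _ = panic_tx.send(payload);
        }
    });

    // The worker thread itself did not panic; only the task inside it did.
    worker.join().expect("worker thread exited cleanly");

    // The main thread must explicitly check for forwarded panics; this is
    // the "call into the TaskPool to retrieve these panics" step from above.
    if let Ok(payload) = panic_rx.try_recv() {
        // Re-raise on the main thread with the original payload/message.
        resume_unwind(payload);
    }
}
```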
Unintended consequence of how we handle threads! As mentioned by James, panicking in threads can cause deadlocks (it used to happen quite often with asset loader tasks that panicked and deadlocked all the IO threads). I think we should panic on the main thread when a panic happens somewhere else, at least until we have much better error recovery than we do now.
Just remembered this related PR: #2307
@FraserLee as a workaround, you could add the system to a single-threaded stage.
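For reference, a minimal sketch of that workaround against the Bevy 0.7 stage API; the stage label, system, and panic message are made up:

```rust
use bevy::prelude::*;

// Hypothetical system under test.
fn system_that_panics() {
    panic!("expected message");
}

#[test]
#[should_panic(expected = "expected message")]
fn panics_on_the_main_thread() {
    let mut app = App::new();
    // A single-threaded stage runs its systems on the calling thread,
    // so the panic (and its message) surfaces on the test's own thread.
    app.add_stage(
        "single_threaded_test_stage", // hypothetical stage label
        SystemStage::single_threaded().with_system(system_that_panics),
    );
    app.update();
}
```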
# Objective

Right now, the `TaskPool` implementation allows panics to permanently kill worker threads upon panicking. This is currently non-recoverable without using a `std::panic::catch_unwind` in every scheduled task. This is poor ergonomics and even poorer developer experience. This is exacerbated by bevyengine#2250 as these threads are global and cannot be replaced after initialization.

Removes the need for temporary fixes like bevyengine#4998. Fixes bevyengine#4996. Fixes bevyengine#6081. Fixes bevyengine#5285. Fixes bevyengine#5054. Supersedes bevyengine#2307.

## Solution

The current solution is to wrap `Executor::run` in `TaskPool` with a `catch_unwind`, discarding the potential panic (a brief sketch of this follows the changelog below). This was taken straight from [smol](https://github.com/smol-rs/smol/blob/404c7bcc0aea59b82d7347058043b8de7133241c/src/spawn.rs#L44)'s current implementation.

~~However, this is not entirely ideal as:~~

- ~~the panic is not signaled to the awaiting task. We would need to change `Task<T>` to use `async_task::FallibleTask` internally, and even then it doesn't signal *why* it panicked, just that it did.~~ (See below.)
- ~~no error is logged of any kind~~ (See below.)
- ~~it's unclear if it drops other tasks in the executor~~ (it does not)
- ~~This allows the ECS parallel executor to keep chugging even though a system's task has been dropped. This inevitably leads to deadlock in the executor.~~ Assuming we don't catch the unwind in ParallelExecutor, this will naturally kill the main thread.

### Alternatives

A final solution likely will incorporate elements of any or all of the following.

#### ~~Log and Ignore~~

~~Log the panic, drop the task, keep chugging. This only addresses the discoverability of the panic. The process will continue to run, probably deadlocking the executor. tokio's detached tasks operate in this fashion.~~ Panics already do this by default, even when caught by `catch_unwind`.

#### ~~`catch_unwind` in `ParallelExecutor`~~

~~Add another layer catching system-level panics into the `ParallelExecutor`. How the executor continues when a core dependency of many systems fails to run is up for debate.~~ `async_task::Task` bubbles up panics already; this will transitively push panics all the way to the main thread.

#### ~~Emulate/Copy `tokio::JoinHandle` with `Task<T>`~~

~~`tokio::JoinHandle<T>` bubbles up the panic from the underlying task when awaited. This can be transitively applied across other APIs that also use `Task<T>`, like `Query::par_for_each` and `TaskPool::scope`, bubbling up the panic until it's either caught or it reaches the main thread.~~ `async_task::Task` bubbles up panics already; this will transitively push panics all the way to the main thread.

#### Abort on Panic

The nuclear option. Log the error, abort the entire process on any thread in the task pool panicking. This definitely avoids any additional infrastructure for passing the panic around, and might actually lead to more efficient code as any unwinding is optimized out. However, it gives the developer zero options for dealing with the issue, is a seemingly poor choice for debuggability, and prevents graceful shutdown of the process. Potentially an option for handling very low-level task management (a la bevyengine#4740).
Roughly takes the shape of:

```rust
struct AbortOnPanic;

impl Drop for AbortOnPanic {
    fn drop(&mut self) {
        // The guard is only dropped if the task unwound before the
        // `forget` below ran, so a panic here aborts the whole process.
        std::process::abort();
    }
}

let guard = AbortOnPanic;
// Run task
std::mem::forget(guard); // task completed normally: defuse the guard
```

---

## Changelog

Changed: `bevy_tasks::TaskPool`'s threads will no longer terminate permanently when a task scheduled onto them panics.

Changed: `bevy_tasks::Task` and `bevy_tasks::Scope` will propagate panics in the spawned tasks/scopes to the parent thread.
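A minimal sketch of the `catch_unwind` wrapping described in the Solution section above, modeled on the linked smol code; the function and its structure are illustrative, not Bevy's exact `TaskPool` internals:

```rust
use std::panic::{catch_unwind, AssertUnwindSafe};

use async_executor::Executor;
use futures_lite::future;

// Body of a worker thread: keep driving the executor, and if a scheduled
// task panics, swallow the unwind here and restart the loop instead of
// letting the thread die.
fn worker_thread(executor: &Executor<'static>) {
    loop {
        let _ = catch_unwind(AssertUnwindSafe(|| {
            future::block_on(executor.run(future::pending::<()>()))
        }));
        // The payload is discarded here; per the PR description above,
        // `async_task` bubbles the panic up to the awaiting `Task` handle.
    }
}
```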
Bevy version
Release 0.7
What you did
Repeatedly run the following minimum code sample with `cargo test`.
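(The original code sample is not preserved in this text. A minimal test along these lines, with all names hypothetical, exercises the same kind of setup on Bevy 0.7.)

```rust
use bevy::prelude::*;

// Hypothetical stand-in for the original sample: a system that panics with
// a known message, scheduled through the normal (parallel) executor.
fn panicking_system() {
    panic!("something specific");
}

#[test]
#[should_panic(expected = "something specific")]
fn panic_message_should_match() {
    let mut app = App::new();
    app.add_plugins(MinimalPlugins);
    app.add_system(panicking_system);
    app.update();
}
```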
About 9/10 times the test passes as expected.
What went wrong
About 1/10 times the test fails (the failure message itself is not preserved here). The correct panic is triggered, but the wrong panic message is reported.
Additional information
I've tried both running tests with `cargo test -- --test-threads=1` and using the `serial_test` crate; neither helps.