Add a trait for abstracting over executors #93
Sounds like a good idea.
I've created a demo here of what I have in mind. I'll expand it over the coming weeks; this is more of a "request for comments". The design is especially hamstrung by our use of Rust v1.63 and our lack of GATs. I'm not sure yet whether GATs/TAIT is the preferred option for moving forward here.
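To make the GAT tension concrete, here is a rough sketch (the names are hypothetical, not taken from the demo) of the shape a GAT-based spawn trait could take. GATs only stabilized in Rust 1.65, which is why the v1.63 MSRV rules this form out for now:

```rust
use std::future::Future;

trait Spawner {
    /// One handle type per spawned output type; awaiting the handle
    /// yields the task's output.
    type Task<T>: Future<Output = T>
    where
        T: 'static;

    fn spawn<F>(&self, fut: F) -> Self::Task<F::Output>
    where
        F: Future + 'static,
        F::Output: 'static;
}
```

On 1.63 the usual workarounds are a boxed task handle or a trait that is generic over the future type, trading an allocation or a clunkier bound for the missing GAT.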
I've expanded the demo here to include a "race" mechanism (solving smol-rs/smol#292) and slightly more refined traits. I would appreciate comments before I move forward. Specifically, I have questions about how we should deploy this trait, and I see a few options for doing so.
I actually think we should consider deprecating the notion of a local executor, in favor of using async concurrency operators. This might seem counter-intuitive to propose as it breaks with the ecosystem status quo, but I think it would probably make concurrency in (async) Rust less confusing.

## Executors exist to enable parallelism

The framing of "tasks are like async/.await versions of threads" is, I think, one that I popularized when we were developing async-std. In sync Rust both concurrency and parallelism are provided via the thread APIs. If you want to concurrently schedule two operations, your best option is typically to use threads. In async Rust, however, parallelism and concurrency are unbundled. We can execute any number of futures concurrently, and they don't have to be parallel. Under this model the idea of a "single-threaded executor" also makes little sense, as executors are only needed to enable parallelism.

## Parallelizable futures

I've written a demo of what a "parallelizable future" can look like as part of the tasky crate. The idea is that we can use the same concurrency operations regardless of whether the underlying futures can be moved to different threads or not:

```rust
use tasky::prelude::*;
use futures_concurrency::prelude::*;

let a = Client::get("https://example.com").par(); // parallelizable future `a`
let b = Client::get("https://example.com").par(); // parallelizable future `b`
let (a, b) = (a, b).try_join().await?; // concurrently await both parallelizable futures
```
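For contrast, here is a minimal sketch of the concurrent-but-not-parallel case using only futures-concurrency; `fetch` is a hypothetical stand-in for `Client::get`, not an API from tasky or smol:

```rust
use futures_concurrency::prelude::*;

// Hypothetical request helper standing in for `Client::get`.
async fn fetch(url: &str) -> Result<String, std::io::Error> {
    Ok(url.to_string())
}

async fn example() -> Result<(), std::io::Error> {
    // No `.par()` and no executor: both futures are driven concurrently
    // on the current task by `try_join` itself.
    let a = fetch("https://example.com");
    let b = fetch("https://example.com");
    let (_a, _b) = (a, b).try_join().await?;
    Ok(())
}

fn main() -> Result<(), std::io::Error> {
    futures_lite::future::block_on(example())
}
```

Swapping `.par()` in, as in the snippet above, changes only where the futures run, not how they are awaited.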
Thank you for taking the time to explain this to me. I read your blog posts but I might not have understood what you mean 100%, so forgive me for my ignorance.
There are two things that this model does not allow that I think should be considered in executors. Firstly, from a look at the tasky demo, I'm worried about how waking behavior scales as the number of futures grows. Secondly, I'm not sure if this model can handle spawning a dynamic number of tasks, for example:

```rust
fn main() {
    let state = RefCell::new(State::new());
    let ex = smol::LocalExecutor::new();
    smol::block_on(ex.run(async {
        let socket = smol::net::TcpListener::bind("0.0.0.0:80").await.unwrap();
        while let Ok((client, _)) = socket.accept().await {
            let state = &state;
            // one task per connection; the number of tasks is unbounded
            ex.spawn(async move {
                do_something(client, state).await;
            }).detach();
        }
    }));
}
```

You don't know how many tasks you'll spawn at a time here; it could be two, it could be two million. If we want it to be single-threaded (i.e. the `RefCell` state is never sent across threads), the local executor handles this today, and I'm not sure how this model would.
I actually think having an explicit executor argument would work well here:

```rust
use tasky::prelude::*;
use futures_concurrency::prelude::*;
use smol::LocalExecutor;

let ex = LocalExecutor::new();
let a = Client::get("https://example.com").par(&ex); // parallelizable future `a`
let b = Client::get("https://example.com").par(&ex); // parallelizable future `b`
let (a, b) = (a, b).try_join().await?; // concurrently await both parallelizable futures
```

I actually think this would be preferable to the design I currently have. It makes it so the actual concurrency is left to `futures-concurrency`. Heck, if you like the idea I can just add postfix notation to my demo.
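For illustration, here is one hedged sketch of how such a postfix adapter could be written directly over async-executor; the `ParExt` trait, its name, and its bounds are hypothetical rather than an existing API:

```rust
use std::future::Future;
use async_executor::{LocalExecutor, Task};

/// Hypothetical postfix adapter: `fut.par(&ex)` spawns `fut` on the given
/// executor and hands back the `Task`, which is itself a `Future` and can
/// be combined with the usual futures-concurrency operators.
trait ParExt: Future + Sized {
    fn par<'a>(self, ex: &LocalExecutor<'a>) -> Task<Self::Output>
    where
        Self: 'a,
        Self::Output: 'a,
    {
        ex.spawn(self)
    }
}

impl<F: Future> ParExt for F {}
```

The same shape with `Send` bounds added would cover `Executor`, which is roughly where a shared executor trait starts to pay off.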
@notgull I appreciate your questions; let me try and answer them one by one.

## Scaling waking behavior

So on the point of waking time: futures-concurrency tracks readiness per child with individual wakers, so waking one future should not require re-polling all of the others.

## Dynamic concurrency

One issue with the example you've posted is that it is unstructured - calling `detach` lets tasks outlive the scope that spawned them.

In `futures-concurrency` we can already manage a dynamic set of futures from a single task. We should still do better though: I want to leverage concurrent stream processing for this, along these lines:

```rust
fn main() {
    // `ex` is assumed to be the same executor as in the earlier example.
    smol::block_on(ex.run(async {
        let state = State::new();
        let socket = smol::net::TcpListener::bind("0.0.0.0:80").await.unwrap();
        socket
            .incoming()
            .concurrent()
            .for_each(async |stream| do_something(stream, &state).await)
            .await;
    }));
}
```

Note here the absence of explicit `spawn` and `detach` calls. This work isn't quite implemented yet though; I've got a draft PR I need to tinker with some more to give it shape. But I've already used the core mechanics of futures-concurrency to prove that we can express these exact semantics.

## On a shared executor trait

I'd love it if we could start adopting postfix spawn notations more widely, encouraging us to use the same concurrency operations for both parallel and concurrent execution. If you believe that a shared executor trait could help with that, then yes, absolutely we should pursue that!

Something I feel might still be missing from our executor models is non-workstealing spawn APIs. I'm not super sure about this one, but imagine a `spawn` function with a signature along these lines:

```rust
pub fn spawn<Fut>(f: Fut) -> Fut::Output
where
    Fut: IntoFuture + Send + 'static,
    <Fut as IntoFuture>::Output: Send + 'static,
```

Notably: we have something which may return a future, and that is `Send + 'static` - but the future it converts into is not required to be `Send`, so once polling starts it never has to move to another thread.
@notgull oh no, that's not good. I can assure you both should be doing better than that. Clearly something isn't going right though. Could you share the code you used for the benchmarks so I can investigate further?
I've uploaded the benchmarks here. Specifically, the benchmark is:

```rust
for _ in 0..1_000_000 {
    group.insert(future::ready(1));
}

block_on(async {
    while let Some(x) = group.next().await {
        black_box(x);
    }
});
```
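For reference, a self-contained approximation of that loop under stated assumptions: `group` is taken to be a futures-concurrency `FutureGroup`, `block_on` comes from futures-lite, and `black_box` from `std::hint` (the original presumably sits inside a benchmark harness):

```rust
use futures_concurrency::future::FutureGroup;
use futures_lite::{future::block_on, StreamExt};
use std::future::ready;
use std::hint::black_box;

fn main() {
    // Insert one million already-completed futures into the group...
    let mut group = FutureGroup::new();
    for _ in 0..1_000_000 {
        group.insert(ready(1));
    }
    // ...then drain it, which exercises how waking/polling scales with
    // the number of resident futures.
    block_on(async {
        while let Some(x) = group.next().await {
            black_box(x);
        }
    });
}
```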
I do think that there are some advantages to the local executor (e.g. the ability to spawn futures that aren't `Send`).
@notgull thank you for uploading the benchmark; it turns out you surfaced an issue in the behavior of `FutureGroup`.
This is not an exact science of course, but treating this as a ballpark measurement shows that…
By the way, I've added postfix notation to the demo.
In `smol`, `Executor` and `LocalExecutor` don't implement any common traits. So, it's impossible to create a common abstraction over both types. This is a blocker for notgull/smol-hyper#2 (comment) and smol-rs/smol#292.

However I think that we should have an abstraction for executors in general. We put an emphasis on a diversity of executors like `smolscale`, but we have no way of abstracting over them. Therefore I think we should have a trait for spawning futures, specifically in this crate.

A couple of potential strategies here:

- `futures-task`, which would fit the general ethos of this crate. However I think that we should avoid this, as `futures-task` doesn't really fit well with `async-executor` outside of the basics.
- A `Spawn` trait that returns a `Task` trait that is a superset of `Future`.

@smol-rs/admins Thoughts?
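As a sketch only (hypothetical names, not a committed design), the second strategy could look something like the following; the trait is generic over the future type rather than using a GAT so that it stays expressible on the Rust 1.63 MSRV discussed above:

```rust
use std::future::Future;

/// A spawned-task handle: awaitable like any future, plus extra control.
trait TaskHandle: Future {
    /// Let the task keep running in the background when the handle is dropped.
    fn detach(self);
}

/// Spawn a future of type `F`, getting back an awaitable handle.
trait Spawn<F: Future> {
    type Task: TaskHandle<Output = F::Output>;
    fn spawn(&self, fut: F) -> Self::Task;
}

// `async_executor::Task` already behaves like such a handle.
impl<T> TaskHandle for async_executor::Task<T> {
    fn detach(self) {
        async_executor::Task::detach(self)
    }
}

// Thread-safe executor: the `Send` bounds live on the impl, not the trait.
impl<'a, F> Spawn<F> for async_executor::Executor<'a>
where
    F: Future + Send + 'a,
    F::Output: Send + 'a,
{
    type Task = async_executor::Task<F::Output>;
    fn spawn(&self, fut: F) -> Self::Task {
        async_executor::Executor::spawn(self, fut)
    }
}

// Single-threaded executor: same trait, no `Send` bounds required.
impl<'a, F> Spawn<F> for async_executor::LocalExecutor<'a>
where
    F: Future + 'a,
    F::Output: 'a,
{
    type Task = async_executor::Task<F::Output>;
    fn spawn(&self, fut: F) -> Self::Task {
        async_executor::LocalExecutor::spawn(self, fut)
    }
}
```

Because the `Send` bounds sit on the impls rather than on the trait itself, both `Executor` and `LocalExecutor` can implement the same trait, which is the kind of common bound that notgull/smol-hyper#2 and smol-rs/smol#292 are missing today.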