Async IO #1081
Comments
I don't see why this couldn't just remain in third-party external crates.
It can. Hence the 'community library' and 'libs' tags both.
Depends on who is actually maintaining the crate. In my opinion it should be at least some of the core members, because once async and sync are not on the same page anymore, it can lead to confusion or, even worse, broken projects. What I mean is this: it will be hard to keep up to date, and collaborators may not have dug into the core of Rust, so it will either go in a different direction or need someone to sync it with the synchronous version of the core.
The problem with "leaving it up to third party crates" is not having a blessed implementation that all libraries can interoperate with. This problem has already happened a few times now: Ruby and Python both have many competing and incompatible asynchronous IO libraries (gevent/twisted, celluloid/eventmachine). In and of itself that doesn't sound so bad until you realise the huge amount of stuff that gets built on top of said libraries. When you then aren't able to use powerful libraries with each other because they belong to different "async" camps, things get sad pretty quickly. Contrast this with C#, a language with built-in async primitives that also ships most of the higher-level async-integrated code (HTTP client etc.): there is a single blessed solution, every library builds on top of it, they all interoperate, and everyone is happy. I think it's super important to have async IO in core or blessed in some way to avoid fragmentation.
But I think there should be language-level support for some key features that are important to async programming. That includes:
Yes, we can already build a Node.js-style, callback-based async library -- that makes no sense. We need language-level support to build a modern one.
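To make the callback point concrete, here is a rough, hypothetical sketch of what a Node.js-style callback API tends to look like in Rust today. Nothing here is from a real crate: `read_file_async` is made up, and the "async" is faked with a spawned thread; the point is only how quickly composition turns into nested closures without language support.

```rust
use std::io;
use std::thread;
use std::time::Duration;

/// Hypothetical callback-based "async" read: run the blocking call on a
/// background thread and hand the result to the callback when it finishes.
fn read_file_async<F>(path: String, callback: F)
where
    F: FnOnce(io::Result<Vec<u8>>) + Send + 'static,
{
    thread::spawn(move || {
        let result = std::fs::read(&path);
        callback(result);
    });
}

fn main() {
    // Sequencing just two reads already forces nesting and moving the
    // first result into the second closure -- classic callback hell.
    read_file_async("a.txt".to_string(), |a| {
        read_file_async("b.txt".to_string(), move |b| {
            match (a, b) {
                (Ok(a), Ok(b)) => println!("{} bytes total", a.len() + b.len()),
                (a, b) => eprintln!("read failed: {:?}", a.err().or(b.err())),
            }
        });
    });

    // Toy example only: give the background threads time to finish.
    thread::sleep(Duration::from_millis(200));
}
```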
I think it's important to distinguish traditional async I/O from use cases requiring the standard C [...]. I'd like to see the current synchronous I/O API extended to support cancellation without necessarily exposing the underlying [...].
A minor correction to the "Python-styled" comment: it's technically [...]. Regarding [...]: not that bandwagons are always the best reason to choose a direction, but [...]
Hear, hear!
Being somebody familiar with Python and its async story, but not as familiar with compiled static languages, it would be helpful to me (and perhaps others) if somebody could comment on something that I'm familiar with from Python. In Python 3.5, [...]
From @nathanaeljones's comment on rust-lang/rust#6842:
Now, what is "expensive"? I'm not sure. (I'm in the same boat as @ryanhiebert. I program mostly in Python. I was made aware of Rust by @mitsuhiko's blog posts.)
@gotgenes The expense depends on how much thread-local storage you're using. I know that it's low enough now that new APIs are async-only. This really depends on the language runtime (and the operating system); I don't think much can be learned about the performance implications by looking at other languages.
From @flaub:
I agree with this. Even trivial programs might suffer from hacks due to the inability to interrupt synchronous calls. (See rust-lang/rust#26446.) I think the language should support asynchronous IO, personally, but if it were really too complex to add to the API, synchronous IO should be made programmatic.
Would be cool to have something like GJ in the core.
Please first put in the straightforward wrapper around select/pselect. I understand you also want to build something better, but my first experience with Rust was hearing that it was 1.0 and trying to do a project that involved using pseudoterminals, very close to something I'd already done in C, which involves waiting for input from the user and from a subprocess simultaneously. It immediately got bogged down in a rabbit hole of reverse-engineering macros from Linux system headers and corner cases of FFI, ending up much, much more difficult than it had any right to be.
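For concreteness, here is a minimal sketch of the kind of thin wrapper being asked for, written against the libc crate (assumed as a dependency), Unix-only, and not a proposal for any particular std API shape:

```rust
use std::io;
use std::os::unix::io::RawFd;

/// Block until at least one of the given file descriptors is readable and
/// return the ones that are ready. A thin wrapper over select(2); all the
/// unsafety of the C interface stays inside this one function.
fn wait_readable(fds: &[RawFd]) -> io::Result<Vec<RawFd>> {
    unsafe {
        let mut set: libc::fd_set = std::mem::zeroed();
        libc::FD_ZERO(&mut set);
        let mut nfds = 0;
        for &fd in fds {
            libc::FD_SET(fd, &mut set);
            nfds = nfds.max(fd + 1);
        }
        // Null timeout: block until something becomes readable.
        let rc = libc::select(
            nfds,
            &mut set,
            std::ptr::null_mut(),
            std::ptr::null_mut(),
            std::ptr::null_mut(),
        );
        if rc < 0 {
            return Err(io::Error::last_os_error());
        }
        let mut ready = Vec::new();
        for &fd in fds {
            if libc::FD_ISSET(fd, &mut set) {
                ready.push(fd);
            }
        }
        Ok(ready)
    }
}

fn main() -> io::Result<()> {
    // E.g. wait on stdin and a pty master fd at the same time; stdin (fd 0)
    // alone is used here just so the example runs anywhere.
    let ready = wait_readable(&[0])?;
    println!("readable fds: {:?}", ready);
    Ok(())
}
```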
@jimrandomh Keep in mind that [...]
Lack of complete support in Windows didn't stop Python from using it.
I guess this belongs here: https://medium.com/@paulcolomiets/asynchronous-io-in-rust-36b623e7b965
👍 I doubt we could standardize an Async IO API for all Rust applications without making it part of the core library... and Async IO seems (to me) to be super important, even if it's just a fallback to a [...]. I believe that a blessed / standard Async IO API is essential in order to promote Rust as a network programming alternative to Java, C, and other network languages (even, perhaps, Python or Ruby). Also, considering Async IO would probably benefit the programming of a browser, this would help us keep Mozilla invested. ... Then again, I'm new to Rust, so I might have missed an existing solution (and no, [...]).
We could standardize the interface in the core and let lib developers do the implementations. That way everyone would use that one "blessed" interface, but we could have multiple competing implementations (it may be a good idea, I don't know :)). Also there could be multiple async-API abstraction levels, just like in JS:

async-await:

```js
async function myFunc() {
  const data = await loadStuffAsync();
  return data.a + data.b;
}
```

promises:

```js
function myFunc() {
  return loadStuffAsync().then(data => data.a + data.b);
}
```

callbacks (i hate them ;)):

```js
function myFunc(cb) {
  loadStuffAsync((err, data) => {
    if (err) {
      return cb(err);
    }
    cb(null, data.a + data.b);
  });
}
```

streams:

```js
tcpSocket
  .pipe(decodeRequest)  // byte stream ->[decodeRequest]-> Request object stream
  .pipe(handleRequest)  // Request object stream ->[handleRequest]-> Response object stream
  .pipe(encodeResponse) // Response object stream ->[encodeResponse]-> byte stream
  .pipe(tcpSocket)
  .on('error', handleErrors);
```

Or is there already a nice stream implementation in Rust? Capable of...
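For comparison, a rough sketch of what the promise-like level looks like in Rust with the futures crate's 0.1-style combinators. `load_stuff_async` and `Data` are hypothetical stand-ins, and `future::ok` replaces real non-blocking IO, so this only shows the shape of the API:

```rust
// Assumes futures = "0.1" in Cargo.toml.
extern crate futures;

use futures::future::{self, Future};

struct Data {
    a: u32,
    b: u32,
}

/// Hypothetical async operation. A real one would come from an event loop
/// (e.g. tokio) instead of an already-completed future.
fn load_stuff_async() -> Box<dyn Future<Item = Data, Error = ()>> {
    Box::new(future::ok::<Data, ()>(Data { a: 1, b: 2 }))
}

/// The promise-style chain: no explicit callbacks handed to the IO layer,
/// just combinators on the returned future.
fn my_func() -> Box<dyn Future<Item = u32, Error = ()>> {
    Box::new(load_stuff_async().map(|data| data.a + data.b))
}

fn main() {
    // wait() drives the future on the current thread; a real program would
    // hand it to an executor / event loop instead.
    let sum = my_func().wait().unwrap();
    println!("{}", sum);
}
```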
Looking over the code for the [...], it seems to me that unsafe code should be limited, as much as possible, to Rust's core and FFI implementations. The "trust me" paradigm is different when developers are asked to trust Rust's core team vs. when they are asked to trust in third parties. I doubt that competitive implementations, as suggested by @chpio, would do a better job at promoting a performant solution... although competition could, possibly, be used to select the most performant solution for the underlying core library. Ruby on Rails is a good example of how a less performant solution (although more comfortably designed) could win in a competitive environment.
There's nothing wrong with unsafe code. A crate that uses it shouldn't be discouraged. Unsafe is required whenever memory safety is sufficiently complicated that the compiler cannot reason about it. In this specific case, though, unsafe is used because Rust demands all FFI code be marked unsafe: the compiler cannot reason about functions defined by other languages. You will never have code that uses epoll without unsafe (even if that unsafety were eventually tucked into a module in libstd).
@seanmonstar - On the main part, I agree with your assessment. However... Rust's main selling point is safety, and I do believe that forcing third parties to write unsafe code hurts the sales pitch. Also, unsafe code written by third parties isn't perceived as being as trustworthy as unsafe code within the core library. I'm aware that it's impossible to use the epoll and kqueue APIs without unsafe code, and this is part of the reason I believe that writing an Async IO library would help utilize low-level system calls while promoting Rust's main feature (safety). Having said that, I'm just one voice. Both opinions have their pros and cons and both are legitimate.
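To make the epoll point above concrete, here is a minimal, Linux-only sketch using the libc crate. Even this much cannot be written without an `unsafe` block, simply because every call crosses the FFI boundary; the choice of stdin and the one-second timeout are arbitrary for the example:

```rust
use std::io;

fn main() -> io::Result<()> {
    unsafe {
        // Create an epoll instance.
        let epfd = libc::epoll_create1(0);
        if epfd < 0 {
            return Err(io::Error::last_os_error());
        }

        // Ask to be told when stdin (fd 0) becomes readable.
        let mut ev = libc::epoll_event {
            events: libc::EPOLLIN as u32,
            u64: 0, // user data slot; unused here
        };
        if libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, 0, &mut ev) < 0 {
            return Err(io::Error::last_os_error());
        }

        // Wait up to one second for a single event.
        let mut events = [libc::epoll_event { events: 0, u64: 0 }; 1];
        let n = libc::epoll_wait(epfd, events.as_mut_ptr(), 1, 1000);
        if n < 0 {
            return Err(io::Error::last_os_error());
        }
        println!("{} fd(s) ready", n);

        libc::close(epfd);
    }
    Ok(())
}
```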
I'm not sure this is the place, and maybe I need to open a separate issue somewhere, but since we don't seem to have it, I'd rate some sort of abstraction over at least socket select as super important. I got to this issue by looking for that and finding other issues that linked here indirectly from 2014; since I see at least one other comment here saying the same thing, I figured I'd add my two cents. Before I go on, I should admit that I'm still on the outside looking in; I really, really want to use Rust and plan to do so in the immediate future, but haven't yet. My primary language is C++ and my secondary is Python.
It turns out that Python proves quite the contrary. There was [...]. At the end of the day, implementing [...]
As mentioned earlier, Python has already moved on from [...]. I'd like to second pointing out curio as an interesting new model for async in Python.
This is interesting. I've never heard of asyncore before now, but it looks like a very complicated way to use a select call. I'm not surprised that it didn't become popular, especially given the 1999 release date (I found one source placing it at Python 1.5.2, but can't find official confirmation). I'm not very convinced that it's good evidence that I'm wrong about Python proving the fragmentation point. I'm not saying that I'm right, just that I need more convincing before dropping my viewpoint as incorrect. In my opinion, something okay with many protocols is better than 5 or 6 options, each more amazing than the last, but each supporting different protocols.
It seems that [...]
Thank you so much @brettcannon!
One thing that might be worth highlighting about Python's experience so far: we actually started out using [...]
I just want to second reading the blog post by @njsmith as it explains why Python might be shifting how we implement event loops while not having to change [...]
@njsmith The futures-rs effort (with tokio on top) is readiness-based (i.e. poll), not using callbacks.
@eddyb: futures-rs certainly seems to use callbacks, in the sense that I see lots of [...]
@njsmith Those are adapters, e.g. imagine [...]. This is akin to how [...]
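To make the readiness-based point concrete, here is a small self-contained sketch -- not the actual futures-rs API, just its general shape -- showing how a `map`-style adapter stores a closure, yet the closure only runs inside `poll` once the inner future reports readiness, rather than being handed to the I/O source as a completion callback:

```rust
/// A deliberately simplified, poll-based future trait: the executor asks
/// "are you ready?" and the future answers; nobody hands a callback down
/// to the I/O source. (The real futures-rs trait also handles task wakeup,
/// which is omitted here.)
enum Poll<T> {
    Ready(T),
    NotReady,
}

trait SimpleFuture {
    type Item;
    fn poll(&mut self) -> Poll<Self::Item>;
}

/// A future that becomes ready after being polled a fixed number of times,
/// standing in for "the socket is not readable yet".
struct ReadyAfter {
    remaining: u32,
    value: u32,
}

impl SimpleFuture for ReadyAfter {
    type Item = u32;
    fn poll(&mut self) -> Poll<u32> {
        if self.remaining == 0 {
            Poll::Ready(self.value)
        } else {
            self.remaining -= 1;
            Poll::NotReady
        }
    }
}

/// The `map` adapter: it stores a closure (which is why futures code is
/// full of closures), but the closure runs synchronously inside `poll`
/// once the inner future is ready -- it is never registered as a callback.
struct Map<F, G> {
    inner: F,
    f: Option<G>,
}

impl<F, G, U> SimpleFuture for Map<F, G>
where
    F: SimpleFuture,
    G: FnOnce(F::Item) -> U,
{
    type Item = U;
    fn poll(&mut self) -> Poll<U> {
        match self.inner.poll() {
            Poll::Ready(v) => {
                let f = self.f.take().expect("polled after completion");
                Poll::Ready(f(v))
            }
            Poll::NotReady => Poll::NotReady,
        }
    }
}

fn main() {
    // A toy "executor": poll in a loop until the future is ready.
    let mut fut = Map {
        inner: ReadyAfter { remaining: 3, value: 40 },
        f: Some(|x: u32| x + 2),
    };
    loop {
        match fut.poll() {
            Poll::Ready(v) => {
                println!("result = {}", v);
                break;
            }
            Poll::NotReady => {
                // A real executor would park here until woken by the event
                // loop; we just spin for the example.
            }
        }
    }
}
```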
@njsmith I actually came here to provide a link to your async/await blog post to augment discussion, as it was incredibly thoughtful, but you beat me to it! I'll add that @mitsuhiko also recently wrote a blog post on async in Python. Maybe particularly pertinent to this thread are his thoughts on the overloading of iterators.
The post by @mitsuhiko is specifically about [...]
You forgot [...]
Any update in 2017?
And https://tokio.rs/ generally
Can https://tokio.rs/ be used together with the above async/await? I'm not seeing an example.
Yeap, Tokio is also just using futures. futures is the library at the heart of all async operations in Rust.
https://github.com/alexcrichton/futures-await/blob/master/examples/echo.rs
@c0b https://internals.rust-lang.org/t/help-test-async-await-generators-coroutines/5835 might be the best place to start if you're trying to use the async/await experiment currently on nightly.
If it's of any interest, I've been busy implementing this idea in Perl, and observing that Python, C#, JavaScript and Dart all also do basically the same thing. Perl: https://metacpan.org/pod/Future::AsyncAwait
If you have something of an "official language overview" page similar to those above, I'd like to add it to my collection.
Since discussion has moved on from here to tokio.rs, futures-rs, and other RFCs, I'll go ahead and close this issue.
@Centril I think it would be useful to link those issues for readers stumbling across this one. Do you happen to have them at hand?
Not so much about adding async IO directly to the standard library, but rather enabling it as crates:
In reply to @Centril,
This is what I (as well as some of us) have been trying to prevent from happening for years. Lack of language-level async IO would cause incompatibility among third-party crates, as I have already stated before. Additionally, since Rust offers FFI, lack of language-level async IO would cause problems for C code trying to utilize concurrent programming. In fact, nearly all languages have async IO, but only those that provide language-level concurrency (e.g. Go, Python 3, C#, etc.) have a real chance of winning the "language for the cloud" battle. We all agree that Rust is a powerful language, capable of far more than building a browser engine. But we don't want to see Rust lose the cloud battlefield, do we? Really sorry if my language is offensive to you. But I disagree with your opinion that "we are relying on third-party crates". True concurrency requires compiler-level support that third-party crates cannot offer. Update: Thank you for your response below. ❤️
Note: I'm only describing things as they are, not as they ought to be. =) Anyone is free to file full RFC proposals for libstd / language-level async IO and we will judge those on their merits.
Note that "other RFCs" includes https://github.com/rust-lang/rfcs/blob/master/text/2033-experimental-coroutines.md, which basically is async/await syntax (albeit slightly less pretty since it's done with procedural macros for now). That seems like "language-level async IO" to me, even if it's not-so-secretly sugar over the futures library. |
That's OK. It's surface syntax sugar over futures in Perl as well. :) Probably true of many languages.
@m13253 In case you hadn't heard, here are more up-to-date proposals:
Rust currently only includes synchronous IO in the standard library. Should we have async IO as well, or leave that to an external crate? What would the design look like?
Moved from rust-lang/rust#6842
Related: #388