- The easiest way to do this is mem::forget, a safe function which exists to leak memory deliberately.
- The most common real way to leak memory is to create a loop using the Rc<T> or Arc<T> reference count types. I've never seen this in any of my company's Rust code, but we don't write our own cyclic data structures, either. This is either a big deal to you, or a complete non-issue, depending on your program architecture.
Basically, "safe Rust" aims to protect you from memory corruption, undefined behavior, and data races between threads. It does not protect you from leaks, other kinds of race conditions, or (obviously) logic bugs.
According to: https://rust-lang.github.io/rfcs/3128-io-safety.html
Rust’s standard library almost provides I/O safety, a guarantee that if one part of a program holds a raw handle privately, other parts cannot access it.
According to the article:
I/O Safety: Ensuring that accepted TCP streams are properly closed without leaking connections.
These are not the same definition.
As I've mentioned several times [1], in Rust, the word "safety" is being abused to the point of causing confusion, and it looks like this is another instance of that.
Now let's take "the future" part... you seem to be impugning Async Rust for something it doesn't even purport to do in the present. What's the point of this?
You found a bug in monoio it seems... I don't see the argument you've presented as supporting the premise that "Async Rust is not safe".
The title of this submission is aimed specifically at "Async Rust", i.e. the language. The reality is that one third-party library with 4k stars on GitHub and a 0.2.4 version number has a non-memory-unsafe leak.
I'd say the title is a full-on lie.
It's really that Rust might've made a poor choice here, as the article points out:
> Async Rust makes a few core assumptions about futures:
> 1. The state of futures only change when they are polled.
> 2. Futures are implicitly cancellable by simply never polling them again.
But at the same time, maybe this is just that Rust's Futures need to be used differently here, in conjunction with a separate mechanism to manage the I/O operation that knows things need to be cancelled?
In current Rust, io_uring-like things can be safely implemented with an additional layer of abstraction. Sure, some io_uring operations can still be in flight when it looks fine to borrow the buffer; your API just has to ensure that borrowing is impossible until all operations are finished. Maybe it can return an error, or you can require something like `buf.borrow().await`. This explicit borrowing is not an alien concept in Rust (cf. RefCell etc.) and is probably the best design at the moment, but it does need dynamic bookkeeping, which some may want to eliminate.
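To make that concrete, here is a rough sketch of the "maybe it can error" variant. Every name here (IoBuf, start_op, finish_op, try_borrow) is made up for illustration; a real runtime would typically park the task instead of returning an error:

    use std::cell::{Cell, Ref, RefCell};

    // Hypothetical buffer wrapper: the runtime "checks out" the buffer for the
    // duration of an io_uring operation, and user code can only borrow it when
    // no operation is in flight.
    struct IoBuf {
        in_flight: Cell<usize>,
        data: RefCell<Vec<u8>>,
    }

    impl IoBuf {
        fn new(capacity: usize) -> Self {
            IoBuf { in_flight: Cell::new(0), data: RefCell::new(vec![0; capacity]) }
        }

        // Called by the runtime when an SQE referencing this buffer is submitted.
        fn start_op(&self) {
            self.in_flight.set(self.in_flight.get() + 1);
        }

        // Called by the runtime when the matching CQE is reaped (or the op is
        // known to be cancelled *and* acknowledged by the kernel).
        fn finish_op(&self) {
            self.in_flight.set(self.in_flight.get() - 1);
        }

        // The "maybe it can error" variant: refuse to hand out the buffer while
        // the kernel may still write into it.
        fn try_borrow(&self) -> Result<Ref<'_, Vec<u8>>, &'static str> {
            if self.in_flight.get() > 0 {
                Err("buffer still owned by an in-flight io_uring operation")
            } else {
                Ok(self.data.borrow())
            }
        }
    }

    fn main() {
        let buf = IoBuf::new(4096);
        buf.start_op();
        assert!(buf.try_borrow().is_err()); // kernel may still be writing
        buf.finish_op();
        assert!(buf.try_borrow().is_ok()); // safe to look at the bytes now
    }

The `buf.borrow().await` variant is the same idea, except the borrow would return a future that only resolves once the in-flight count drops to zero.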
NB: I have only done a little bit of Rust, but am hoping to move to it in the future. I am, however, working on C code interfacing with io_uring, and I will say doing this correctly does in fact require a bit of brainpower when writing that code.
> We just need to encourage async runtimes to implement this fix.
This likely needs async drop if you need to perform a follow-up call to cancel the outstanding tasks or close the open sockets. Async Drop is currently experimental:
Maybe cancellation itself is problematic. There’s a reason it was dropped from threading APIs and AFAIK there is no way to externally cancel a goroutine. Goroutines are like async tasks with all the details hidden from you as it’s a higher level language.
FWIW, this was also painful to do across threads in our C event loop, but there was no way around the fact that we needed it (cf. https://github.com/FRRouting/frr/blob/56d994aecab08b9462f2c8... )
Cooperative cancellation can be implemented in languages that mark their suspension points explicitly in their coroutines, like Rust, Python and C++.
I think Python's asyncio models cancellation fairly well with asyncio.CancelledError being raised from the suspension points, although you need to have some discipline to use async context managers or try/finally, and to wait for cancelled tasks at appropriate places. But you can write your coroutines with the expectation that they eventually make forward progress or exit normally (via return or exception).
It looks like Rust's cancellation model is far more blunt, if you are just allowed to drop the coroutine.
You can only drop it if you own it (and nobody has borrowed it), which means you can only drop it at an `await` point.
This effectively means you need to use RAII guard objects like in C++ in async code if you want to guarantee cleanup of external resources. But it's otherwise completely well behaved with epoll-based systems.
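As a minimal sketch of that guard pattern (Registry and Registration are made-up names, and the futures crate's block_on is used only to drive the example):

    use std::sync::{Arc, Mutex};

    // Stand-in for external state that must be cleaned up (a connection table,
    // a temp file, an entry in some tracker). All names here are illustrative.
    struct Registry {
        live: Mutex<Vec<u64>>,
    }

    // RAII guard: deregisters on Drop, so cleanup runs whether the future
    // completes normally or is dropped (cancelled) at an `.await` point.
    struct Registration {
        registry: Arc<Registry>,
        id: u64,
    }

    impl Drop for Registration {
        fn drop(&mut self) {
            self.registry.live.lock().unwrap().retain(|&entry| entry != self.id);
        }
    }

    async fn handle_request(registry: Arc<Registry>, id: u64) {
        registry.live.lock().unwrap().push(id);
        let _guard = Registration { registry: Arc::clone(&registry), id };

        // Any number of awaits can appear here; dropping the future at any of
        // them still runs `_guard`'s destructor.
        std::future::ready(()).await;
    }

    fn main() {
        let registry = Arc::new(Registry { live: Mutex::new(Vec::new()) });
        // Normal-completion path shown; a cancelled future cleans up the same way.
        futures::executor::block_on(handle_request(Arc::clone(&registry), 1));
        assert!(registry.live.lock().unwrap().is_empty());
    }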
I find that a bigger issue in my async Rust code is using Tokio-style async "streams", where a cancelled sender looks exactly like a clean "end of stream". In this case, I use something like:
    enum StreamValue<T> {
        Value(T),
        End,
    }
If I don't see StreamValue::End before the stream closes, then I assume the sender failed somehow and treat it as a broken stream (sort of like a Unix EPIPE error). This can obviously be wrapped. But any wrapper still requires the sender to explicitly close the stream when done, and not via an implicit Drop.
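A rough sketch of how that plays out, assuming a Tokio mpsc channel and the #[tokio::main] macro (the enum is repeated so the example is self-contained):

    use tokio::sync::mpsc;

    enum StreamValue<T> {
        Value(T),
        End,
    }

    // Receiver side: a channel that closes without an explicit End marker is
    // treated as a broken stream (the sender was dropped or cancelled).
    async fn drain(mut rx: mpsc::Receiver<StreamValue<u32>>) -> Result<Vec<u32>, &'static str> {
        let mut out = Vec::new();
        while let Some(item) = rx.recv().await {
            match item {
                StreamValue::Value(v) => out.push(v),
                StreamValue::End => return Ok(out), // clean shutdown
            }
        }
        Err("stream closed without End: sender cancelled or failed") // EPIPE-ish
    }

    #[tokio::main]
    async fn main() {
        let (tx, rx) = mpsc::channel(8);
        tokio::spawn(async move {
            let _ = tx.send(StreamValue::Value(1)).await;
            let _ = tx.send(StreamValue::Value(2)).await;
            let _ = tx.send(StreamValue::End).await; // explicit close, not an implicit Drop
        });
        assert_eq!(drain(rx).await.unwrap(), vec![1, 2]);
    }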
Which limits cleanup after cancellation to be synchronous, doesn't it? I often use asynchronous cleanup logic in Python (which is the whole premise of `async with`).
Sync Rust is lovely, especially with a bit of practice, and doubly so if you already care about how things are stored in memory. (And caring how things are stored is how you get speed.)
Async Rust is manageable. There's more learning curve, and you're more likely to hit an odd corner case where you need to pair for 30 minutes with the team's Rust expert.
The majority of recent Rust networking libraries are async, which is usually OK. Especially if you tend to keep your code simple anyway. But there are edge cases where it really helps to have access to Rust experience—we hit one yesterday working on some HTTP retry code, where we needed to be careful how we passed values into an async retriable block.
"So I think this is the solution we should all adopt and move forward with: io-uring controls the buffers, the fastest interfaces on io-uring are the buffered interfaces, the unbuffered interfaces make an extra copy. We can stop being mired in trying to force the language to do something impossible. But there are still many many interesting questions ahead."
Containerd maintainers soon followed Google's recommendations and updated the seccomp profile to disallow io_uring calls [2].
io_uring was also called out specifically by the kernel security team for exposing an increased attack surface, long before the Google report was released [3].
Seems like less of a Rust issue and more of a bug (or bugs) in io_uring? I suppose user-space apps can provide a bandaid fix, but ultimately it needs to be handled in the kernel.
[1] https://security.googleblog.com/2023/06/learnings-from-kctf-...
The article is more about the Rust io_uring async implementation breaking an assumption that Rust's async makes: that a Future can only get modified when it's poll()-ed.
I'm guessing that assumption came from an expectation that all async runtimes live in userland, and this newfangled kernel-backed runtime does things on its own inside the kernel, thus breaking the original assumption.
Rust went with a poll-based API and synchronous cancellation design anyway, because that fits ownership and borrowing.
Making async cancellation work safely even in presence of memory leaks (destructors don't always run) and panics remains an unsolved design problem.
I'm working with io_uring currently and have to disagree hard on that one; io_uring definitely has issues, but the one here is that it's being used incorrectly, not something wrong with io_uring itself.
The io_uring issues also break down into roughly two categories:
- lack of visibility into io_uring operations since they are no longer syscalls. This is an issue of adding e.g. seccomp and ptrace equivalents into io_uring. It's not something I'd even call a vulnerability, more of a missing feature.
- implementation correctness and concurrency issues due to its asynchronicity. It's just hard to do this correctly and bugs are being found and fixed. Some are security vulnerabilities. I'd call this a question of time for it to get stable and ready but I have no reason to believe this won't happen.
Being ALLOWED to be used badly is the major cause of unsafety.
And consider that all the reports you reply to are by serious teams. NOT EVEN THEY succeed.
That is the #1 definition of
> something wrong with io_uring itself
This isn't like the Rust-vs-C argument, where the claim is that, of two equivalently capable solutions, you should prefer the one that doesn't allow mis-use.
This is more like assembly language, or the fact that memory inside kernel rings is flat and vulnerable: those are low-level tools to facilitate low-level goals with a high risk of mis-use, and the appropriate mitigation for that risk is to build higher-level tools that intercept/prevent that mis-use.
To bring it back to the FD leak in the original post: the kernel won't stop you from forgetting to call close() either, io_uring or no.
I don't see how you stop devs from using a system call or similar incorrectly. In this case the result is an FD leak, which is no more than a DoS, and easy to chase and fix [unless the library in question is designed in such a way that the problem is unfixable, in which case abandon that library].
It's such a shame. The io_uring interface is one of the coolest features of the kernel. It's essentially an asynchronous system call interface. A sane asynchronous I/O interface that finally did it right after many tries. Sucks to see it just get blocked everywhere because of vulnerabilities. I hope whatever problems it has get fixed because I want to write software that uses io_uring and I absolutely want to run said software on my phone with Termux.
That going to the sleep branch of the select should cancel the accept? Will cancelling the accept terminate any already-accepted connections? Shouldn't it be delayed instead?
Shouldn't newly accepted connections be dropped only if the listener is dropped, rather than when the listener.accept() future is dropped? If listener.accept() is dropped, the queue should be with the listener object, and thus the event should still be available in that queue on the next listener.accept().
This seems more like a bug with the runtime than anything.
Even the blog admits that cancellation of any kind is racing with the kernel, which might complete the accept request anyway. Even if you call `.cancel()`, the queue might have an accepted connection FD in it. Even if it doesn't, it might do by the time the kernel acknowledges the cancellation.
So you now have a mistakenly-accepted connection. What do you do with it? Drop it? That seems like the wrong answer, whoever writes a loop like the one in the blog will definitely not expect some of the connections mysteriously being dropped.
This is an implementation issue with monoio that just needs more polishing. And given how hard io_uring is to get right, monoio should be given that time before being chosen to be used in production for anything.
The solution proposed in this post doesn't work, though: if the accept completes before the SQE for the cancellation is submitted, the FD will still be leaked. io-uring's async cancellation mechanism is just an optimization opportunity and doesn't synchronize anything, so it can't be relied on for correctness here. My library could have submitted a cancellation when the future drops as such an optimization, but couldn't have relied on it to ensure the accept does not complete.
This is still a suboptimal solution as you've accepted a connection, informing the client side of this, and then killed it rather than never accepting it in the first place. (Worth noting that linux (presumably as an optimisation) accepts connections before you call accept anyway so maybe this entire point is moot and we just have to live with this weird behaviour.)
Now it's true that "never accepting it in the first place" might not be possible with io_uring in some cases, but rather than hiding that under drop, the code should be up front about it and prevent dropping (not currently possible in Rust) in a situation where there might be uncompleted in-flight requests, before you've explicitly made a decision between "oh okay then, let's handle this one last request" and "I don't care, just hang up".
But I’ve written TCP servers and little frameworks, asynchronously, and this whole model seems wrong. There’s a listening socket, a piece of code that accepts connections, and a backpressure mechanism, and that entire thing operates as a unit. There is no cancellable entity that accepts sockets but doesn’t also own the listening socket.
Or one can look at this another way: after all the abstractions and libraries are peeled back, the example in the OP is setting a timeout and canceling an accept when the timeout fires. That's rather bizarre — surely the actual desired behavior is to keep listening (and accepting when appropriate) and to do the other timed work concurrently.
It just so happens that, at the syscall level, a nonblocking (polled, selected, epolled, or even just called at intervals) accept that hasn’t completed is a no-op, so canceling it doesn’t do anything, and the example code works. But it would fail in a threaded, blocking model, it would fail in an inetd-like design, and it fails with io_uring. And I really have trouble seeing linear types as the solution — the whole structure is IMO wrong.
(Okay, maybe a more correct structure would have you “await connection_available()” and then “pop a connection”, and “pop a connection” would not be async. And maybe a linear type system would prevent one from being daft, successfully popping a connection, and then dropping it by accident.)
This is the age-old distinction between a proactor and reactor async design. You can normally implement one abstraction of top of the other, but the conversion is sometimes leaky. It happens that the underlying OS "accept" facility is reactive and it doesn't map well to a pure async accept.
It breaks down in a context where you do an accept that can be canceled and you don't handle it intelligently. In a system where cancellation is synchronous enough that values won't just disappear into oblivion, one could arrange for a canceled accept that succeeded to put the accepted socket on a queue associated with the listening socket, fine. But, in general, the operation "wait for a new connection and irreversibly claim it as mine" IMO just shouldn't be done in a cancellable context, regardless of whether it's a "reactor" or a "proactor". The whole "select and, as one option, irrevocably claim a new connection" code path in the OP seems suspect to me, and the fact that it seems to work under epoll doesn't really redeem it in my book.
The issue is the lack of synchronization around cancellation, plus not handling cancel failure.
All cancellations can fail, because there is always a race between calling cancel() and the operation completing.
You have two options: synchronous cancel (block until we know whether the cancel succeeded) or async cancel (callback or other notification).
This code simply handles the race incorrectly, no need to think too hard about this.
It may be that some io_uring operations cannot be cancelled; that is a Linux limitation. I've also seen there is no async way to close sockets, which is another issue.
> This code simply handles the race incorrectly, no need to think too hard about this.
I still think the race is unnecessary. In the problematic code, there’s an operation (await accept) that needs special handling if it’s canceled. A linear type system would notice the lack of special handling and complain. But I would still solve it differently: make the sensitive operation impossible to cancel. “await accept()” can be canceled. Plain “accept” cannot. And there is no reason at all that this operation needs to be asynchronous or blocking!
(Even in Rust’s type system, one can build an “await ready_to_accept()” such that a subsequent accept is guaranteed to succeed, without races, by having ready_to_accept return a struct that implements Drop by putting the accepted socket back in the queue for someone else to accept. Or you can accept the race where you think you’re ready to accept but a different thread beat you to it and you don’t succeed.)
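Here is a toy sketch of that shape. The queue is a plain in-memory stand-in for the listener's ready queue, and all names (AcceptQueue, ReadyGuard, claim) are made up for illustration:

    use std::collections::VecDeque;
    use std::sync::{Arc, Mutex};

    // Illustrative stand-in: a real runtime would hold accepted FDs reaped
    // from the completion queue here, not a plain struct.
    struct Connection {
        fd: i32,
    }

    struct AcceptQueue {
        ready: Mutex<VecDeque<Connection>>,
    }

    // What a hypothetical `ready_to_accept().await` would return. Until
    // `claim()` is called, dropping the guard puts the connection back for
    // someone else, so cancelling the surrounding future cannot leak it.
    struct ReadyGuard {
        queue: Arc<AcceptQueue>,
        conn: Option<Connection>,
    }

    impl ReadyGuard {
        // Irrevocably take ownership of the connection: not async, so it
        // cannot be cancelled halfway.
        fn claim(mut self) -> Connection {
            self.conn.take().unwrap()
        }
    }

    impl Drop for ReadyGuard {
        fn drop(&mut self) {
            if let Some(conn) = self.conn.take() {
                self.queue.ready.lock().unwrap().push_back(conn);
            }
        }
    }

    fn main() {
        let queue = Arc::new(AcceptQueue { ready: Mutex::new(VecDeque::new()) });
        queue.ready.lock().unwrap().push_back(Connection { fd: 7 });

        // Simulate a cancelled future: the guard is dropped without claiming.
        {
            let conn = queue.ready.lock().unwrap().pop_front().unwrap();
            let _guard = ReadyGuard { queue: Arc::clone(&queue), conn: Some(conn) };
        }
        // The connection went back onto the queue instead of leaking.
        assert_eq!(queue.ready.lock().unwrap().len(), 1);
    }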
listen(2) takes a backlog parameter that is the number of queued (which I think means ack'd) but not yet popped (i.e. accept(2)'d) connections.
If the accept completes before the cancel SQE is submitted, the cancel operation will fail and the runtime will have a chance to poll the CQE in place and close the fd.
Rust assumes as part of its model that "state only changes when polled". Which is to say, it's not really "async" at all (none of these libraries are), it's just a framework for suspending in-progress work until it's ready. But "it's ready" is still a synchronous operation.
But io-uring is actually async. Your process memory state is being changed by the kernel at moments that have nothing to do with the instruction being executed by the Rust code.
Rust does not assume that state changes only when polled. Consider a channel primitive. When a message is put into a channel at the send end, the state of that channel changes; the task waiting to receive on that channel is awoken and finds the state already changed when it is polled. io-uring is really no different here.
I will replace this with a more exact description, thanks.
With io-uring the kernel writes CQEs into a ring buffer in shared memory and the user program reads them: it's literally just a bounded channel, the same atomic synchronizations, the same algorithm. There is no difference whatsoever.
The io-uring library is responsible for reading CQEs from that ring buffer and then dispatching them to the task that submitted the SQE they correspond to. If that task has cancelled its interest in this syscall, they should instead clean up the resources owned by that CQE. According to this blog post, monoio fails to do so. That's all that's happening here.
I think you're being unduly harsh here. There are a variety of voices here, of various levels of expertise. If someone says something you think is incorrect but it seems that they are speaking in good faith then the best way to handle the situation is to politely provide a correct explanation.
If you really think they are in bad faith then calmly call them out on it and leave the conversation.
I think I've read this exact convo maybe 20+ times among HN, Reddit, Github Issues and Twitter among various topics including but not limited to, async i/o, Pin, and cancellation.
My opinion is that async Rust is an incredible achievement, primarily not mine (among the people who deserve more credit than me are Alex Crichton, Carl Lerche, and Aaron Turon). My only really significant contributions were making it safe to use references in an async function and documenting how to interface with completion based APIs like io-uring correctly. So it is very frustrating to see the discourse focused on inaccurate statements about async Rust which I believe is the best system for async IO in any language and which just needs to be finished.
> No, ajross is very confidently making false descriptions of how async Rust and io-uring operate. This website favors people who sound right whether or not they are, because most readers are not well informed but have a ridiculous confidence that they can infer what is true based on the tone and language used by a commenter. I find this deplorable and think this website is a big part of why discourse around computer science is so ignorant, and I respond accordingly when someone confronts me with comments like this.
They had an inaccurate (from your point of view) understanding. That's all. If they were wrong that's not a reason to attack them. If you think they were over-confident (personally I don't) that's still not a reason to attack them.
Again, I think ajross set out their understanding in a clear and polite manner. You should correct them in a similar manner.
But that's really not what's going on here.
ajross has an understanding of the fundamentals of async that is different to withoutboats'. ajross is setting this out in a clear and polite way that seems to be totally in good faith.
withoutboats is responding in an extremely rude and insulting manner. Regardless of whether they are right or not (and given their background they probably are), they are absolutely in the wrong to adopt this tone.
Stick to technical arguments.
ajross has an understanding of the fundamentals of async, but a surface-level understanding of io-uring and Rust async. It's 100% what is going on, and again, it's something I've seen play out 100s of times.
> Rust assumes as part of its model that "state only changes when polled".
This is fundamentally wrong. If you have a surface-level understanding of how the Rust state machine works, you could make this inference, but it's wrong. This premise is wrong, so ajross' mental model is flawed - and withoutboats is at a loss trying to educate people who get the basic facts wrong and has defaulted to curt expression. And I get it - you see it a lot with academic types when someone with a wikipedia overview of a subject tries to "debate". You either have to do an impromptu lecture on 101-level material that is freely available or you just say "you're wrong". Neither tends to work.
I'm not saying I condone withoutboats' tone, but my comment is really just a funny anecdote because withoutboats engages in this often and I've seen his tone shift from the "try to educate" to the "you're just wrong" over the past 6 years.
I live in very different weeds, and I read the linked article and went "Oh, yeah, duh, it's racing on the io-uring buffer". And tried to explain that as a paradigm collision (because it is). And I guess that tries the patience of people who think hard about async[1] but never about concurrency and parallelism.
[1] A name that drives systems geeks like me bananas because everything in an async programming solution IS SYNCHRONOUS in the way we understand the word!
So, first: how is that not consistent with the contention that the bug is due to a collision in the meaning of "asynchronous"? You're describing, once more, a synchronous operation ("when ... cancel") on a data structure that doesn't support that ("the kernel writes ..." on its own schedule).
And second: the English language text of your solution has race conditions. How do you prevent reading from the buffer after the beginning of "cancel" and before the "dispatch"? You need some locking in there, which you don't in general async code. Ergo it's a paradigm clash. Developers, you among them it seems, don't really understand the requirements of a truly async process and get confused trying to shoehorn it into a "callbacks with context switch" framework like rust async.
This is an odd thing to say about someone who has written a correct solution to the problem which triggered this discussion.
Also, you really need to define what truly async means. Many layers of computing are async or not async depending on how you look at them.
APIs to deal with things like io-uring (or DMA device drivers, or shared memory media streams, etc...) tend necessarily to involve explicit locking all the way up at the top of the API to make the relationship explicit. Async can't do that, because there's nowhere to put the lock (it only understands "events"), and so you need to synthesize it (maybe by blocking the cancelling thread until the queue drains), which is complicated and error prone.
This isn't unsolvable. But it absolutely is a paradigm collision, and something I think people would be better served to treat seriously instead of calling others names on the internet.
I’m not sure what your level of experience with Rust’s async model is, but an important thing to note is that work is split between an executor and the Future itself. Executors are not “special” in any way. In fact, the Rust standard library doesn’t even provide an executor.
Futures in Rust rely on their executors to do anything nontrivial. That includes the actual interaction with the io-uring api in this case.
A properly implemented executor really should handle cases where a Future decides to cancel its interest in an event.
Executors are themselves not implemented with async code [0]. So I’m not quite able to understand your claim of a paradigm mismatch.
[0]: subexecutors like FuturesUnordered notwithstanding.
See this comment describing how an executor can properly handle cancellations.
This is why people don't like the Rust community.
With epoll, the reactor just maps FDs to Wakers, and then wakes whatever Waker is waiting on that FD. Then that task does the syscall.
With io-uring, instead the reactor is reading completion events from a queue. It processes those events, sets some state, and then wakes those tasks. Those tasks find the result of the syscall in that state that the reactor set.
This is the difference between readiness (epoll) and completion (io-uring): with readiness the task wakes when the syscall is ready to be performed without blocking, with completion the task wakes when the syscall is already complete.
When a task loses interest in an event in epoll, all that happens is it gets "spuriously awoken," so it sees there's nothing for it to do and goes back to sleep. With io-uring, the reactor needs to do more: when a task has lost interest in an incomplete event, that task needs to set the reactor into a state where instead of waking it, it will clean up the resources owned by the completion event. In the case of accept, this means closing that FD. According to your post, monoio fails to do this, and just spuriously wakes up the task, leaking the resource.
The only way this relates to Rust's async model is that all futures in Rust are cancellable, so the reactor needs to handle the possibility that interest in a syscall is cancelled or the reactor is incorrect. But its completely possible to implement an io-uring reactor correctly under Rust's async model, this is just a requirement to do so.
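A stripped-down sketch of that reactor bookkeeping, with made-up names and with the real submission/FD handling elided; the point is only the branch that closes an orphaned FD instead of waking a task that no longer cares:

    use std::collections::HashMap;
    use std::task::Waker;

    // Minimal sketch of what a completion-based reactor must track. The types
    // and method names are illustrative, not any real runtime's API.
    enum Interest {
        // A task is waiting on this submission and should be woken with the result.
        Waiting(Waker),
        // The future that submitted this op was dropped before completion.
        Cancelled,
    }

    struct Reactor {
        pending: HashMap<u64, Interest>, // keyed by SQE user_data
        results: HashMap<u64, i32>,      // completed results awaiting pickup
    }

    impl Reactor {
        // Called from a future's Drop impl when it gives up on an in-flight op.
        fn cancel(&mut self, user_data: u64) {
            if let Some(slot) = self.pending.get_mut(&user_data) {
                *slot = Interest::Cancelled;
                // An IORING_OP_ASYNC_CANCEL could also be submitted here, but
                // that is an optimization, not what guarantees correctness.
            }
        }

        // Called for every CQE reaped from the completion ring.
        fn on_completion(&mut self, user_data: u64, result: i32) {
            match self.pending.remove(&user_data) {
                Some(Interest::Waiting(waker)) => {
                    self.results.insert(user_data, result);
                    waker.wake();
                }
                Some(Interest::Cancelled) => {
                    // Nobody wants this result. For an accept, `result` is a new
                    // FD that the reactor itself must now close, or it leaks.
                    if result >= 0 {
                        close_fd(result);
                    }
                }
                None => { /* unknown completion: nothing to do */ }
            }
        }
    }

    fn close_fd(fd: i32) {
        // Placeholder for libc::close(fd) / the runtime's FD cleanup path.
        println!("reactor closing orphaned fd {fd}");
    }

    fn main() {
        let mut reactor = Reactor { pending: HashMap::new(), results: HashMap::new() };
        // Pretend an accept SQE with user_data 1 was submitted and its future
        // dropped (normally this state is reached via cancel()):
        reactor.pending.insert(1, Interest::Cancelled);
        reactor.on_completion(1, 42); // kernel accepted anyway: fd 42 gets closed
    }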
I don't get why people say it's incompatible with Rust when Rust async libraries work with IOCP, which follows a similar model to io-uring?
The main way people use IOCP is via mio via tokio. To make IOCP present a readiness interface, mio introduces a data copy. This is because tokio/mio assume you're deploying to Linux and only developing on Windows, and so optimize performance for epoll. So it's reasonable to wonder if a completion-based interface can be zero cost.
But the answer is that it can be zero cost, and we’ve known that for half a decade. It requires different APIs from readiness based interfaces, but it’s completely possible without introducing the copy using either a “pass ownership of the buffer” model or “buffered IO” model.
Either way, this is unrelated to the issue this blog post identifies, which is just that some io-uring libraries handle cancellation incorrectly.
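For reference, the "pass ownership of the buffer" shape looks roughly like this; the trait and the fake device are illustrative, not any real crate's API:

    use std::io;

    // The caller gives the buffer away for the duration of the operation and
    // gets it back together with the result, so the kernel is never writing
    // into memory the caller could still borrow.
    trait OwnedRead {
        async fn read_owned(&self, buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>);
    }

    // Toy in-memory "device" standing in for an io_uring-backed file or socket.
    struct FakeDevice {
        contents: Vec<u8>,
    }

    impl OwnedRead for FakeDevice {
        async fn read_owned(&self, mut buf: Vec<u8>) -> (io::Result<usize>, Vec<u8>) {
            let n = self.contents.len().min(buf.len());
            buf[..n].copy_from_slice(&self.contents[..n]);
            (Ok(n), buf) // buffer ownership goes back to the caller
        }
    }

    fn main() {
        let dev = FakeDevice { contents: b"hello".to_vec() };
        let (result, buf) = futures::executor::block_on(dev.read_owned(vec![0u8; 16]));
        assert_eq!(result.unwrap(), 5);
        assert_eq!(&buf[..5], b"hello");
    }

Because the operation owns the buffer for its whole lifetime, a completion-based backend can let the kernel write into it directly and hand it back with the result, with no extra copy and no aliasing with caller-held borrows.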
Promise-based async has the advantage that you can desugar it down to a C-compatible ABI.
[1] https://glyph.twistedmatrix.com/2014/02/unyielding.html [2] https://lamport.azurewebsites.net/pubs/state-machine.pdf (read the introduction)
Note that I don't know a lot about Rust, and I'm not familiar with the rules for Rust in the kernel, so it's possible that it's either not a problem or the problematic usages violate the kernel coding rules. (although in the latter case it doesn't help with non-kernel frameworks like SPDK)
Edit: I realize my comment might come off as a bit snarky or uninformative to someone who isn't familiar with Rust. That was not the intention. "Async Rust" is a particular framework for abstracting over various non-blocking IO operations (and more). It allows terse code to be written using a few convenient keywords, which cause a certain state machine (consisting of ordinary Rust code adhering to certain rules) to be generated, which in turn can be coupled with an "async runtime" (of which there are many) to perform the IO actions described by the code. The rules that govern the code generated by these convenient keywords, i.e. the code that the async runtimes execute, are apparently not a great fit for io_uring and the like.
However, I don't think anyone is proposing writing such code inside the kernel, nor that any of the async runtimes actually make sense in a kernel setting. The issues in this article don't exist when there is no async Rust code. Asynchronous operations can, of course, still be performed, but one has to manage that without the convenient scaffolding afforded by async Rust and the runtimes.
Here by "async" I don't so much mean async/await versus threads, but these kernel-level event interfaces regardless of which abstraction a programming language lays on top of them.
At the 30,000 foot view, all the async abstractions are basically the same, right? You just tell the kernel "I want to know about these things, wake me up when they happen." Surely the exact way in which they happen is not something so fundamental that you couldn't wrap an abstraction around all of them, right?
And to some extent you can, but the result is generally so lowest-common-denominator as to appeal to nobody.
Instead, every major change in how we handle async has essentially obsoleted the entire programming stack based on the previous ones. Changing from select to epoll was not just a matter of switching out the fundamental primitive, it tended to cascade up almost the entire stack. Huge swathes of code had to be rewritten to accommodate it, not just the core where you could do a bit of work and "just" swap out epoll for select.
Now we're doing it again with io_uring. You can't "just" swap out your epoll for io_uring and go zoomier. It cascades quite a ways up the stack. It turns out the guarantees that these async handlers provide are very different and very difficult to abstract. I've seen people discuss how to bring io_uring to Go and the answer seems to basically be "it breaks so much that it is questionable if it is practically possible". An ongoing discussion on an Erlang forum seems to imply it's not easy there (https://erlangforums.com/t/erlang-io-uring-support/765); I'd bet it reaches up "less far" into the stack but it's still a huge change to BEAM, not "just" swapping out the way async events come in. I'm sure many other similar discussions are happening everywhere with regards to how to bring io_uring into existing code, both runtimes and user-level code.
This does not mean the problem is unsolvable by any means. This is not a complaint, or a pronunciation of doom, or an exhortation to panic, or anything like that. We did indeed collectively switch from select to epoll. We will collectively switch to io_uring eventually. Rust will certainly be made to work with it. I am less certain about the ability of shared libraries to be efficiently and easily written that work in both environments, though; if you lowest-common-denominator enough to work in both you're probably taking on the very disadvantages of epoll in the first place. But programmers are clever and have a lot of motivation here. I'm sure interesting solutions will emerge.
I'm just highlighting that as you grow in your programming skill and your software architecture abilities and general system engineering, this provides a very interesting window into how abstractions can not just leak a little, but leak a lot, a long ways up the stack, much farther than your intuition may suggest. Even as I am typing this, my own intuition is still telling me "Oh, how hard can this really be?" And the answer my eyes and my experience give my intuition is, "Very! Even if I can't tell you every last reason why in exhaustive detail, the evidence is clear!" If it were "just" a matter of switching, as easy as it feels like it ought to be, we'd all already be switched. But we're not, because it isn't.
Consider Linux async mechanisms. They are provided by a monolithic kernel that dictates massive swathes of what worldview a program is developed in. When select was found lacking, it took time for epoll to arrive. Then io_uring took its sweet time. When the kernel is lacking, the kernel must change, and that is painful. Now consider a hypothetical microkernel/exokernel where a program just gets bare asynchronous notifications about hardware and from other programs. Async abstractions must be built on top, in services and libraries, to make programming feasible. Say the analogous epoll library is found lacking. Someone must uproot it and perhaps lower layers and build an io_uring library instead. I will not say this is always less pain than before, although it is decidedly not the same as changing a kernel. But perhaps it is less painful in most cases. I do not think it is ever more painful. This is the essential pain brought about by stability and change.
Still, in the Rust community "safety" is used with a very specific meaning, so I don't think it is correct to use any definition you like when speaking about Rust. Or at least, the article should start with your specific definition of safety/unsafety.
I don't want to reject the premise of the article, that this kind of safety is very important, but for Rust, unsafety without using "unsafe" is much more important than an OS dying from leaked connections. I have read through the article looking for Rust's kind of unsafety and I found that I was tricked. It is very frustrating; it looks to me like a lie with some lame excuses afterwards.
I chalk up the confusion to true believers playing the motte-and-bailey game. When the marketing meaning feels good, they use it all over the place. Then, when the substance doesn't live up to the hype, the fans retreat to the technical definition. This is not a new phenomenon: https://news.ycombinator.com/item?id=41994254
With that said, while Rust does not guarantee it, it does have much better automatic memory cleanup compared to C++, because every value has only one owner, and the owner automatically drops/destructs it at the end of its scope.
Getting leaks is possible to do with things like Box::leak, or ref-count cycles, but in practice it tends to be explicit, rather than the programmer forgetting to do something.
But this specific case is not like that. My issue with the headline is that it is clickbait, and the article is written in a way that you can't detect it early.
> anyone should do what they want in the end including saying this
I disagree. Any community, any group, any society has some rules that exist for the good of the group. One of the most frequent rules is "do not lie". The headline de facto lies about the content of the article. I was misled by it, and I'd bet that most rustaceans were misled also. Moreover, it seems to me that it was a conscious manipulation by the author. Manipulation is also a bad thing for most groups of people, even if it doesn't rely on lies. It is bad not just for rustaceans. Here on HN you can see complaints about clickbaity titles very often; people are really annoyed by them. I'm relatively tolerant of clickbaits, but this was too much.
Having programmed in raw C, I know Rust is more like TypeScript: once you try it after writing JavaScript, you can't go back to plain JavaScript for anything serious. You would want some guard rails rather than no guard rails.
Lifetimes are still used but not nearly as much.
lose?
This pattern causes issues all over the place: in C++ with headaches around destruction failure and exceptions; in C++ with confusing semantics re: destruction of incompletely-initialized things; in Rust with "async drop"; in Rust (and all equivalent APIs) in situations like the one in this article, wherein failure to remember to clean up resources on IO multiplexer cancellation causes trouble; in Java and other GC-ful languages where custom destructors create confusion and bugs around when (if ever) and in the presence of what future program state destruction code actually runs.
Ironically, two of my least favorite programming languages are examples of ways to mitigate this issue: Golang and JavaScript runtimes:
Golang provides "defer", which, when promoted widely enough as an idiom, makes destructor semantics explicit and provides simple and consistent error semantics. "defer" doesn't actually solve the problem of leaks/partial state being left around, but gives people an obvious option to solve it themselves by hand.
JavaScript runtimes go to a similar extreme: no custom destructors, and a stdlib/runtime so restrictive and thick (vis-a-vis IO primitives like sockets and weird in-memory states) that it's hard for users to even get into sticky situations related to auto-destruction.
Zig also does a decent job here, but only with memory allocations/allocators (which are ironically one of the few resource types that can be handled automatically in most cases).
I feel like Rust could have been the definitive solution to RAII-destruction-related issues, but chose instead to double down on the C++ approach to everyone's detriment. Specifically, because Rust has so much compile-time metadata attached to values in the program (mutability-or-not, unsafety-or-not, movability/copyability/etc.), I often imagine a path-not-taken in which automatic destruction (and custom automatic destructor code) was only allowed for types and destructors that provably interacted only with in-user-memory state. Things referencing other state could be detected at compile time and required to deal with that state in explicit, non-automatic destructor code (think Python context-managers or drop handles requiring an explicit ".execute()" call).
I don't think that world would honestly be too different from the one we live in. The rust runtime wouldn't have to get much thicker--we'd have to tag data returned from syscalls that don't imply the existence of cleanup-required state (e.g. select(2), and allocator calls--since we could still automatically run destructors that only interact with cleanup-safe user-memory-only values), and untagged data (whether from e.g. fopen(2) or an unsafe/opaque FFI call or asm! block) would require explicit manual destruction.
This wouldn't solve all problems. Memory leaks would still be possible. Automatic memory-only destructors would still risk lockups due to e.g. pagefaults/CoW dirtying or infinite loops, and could still crash. But it would "head off at the pass" tons of issues--not just the one in the article:
Side-effectful functions would become much more explicit (and not as easily concealable with if-error-panic-internally); library authors would be encouraged to separate out external-state-containing structs from user-memory-state-containing ones; destructor errors would become synonymous with specific programmer errors related to in-memory twiddling (e.g. out of bounds accesses) rather than failures to account for every possible state of an external resource, and as a result automatic destructor errors unconditionally aborting the program would become less contentious; the surface area for challenges like "async drop" would be massively reduced or sidestepped entirely by removing the need for asynchronous destructors; destructor-related crash information would be easier to obtain even in non-unwinding environments...
Maybe I'm wrong and this would require way too much manual work on the part of users coding to APIs requiring explicit destructor calls.
But heck, I can dream, can't I?
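Something in that spirit can be approximated in today's Rust with handle types whose only sanctioned cleanup is a consuming close(), and whose Drop is just a tripwire for the forgotten case. JournaledFile and the /tmp path below are purely illustrative:

    use std::fs::File;
    use std::io::{self, Write};

    struct JournaledFile {
        file: Option<File>,
    }

    impl JournaledFile {
        fn create(path: &str) -> io::Result<Self> {
            Ok(JournaledFile { file: Some(File::create(path)?) })
        }

        fn write(&mut self, bytes: &[u8]) -> io::Result<()> {
            self.file.as_mut().expect("already closed").write_all(bytes)
        }

        // The only sanctioned way to finish with the resource: flush/sync errors
        // surface to the caller instead of vanishing inside a destructor.
        fn close(mut self) -> io::Result<()> {
            let mut file = self.file.take().expect("close called twice");
            file.flush()?;
            file.sync_all()
        }
    }

    impl Drop for JournaledFile {
        fn drop(&mut self) {
            // Reached only if close() was never called (e.g. an early return or a
            // cancelled path). Flag it loudly in debug builds.
            if self.file.is_some() {
                debug_assert!(false, "JournaledFile dropped without close()");
            }
        }
    }

    fn main() -> io::Result<()> {
        let mut f = JournaledFile::create("/tmp/journal.demo")?;
        f.write(b"entry\n")?;
        f.close() // forgetting this trips the debug_assert in Drop
    }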
If explicit destruction is desirable, IMO the C#-style `using` pattern (try-with-resource in Java, `with` in Python etc) makes more sense than `defer` since it still forces the use of the correct cleanup code for a given type while retaining the explicitness and allowing linters to detect missing calls (because destructability is baked into the type). `defer` is unnecessarily generic here IMO.
In JS, you still end up having to write try/finally for anything that needs explicit resource management, so I don't think it's a good example. They are adding `using`, though, and TS already provides it.
I think that automatic and outside-of-written-code destruction is generally a risky/bad idea for nontrivial (i.e. not purely in-user-memory) objects. Both GC-ful languages' finalization systems and the automatic-destruction side of RAII are problematic.
I agree that moving more logic into explicit acquire/perform/destroy blocks like TWR and 'with' is good. I wish we had a language (rather than, say, just specific libraries) that could require such handling for certain datatypes.
Now, if you track ownership, you could enforce this. But it seems like at this point you might as well just have proper destructors, since ownership is explicit in the declared type of the variable, and thus having an additional keyword like "defer" is redundant wrt making that destructor call explicit. You just need to make sure that syntax for variable declarations makes ownership or lack thereof very clear.
Separately from that, explicit destruction doesn't resolve many of the issues that you have mentioned with C++ specifically, such as exception safety. Again, in the example above, what should happen if A.close and B.close both throw? A naive implementation would propagate the exception from A and leak B. A fancy one will throw something like Java's SuppressedError, but that's still far from ideal. Similarly, with object state during incomplete initialization, the explicit destructors still have to handle all that somehow.
Having written a few libs for working with io_uring (in Ruby), cancellation is indeed tricky, with regards to keeping track of buffers. This is an area where working with fibers (i.e. stackful coroutines) is beneficial. If you keep metadata (and even buffers) for the ongoing I/O op on the stack, there's much less book-keeping involved. Managing the I/O op lifetime, especially cleaning up, becomes much simpler, as long as you make sure to not return before receiving a CQE, even after having issued a cancellation SQE.