I hope at some point it can be added to standard Linux distributions like Debian or Red Hat through their official apt/yum packages.
So far there is no red hat distro, and there's integration with ubuntu's snapcraft for some reason.
I realize there are dozens of distributions to support, but these two are the most foundational to my understanding, and the fact that the compiler is not released/vetted by OS distros speaks to its maturity and lack of use in the base system.
Until that point it doesn't really make sense to pull it from the repos of the distros that are packaging it right now anyway. E.g. it's packaged on Fedora, but you most likely don't want to rely on that package for the moment.
https://github.com/ziglang/zig/wiki/Install-Zig-from-a-Packa...
Perhaps the foundational distros are stricter.
For the ones that are green they tend to be development versions (e.g. Alpine Edge, ALT Sisyphus, LiGurOS develop), rolling releases where there aren't necessarily those kind of package stability/interop guarantees in the first place (e.g. Arch, Manjaro, OpenSuse Tumbleweed), or not actually distros at all just alternative download mechanisms (e.g. Chocolatey, Chromebrew, Homebrew, Scoop). There are very few (such as Fedora 40) that just happen to be "broken clock" status for the moment because they are very fresh spins.
For the ones that are red, they are the examples of why you don't want to rely on the built in package at the moment. Even for the ones that are green "for the moment" (such as the Fedora 40 example) it's often still considered better to use the latest "master" copy of zig (depending what you're doing with it) than the last milestone release even then.
For example, PipeWire in Debian Stable is at 0.3.65 and probably won't get major updates until the next Debian release. However, that's only part of the picture: Its early presence in Debian has paved the way for updates via the Backports suite, which is available to Stable users with a flick of a switch. With Backports enabled, Debian Stable has access to PipeWire 1.2.4, released upstream less than two weeks ago.
This could be done for Zig as well. (Assuming Zig meets the Debian Free Software Guidelines. I think its recent move to a WASM blob for bootstrapping might complicate inclusion in Debian.)
What's more, there is precedent with regards to Debian: Go is backported to Stable via the golang metapackage. The main package for Zig can follow the same structure and also be a metapackage with versioned dependencies.
> I think its recent move to a WASM blob for bootstrapping might complicate inclusion in Debian.
As I understand, it would mean that Zig ends up in the non-free component rather than main or contrib.
Pipewire did/does not follow strict semver, so you can't compare the two just because both versions start with "0":
- Pipewire had offered ~3 years of API and ABI compatibility on what it called "0.3.x"; Zig intentionally plans to break compatibility multiple times a year as it increments 0.x.
- Pipewire already planned to keep API/ABI compatibility between 0.x and 1.x; Zig explicitly does not - they label the releases 0.x to signal that these will break before 1.x.
- Pipewire called 0.3.x stable for use even while under rapid development; Zig is also under rapid development but says the opposite in regard to stability guarantees.
I.e. the issue is less "how do you get a newer version on stable Debian?" and more "it's an explicitly unstable package with no version compatibility guarantees, and distros don't like packaging that in their stable repositories".
The easy answer to "be prepared" is more or less to keep it in the dev repositories, like Alpine does, where there is no guarantee of stability or interoperability on updates. When Zig declares itself stable (which, in its versioning scheme, will be 1.0), it can be added to stable without much work.
[1]: https://ziglang.org/news/migrate-to-self-hosting/ [2]: https://ziglang.org/news/300k-from-mitchellh/
We have a local developer environment, and we quite often deal with Python, Go, or brew package versions not being correctly installed before the tools start.
Would that be a good use case for Zig?
But it is still pre 1.0 so expect new versions to break old-ish code. I'd say package managers should wait for the 1.0 release.
I would recommend managing the Zig installation through a tool like zvm (https://github.com/tristanisham/zvm). This lets you easily update to the latest dev version and switch between stable versions (similar to rustup or nvm).
The other install options work too, of course (install via brew - although in the past this was a bit brittle - or download and unpack prebuilt archives https://ziglang.org/download/), but those options are not as convenient for switching between the stable and dev versions.
I can't quite put words to it, but this statement kind of struck me. There's just a certain basic decency behind it that deserves celebrating.
Anyway, they do whatever they want with their philanthropy in the end, but I found that was an odd phrasing.
TIL. Philanthropy is not big where I live so I don't know the ins and outs of it.
The Bible tells you not to talk about your donations!
> This is a way to support a cause just for the sake of supporting it.
For many causes the money matters, but the publicity does not. In this case Zig gains both ways: being better funded makes people more likely to have the confidence in its future to adopt it, and there's the PR benefit (e.g. getting one more mention here).
On the other hand for something like a charity that helps the poor, we all know of the need already. Publicity does not help much - in fact I would be more likely to give to a small charity that does not get big donations than to one I know is getting big donations.
Not a Christian, but since no one else dug up the quote:
"Thus, when you give alms, sound no trumpet before you, as the hypocrites do in the synagogues and in the streets, that they may be praised by men. Truly, I say to you, they have received their reward. But when you give alms, do not let your left hand know what your right hand is doing, so that your alms may be in secret; and your Father who sees in secret will reward you." -- Matthew 6:2-4 (RSV)
But then you have:
"You are the light of the world. A city set on a hill cannot be hid. Nor do men light a lamp and put it under a bushel, but on a stand, and it gives light to all in the house. Let your light so shine before men, that they may see your good works and give glory to your Father who is in heaven." -- Matthew 5:14-16 (RSV)
So I guess according to scripture "it depends". I do believe Judaism and other religions have similar teachings for that matter.
I’d equate light to consciousness and good works to the radiance of consciousness.
Which backfired when their "connection" to Purdue Pharma (which they went to great lengths not to make a big noise about) became better known.
And then you have others who do it, so often that it is referred to as reputation-washing.
That said, while the former is more obviously laudable, the latter does serve the purpose of raising the status of being charitable, which can lead to more people being charitable.
"Sounds great! Let's fly our Cirrus SF50 Vision and hand deliver to Andrew Kelley."
zig currently uses LLVM but it wants to move away from that:
https://www.reddit.com/r/Zig/comments/18x1wce/is_it_true_tha...
Once Zig compiles Linux, there will basically be 3 compilers that can do so: gcc, clang (LLVM), and zig (less and less LLVM).
I expect there to be good tools to port C code to Zig (much harder for C->Rust), especially with the recent advances in LLMs. I would not be surprised if that resulted in a zig-linux code base within the coming 5 years. Sure, the C-based kernel may remain the hotbed of innovation for years to come.
It leaves me with a sour taste in my mouth when stylistic preferences are made mandatory; it is disrespectful of the coders for whom the language is a tool. It can also be a canary for deeper issues around community engagement and openness to different viewpoints (a problem for Zig, e.g. [0]).
A business with code uniformity requirements is more than capable of running a linter, and for my weekend projects, I don't give one toss about anyone's stylistic preferences but my own. Either Zig is a language for grown-ups, or it isn't. And if I'm going to be forced to code a certain way, why not just use Rust and get free memory safety out of it too?
But you can turn it off, can't you?
> deeper issues around community engagement and openness to different viewpoints (a problem for Zig, e.g. [0]).
Two things:
1. People who link that issue often forget about Andrew's comments (https://github.com/ziglang/zig/issues/16270#issuecomment-161..., https://github.com/ziglang/zig/issues/16270#issuecomment-161..., example: "I'm not going to simultaneously shoot myself and valuable community members in the face by yanking a load-bearing feature out from underneath us, without any kind of upgrade path."). I can understand some people disagreeing but it's not that big of an issue.
2. Personally I'm happy Zig has a BDFL. Even though that #16270 issue has some controversy, it's clear Zig has a consistent direction and goal. It's not design-by-committee and doesn't get stalled for years on the tiniest of issues while the community bikesheds for eternity.
Can I? How? There is no setting I can see in e.g. VS Code to disable format-on-save. (Not trying to sound snarky here, I'm legitimately open to advice on this.)
> it's clear Zig has a consistent direction and goal. It's not design-by-committee and doesn't get stalled for years on the tiniest of issues while the community bikesheds for eternity.
Anyone can have direction. 'There, towards the copse of stinging nettles!' The tricky part is figuring out a good direction, which sometimes requires pauses and stakeholder engagement. You know, that thing that the 'move fast and break things' crowd loves to deride as 'bikeshedding'. :)
In as far as Zig's direction appears to be 'we will rewrite LLVM, but better-er!', I do worry. I'd hate for Zig to end up like Elm.
Regarding competing with LLVM: I'm happy to see others try. Cranelift is a nice example of finding a niche that LLVM isn't filling, and I'm glad people didn't prematurely give up simply because LLVM already exists. Zig's goal is definitely ambitious, and there are risks. But in principle I'm happy to see someone pursuing these lofty goals because that's what ultimately creates incremental progress in the industry. If Zig fails... well, I'd still be happy they at least tried.
[1]: https://code.visualstudio.com/docs/getstarted/settings#_lang...
Ah, I see. The Zig module silently overrides the user's editor.formatOnSave setting by default; this is what I was missing. I need to specifically override Zig's override:
"[zig]": {
"editor.formatOnSave": false
},
Thank you!
> Regarding competing with LLVM
In theory, I have no problem with this either, but in practice, this is a big gamble for Zig as a whole. A language lives and dies on perceptions, and currently Zig's killer feature is that it is an easy slot-in incremental replacement for existing C/C++ codebases. This plan intends to break that by default. (I realise AK has walked that back somewhat, it will remain an option, etc - but considering this whole thread is people telling me formatting must be strictly mandated, surely you'll grant the power of defaults and the risk in breaking them.)
Ultimately, I am open to being proven wrong, but I've seen some of the same patterns in Zig that have broken other newlangs. Killer features that go under-appreciated by the leadership, a focus on purity at the expense of practicality, 'trust the plan', etc. My fear would be that the hype tide will go out, as it always does, and Zig will be left without any obvious niche, somewhere mid-LLVM-rewrite. But hey, we'll see. I wish AK the best of luck with it.
And I don't think you can meaningfully compare this to constraints imposed by Rust, which aren't about where to place a curly brace etc, but about not being able to (easily) model some data structures and algorithms. You could argue that both represent a form of tax on your freedom as a developer, but even if so, it's an orders of magnitude difference.
I have no problem with standards. There are times where standards are useful. The thing about standards is that they can also be ignored where appropriate. It's notable no one is seriously pushing for Black to be made mandatory for Python, for extremely obvious and sensible reasons. Also, 'increasingly popular' is doing some heavy lifting in that argument. The vast, vast, vast (vast!) majority of Python code will never ever use Black, and that is fine.
A language, as we agree, is a tool, and it is a poor tool indeed that refuses to lend itself for use in creative ways.
> And I don't think you can meaningfully compare this to constraints imposed by Rust, which aren't about where to place a curly brace etc, but about not being able to (easily) model some data structures and algorithms. You could argue that both represent a form of tax on your freedom as a developer, but even if so, it's an orders of magnitude difference.
Very different - Rust's constraints actually serve a purpose beyond merely enforcing stylistic conformity. If I am to take on the added cognitive load of coding a certain way, I might as well actually get something out of it.
And the vast majority of Python code was written before Black was a thing. However, once it appeared, it spread through the ecosystem very quickly. At this point I wouldn't be surprised if most people using Black don't even know that they do so simply because they write their Python in VSCode, which suggests Black (and will install it for you) if you try to do Format Document or enable format-on-save.
I mean... the use of tabs or LF+CR / CR line endings was a compiler error, last I checked. So, yes, it is exactly like that. And this was a deliberate choice to introduce friction for people who don't hew to the author's stylistic preferences.
> However, once it appeared, it spread through the ecosystem very quickly.
Uh huh, sure, a trendy hipster linter that appeared in 2019 is now so standard that Python code is nigh unthinkable without it. We marvel in the museums at what Python used to look like! There will definitely not be another trendy hipster linter in a couple of years with totally different opinions! :P
You're welcome to decide that Zig isn't for you, but characterizing the project as "disrespectful" and immature seems extreme.
Zig is immature. That's not some conclusive judgment against its utility now and forever; it is simply a function of the number of times the Earth has orbited the Sun since its creation.
Given how Zig is positioning itself vis-a-vis C, Rust, etc., it is somewhat baffling to me how little respect it seems to have for the opinions and capabilities of its end users.
People who want their hands held have better choices than Zig, and people who want the freedom to code how they wish... also have better choices than Zig. I think Zig is onto something, but hype cycles always fade, and if Zig hasn't matured by that point, there won't be any obvious niche for it to occupy.
> how little respect it seems to have for the opinions and capabilities of its end users
I think you misunderstand the objective here. As I understand it, it means that all code written in the language looks the same. This significantly improves readability and as we all (should) know, code is read much more than it is written. It's not about deducing the capabilities of the end users but more about reducing the cognitive load while having to read or write in the language itself.
If you still decide to write in one of these formatting-defined languages, it would probably be best to keep the repository and/or project private to avoid the barrage of "ran fmt on the code" pull requests sure to crop up. It would save all parties from a lot of frustration.
I have no problem with style guides or linters. I have a problem with a compiler that deliberately emits compiler errors on the use of \t, just as an example, because the author likes to throw Lego bricks under your feet if you refuse to obey his stylistic preferences.
I could explain where style guides can become a problem - usually in extremely low level code, emulation, legacy interop, etc. - and therefore need to be relaxed or ignored, but this would divert us onto a discussion of stylistic preferences, and that's not my chief concern here. My concern is the contempt for people's different needs and use cases, including edge cases, which is indicative of immaturity.
> If you still decide to write in one of these formatting-defined languages, it would probably be best to keep the repository and/or project private to avoid the barrage of "ran fmt on the code" pull requests sure to crop up. It would save all parties from a lot of frustration.
That's a rather patronising comment. I feel no frustration closing low effort PRs, and I'm honestly somewhat amused by this idea of living in terror of a 'barrage' of "I linted your code for you!"s.
My understanding is that everyone is suggesting to move to memory-safe languages when possible; however, Zig does not seem to offer memory safety.
Since Zig is a new language, my guess is that the main use would be brand new projects, but shouldn't those be done in a memory-safe language?
It seems that the selling point of Zig is: more modern than C but simpler than Rust, so I understand the appeal, but isn't this undermined by the lack of memory safety?
Yes, in my opinion, but from Zig's success you can see some people are willing to trade safety for a simpler language. Different people have different values
Though to be fair you can also use zig in old C projects, moving things incrementally. I don't know how many projects do that Vs greenfield projects though
For example, you can't overflow buffers (slices have associated lengths that are automatically checked at runtime), pointers can't be null, integer overflows panic.
Not in ReleaseFast mode, where both signed and unsigned overflow have undefined behaviour.
And there's also the aliasing issue: if you have fn f(a: A, b: *A) whose body writes through b, which value does 'a' hold when f is called as f(x, &x)? (not sure about Zig's exact syntax)
That said I agree with your classification (safer than C but isn't as safe as Rust)
This is probably not what you wanted; your code has a bug (if it was what you wanted, you should use the Wrapping type wrapper, which says what you meant, rather than insisting this code must be compiled with specific settings), but you didn't have to pay for checks and your program continues to have defined behaviour, like any normal bug.
It is very rare that you need the unchecked behaviour for performance. Rare enough that although Wrapping and Saturating wrappers exist in Rust, even the basic operations for unchecked arithmetic are still nightly only. Most often what people meant is a checked arithmetic operation in which they need to write code to handle the case where there would be overflow, not an unchecked operation, Rust even has caution notes to guide newbies who might write a manual check - pushing them towards the pit of success - hey, instead of your manual check and then unsafe arithmetic, why not use this nice checked function which, in fact, compiles to the same machine code.
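To make the contrast above concrete, here is a small sketch using only stable Rust std integer methods (everything shown is real std API; the values are arbitrary):

```rust
use std::num::Wrapping;

fn main() {
    let a: u8 = 250;

    // Plain `a + 10` panics in debug builds and wraps in release builds.
    // Saying what you mean removes that ambiguity:
    assert_eq!(a.checked_add(10), None);      // overflow reported, caller decides
    assert_eq!(a.checked_add(5), Some(255));  // fits in a u8
    assert_eq!(a.wrapping_add(10), 4);        // explicit wrap: 260 mod 256
    assert_eq!(a.saturating_add(10), 255);    // clamp at the type's maximum

    // The Wrapping wrapper encodes the intent at the type level.
    assert_eq!(Wrapping(a) + Wrapping(10), Wrapping(4));
}
```

The checked form is the one the comment above argues people usually mean: the overflow case is surfaced as a `None` the caller must handle rather than silently wrapped.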
There's no need to provide a rationale because it's obvious, from a performance POV:
1) (a) UB on overflow > (b) wrapping on overflow
2) (b) wrapping on overflow > (c) trap on overflow
So when you create a language you have to pick a default behaviour; Zig allows either (a) or (c) (but not both at once), via ReleaseFast and ReleaseSafe.
(1) is because this allows the compiler to do "better" optimisations, which unfortunately can create lots of pain for you if your code has a bug.
(2) is because these f.. CPU designers don't provide an 'add_trap_on_overflow' instruction so at the very least the overflow check instruction degrades the instruction cache utilisation.
Alas no, you've written a greater-than sign, but in reality you'll find it's often only an equals sign. Yet you've significantly weakened the language, so you've made it worse, and you need to identify what you got for that price.
On the one hand, since you didn't promise wrapping, in some cases you'll astonish your programmers by not providing it when that's what they expected; on the other, since you can't always get better performance, you'll sometimes disappoint them by not going any faster despite not promising wrapping.
This might all be worth it if in the usual case you were much faster, but, in practice that's not what we see.
In any case, PLs don't have to blindly follow what the hardware does as the default. Many early PLs did checked arithmetic by default. Conversely, many instruction sets from that era have specific opcodes to facilitate overflow checking.
The reason why we got it in C specifically is because of its "high-level PDP assembly" origins.
I consider that to have been a mistake, and hopefully one we can change. Note that this is about defaults; you can build your own project as release with overflow panics. I wish the language had a mechanism to select the math overflow behavior in a more granular way that propagates to called functions (in effect, I want integer effects) instead of relying exclusively on the type system:
fn bar(a: i32, b: i32) -> i32 where i32 is Saturating {
    a + b
}

fn foo(a: i32, b: i32) -> i32 where i32 is Wrapping {
    // the `a + b` wraps on overflow, but the call to
    // bar overrides the effect of the current function
    // and will saturate instead.
    a + b + bar(a, b)
}
With this, crates could give their callers control over math overflow behavior without having to provide type parameters in every API with bounds for something like https://docs.rs/num-traits/0.2.19/num_traits/.

Personally I'm not as bothered about this as I was initially, whereas I'm at least as annoyed today by some 'as' casts as I was when I learned Rust -- if I could have your integer effects or abolish narrowing 'as', I'd abolish narrowing 'as' in a heartbeat. Let people explicitly say what they meant: if I have a u16 and I try to put it in a u8, it will not fit; make me write the fallible conversion and say what happens when it fails. This strikes me as especially hazardous for inferred casts. `foo as _` could do more or less anything; it is easily possible that it does something I hadn't considered and will regret. Make me write what I meant and we'll avoid that.
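The narrowing-'as' complaint can be made concrete with a small sketch using std's TryFrom (all stable std API):

```rust
use std::convert::TryFrom;

fn main() {
    let big: u16 = 300;

    // Narrowing `as` silently truncates: 300 mod 256 == 44.
    assert_eq!(big as u8, 44);

    // The fallible conversion makes the failure case explicit:
    // the caller has to decide what happens when the value doesn't fit.
    assert!(u8::try_from(big).is_err());
    assert_eq!(u8::try_from(200u16).unwrap(), 200u8);
}
```

The `as` form compiles without complaint and hides the truncation; the `try_from` form forces the "it will not fit" case into the open.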
Plans to address this were shared just last week: https://github.com/ziglang/zig/issues/5973#issuecomment-2380...
I'm relieved that they decided to remove this trap, as it could really have been a nasty one (worse than integer overflow, where you can just use ReleaseSafe).
Like say I have a really weird issue I can't seem to find locally, can I switch my production server to this different compilation mode temporarily to get better logs? Can I run my development environment with it on all the time?
it is not very good, as
const std = @import("std");
const print = std.debug.print;

fn foo() fn () *u32 {
    const T = struct {
        fn bar() *u32 {
            var x: u32 = 123;
            return &x;
        }
    };
    return T.bar;
}

pub fn main() void {
    print("Result: {}", .{foo()().*});
}
outputs 123 in debug[0] and 0 in ReleaseSafe[1] instead of giving a runtime error.

I don't want to write Rust. I want to write Zig. It's like Python, but blazingly fast.
I think it remains to be seen if Zig is less safe than Rust in practice. In either case you have to write a lot of tests if you actually want your program to be safe. Rust doesn’t magically eliminate every possible bug. And if you’re running a good amount of tests in debug mode in Zig you’ll probably catch most memory safety bugs.
Still, if I was making something like a web browser I would probably use Rust
This is a common misconception, but the `unsafe` keyword in Rust does not disable any of the features that enforce memory safety, rather it just unlocks the ability to perform a small number of new operations whose safety invariants must be manually upheld. Even codebases that have good reason to use `unsafe` in many places still extensively benefit from Rust's memory safety enforcement features.
* Call unsafe functions
* Do memory aliasing
* Change the lifetime the compiler sees
That’s about it. The syntax and rules otherwise are still rust and violating those rules (eg aliasing in a way not allowed by rust) still results in UB. This can surprise some rust people even within popular crates and stdlib
This is another instance of the same misconception. For every Rust operation that can exist outside of an `unsafe` block, Rust enforces memory safety even when that operation exists inside of an unsafe block. In other words, Rust does not assume that all code inside of an unsafe block is safe; e.g. you can neither disable the borrow checker nor disable bounds checking merely by wrapping code in an unsafe block.
What this means is that you still receive the benefits of Rust's normal safety guarantees even in the presence of unsafe blocks. Instead, what unsafe blocks do is allow you to invent your own safety invariants to layer on top of Rust's ordinary semantics (which is also what you're doing in C and Zig).
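A small sketch of that point, using only std: the out-of-bounds index panics even inside the `unsafe` block, because `unsafe` only unlocks new operations like `get_unchecked` rather than disabling existing checks.

```rust
fn main() {
    // Silence the default panic message so the caught panic is quiet.
    std::panic::set_hook(Box::new(|_| {}));

    let v = vec![10, 20, 30];

    // Ordinary indexing inside `unsafe` is STILL bounds-checked:
    // an out-of-bounds index panics; it does not become UB.
    let oob = std::panic::catch_unwind(|| unsafe { v[99] });
    assert!(oob.is_err());

    // What `unsafe` unlocks is *new* operations, such as
    // `get_unchecked`, whose invariants you must uphold yourself.
    let second = unsafe { *v.get_unchecked(1) };
    assert_eq!(second, 20);
}
```

Wrapping already-safe code in `unsafe` is a no-op, as the comment above says; only the newly unlocked operations carry manual proof obligations.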
Still, I think Rust is safer than Zig (ReleaseSafe), which is safer than Zig (ReleaseFast), which is about as safe as unchecked Rust.
Unsafe Rust is even less safe than C because the rules that must be manually upheld are stricter. For example in C you can create an invalid pointer and it's fine as long as you don't access it. In Rust you can't even create an invalid reference or you have already invoked unchecked undefined behavior.
There's no common misconception here. I think you're misunderstanding the quoted comment due to being overly pedantic.
I'm unclear what part of my comment would lead someone to such an extreme conclusion. As mentioned, the `unsafe` keyword is used to unlock new operations and create new safety invariants that must be manually upheld. Naturally, failure to manually uphold those new invariants would lead to memory unsafety. But an `unsafe` block introduces no unsafety by itself. Which is to say, if you take a working Rust program with no unsafe blocks, and then wrap the body of `main` in an unsafe block, this is a no-op; it does nothing.
> By this logic, C is safe, it also just has "safety invariants that must be manually upheld."
Certainly, this is true, and I'm not sure why anyone would think otherwise. The problem is not that it is theoretically impossible to write correct C; rather the problem is that it is empirically infeasible to do so at scale. By locking unsafe operations behind an unsafe block, Rust attempts to make it feasible to identify the areas of most concern in a codebase and focus attention on proving those areas correct manually.
> Unsafe Rust is even less safe than C because the rules that must be manually upheld are stricter.
Unfortunately this is another misconception, although it's understandable why one would think this. The rules for raw pointers in Rust are less strict than the rules for raw pointers in C, which is to say, manipulating raw pointers in Rust is safer than doing the same in C. The misconception here comes from the conflation of raw pointers with Rust's references, which do have more safety invariants to uphold, and for several years there were footguns to be found here due to language-level deficiencies WRT the inability to avoid creating temporary references when working with uninitialized or unaligned memory. The good news is that this was addressed with the addition of std::ptr::addr_of! in Rust 1.51.
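For readers unfamiliar with it, the mechanism in question is the `std::ptr::addr_of!` macro (stable since Rust 1.51); a minimal sketch of the unaligned-field case it solves, with a toy packed struct:

```rust
// In a packed struct, `b` sits at offset 1 and is therefore unaligned.
#[repr(packed)]
struct P {
    a: u8,
    b: u32,
}

fn main() {
    let p = P { a: 1, b: 2 };

    // `&p.b` would create a reference to an unaligned field (UB);
    // addr_of! produces the raw pointer with no intermediate reference.
    let ptr = std::ptr::addr_of!(p.b);
    let v = unsafe { ptr.read_unaligned() };
    assert_eq!(v, 2);
    assert_eq!(p.a, 1);
}
```

Before `addr_of!`, getting that raw pointer without momentarily creating a reference was the footgun the comment above describes.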
> For example in C you can create an invalid pointer and it's fine as long as you don't access it.
Unfortunately, this is incorrect, though it illustrates why raw pointer manipulation is more fraught in C than it is in Rust. In C, using pointer arithmetic to cause a pointer to point outside the bounds of an array (save for one element past the end) is undefined behavior, even if you never dereference that pointer. In contrast, this is not undefined behavior in Rust. As another example, comparing pointers from two different allocations with less-than/greater-than is undefined behavior in C, but this is not undefined behavior in Rust.
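A short sketch of raw-pointer operations that are defined in Rust, per the point above (all calls are stable std APIs; the same manipulations are UB in C):

```rust
fn main() {
    let x = 7i32;
    let p: *const i32 = &x;

    // Creating a far out-of-bounds pointer is fine in Rust as long as it
    // is never dereferenced; `wrapping_add` even avoids the caveats that
    // apply to plain `add`.
    let q = p.wrapping_add(1_000_000);
    assert_ne!(q, p); // just a new address, never dereferenced

    // Ordering comparisons between pointers into different allocations
    // are safe in Rust; in C the analogous comparison is UB.
    let y = 9i32;
    let r: *const i32 = &y;
    let _ordered = p < r; // defined, whichever way it comes out
}
```

None of this requires an `unsafe` block; only dereferencing the pointers would.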
> There's no common misconception here. I think you're misunderstanding the quoted comment due to being overly pedantic.
I have seen this misconception arise regularly for years. If this is not what the parent commenter intended, then I apologize for misreading it. At the same time, I don't regret clarifying Rust's semantics for the benefit of people who may be unfamiliar with them. Surely it benefits us all to learn from each other.
If you view the locks on those operations as guard rails ensuring memory safety, GP's phrasing makes sense: The unsafe keyword disables them.
[0] In Rust, a smattering of those costs include:
- Explicit destruction (under the hood) of every object. It's slow.
- Many memory-safe programs won't type-check (impossible to avoid in any perfectly memory-safe language, but particularly annoying in Rust because even simple and common data structures get caught in the crossfire).
- Rust's "unsafe" is only a partial workaround. "Unsafe" is in some ways more dangerous than C because you don't _just_ have to guarantee memory safety; you have to guarantee every other thing the compiler normally automatically checks in safe mode, else your program has a chance of being optimized to something incorrect.
- Even in safe Rust, you still have a form of subtle data race possible, especially on ARM. The compiler forces a level of synchronization to writes which might overlap with reads, but it doesn't force you to pick the _right_ level, and it doesn't protect you from having to know fiddly details like seq_cst not necessarily meaning anything on some processors when other reads/writes use a different atomic ordering.
- Even in safe Rust, races like deadlocks and livelocks are possible.
- The constraints Rust places on your code tend to push people toward making leaky data structures. In every long-running Rust process I've seen of any complexity (small, biased sample -- take with a grain of salt), there were memory leaks which weren't trivial to root out.
- The language is extraordinarily complicated.
[1] Zig is memory-safe enough:
- "Defer" and "errdefer" cover 99% of use-cases. If you see an init without a corresponding deinit immediately afterward, that's (1) trivially lintable and (2) a sign that something much more interesting is going on (see the next point).
- In the remaining use-cases, the right thing to do is almost always to put everything into a container object with its own lifetime. Getting memory safety correct in those isn't always trivial, but runtime leak/overflow detection in "safe" compilation modes goes a long way, and the general pattern of working on a small number of slabs of data (much like how you would write a doubly-linked list in idiomatic Rust) makes it easy to not have to do anything more finicky than remember to deallocate each of those slabs to ensure safety.
> I haven't yet seen a language where full memory safety didn't come at an extraordinary cost
But like you said yourself there are many types of applications where full memory safety is very important.
> Explicit destruction (under the hood) of every object. It's slow.
Care to actually support this with data? C++ is quite similar in this respect (Rust has a cleaner implementation of destruction) and generally outperforms any GC language, because stack deallocation >> RC >> GC in terms of speed. There are also a lot of good properties of deterministic destruction vs non-deterministic, but generally Rust's approach offers the best overall latency and throughput in real-world code. And of course trivial objects don't get any destruction, due to compiler optimizations (they trivially live on the stack). And Zig isn't immune from this afaik - it's a trade-off you have to pick, and Zig should be closer since it's also targeting systems programmers.
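For context on the destruction model under debate, a minimal sketch of Rust's deterministic Drop. The toy `Resource` type is illustrative only, and this shows *when* destructors run, not how fast they are:

```rust
use std::cell::RefCell;

// Thread-local log so we can observe destruction order.
thread_local! {
    static DROPPED: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Resource(&'static str);

impl Drop for Resource {
    // Runs at a statically known point: the end of the owning scope.
    fn drop(&mut self) {
        DROPPED.with(|d| d.borrow_mut().push(self.0));
    }
}

fn main() {
    let outer = Resource("outer");
    {
        let _inner = Resource("inner");
    } // `_inner` is destroyed right here, deterministically
    DROPPED.with(|d| assert_eq!(*d.borrow(), vec!["inner"]));

    drop(outer); // explicit early destruction is just a normal call
    DROPPED.with(|d| assert_eq!(*d.borrow(), vec!["inner", "outer"]));
}
```

This determinism is the property being traded against GC; whether the per-object destructor calls are a measurable cost is the contested claim above.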
> - Many memory-safe programs won't type-check (impossible to avoid in any perfectly memory-safe language, but particularly annoying in Rust because even simple and common data structures get caught in the crossfire).
Actually, most memory-safe languages don't have issues expressing data structures (e.g. Java). And Rust has consistently improved its type checker to make more things ergonomic. Finally, if you define Rust as language + stdlib, which is the most common experience, those typical data structures are just there for you to use. So this is more of a theoretical problem than a real one for data structures specifically.
> Even in safe Rust, you still have a form of subtle data race possible, especially on ARM.
I agree that for the language it’s weird that this is considered “safe”. Of course it’s not any less safe than any other language that exposes atomics so it’s weird to imply this as something uniquely negative to Rust.
> Even in safe Rust, races like deadlocks and livelocks are possible.
I'm not aware of any language that can defend against this, as it's classically an undecidable problem if I recall correctly. You can layer in your own deadlock and livelock detectors where they're relevant to you, but this is not uniquely positive or negative to Rust, so again it's weird to raise it as a criticism of Rust.
> The constraints Rust places on your code tend to push people toward making leaky data structures. In every long-running Rust process I've seen of any complexity (small, biased sample -- take with a grain of salt), there were memory leaks which weren't trivial to root out.
I think you’re right to caution to take this with salt. That hasn’t been my experience but of course we might be looking at different classes of code so it might be more idiomatic somewhere.
> In the remaining use-cases, the right thing to do is almost always to put everything into a container object with its own lifetime
You can of course do that in Rust, boxing everything and/or putting it into a container, which removes 99% of all lifetime complexity. There are performance costs to doing that, of course, which may be why it's not considered particularly idiomatic.
My overall point is that it feels like you've excessively dramatized the costs of writing Rust to justify the argument that memory safety comes at excessive cost. The strongest version of the argument is that certain "natural" ways to write things run into the borrow checker as implemented today. (The next-generation borrow checker, which I believe is coming next year, will accept even more valid code you'd encounter in practice, although certain data structures, like doubly linked lists, will still require unsafe; those should be used rarely, if ever.)
Comparing stack deallocation vs. GC is kinda weird because it's not an either-or - many GC languages will happily let you stack-allocate just the same (e.g. `struct` in C#) for the same performance profile. It's when you can't stack-allocate that the difference between deterministic memory management and tracing GC becomes important.
Also, refcounting is not superior to GC in terms of speed, generally speaking, because GC (esp. compacting ones) can release multiple objects at once in the same manner as cleaning up the stack, with a single pointer op. Refcounting in a multithreaded environment additionally requires atomics, which aren't free, either. What refcounting gives you is predictability of deallocations, not raw speed. Which, to be fair, is often more important for perception of speed, as in e.g. UI where a sudden GC in the middle of a redraw would produce visible stutter.
In practice, tail latencies are much harder to control with GC than with RC implementations, which is what I was trying to communicate. This doesn't matter just for UI applications; it can also directly affect how much load your server can service. Ref counting in a multithreaded environment can use atomics, although biased ref counting is considered the state of the art for minimizing that cost (i.e. non-atomic RC on the owning thread, atomic RC for references shared across threads).
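The cost split the comment describes is directly visible in Rust's standard library: `Rc` uses plain integer refcounts and the compiler forbids sending it across threads, while `Arc` pays for atomic refcounts in exchange for `Send`. A minimal sketch (standard library only, no assumptions beyond that):

```rust
use std::rc::Rc;
use std::sync::Arc;
use std::thread;

fn main() {
    // Rc: non-atomic refcount, cheap clone, but !Send.
    // Moving `local` into a thread::spawn closure would be a compile error.
    let local = Rc::new(vec![1, 2, 3]);
    let local2 = Rc::clone(&local); // plain integer increment
    assert_eq!(Rc::strong_count(&local), 2);
    drop(local2);
    assert_eq!(Rc::strong_count(&local), 1);

    // Arc: atomic refcount, safe to hand to another thread.
    let shared = Arc::new(vec![1, 2, 3]);
    let shared2 = Arc::clone(&shared); // atomic increment
    let handle = thread::spawn(move || shared2.len());
    assert_eq!(handle.join().unwrap(), 3);
}
```

Biased refcounting, as mentioned above, is essentially an attempt to get `Rc`-like costs on the owning thread while keeping `Arc`-like semantics everywhere else.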
As for releasing multiple objects at once, I've yet to see that bear out as a real advantage in practice. The cost of walking the graph tends to dominate, vs. RC where you release precisely when unreferenced. And that's assuming you even use RC; often you RC at most the outermost layer, and everything internal is direct ownership. And if you really do need bulk release, use an arena allocator, which gives you that property without a GC collection pause. There's a reason there's no systems language that uses GC.
> The issue with destructors being slow is actually a well-known problem with C++, particularly on process shutdown when huge object graphs often end up being recursively destructed for no practical benefit whatsoever (since all they do is release OS resources that are going to be released by the OS itself when process exits).
If you want fast shutdown, just call _Exit(0) to bypass destructors of static, thread-local, and automatic storage duration objects. GC languages have a much worse problem: they make it really easy to leak resources during the execution of a long-running program. I'll take that over a slow shutdown anytime, especially since in practice, unless you've written really bad code, that "slow shutdown" remains negligible.
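Rust has the same escape hatch: `std::process::exit` terminates the process without running any `Drop` impls, mirroring C's `_Exit(0)`. A sketch of both paths (the `work` function and `DROPS` counter are illustrative names, not from any real codebase):

```rust
use std::process;
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times a destructor actually ran.
static DROPS: AtomicUsize = AtomicUsize::new(0);

struct Resource;
impl Drop for Resource {
    fn drop(&mut self) {
        DROPS.fetch_add(1, Ordering::SeqCst);
    }
}

fn work(fast_exit: bool) {
    let _r = Resource;
    if fast_exit {
        // Like C's _Exit(0): terminate immediately, skipping _r's Drop
        // and all other destructors on every thread.
        process::exit(0);
    }
    // Normal return: _r's destructor runs here.
}

fn main() {
    work(false);
    assert_eq!(DROPS.load(Ordering::SeqCst), 1); // Drop ran on the normal path
    // work(true) would end the process here without ever touching DROPS.
}
```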
There are a few systems languages that use GC, like Nim and D, of course with the option to do manual memory management where necessary and to allocate on the stack whenever possible. Nim also offers several different types of GCs and memory allocators, each of which can be more performant for different tasks. Maximum GC pause is also configurable, at the cost of temporarily using more memory than you should until the GC manages to catch up.
Of course, you can always manually craft arenas and such to be faster and avoid fragmentation, at the cost of much more effort.
Nim and D are also bad examples, as I'm not aware of any meaningful systems-level programs written in them; they have continuously failed to become mainstream. (Nim is mildly more successful in that it's broken into the 50-100 range of most popular languages, but that's already well into the tail, to the point where you can't even tell the difference between 50 and 100.)
You seem to not like any of them much, so I'll just briefly address a few of your points:
> Of course it’s not any less safe than any other language that exposes atomics so it’s weird to imply this as something uniquely negative to Rust
That wasn't the implication. Off the cuff: when you ask your average Rustacean what they think "no data races in safe Rust" means, do you honestly think they will write code that treats atomics with the level of respect they would in another language?
> Actually most memory safe languages don’t have issues expressing data structures (eg Java)
That was sloppy writing on my part. I left the implicit "without runtime overhead" in my head instead of writing it down.
> Memory leaks
This first one isn't a leak per se, but it's about the same from an end-user perspective [0]. Here's a fun example of that language complexity I was talking about (async not being very composable with everything else) as an example of a true leak [1]. Actix was still only probably/mostly leak-free starting from v3 [2].
Rust makes it easy to avoid UAF errors, but the coding patterns it promotes to make that happen, especially when trying to write fast, predictably performant data structures, strongly encourage the formation of leaks -- can't have a UAF if you never free.
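As a concrete instance of "can't have a UAF if you never free": a reference cycle through `Rc` is a genuine leak in 100% safe Rust. This is a toy sketch, not from the linked posts:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A node that can point at another node. Using Rc for the link makes a
// cycle possible, and a cycle keeps both refcounts above zero forever.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b)); // a -> b -> a: a cycle

    let probe: Weak<Node> = Rc::downgrade(&a);
    drop(a);
    drop(b);

    // All external handles are gone, yet the nodes are still alive:
    // entirely safe code, no UAF possible, and a true leak.
    assert!(probe.upgrade().is_some());
}
```

Breaking the cycle requires using `Weak` for one direction of the link, which is exactly the kind of extra ceremony the comment is pointing at.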
[0] https://blog.polybdenum.com/2024/01/17/identifying-the-colle...
[1] https://www.google.com/amp/s/onesignal.com/blog/solving-memo...
[2] https://paper.dropbox.com/published/Announcing-Actix-Web-v3....
I agree, from what you would expect from Rust, atomics are a weird safety hole. But that’s just because the bar for Rust is higher but if we’re comparing across languages we must use a consistent bar.
> This first one isn't a leak per se, but it's about the same from an end-user perspective [0]
This kind of stuff pops up in every language (e.g. C++'s vector and needing to call shrink_to_fit). Reusing allocations isn't a problem unique to Rust, and again, if you apply the same bar across languages, they all have similar issues. I'm sure Zig does too, if you go looking for similar kinds of footguns, especially as more code starts using it.
> Rust makes it easy to avoid UAF errors, but the coding patterns it promotes to make that happen, especially when trying to write fast, predictably performant data structures, strongly encourage the formation of leaks -- can't have a UAF if you never free.
There are so many cutting-edge, performant concurrent data structures available on crates.io that let you do cool stuff with respect to avoiding UAF without leaking memory when you really need it. Other times you don't need to worry about concurrency, and then the leak and UAF concerns go away too. Again, I feel like a higher bar is being applied to Rust, and it doesn't feel like Zig or other languages really offer more ergonomic solutions.
Memory safety is a useful concept, but it's not a panacea and it's not binary. If the end goal were safety alone, JS would have been fine. Safe Rust is guaranteed memory-safe, which is a huge improvement for systems programming, but not necessarily the end-all-be-all. There are always tradeoffs depending on the application. I personally think having safety be easily achievable is more important than having it guaranteed. The problem we've had with C and C++ is that it's been hard to achieve safety.
Zig is also a good choice if you care about safety: it simplifies things (by having a defer statement), and its tooling is geared toward safety, with multiple build modes that let you run your program in ways that catch memory-safety issues during development. This is not enforced by the compiler, only at runtime in development/non-release-fast builds, but it's still an improvement over C/C++.
Safety is a spectrum - C is less safe than C++, which is less safe than Zig, which is less safe than Rust, which is less safe than Java, which is less safe than Python. Undefined behavior and memory corruption are still possible in all of them, it's just a question of how easy it is to make it happen.
Java and Python both have access to unsafe operations (via sun.misc.unsafe/ctypes) but Java is multithreaded, which requires extra care, whereas Python is not.
Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`, which, as you note, is also possible in other languages). The Rust compiler, when you write normal Rust code, will prevent you from compiling code that uses memory incorrectly.
You can then solve the problem by figuring out how you're using the memory incorrectly, or you could just skip out on it by calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` for multi-threaded code, and have it effectively garbage-collected for you.
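The "skip out on it" route described above looks roughly like this in practice: `Rc<RefCell<T>>` for shared mutation in single-threaded code, `Arc<Mutex<T>>` across threads. A minimal sketch using only the standard library:

```rust
use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Single-threaded sharing: Rc for shared ownership, RefCell moves
    // the borrow checks to runtime so multiple handles can mutate.
    let counter = Rc::new(RefCell::new(0));
    let alias = Rc::clone(&counter);
    *alias.borrow_mut() += 1;
    assert_eq!(*counter.borrow(), 1);

    // Multi-threaded sharing: the Arc<Mutex<T>> pattern from the comment.
    let total = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let total = Arc::clone(&total);
            thread::spawn(move || *total.lock().unwrap() += 10)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*total.lock().unwrap(), 40);
}
```

The refcounts free each value once the last handle is dropped, which is the sense in which this is "effectively garbage-collected": no lifetimes to annotate, at the cost of runtime bookkeeping.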
In any case, this is orthogonal to safety. Rust gives you better safety than Python and Java, but at the cost of a more complex language in order to also give you the option of high performance. If you just want safety and easy memory management, you could use one of the ML variants for that.
> Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`
There is no "except if you" in this context. I'm talking about unsafe Rust, specifically. I'm not talking about safe Rust at all. Safe Rust is a very safe language, and equivalent in memory safety to safe Java and safe Python. So if that's your argument, you've missed the point entirely.
> In any case, this is orthogonal to safety.
No, it's not orthogonal - memory safety is exactly what I'm talking about. If you're talking about some other kind of safety, like null safety or something, you've again missed the point entirely.
> ... calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` ...
This whole paragraph is assuming the use of safe abstractions. If you're arguing that safe abstractions are safe, then, well... I agree with you. But I'm talking about raw pointers, so you're missing the point here.
Btw, Python also has unsafe APIs[1, 2, 3, 4] so this doesn't even differentiate these two languages from each other. Some of them are directly related to memory safety, and you don't even get an `unsafe` block to warn you to tread lightly while you're using them. Perhaps we should elevate Rust above Java and Python because of that?
[1]: https://docs.python.org/3/library/gc.html#gc.get_referrers
[2]: https://docs.python.org/3/library/ctypes.html
And now you're linking me docs talking about things I already explicitly mentioned in my past comments.
You are so confidently ignoring my arguments, and so fundamentally misunderstanding basic concepts, that this discussion has really just become exhausting. I hope you have a nice day but I won't be replying further.
Actually, Rust is safer because its unsafe features must be wrapped in the `unsafe` keyword, which is easy to search for; you can't say that about Java and Python.
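The auditability point is worth making concrete: operations like raw-pointer dereference simply won't compile outside an `unsafe` block, so every site needing manual review is findable with `grep -rn unsafe`. A tiny sketch:

```rust
fn main() {
    let xs = [10, 20, 30];
    let p = xs.as_ptr();

    // Raw-pointer dereference only compiles inside an `unsafe` block,
    // which marks exactly where manual review is required.
    let second = unsafe { *p.add(1) };
    assert_eq!(second, 20);

    // Outside `unsafe`, the same dereference is rejected by the compiler:
    // let second = *p.add(1); // error[E0133]
}
```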
You can do unsafe stuff using stdlib in either language, sure. But by this standard, literally any language with FFI is "not any less safe" than C. Which is very technically correct, but it's not a particularly useful definition.
As far as translation of Java or Python to safe Rust, sure, if you avoid borrow checking through the usual tricks (using indices instead of pointers etc), you can certainly do so in safe Rust. In the same vein, you can translate any portable C code, no matter how unsafe, to Java or Python by mapping memory to a single large array and pointers to indices into that array (see also: wasm). But I don't think many people would accept this as a reasonable argument that Java and C are the same when it comes to memory safety.
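The "indices instead of pointers" trick mentioned above can be sketched as a hypothetical minimal doubly linked list in fully safe Rust: nodes live in a `Vec` acting as an arena, and links are indices rather than references (the `Node`/`List` names here are illustrative):

```rust
// A doubly linked list in 100% safe Rust: no unsafe, no borrow-checker
// fights, because links are plain usize indices into one Vec.
struct Node<T> {
    value: T,
    prev: Option<usize>,
    next: Option<usize>,
}

struct List<T> {
    nodes: Vec<Node<T>>, // simple arena; dropping the Vec frees every node at once
    head: Option<usize>,
    tail: Option<usize>,
}

impl<T> List<T> {
    fn new() -> Self {
        List { nodes: Vec::new(), head: None, tail: None }
    }

    fn push_back(&mut self, value: T) -> usize {
        let idx = self.nodes.len();
        self.nodes.push(Node { value, prev: self.tail, next: None });
        match self.tail {
            Some(t) => self.nodes[t].next = Some(idx),
            None => self.head = Some(idx),
        }
        self.tail = Some(idx);
        idx
    }

    fn iter_values(&self) -> Vec<&T> {
        let mut out = Vec::new();
        let mut cur = self.head;
        while let Some(i) = cur {
            out.push(&self.nodes[i].value);
            cur = self.nodes[i].next;
        }
        out
    }
}

fn main() {
    let mut list = List::new();
    list.push_back(1);
    list.push_back(2);
    list.push_back(3);
    assert_eq!(list.iter_values(), vec![&1, &2, &3]);
}
```

The cost, per the comment above, is that a stale index is a logic bug the compiler can't see, which is exactly the wasm-style "pointers as array offsets" trade-off.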
Much harder to write Rust than Python, but definitely safer.
(Rust vs Java is much closer, but Java's nullable types by default and errors that are `throw`n not needing to be part of the signature of the function lead to runtime errors that Rust doesn't have, as well.)
You can absolutely opt-out of lifetime management in Rust. It's not usually talked about because you sacrifice performance to do it and many in the Rust community want to explicitly push Rust in the niches that C and C++ currently occupy, so to be competitive the developer does have to worry about lifetimes.
But that has absolutely nothing to do with Rust's safety, and the fact that Rust refuses to compile if you don't provide a proper solution means it's at least as safe as Python and Java on the memory front (really, it is safer, as I have already stated). Just because it's more annoying to write doesn't affect its safety; they are orthogonal dimensions to measure a language by.
If your goal is to ship something that kind of works to most users, there are certainly complex solutions that will do that. If your goal is memory safety, that's more like every device working as expected, which is achieved with less bloat, not more.
IMO, partially. But zig isn't done, so we probably can't judge that yet.
Now, Zig does have good memory safety. It's not at the level of JavaScript or Rust, but it's not like C either.
Last I checked -- a while ago now -- use-after-free was a major issue in Zig. IMO, that has to be addressed or Zig really has no future.
JavaScript really is a memory-safe language. But its runtime and level of abstraction don't work for "systems programming".
For systems programming, I think you want (1) memory safety by default with escape hatches; and (2) a "low" level of abstraction -- basically one step above the virtual PDP-11 that compilers and CPUs have generally agreed on to target. That's to let the programmer think in terms of the execution model the CPU supports without dealing with all the details. And as a kind of addendum to (2), it needs to interop with C really well.
Rust has (1) nailed, I think. (2) is where it's weak. The low level is in there, but buried under piles of language feature complexity. Also, it disallows some perfectly safe memory management patterns, so you either need to reach for unsafe too often, or spend time contorting the code to suit the solution space (rather than spending time productively, on the problem space).
Zig is weak on (1). It has some good features, but also some big gaps. It's quite strong on (2) though.
My hope for Zig -- don't know if it will happen or not -- is that it provides memory safety by default, but in a significantly more flexible way than Rust, and maintains its excellent characteristics for (2).
Be careful not to believe your own hyperbole. Some people are loudly and persistently recommending other people to use memory safe languages. Rust may be quite popular lately but the opinions held by some subset of that community does not reflect the opinions of "everyone". It would be just as silly to say: "everyone is suggesting to move to OSS licenses".
> shouldn't [... new projects ...] be done in a memory safe language
Again, please be careful to understand where this "should" is coming from. What happens, exactly, if you don't choose a memory-safe language? Will the government put you in jail? Or will a small, vocal community of language zealots criticize you?
Maybe you feel like you want to fit in with "real" programmers or something. And you have some impression that "real" programmers insist on memory safe languages. That isn't the case at all.
In my experience, making technical decisions (like what programming language to use) to avoid criticism is a really bad path.
C feels substantially different than Rust. It’s much smaller and less complicated. It’s technically statically typed, but also not in that it doesn’t really have robust non-primitive types. It’s a very flexible language and really good for problems where you really do have to read and write to random memory locations, rearrange registers, use raw function pointers, that sort of thing. Writing C to me feels a lot closer to Python sometimes than to Rust or C++. Writing algorithms can be easier because there is less to get in your way. In this way, there’s still a clear place for C. Projects that are small but need to be clever are maybe easier done in C than Rust. Rust is getting used more for big systems projects like VMs (firecracker), low level backends, and that sort of thing. But if I was going to write an interpreter I’d probably do it in C. Now, I’d do it in Zig.
However, while Zig, unlike Rust, does reject C++'s attempt to hide some low-level details and make low-level code appear high-level on the page (i.e. it rejects a lot of implicitness), it is (at least on its intrinsic technical merits) suitable for the same domains C++ is suitable for. It's different in the approach it takes, but it's as different from C as it is from C++.
Rust is deeply complex.
I don't really like the syntax, though. Python barely does Python-like syntax right.
Is Zig a good alternative? I vastly prefer higher-level languages like C#. Have has a special place in my heart, but it's not supported outside of a few game engines.
But I think the more interesting thing is that if you remove all features in C# that require heap allocations, the resulting subset is basically C with namespaces and generics, which is still useful, and certainly possible to compile efficiently even for very constrained platforms.
C# is quite a bit more than just enhanced C if you remove GC-reliant features: generics and interface constraints enable a huge subset of the language, alongside the SIMD API, stack-allocated buffers, and all sorts of memory wrappable in Span<T>, which almost every buffer-handling method in the standard library now accepts instead of plain arrays.
You can also manually allocate objects - there are multiple ways to go about it and even with the presence of GC, there is a "hidden" but supported API to register a NonGC heap that GC understands when scanning object references.
Effective targeting is limited to far fewer platforms, though. Mono can target as low as ARMv6, but CoreCLR/NativeAOT mainly work with x86, x86_64, ARM, and ARM64. For microcontrollers you are better off using Rust, in my opinion. But for anything bigger, .NET can be a surprisingly capable choice.
Is there a reason you are avoiding a simple compilation of Zig part of codebase into a dynamically or statically linked library and calling into it with P/Invoke? You only need a `Target` and maybe `None Include=...` items in .csproj to instrument MSBuild to hook the building process of Zig into dotnet build and co.
I'm just putting a disclaimer that WASM is a very wrong kind of suggestion here and would likely not work the way you expect (you would have to use WASM for the .NET side too, which is in many places experimental and a huge performance killer), and no one does it. There are appropriate ways to structure a solution that splits logic between .NET and C/C++/Rust/Zig/Swift/etc., especially since Zig offers a nice cross-compilation toolchain. Mind you, the use case for this is accessing language-specific libraries; for raw performance, the solution really is writing faster C# instead.
I want the editor to be usable in other GUI stacks, a C-compatible library is the only approach that makes sense here
https://github.com/dotnet/ClangSharp
I strongly caution against WASM suggestions in a sibling comment - I’m not even sure if the author has actually done any C# at all, given how ridiculous it is.
WASM is definitely a strange suggestion here, I didn’t take it seriously. I’m already using a C-compatible zig library approach. Some details of the use case here: https://news.ycombinator.com/item?id=41729059
Not liking the syntax is not enough reason to avoid a language. It takes a few days to get over the unfamiliarity of the syntax; the concepts are much harder to learn.
Syntax is a big deal.
C# looks like Java because Microsoft wanted to court Java devs.
I will admit that I prefer higher level languages since I don't care much for memory management. I just want to build cool things.
Where is C# not supported? It’s an incredibly versatile language. You even have a bunch of features to go “low-level” if needed (not as low-level as C of course, you still have the CLR): Span<T>, ref returns, ref struct, function pointer, unsafe keyword
Have should be Haxe.
Really though, I've found that jobs like compiler developer for an ANSI-standard compiler at big tech come with a lot of hazard pay for how disagreeable the work actually is, compared to a job with more freedom.