Indeed this is the first time I've heard of nsync, but Mike Burrows also wrote Google's production mutex implementation at https://github.com/abseil/abseil-cpp/blob/master/absl/synchr... I'm curious why this mutex implementation is absent from the author's benchmarks.
By the way, if the author farms out to __ulock on macOS, this could be achieved more simply by using the wait() and notify_one() member functions in libc++'s atomic library.
A while ago there was also a giant thread related to improving Rust's mutex implementation at https://github.com/rust-lang/rust/issues/93740#issuecomment-.... What's interesting is that there's a detailed discussion of the inner workings of almost every popular mutex implementation.
The current Rust mutex implementation (which that thread does talk about later) landed earlier this year; on Linux it's not (much?) different, but on Windows and Mac I believe it's new work.
That said, Mara's descriptions of the guts of other people's implementations are still interesting; just make sure to check whether they're outdated for your situation.
AFAIK one reason to switch was that mutexes on Linux and macOS were not guaranteed to be movable, so every Rust Mutex had to box the underlying OS mutex and was not const-constructible. So this was a considerable change.
std::sync::Mutex::new became const in 1.63, which was over two years ago.
Here's the 2024 rework of Windows Mutex: https://github.com/rust-lang/rust/pull/121956/
Edited to add: Since we're going back and forth, I read the blame log. The Linux implementation, with which I'm most familiar, is still ~85% Mara's, although with some meaningful changes from 2024.
> The Burrows-Wheeler transform is an algorithm used to prepare data for use with data compression techniques such as bzip2
Possibly because it's C++ (as opposed to C)? I am speculating.
MSVC 2022's std::mutex is listed, though. (That said, GCC's / clang's std::mutex is not listed for Linux or macOS.)
absl::Mutex does come with some microbenchmarks with a handful of points of comparison (std::mutex, absl::base_internal::SpinLock) which might be useful to get an approximate baseline.
https://github.com/abseil/abseil-cpp/blob/master/absl/synchr...
I was talking about the wait() and notify_one() member functions for std::atomic. See https://en.cppreference.com/w/cpp/atomic/atomic/wait and https://en.cppreference.com/w/cpp/atomic/atomic/notify_one
We did not, however, run on one server for any length of time.
However, it has added logic to start chunking through the queue after 30 wakeups from a waiter. With that, waiting isn't indefinite, but it also isn't fair.
I have no idea how that compares to other implementations' fairness. I know the JDK recently abandoned fairness because of the overhead and complexity it added around mutex handling.
A specifically fair version of any concurrent primitive that can be fair is probably worth giving a distinct name, the way you'd offer both a stable and an unstable sort: the unstable sort will often (though not always) be faster, but some people cannot tolerate an unstable sort, so it's useless for them.
On the other hand, maybe it's an attractive nuisance and if you offered this you'd find most users took your FairMutex, and then were angry because of the inevitable problems from fairness, even though the unfair one was right there...
The importance of fairness varies with the algorithm. For instance, one that alternates between being CPU-bound and disk-bound may use N_workers >>> N_cores. These workers might not care about individual starvation at all.
At the other end, consider a strictly CPU-bound algorithm with N_workers==N_cores, and a rogue-ish worker that ends up hogging an important mutex by spinning around it (e.g. the one written by the author of the benchmark). If the other well-behaved workers briefly lock the mutex once prior to each job, but end up getting woken up only once every 30 locks (as this implementation apparently does), you might end up with core utilization <<< 100%.
This in turn means that APIs that let you customize the behavior can win out even if they're not #1 on any benchmark.
It's only recently that I've been actively looking into different locking algorithms, just because I've been diving in head-first to a lot of pure concurrency and distributed computing theory, a lot of which is about figuring out clever ways of doing mutexes with different tradeoffs.
I've gotten a lot better with them now, and while I still personally will gravitate towards messaging-based concurrency over locks, I do feel the need to start playing with some of the more efficient locking tools in C, like nsync (mentioned in this article).
[1] Before you give me shit over this, generally the replacement code runs at roughly the same speed, and I at least personally think that it's easier to reason about.
One of the best books I've read on it is Java Concurrency in Practice [1]. It does an excellent job of dispelling these occult beliefs and letting the reader know exactly when and how concurrency should be implemented. It is applicable to more languages than just Java, especially since many have adopted large parts of the Java memory model.
The worst things I usually find when reviewing concurrent code are people not using locks when they should, using locks when they shouldn't, or having inconsistent data guards. I've seen people throw in random locks to guard local non-shared state, which is just crazy town: "Multiple threads are running this code, so I'm adding a lock".
I certainly prefer message passing over shared state. However, it's a little baffling to me why it's so hard for devs to grasp how to properly maintain shared state. Instead of just learning the basic rules, it gets couched in "It's just too hard to understand so keep adding things until it works".
[1] https://www.amazon.com/Java-Concurrency-Practice-Brian-Goetz...
Probably because most people aren't aware that there are basic rules to be learned. I'd imagine the typical experience is, you're very familiar with single-threaded code, and now you're trying to let other threads work with your data. You have heard that there are many pitfalls, and that there are special-purpose tools like mutexes to avoid those, but you look at the examples and find them mostly baffling. "Why do they perform these incantations for this data but not that data, or in this place but not that place?" So you come up with some weird mental model and move on with your life, never aware that there are underlying principles for maintaining shared state.
Personally, I didn't understand mutexes very well at all, until I started looking into what the atomic memory orderings from C++ et al. were supposed to mean.
For me the biggest challenge when sharing state is that the only benefit I can see for parallelism is performance, so if I'm not gaining performance there is no reason to use parallelism. If I use coarse-grained mutexes then I end up with code that is straightforward to reason about, but I lose the performance benefit and can in fact end up with code slower than single-threaded.
If I use very fine grained mutexes then I end up with faster code that has very hard to find bugs that happen on very rare occasion.
And then on top of that even if you do write correct fine grained locking, you can still end up with slow code due to cache behavior such as false sharing and cache coherence.
So ultimately I disagree that writing parallel code is simple unless you're willing to give up performance in which case you may as well just stick to single threaded code or use parallelism among independent data. Writing correct parallel software that shares state and actually delivers substantial performance benefits is incredibly difficult, and I am skeptical that there is a set of simple rules that one can simply read about.
The actual rules are completely terrifying because they involve the physics of microprocessors. If you've watched Grace Hopper's lectures where she gives out physical nanoseconds (pieces of wire that are the same length as the distance light travels in a nanosecond, thus, the maximum possible distance data could travel in that time) you can start to appreciate the problem. It is literally impossible for the intuitive Sequentially Consistent model of how computers work to apply for today's fast yet concurrent processors. Light is too slow.
However generally people mean either Java's memory model or the C++ 11 (and subsequently 14, 17, 20) memory models used in languages such as C++, C and Rust. Those rules are less terrifying but still pretty complicated and the programming language promises to somehow provide an environment where these rules (not the terrifying ones) are all you need to know to write software. So that's nice.
It can be simple to write parallel code in a language designed to make that easy, even if there's shared data. It only gets tricky if the shared data is modified; so long as it isn't, we can make copies of it safely, and modern CPUs will do that without actual work by the programmer.
A language with a notion of threading and shared state is going to have something akin to read/write barriers built into the language memory model to tame the beast.
Correct synchronization is based on the happens-before relation; a data race is defined as a write and a conflicting read or write such that neither happens-before the other. Within a thread, happens-before is just regular program order. Across threads, the main relevant happens-before edge is that a release-store on a memory location happens-before an acquire-load on that memory location (this can be generalized to any memory location if they're both sequentially consistent, but that's usually not necessary).
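That release/acquire pairing can be sketched in Java (names here are illustrative, not from any comment in this thread): a plain field is published safely because the release-store on the flag happens-before the acquire-load that observes it.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class Publish {
    static int data = 0;                       // plain, unsynchronized field
    static final AtomicBoolean ready = new AtomicBoolean(false);

    static int demo() throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;                         // (1) plain write
            ready.setRelease(true);            // (2) release-store: orders (1) before (2)
        });
        writer.start();
        while (!ready.getAcquire()) {          // (3) acquire-load: spin until (2) is visible
            Thread.onSpinWait();
        }
        writer.join();
        return data;                           // (1) happens-before this read: always 42
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());            // 42
    }
}
```

If `ready` were a plain boolean instead, the read of `data` would race with the write and could observe 0.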
The real cardinal rule of concurrent programming is to express your semantics at the highest possible level of what you're trying to do, and find some library that does all the nitty-gritty of the implementation. Can you express it with fork-join parallelism? Cool, use your standard library's implementation of fork-join and just don't care about it otherwise.
It's C so you don't get a comprehensive set of bells, whistles and horns like the C++ standard library, but the actual model is the same. At a high level it's all the same as C++ 11, the details are not important to most people.
You may want to consider https://marabos.nl/atomics/ for an approachable overview that's still quite rigorous.
I'd suggest the book in my original comment, Java concurrency in practice.
> If I use very fine grained mutexes then I end up with faster code that has very hard to find bugs that happen on very rare occasion.
I agree this is a real risk if you are doing fine grained mutexes. But the rules are the same whether or not you want to follow them. If you have shared state (A, B, C) and you want to do a calculation based on the values of (A, B, C) then you need a mutex which locks (A, B, C). Certainly, that becomes a problem if you have calculations that just require (A, C) and you might want to avoid locking for B. In that case, you need a more complicated mechanism for locking than just simple mutexes, which is certainly easy to get wrong. When the (A, B, C) actions happen you have to ensure that the (A, C) actions can't happen at the same time.
This isn't a complicated rule, but it is one that can be hard to follow if you are trying to do super fine grained locking. It's even trickier if you are going to abuse the platform to get correct results.
But fine v coarse isn't the problem I'm referring to when I say people get the simple rules wrong. Rather, than worrying about fine vs coarse grained locking, I very frequently see code where mutexes and concurrency primitives are just peppered everywhere and haphazardly. We might call that super coarse grained.
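As a hypothetical Java sketch of the rule above (illustrative names): one lock object guards the whole (A, B, C) tuple, so any composite calculation over them sees a consistent snapshot.

```java
public class Guarded {
    private int a = 0, b = 0, c = 0;
    private final Object lock = new Object();  // guards a, b, and c together

    void update(int na, int nb, int nc) {
        synchronized (lock) {                  // writers take the same lock
            a = na; b = nb; c = nc;
        }
    }

    int sum() {
        synchronized (lock) {                  // composite read under one lock:
            return a + b + c;                  // never sees a half-applied update
        }
    }

    public static void main(String[] args) {
        Guarded g = new Guarded();
        g.update(1, 2, 3);
        System.out.println(g.sum());           // 6
    }
}
```

Reading `a`, `b`, and `c` under three separate lock acquisitions would be exactly the inconsistent-guard bug described above.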
Aside from performance, another very common reason is to not lock the UI from the user. Even in UI-less programs, the ability to abort some operation which is taking too long. Another is averaging out performance of compute tasks, even in the case where it would be faster to handle them sequentially. Without some degree of parallelism these things are not possible.
Consider a web server. Without parallelism, every single request is going to completely lock the program until it's complete. With parallelism, you can spawn off each request and handle new ones as they come in. Perceived performance for the majority of users in this case is significantly improved, even on a single-processor system - e.g. you have 99 requests which each take a single second, and one which takes 101 seconds. Total request time is 200 seconds / 100 requests = 2 seconds average per request, but if that 101-second request comes in first, the other 99 are blocked behind it, so the average is now > 100 seconds per request ...
This is not a good fit for parallelism; this is pretty much always accomplished using concurrency, i.e. async/await.
I feel like with a queue or messaging abstraction, it can be easier to debug. Generally your actual work is being done on a single thread, meaning that traditional debugging tools work fine, and as I've said in sibling comments, I also just think it's easier to reason about what's going on.
Sometimes though, the newbie is going to write the clearest documentation.
Then I had to explain my code to other devs, either before or after they broke it, and over and over I got the message that I was being too clever. I’ve been writing Grug-brained concurrent code for so long I’m not sure I can still do the fancy shit anymore, but I’m okay with that. In fact I know I implemented multiple reader single writer at least a few times and that came back to me during this thread but I still can’t remember how I implemented it.
It works pretty well, and I'm getting decent speeds, but I'm really scared someone is going to come and "fix" all my code by doing it the "normal" way, and thus slow everything down. I've been trying to comment the hell out of everything, and I've shared the PlusCal spec, but no one else on my team knows PlusCal and I feel like most engineers don't actually read comments, so I think it's an inevitability that my baby is killed.
Unfortunately Microsoft enshrined the situation (on Windows, on their compiler, on x86 and x86-64 only) that volatile primitive types are effectively atomics with Acquire-Release ordering. This is of course awkward when Microsoft tries to bring people to a non-x86 architecture and it can't just give them this because it would suck really hard, so finally they have to grow up and teach their developers about actual atomics.
!! Edited to fix: Previously this said Relaxed ordering, the ordering guaranteed by Microsoft is in fact Acquire-Release, hence it's expensive to provide for architectures where that's not the default.
(As a contrast, note that in Java the "volatile" keyword can be used for multithreading, but again this does not apply to C/C++.)
C/C++ also went further and adopted a few different notions of atomic variables, so you can choose between a sequentially-consistent atomic variable, a release/acquire atomic variable, a release/consume atomic variable (which ended up going unimplemented for reasons), and a fully relaxed atomic variable (whose specification turned out to be unexpectedly tortuous).
So it's not that you have a "release/acquire atomic variable"; you have an atomic variable, and it so happens you choose to do a Release store to it. In other code maybe you do a Relaxed fetch from the same variable; elsewhere you have a compare-exchange with different ordering rules.
Since we're talking about Mutex here, here's the entirety of Rust's "try_lock" for Mutex on a Linux-like platform:
self.futex.compare_exchange(UNLOCKED, LOCKED, Acquire, Relaxed).is_ok()
That's a single atomic operation, in which we hope the futex is UNLOCKED; if it is, we store LOCKED to it with Acquire ordering, but if it wasn't, we use a Relaxed load to find out what it was instead of UNLOCKED. We actually don't do anything with that load, but the ordering for both operations is specified here, not when the variable was declared.
I should point out that for more than half of my professional career, those facilities did not exist, so volatile was the most portable way of implementing e.g. a spinlock without the compiler optimizing away the check. There was a period, after compilers began aggressively inlining and before C11 came out, in which it could be quite hard to otherwise convince a compiler that a value might change.
It does kinda sorta work on x86 due to its much-stronger-than-usual guarantees wrt move instructions even in the absence of explicit barriers. And because x86 was so dominant, people could get away with that for a while in "portable" code (which wasn't really portable).
TL;DR: The compiler can reorder memory accesses and the CPU can reorder memory accesses. With a few notable exceptions, you usually don't have to worry about the latter on non-SMP systems, and volatile does address the former.
The volatile qualifier makes any reads or writes to that object a side-effect. This means that the compiler is not free to reorder or eliminate the accesses with respect to other side-effects.
If you have all 3 of:
A) A type that compiles down to a single memory access
B) within the same MMU mapping (e.g. a process)
C) With a single CPU accessing the memory (e.g. a non-SMP system)
Then volatile accomplishes the goal of making reads/writes to a shared value visible across multiple threads. This is because modern CPUs don't have any hardware concept of threads; a thread switch is just an interrupt that happens to change the PC and stack pointer.
If you don't have (A) then even with atomics and barriers you are in trouble and you need a mutex for proper modifications.
If you don't have (B) then you may need to manage the caches (e.g. ARMv5 has virtually tagged caches so the same physical address can be in two different cache lines)
If you don't have (C) (e.g. an SMP system) then you need to do something architecture specific[1]. Prior to C language support for barriers that usually means a CPU intrinsic, inline assembly, or just writing your shared accesses in assembly and calling them as functions.
Something else I think you are referring to is if you have two shared values and only one is volatile, then the access to the other can be freely reordered by the compiler. This is true. It also is often masked by the fact that shared values are usually globals, and non-inlined functions are assumed by most compilers to be capable of writing to any global so a function call will accidentally become a barrier.
1: As you mention, on the x86 that "something" is often "nothing." But most other architectures don't work that way.
The biggest part of mutexes and how to properly use them is thinking of the consistency of the data that you are working with.
Here's a really common bug (pseudocode):
if (lock {data.size()} > 0) {
value = lock { data.pop() }
lock { foo.add(value) }
}
The issue here is that size can change, pop can change, and foo can change in unexpected ways between each of the acquired locks. The right way to write this code is:
lock {
if (data.size() > 0) {
value = data.pop()
foo.add(value)
}
}
That ensures the data is all in a consistent state while you are mutating it. Now, what makes this tricky is that someone well-meaning might have decided to push the lock down into a method.
Imagine, for example, you have a `Foo` where all methods operate within a mutex.
This code is also (likely) incorrect.
value = foo.bar()
if (value.bat()) {
foo.baz(value)
}
The problem here is exactly the same problem as above. Between `foo.bar()` and `foo.baz()` the state of foo may have changed such that running `foo.baz(value)` is now a mistake. That's why the right thing to do is likely to have a `foo.barBaz()` method that encapsulates the `if` logic to avoid inconsistency (or to add another mutex). In Java, the most common manifestation (that I see) of this looks like this:
var map = new ConcurrentHashMap();
if (map.get(foo) == null)
map.put(foo, new Value());
Because now, you have a situation where the value of `foo` in the map could be 2 or more values depending on who gets it. So, if someone is mutating `Value` concurrently you have a weird, hard-to-track-down data race. The solution to this problem in Java is:
map.computeIfAbsent(foo, (unused)->new Value());
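As a runnable illustration of the difference (hypothetical names): ConcurrentHashMap guarantees the mapping function passed to computeIfAbsent runs at most once per key, so even racing threads construct a single value, whereas the check-then-put version can construct several.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheDemo {
    static final AtomicInteger created = new AtomicInteger();

    static int demo() throws InterruptedException {
        ConcurrentHashMap<String, Object> map = new ConcurrentHashMap<>();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                map.computeIfAbsent("foo", k -> {
                    created.incrementAndGet();  // mapping function: runs at most once per key
                    return new Object();
                });
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return created.get();                   // 1: exactly one value was built
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());             // 1
    }
}
```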
And computeIfAbsent can end up holding the lock for too long if the function is slow.
> And computeIfAbsent can end up holding the lock for too long if the function is slow.
How is this different from any other lock-holding code written anywhere?
And you end up having to trade single core performance for multi core by deciding to speculatively calculate the object. If there’s no object to make the critical section is very small. But as the object sprouts features you start smashing face first into Amdahl.
Not true in any practical sense.
> And you end up having to trade single core performance for multi core by deciding to speculatively calculate the object.
What is the alternative you suggest? If you care about having the predicate actually hold, and you also don't want to have to hold the lock while constructing the object, then you're going to end up in an optimistic-concurrency scenario where you check the predicate under lock, compute the object, and check again before swapping the value in. You may end up having to throw your work away when you discover the predicate changed. Java is no better nor worse at doing this than anything else.
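That check/compute/swap loop might be sketched in Java with an AtomicReference as the slot (illustrative names, not from any particular library): compute off-lock, CAS the result in, and throw the work away and retry if the value changed underneath you.

```java
import java.util.concurrent.atomic.AtomicReference;

public class Optimistic {
    static final AtomicReference<Integer> slot = new AtomicReference<>(0);

    static int addExpensive(int delta) {
        while (true) {
            Integer seen = slot.get();           // 1. check the current value, no lock held
            Integer next = seen + delta;         // 2. compute (possibly slowly) off-lock
            if (slot.compareAndSet(seen, next)) {// 3. swap in only if nothing changed
                return next;
            }
            // else: someone else won the race; discard `next` and retry
        }
    }

    public static void main(String[] args) {
        System.out.println(addExpensive(5));
    }
}
```

The cost model is exactly the trade-off described above: no lock is held during the computation, but under contention some computed values are discarded.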
This is going to put a damper on any further conversation.
Even with coarsening and elision, every synchronized method takes a lock on the enclosing object.
OpenTelemetry is a fairly recent library. Even if you ignore some test fakes (where, let's face it, who cares), it still uses it in a few places, and uses lock objects in others. I don't see much evidence of recursion going on with the former. But that's how things always start and later there's running and screaming.
I wonder where this patently bad idea of every object carrying its own publicly accessible mutex originated in the first place. Did Java introduce it, or did it also copy that from somewhere else? And what was the motivation?
.NET also has MethodImplOptions.Synchronized like Java does. However, the only place I have ever seen this attribute was on TextWriter.Synchronized implementation in CoreLib and nowhere else.
Java itself has `Lock` and `Condition`. In the end, most synchronization primitives do the same high-level actions and bound to end up having similar API.
As for `lock(this)`, much like with many other historically abused techniques that have become frowned upon - it's not bad per se if you own the type and know that it is internal and will not be observed outside of the assembly it is defined in, provided it is small enough. It's footgun-prone, but generally very few code paths will lock an arbitrary object instance at all, so most of the time it's something you see so rarely it has become "just write a comment why and move on" when using it. Of course this requires more deliberation and it's easier to default to blanket policies that ignore context. It can be difficult to get people to "use the appropriate tool" mentality.
.NET is also getting its own separate `Lock` type, on top of all the existing synchronization primitives, to move a little further away from other legacy aspects of `lock`ing on object instances.
In practice this means that unless you can guarantee that you never, ever leak a reference anywhere, you don't know who else might be locking it. Which makes it impossible to reason about possible deadlocks. So the only sane way to manage it is to have a separate object used just for locking, which is never ever passed outside of the object that owns the lock.
And yes, this is absolutely bad design. There's no reason why every object needs a lock, for starters - for the vast majority of them, it's just unnecessary overhead (and yes, I know the monitors are lazily created, but every object header still needs space to store the reference to it). Then of course the fact that it's there means that people take the easy path and just lock objects directly instead of creating separate locks, just because it's slightly less code - and then things break. It's almost always the wrong granularity, too.
Thing is, I haven't seen this design anywhere outside of Java and .NET (which copied it from Java along with so many other bad ideas). Everybody else uses the sane and obvious approach of creating locks explicitly if and when they are needed.
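The "separate object used just for locking" pattern described above looks like this in Java (a minimal sketch): because `lock` is private and never escapes the class, no outside code can synchronize on it, which keeps deadlock reasoning local.

```java
public class Counter {
    // Private monitor: never passed out, so only this class can lock it.
    private final Object lock = new Object();
    private long count = 0;

    void increment() {
        synchronized (lock) {                  // never `synchronized (this)`:
            count++;                           // callers may lock `this` themselves
        }
    }

    long get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        for (int i = 0; i < 3; i++) c.increment();
        System.out.println(c.get());           // 3
    }
}
```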
Right. Synchronized is the key word here. The vast majority of code doesn't involve synchronized, and therefore the vast majority of objects don't have locks associated with them. That's quite important.
Those classes which do use synchronized were just going to create a ReentrantLock held for the duration of the call anyway, in which case it's all monitorEnter and monitorExit, regardless.
> This is going to put a damper on any further conversation.
Sadge.
Oops, I need to correct myself!
ReentrantLock doesn't depend upon monitorEnter/Exit, but rather AbstractQueuedSynchronizer and LockSupport, which ultimately delegate to Unsafe methods like park/unpark and CAS (*compareAndSet*). Don't know why I had that confused in my head.
In any case, the point holds that "synchronized" as a language feature has mostly a zero cost for code that doesn't use it. It's a red herring when discussing modern Java concurrency.
value = nil
lock {
if (data.size() > 0) {
value = data.pop()
}
}
if (value) {
foo.add(value)
}
{
mutex.lock();
defer mutex.unlock();
// Do mutex things
}
It's possible to get this wrong still, of course, but both the anonymous scope and the use of `defer` make it easier to get things right. Nothing can prevent poor engineering around mutex use, though. I'd want a critical path for a concurrent hashmap to look like this:
{
shared_map.lock();
defer shared_map.unlock();
if (shared_map.getOrNull(foo) == null) {
shared_map.put(foo, new_val);
}
}
Where the SharedMap type has an internal mutex, and a way to check it, and all operations panic if no lock has been acquired. It could have `shared_map.lockAndGet(OrNull?)(...)`, so that the kind of problem pattern you're describing would stand out on the page, but it's still a one-liner to do an atomic get when that's all you need the critical path to perform. I don't think these invariants are overly onerous to uphold, but one does have to understand that they're a hard requirement.
foo.lock {
value = foo.bar() // foo.lock within this method is ignored
if(value.bat()) {
foo.baz() // foo.lock within this method is ignored
}
}
Also, to catch this bug early, you could assert that foo is locked in `value.bat` or something, though that may or may not be feasible depending on how the codebase is structured. In regards to Java, at this point I almost never use the `synchronized` keyword anymore and instead (if I can't easily map to some kind of queuing abstraction) use the ReentrantLock object, simply because of the ability to have lock acquisition time out, and also letting you opt in to fairness if you'd like. It's not as pretty, but it's more flexible, and as far as I'm aware it doesn't affect performance much.
For the most part, though, in Java, you can get away without (explicit) locks by simply abusing the built-in data structures. I know they're using their own synchronization techniques behind the scenes, but I trust those to be correct more than some ad-hoc stuff I'd write as an engineer.
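A minimal sketch of those two ReentrantLock features mentioned above, opt-in fairness and timed acquisition (hypothetical names):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLock {
    // true = fair mode: waiters acquire roughly in arrival order
    static final ReentrantLock lock = new ReentrantLock(true);

    static boolean tryWork() throws InterruptedException {
        // Wait at most 100 ms instead of blocking indefinitely.
        if (!lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            return false;                       // timed out; caller can back off or retry
        }
        try {
            return true;                        // critical section would go here
        } finally {
            lock.unlock();                      // always release, even on exceptions
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(tryWork());          // true: the lock was uncontended
    }
}
```

The try/finally shape stands in for what `synchronized` gives you for free, which is the usual cost of trading the keyword for the more flexible API.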
Still there are a lot of things that can go wrong with mutexes: forgetting to unlock in the case of exceptions, priority inversion, recursive locking, deadlock, needlessly high contention, etc.
Java has had an excellent concurrency runtime with abstractions that are typically a better fit than a bare mutex for over 20 years now (c.f. Doug Lea). Synchronized still exists, because of Java's excellent backwards compatibility.
The problem with lock _hierarchies_ as a concept is that a lock really should represent serialization of access to a particular pool of data, and should make no assumptions that it being held implies some other lock's domain is also held. The code that results when people do not maintain this kind of rigor is quite terrible, but hierarchies tend to steer people into thinking that way because they imply recursively taking locks.
Stated differently: locks should be taken and released in a fixed order - so locks are ranked - but there should not be a model where all lower-ranked locks must be held for a given lock to be taken. The lock protects its domain and the ordering of take and release is to prevent deadlock, but there's no requirement for completeness.
Agreed. I find things like LMAX Disruptor much easier to reason about.
If I am allowed to use libraries, I end up using Vert.x for nearly everything. I think that their eventbus abstraction is easy enough to reason about, and even without using it simply using the non-blocking stuff it provides ends up being pretty handy.
Most of the mutex tragedies I have seen in my career have been in C, a useless language without effective scopes. In C++ it's pretty easy to use a scoped lock. In fact I'd say I have had more trouble with people who are trying to avoid locks than with people who use them. The avoiders either think their program order is obviously correct (totally wrong on modern CPUs) or that their atomics are faster (wrong again on many CPUs).
More or less, yeah. You can write an MPSC queue that doesn't explicitly use a lock (or even anything that looks like a lock).
It just abstracts this detail away for me, and I personally trust the libraries implementing these abstractions to be more correct than some ad hoc thing I write. It's an abstraction that I personally find a lot easier to reason about, and so my thinking is this: if my reasoning is more likely to be correct because of the easier abstraction, and the internal synchronization is more likely to be correct, then it's more likely that my code will be correct.
I don't do super low-level stuff at all, most of my stuff ends up touching a network, so the small differences between the built-in synchronized structures vs the regular ones really don't matter since any small gains I'd get on that will be eaten the first time I hit the network, so a considerably higher ROI for me is almost always figuring out how to reduce latency.
If I did C or C++, I'd probably have different opinions on this stuff.
Mutexes can be handled safely in C. It's "just another flavor" of resource management, which does take quite a bit of discipline. Cascading error paths / exit paths help.
You misspelled “fast as fuck” and “lingua franca of all architectures.”
Kind of. If you can architect such that each channel has exactly 1 reader and 1 writer, you can send messages in a single direction with no locks. The basic idea is that you have a circular buffer with a start index and an end index. The writer can write an element and increment the end index (as long as (end+1) mod capacity != start, i.e. the buffer isn't full), while the reader can read an element and increment the start index (as long as start != end, i.e. the buffer isn't empty). This strategy needs atomic operations (which are basically free when uncontested, which they will be as long as the queue has a few elements in it).
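A minimal Java sketch of such a single-producer/single-consumer ring buffer (illustrative, not production code): only the producer writes `tail` and only the consumer writes `head`, so release-stores and acquire-loads suffice with no CAS or lock.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SpscQueue {
    private final int[] buf;
    private final AtomicInteger head = new AtomicInteger(0); // written by consumer only
    private final AtomicInteger tail = new AtomicInteger(0); // written by producer only

    SpscQueue(int capacity) {
        buf = new int[capacity + 1];             // one slot kept empty to distinguish full/empty
    }

    boolean offer(int v) {                       // producer side
        int t = tail.get();
        int next = (t + 1) % buf.length;
        if (next == head.get()) return false;    // full
        buf[t] = v;                              // write the element first...
        tail.setRelease(next);                   // ...then publish it with a release-store
        return true;
    }

    Integer poll() {                             // consumer side
        int h = head.get();
        if (h == tail.getAcquire()) return null; // empty (acquire pairs with the release above)
        int v = buf[h];
        head.setRelease((h + 1) % buf.length);   // free the slot for the producer
        return v;
    }

    public static void main(String[] args) {
        SpscQueue q = new SpscQueue(4);
        q.offer(1); q.offer(2);
        System.out.println(q.poll() + ", " + q.poll()); // 1, 2
    }
}
```

Note this only works with exactly one producer thread and one consumer thread, as the comment says; a second writer would race on `tail`.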
Shared-nothing message passing reflects the underlying (modern) computer architecture more closely, so I'd call the above a good move. Shared memory / symmetric multiprocessing is an abstraction that leaks like a sieve; it no longer reflects how modern computers are built (multiple levels of CPU caches, cores, sockets, NUMA, etc).
Vice versa, if you do pure message passing you are not benefiting from hardware-accelerated cache coherency and are leaving performance (and usability) on the floor.
(like Alpine use of musl)
- uncontended, mildly contended, and horrifically contended
- short and long critical sections
- contention among small numbers of threads and large numbers of threads
- contention that happens when other locks are held recursively and those are also contended on (like, thread A wants a lock held by thread B, but thread B is trying to get a lock held by thread C)
Different lock algos work well or badly depending on how the locks are used, so it’s important to pick real programs as your benchmark rather than trying to cook a synthetic benchmark.
[edit - I forgot to even mention one of the most important things: locks that perform poorly under contention can have really negative systemic effects, like hot-spotting the memory network, and that would show up here too]
Think of this like the random input case for a sort benchmark. Do I want stuff like all-ascending, all-descending, zig-zag and so on? Sure, those are nice. But without the random input case the benchmark is not very helpful. I might sort a zig-zag, I might sort data that's already in ascending order, but I will sort random data, that's going to happen or else I would not need a sort function.
I'm not sure I agree with this assessment. I can think of a few cases where you might end up with a bunch of threads contending for the same mutex.
A simple example would be something like concurrently populating some data structure (list/dict/etc). Yes, you could accomplish this with message passing, but that uses more memory and would be slower than just having everything wait to write to a shared location.
Nope.
Message passing has allocation pressure and cache consistency pressure not present in using a shared message location. Especially as the amount of memory in question goes up, the benefit of a shared location increases in terms of the performance impact.
Sure, for something silly like writing to an int, there is negative benefit in a shared location, but when you start talking about a dictionary with 1 million entries, the shared location becomes much more of a benefit vs all the copying and allocating you'd have to do if you tried to do the same thing with message passing.
For some datastructures, it's the optimal way to move data around. For example, LMAX disruptor is about the fastest way to pass messages because of the shared memory and well tuned locks.
With dictionaries, you may have to do a bit of resizing and rehashing at the end. But in my benchmarks, it is still worthwhile.
Depends on the list implementation. I assume we are talking C++ lists? In that case, yeah, I can see how you'd instead do a `splice` of sublists.
I was thinking more in terms of something like a `vector` in which case adding the sublists in requires copying values from one vector to the source vector. That (especially if done wrong) can involve allocating potentially more than once to drop the results in.
But, for the record, there are lock free algorithms that don't require a mutex to add values into a list. You basically CAS the tail pointer with your new value.
That being said, I concede that for a list a mutex is probably the wrong choice. It's a better choice in Java which has constraints that prevent (easily) splicing together 2 lists.
For the dictionary, implementation is key. A good segmented dictionary is going to be hard to beat.
For example, some thread safe dictionaries will segment their underlying key/value pairs which allows them to have concurrent reads and writes for a given segment which significantly improves performance.
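The segmenting idea reduces to lock striping at its core. A sketch (structure and names are hypothetical, not any particular library): hash the key to one of N independent mutexes, so writers touching different segments of the table never contend with each other.

```c
#include <pthread.h>

/* Lock striping sketch: one mutex per segment of the table. */
#define NSTRIPES 16

typedef struct {
    pthread_mutex_t stripe[NSTRIPES];
} striped_locks;

static void striped_init(striped_locks *s) {
    for (int i = 0; i < NSTRIPES; i++)
        pthread_mutex_init(&s->stripe[i], NULL);
}

/* Pick the mutex guarding the segment this key hash falls into. */
static pthread_mutex_t *stripe_for(striped_locks *s, unsigned long key_hash) {
    return &s->stripe[key_hash % NSTRIPES];
}
```

A writer then locks only `stripe_for(&locks, hash(key))` around the bucket mutation; operations on other segments proceed in parallel. Real segmented dictionaries (Java's old ConcurrentHashMap, for instance) also partition the bucket arrays themselves along the same lines.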
Faster doesn't mean fast. You really, really want to not use a thread-safe dictionary.
For highly contended data structures, spinlocks (and nowadays explicit atomics) are likely better.
Most of us who ship fast locks use very large multithreaded programs as our primary way of testing performance. The things that make a mutex fast or slow seem to be different for complex workloads with varied critical section length, varied numbers of threads contending, and varying levels of contention.
(Source: I wrote the fast locks that WebKit uses, I’m the person who invented the ParkingLot abstraction for lock impls (now also used in Rust and Unreal Engine), and I previously did research on fast locks for Java and have a paper about that.)
Of course, with Cosmopolitan being open source and all, I could do these measurements myself, but still ...
> uses an optimistic CAS (compare and swap) immediately, so that locking happens quickly when there's no contention
Pro tip: if you really do know that contention is unlikely, and uncontended acquisition is super important, then it's theoretically impossible to do better than a spinlock.
Reason: locks that have the ability to put the thread to sleep on a queue must do compare-and-swap (or at least an atomic RMW) on `unlock`. But spinlocks can get away with just doing a store-release (or just a store with a compiler fence on X86) to `unlock`.
Spinlocks also have excellent throughput under most contention scenarios, though at the cost of power and being unkind to other apps on the system. If you want your spinlock to be hella fast on contention just make sure you `sched_yield` before each retry (or `SwitchToThread` on Windows, and on Darwin you can do a bit better with `thread_switch(MACH_PORT_NULL, SWITCH_OPTION_DEPRESS, 1)`).
The more common pattern in rt/audio code is "try to take the lock, but have an alternate code path if that fails". It's not that there will never be contention, but that it will be extremely rare, and when it occurs, it probably matters. RWLocks are also a common pattern, with the RT thread(s) being read-only (and still being required to fall back on an alternate code path if the read lock cannot be taken).
if (LIKELY(CAS(&lock, UNLOCKED, LOCKED))) return;
for (unsigned i = 0; i < 40; ++i) {
/* spin for a bounded amount of time */
if (LIKELY(CAS(&lock, UNLOCKED, LOCKED))) return;
sched_yield();
}
... /* actual full lock algo goes here */
So, the reason to use spinlocks isn't that they are faster for short critical sections, but that they don't have to CAS on unlock, and so they are faster especially in the uncontended case (and in all other cases too). In other words, the modern rule of thumb is something like: if you're going to grab the lock so frequently that the uncontended lock/unlock time shows up as a significant percentage of your execution time, then use a spinlock. That probably implies that you are holding the lock for <100x or even <1000x context switch time, or something around there.
Realistically though, a lot of lock-free code employs copy-on-write or RCU.
I have written my fair share of lock-free code (back when it was new), and generally enjoy writing it (to this day). Yet, for most folks it's just too hard to reason about, understand, and maintain.
Just a note, lock free is a lesser requirement than wait-free.
Yeah, and maybe also consider changing your design because usually this isn't needed.
Every time I've seen this happen it's in code that scales really well to lots of CPUs while having a lot of shared state, and the way it gets there is that it's using very fine-grained locks combined with smart load balancing. The idea is that the load balancer makes it improbable-but-not-impossible that two processors would ever want to touch the same state. And to achieve that, locks are scattered throughout, usually protecting tiny critical sections and tiny amounts of state.
Whenever I've gotten into that situation or found code that was in that situation, the code performed better than if it had coarser locks and better than if it used lock-free algorithms (because those usually carry their own baggage and "tax"). They performed better than the serial version of the code that had no locks.
So, the situation is: you've got code that performs great, but does in fact spend maybe ~2% of its time in the CAS to lock and unlock locks. So... you can get a tiny bit faster if you use a spinlock, because then unlocking isn't a CAS, and you run 1% faster.
So I would only use this pattern for situations where the very fact that a NRT thread tries to acquire the lock already means that RT safety is not a concern anymore (e.g. a device has been disconnected)
For this reason, on Darwin you want to avoid spinlocks unless you know that all threads touching the spinlock are always running in the same QoS. Instead of spinlocks, your go-to for low-overhead locks there is os_unfair_lock, which is a spinlock variant that donates priority of the blocked threads to the running thread.
I know it can happen in theory. But theory and practice ain’t the same.
I worked for Apple when I shipped this too lmao
One particular app had a particularly nasty habit of losing priority because most of its work was done in a set of daemons and not in the UI process itself. The foreground app is allowed to use Performance cores and receive USER_INTERACTIVE priority for operations if it wants, but daemons cannot unless they have a mach IPC voucher stating that the UI is blocked on their work. But if you do the wrong thing with a semaphore or a spinlock, you get booted out of the thread group and now you’re running on Efficiency cores at USER_INITIATED priority and suddenly your latency numbers spike 10x on systems with asymmetric cores. I remember solving a particularly bad one which was basically doubling our request latency on phones with E cores.
The company had a pretty widely spread internal article trying to whack people over the head to stop abusing lock primitives in this way, but we just kept shipping code with this issue.
(Most of the inversion issues came from the use of semaphores though. The problem got 1000x worse in swift concurrency because using semaphores blocks an entire SC OS thread and you only get as many of those as you have cores. Most Apple Watches and HomePod only have 2 of them so you can easily deadlock. Which most people fixed by just… adding a timeout to their locks. It was horrifying.)
It sounds like this is relevant when developing user app code, not just the kernel or core libraries. Is there an external version of this article? I would be very interested to read more.
var valueIWant: String? = nil
let sem = DispatchSemaphore(value: 0)
someAsyncFunction() { result in
// this is called in another thread
valueIWant = result
sem.signal()
}
sem.wait()
assert(valueIWant != nil)
Note, this is mostly about semaphores, so it’s a bit offtopic from the original discussion here, which is about spinlocks. But basically, never lock something in expectation of another thread unlocking it. Since semaphores are ownerless, the scheduler has no idea what thread will eventually signal the semaphore, so it can’t boost the priority of any threads you’re waiting on. Not to mention the thread explosion issue: if the task running the completion (i.e. signal()) is submitted to a libdispatch queue, the thread pool implementation will spawn more threads if all of them are starved, to eliminate deadlocks, which can lead to thread explosion in addition to the priority inversion issue. This advice dates back to when libdispatch was invented (and the internal article was called The Semaphore Antipattern…)
But with Swift Concurrency it’s far, far worse, because the SC thread pool does not spawn new threads if all of them are blocked. Meaning the dispatch block that’s supposed to call the callback may never get run, and you deadlock. Most importantly, this is true even of plain ObjC libdispatch code written 15 years ago: If some ancient framework you’re calling into does the semaphore antipattern, libdispatch is allowed to assume that all Swift Concurrency threads will eventually make forward progress, and so it doesn’t spawn new threads to handle the work, even if all threads are blocked. So ancient code that uses this antipattern suddenly blows up if someone calls it (transitively!) from a swift concurrency context. I’ve squashed countless bugs on this issue alone. (I wrote my own article on this internally too, titled something like “Semaphore abuse: How to deadlock your process and require a device reboot”)
IMHO, the semaphore should be officially marked deprecated and trigger warnings… there are no appropriate uses for it. If you’re in a synchronous context and need to block on the result of async code, the best bet is to stop what you’re doing and file a bug to the people writing the async code and tell them to give you a synchronous API (or refactor your code to be async.) Never ever use semaphores to block on async code. Not even once.
Bare spinlocks are very bad in this way, unless you have dedicated hardware threads (with SMT you can pin all but one of the threads with the remaining thread being general purpose, though you may want a way to inform the scheduler which cores to currently prefer due to their pinned tasks being more idle).
This is something I've been thinking about a lot over time: the CAS is only there to atomically determine, on unlock, whether there are any sleeping waiters that need a futex_wake. I would really like some way to get away with non-fenced operations (at least on x86), but I don't know if it's just that nobody has figured out how, or whether there is a fundamental reason it's not possible.
I think the issue is that if you ask the CPU to just store something (like in a spin lock), whether or not there’s a fence, it’s an operation with limited data flow dependencies so it’s easy for the CPU to execute. Even the fence can be handled using wacky speculation tricks.
But if you want to do something like, “store this value but only if the old value satisfies some predicate”, then there’s a load and the whole thing is dependent on the load. So you’re asking the CPU to load, then run a predicate, then store, and for that to be fenced, and atomic.
Strictly more work. I don’t think there’s any trick to make it faster than just the store release.
Well yes but on x86 that comes for free. The overhead of the full fence brought in by lock cmpxchg or lock xchg is in the order of ~10ns, which for an uncontended lock means that a mutex is almost 2x as slow as a spinlock.
A load acquire + store release would be a couple of ns (assuming everything in L1 etc...)
Consider the implementation of Dekker's algorithm for example.
[1] https://www.cs.bgu.ac.il/~hendlerd/papers/p168-expensiveSync...
This relaxed-load-CAS, coupled with some form of control or data dependency would be sufficient to implement your mutex. But would such a CAS be significantly cheaper than the existing CAS? If it was, you would be able to approximate the strong CAS semantics by adding lfence+sfence after the relaxed CAS. These fences are cheap, so if strong CAS was possible to be implemented this way with significant improvements, intel would have already done it.
Finally, it is practically possible to implement the sequence store(A);#StoreLoad;load (B) without an explicit fence by using the colocation trick: have A and B be adjacent in memory (on the same cacheline), store to A, then do a wider load on both A and B. Intel does not give multithread guarantees on this, but my understanding is that it works: the wider load fails to be store-forwarded and stalls the pipeline waiting for the previous store to be flushed. In practice this costs about as much as a fence, so in addition to being undefined, is not cheaper.
- On the wait side, do the CAS to set the "waiter present" bit. On unlock, do a (relaxed) read of the lock word, and if "waiter present" isn't set, just do a release store to unlock (and go down some slow CAS-y wake path if a waiter is present). On the wait side, never do an un-timed futex wait; just do a series of timed waits, with increasing wait times (so that you still eventually fix things if you hit the unlucky race with the previous holder's check+store sequence). (You can also do some tricks with counters to let waiters do an unconditional sleep once they get their wait acknowledged).
- Split out the "waiter present" bit into its own byte, do a store-load sequence (with just a compiler reordering fence) to check for waiters, and have waiters either do a membarrier() syscall or wait "long enough" that they're sure they've gotten the same effect. (This gets tricky w.r.t. mutex lifetime though; you either need out of band lifetime knowledge or to use RCU or whatever and indirect through pointers).
Practically, neither was ever "better enough" for anything but microbenchmarks to be worth the complexity.
If you are doing this, given the cost of membarrier, you are optimizing for almost always uncontended locks. At this point you can make your lock default-owned by the first thread to lock it and have the owner lock and unlock be basically free until it is contended. This is basically the biased locking optimization that Java implements (or used to).
You don't necessarily want a biased lock for all this kind of stuff, because "sparsely contended" doesn't necessarily imply thread-associated. E.g. one place I was looking at this for was locks for pages of virtual memory in a heap; no thread "owns" any given heap page, but it was very uncommon to get unlucky and have two threads touching adjacent pages at the exact same time. These kind of "sloppy mutexes" get half the fast-path speedup of biased locks but without the heavily asymmetric performance costs. (At least, that was the theory; like I said it didn't really pan out to be that useful in practice).
Assuming `sched_yield` does something.
There's a futex congestion problem inside Wine's memory allocator. There are several levels of locks. If you're growing a buffer, in the sense of C's "realloc", and no buffer is available, memory allocation is locked during the allocation of a bigger buffer, copying of the contents, and release of the old buffer. "Push" type operations can force this. Two orders of magnitude performance drops ensue when multi-threaded programs are contending for that lock.[1]
Inside one of the lock loops is a call to "YieldProcessor".
static void spin_lock( LONG *lock )
{
while (InterlockedCompareExchange( lock, -1, 0 ))
YieldProcessor();
}
But the actual code for YieldProcessor is a NOP on x86:[2]

static FORCEINLINE void YieldProcessor(void)
{
#ifdef __GNUC__
#if defined(__i386__) || defined(__x86_64__)
__asm__ __volatile__( "rep; nop" : : : "memory" );
#elif defined(__arm__) || defined(__aarch64__)
__asm__ __volatile__( "dmb ishst\n\tyield" : : : "memory" );
#else
__asm__ __volatile__( "" : : : "memory" );
#endif
#endif
}
Wine devs are aware of this, but the mess is bad enough that no one has tackled it. This is down in the core of what "malloc" calls, so changes there could have unexpected effects on many programs. It needs attention from someone really into mutexes.
[1] https://forum.winehq.org/viewtopic.php?t=37688
[2] https://gitlab.winehq.org/wine/wine/-/blob/HEAD/include/winn...
Spinning on the CMPXCHG is also a bad idea. You should spin on the read and only then attempt the CMPXCHG.
You care about exchange vs read/load because of cache line ownership. Every time you try to do the exchange, the attempting CPU must take exclusive ownership of the cacheline (stealing it from the lock owner). To unlock, the lock owner must take it back.
If the attempting CPU instead only reads, the line ownership stays with the lock holder and unlock is cheaper. In general you want cache line ownership to change hands as few times as possible.
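A minimal sketch of that test-and-test-and-set idea (names are mine, not from any particular library): spin on a relaxed load so the cache line can stay in the Shared state, and only attempt the exclusive-ownership RMW once the lock looks free. The yield between retries follows the advice upthread.

```c
#include <stdatomic.h>
#include <sched.h>

typedef struct { atomic_int locked; } ttas_lock;

static void ttas_acquire(ttas_lock *l) {
    for (;;) {
        /* Read-only spin: doesn't steal the line from the lock holder. */
        while (atomic_load_explicit(&l->locked, memory_order_relaxed))
            sched_yield();
        /* Lock looks free: now try to take exclusive ownership. */
        if (!atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
            return;
    }
}

static void ttas_release(ttas_lock *l) {
    /* Plain store-release suffices for unlock, no CAS needed. */
    atomic_store_explicit(&l->locked, 0, memory_order_release);
}
```

Compared to spinning directly on the exchange, the line only changes hands twice per handoff instead of once per retry.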
https://github.com/google/nsync/blob/c6205171f084c0d3ce3ff51...
Works fine on Microsoft Windows. Only fails on Wine.
OTOH, the usual way to lock via a busy loop/spin does require some number of attempts, then backoff for a random duration.
> __asm__ __volatile__( "rep; nop" : : : "memory" );
It might look like a NOP, but "REP NOP" is not a real NOP, it's the PAUSE instruction (which very old processors from the 1990s treated as a normal NOP).
I don't think this is more expensive than a release store, and certainly not because of where the cache line holding the write goes in most cache coherency protocols.
Granted this lock does wake up too many people but you did say usually uncontended.
That'll be hella slow
That syscall will be worse than a CAS just because it’s a syscall. Not to mention it’ll have to lock some kernel data structure just to figure out that there aren’t any threads to wake.
There are three states of the mutex: locked=1, unlocked=0, and sleeping=2. A thread that comes along to a locked mutex atomically moves it to sleeping and sleeps via futex. The freeing thread atomically exchanges in 0 and looks at the old value: if 2, wake all; if 1, no need to wake anybody.
Maybe that is equally expensive to a CAS, but I don't think it ultimately is.
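That scheme is essentially the lock from Ulrich Drepper's "Futexes Are Tricky" paper. A Linux-specific C11 sketch (the futex_wait/futex_wake wrappers around the raw syscall are my own naming); note the unlock is an atomic exchange rather than a plain store, because it must observe whether sleepers exist:

```c
#include <stdatomic.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Three-state futex mutex: 0 = unlocked, 1 = locked,
 * 2 = locked with (possible) sleepers. */
typedef struct { atomic_int state; } fmutex;

/* Thin wrappers around the raw Linux futex syscall. */
static void futex_wait(atomic_int *addr, int expect) {
    syscall(SYS_futex, addr, FUTEX_WAIT_PRIVATE, expect, NULL, NULL, 0);
}
static void futex_wake(atomic_int *addr, int n) {
    syscall(SYS_futex, addr, FUTEX_WAKE_PRIVATE, n, NULL, NULL, 0);
}

static void fmutex_lock(fmutex *m) {
    int c = 0;
    /* Fast path: one CAS takes a free lock, 0 -> 1. */
    if (atomic_compare_exchange_strong_explicit(&m->state, &c, 1,
            memory_order_acquire, memory_order_relaxed))
        return;
    /* Slow path: advertise sleepers by forcing the state to 2. */
    if (c != 2)
        c = atomic_exchange_explicit(&m->state, 2, memory_order_acquire);
    while (c != 0) {
        futex_wait(&m->state, 2);
        c = atomic_exchange_explicit(&m->state, 2, memory_order_acquire);
    }
}

static void fmutex_unlock(fmutex *m) {
    /* Exchange, not a plain store: the old value says if we must wake. */
    if (atomic_exchange_explicit(&m->state, 0, memory_order_release) == 2)
        futex_wake(&m->state, 1);
}
```

The exchange in unlock is still an atomic RMW, so this design pays the cost discussed above; it just avoids a full CAS loop.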
Atomic RMW is almost as expensive as CAS. Exactly as expensive as CAS on some cpus.
Shouldn't you rather use pause instructions (_mm_pause, __yield, ISB, etc.) instead of thread switching? That's what I have seen in most spin lock implementations. Maybe it depends on the use case?
unbiasing/re-biasing a lock can be quite heavy, though. JVM did it by pausing threads (or waiting for GC to do so)
I should say, though, that if you're on Windows then I have yet to find a real workload where SRWLock isn't the fastest (provided you're fine with no recursion and with a lock that is word-sized). That lock has made some kind of deal with the devil AFAICT.
https://old.reddit.com/r/cpp/comments/1b55686/maybe_possible...
And now that I look at that again I realize I forgot to finish that up!
Python development is in total chaos on all social and technical fronts due to incompetent and malicious leadership.
A mutex that sleeps for a fixed amount (for example 100us) on a failed lock acquisition will probably get very close to that behavior (since it almost always bunches), and "win" the benchmark. Still, that would be a terrible mutex for any practical application where there is any contention.
This is not to say that this mutex is not good (or that pthread mutexes are not bad), just that the microbenchmark in question does not measure anything that predicts performance in a real application.
And for all we know it’s absolute trash on real programs.
What an odd statement. I appreciate the Cosmopolitan project, but these exaggerated claims of superiority are usually a pretty bad red flag.
Name a single case where professional responsibility would require C code advertised as "rough around the edges" to be used anywhere near production. (The weasel words "starting to" do not help the logic of that sentence.)
I can definitely understand how OP could view this as a red flag. The author should amend it.
Technical achievement aside, when a person invents something new, the burden is on them to prove that the new thing is a suitable replacement of / improvement over the existing stuff. "I'm starting to view /not/ using [cosmo] in production as an abandonment of professional responsibility" is emotional manipulation -- it's guilt-tripping. Professional responsibility is the exact opposite of what she suggests: it's not jumping on the newest bandwagon. "a little rough around the edges" is precisely what production environments don't want; predictability/stability is frequently more important than peak performance / microbenchmarks.
Furthermore,
> The C library is so deeply embedded in the software supply chain, and so depended upon, that you really don't want it to be a planet killer.
This is just underhanded. She implicitly called glibc and musl "planet killers".
First, technically speaking, it's just not true; and even if the implied statement were remotely true (i.e., if those mutex implementations were in fact responsible for a significant amount of cycles in actual workloads), the emotional load / snide remark ("planet killer") is unjustified.
Second, she must know very well that whenever efficiency of computation is improved, we don't use that for running the same workloads as before at lower cost / smaller environmental footprint. Instead, we keep all CPUs pegged all the time, and efficiency improvements only ever translate to larger profit. A faster mutex too translates to more $$$ pocketed, and not to less energy consumed.
I find her tone of voice repulsive.
> Second, she must know very well that whenever efficiency of computation is improved, we don't use that for running the same workloads as before at lower cost / smaller environmental footprint. Instead, we keep all CPUs pegged all the time, and efficiency improvements only ever translate to larger profit. A faster mutex too translates to more $$$ pocketed, and not to less energy consumed.
It depends on the use case. If you can serve the same number of users / requests with fewer machines, then you buy and run fewer machines. (Yes, saving energy, but also saving on both capex and opex.)
Also, when you're talking about anything resembling interactivity (as you might in the context of, say, a webserver), you really don't want to run anywhere close to 100% average utilization. With unbounded queues, you end up with arbitrarily high wait times; with bounded queues, you end up serving 503s and 429s and other server errors.
That said, my experience with modern webservers is that you generally don't rely on mutexes for synchronizing most work across worker threads, and instead you try to keep your workloads as embarrassingly parallel as possible.
Indeed the first few bullet points of her writeup on how the lock works (compare and swap for uncontended, futex for contended) is already how everybody implements locks for about 20 years since the futex was introduced for exactly this. Win32 critsec from even longer ago works the same way.
I don’t think anybody can deny that jart is a smart person, but I think there is more to it than just abrasive language that people take issue with.
An example is APE (which I otherwise feel is a very impressive hack). One criticism might be “oh, so I not only get to be insecure on one platform, I can be insecure on many all at once?”
The longer you spend in technology, the more you realize that there are extremely few win-wins and a very many win-somes, lose-somes (tradeoffs)
Why would you even post this here? Who do you think this is helping?
And it might help Justine improve her writing (and reach a larger audience -- after all, blog posts intend to reach some audience, don't they?). Of course you can always say, "if you find yourself alienated, it's your loss".
In general, I agree with your sentiment, but Justine is simply a different beast. She does tend to use hyperbolic language a fair amount, but she delivers so much awesomeness that I make an exception for her.
Getting "so fast" is not my first priority when I run things in production. That's stability, predictability, and reliability. Certainly performance is important: better-performing code can mean smaller infrastructure, which is great from a cost and environmental perspective. But fast comes last.
the absolute madman
Curious about this -- so what does C11 atomics use to implement? At least in Linux, C++11 atomics use pthreads (not the other way around).
1. Weak CAS. nsync always uses strong CAS upstream to make the portability abstraction simpler. Being able to use weak CAS when appropriate helps avoid code being generated for an additional loop.
2. Updating the &expected parameter. nsync upstream always manually does another relaxed load when a CAS fails. This isn't necessary with the C11 atomics API, because it gives you a relaxed load of the expected value for free when it fails.
Being able to exploit those two features resulted in a considerable improvement in nsync's mu_test.c benchmark for the contended mutex case, which I measured on RPI5.
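To illustrate both points, a hedged C11 sketch (try_set_bit is a made-up helper, not nsync code): on failure the CAS refreshes `expected` in place, so no extra relaxed load is needed, and the `_weak` form lets LL/SC targets such as the RPI5's AArch64 cores avoid generating an inner retry loop.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Atomically set `bit` in `word` unless it's already set. */
static bool try_set_bit(atomic_uint *word, unsigned bit) {
    unsigned expected = atomic_load_explicit(word, memory_order_relaxed);
    unsigned desired;
    do {
        if (expected & bit) return false; /* someone else owns the bit */
        desired = expected | bit;
        /* On failure, `expected` is updated to the current value as part
         * of the CAS itself; no separate reload is required. A weak CAS
         * may fail spuriously, which the loop absorbs anyway. */
    } while (!atomic_compare_exchange_weak_explicit(
        word, &expected, desired,
        memory_order_acquire, memory_order_relaxed));
    return true;
}
```

With a strong CAS the compiler must synthesize that retry loop itself on LL/SC machines, which is the extra generated code mentioned above.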
Of course your CPU doesn't actually have four-kilobyte atomics. So, this feature actually just wraps the type with a mutex. As a result, you're correct in the sense that atomics "use pthreads" to get a mutex to wrap the type.
C++ also provides specializations, and indeed it guarantees specializations for the built-in integer types with certain features. On real computers you can buy today, std::atomic<int> is a specialization which will not in fact be a mutex wrapper around an ordinary int, it will use the CPU's atomics to provide an actual atomic integer.
In principle C++ only requires a single type to be an actual bona fide atomic rather than just a mutex wrapper, std::atomic_flag -- all the integers and so on might be locked by a mutex. In practice on real hardware many obvious atomic types are "lock free", just not std::atomic<Goose> and other ludicrous things.
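For what it's worth, C11 exposes the same distinction through the ATOMIC_*_LOCK_FREE macros and atomic_is_lock_free(). A sketch; the four-kilobyte case from upthread would report false here and route through a hidden lock in the compiler runtime (libatomic):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* ATOMIC_INT_LOCK_FREE == 2 means int atomics are always lock-free on
 * this target, i.e. real CPU atomics rather than a mutex wrapper. */
_Static_assert(ATOMIC_INT_LOCK_FREE == 2,
               "expected native lock-free int atomics");

static bool int_atomics_are_lock_free(void) {
    _Atomic int x;
    /* A multi-kilobyte _Atomic struct would return false here. */
    return atomic_is_lock_free(&x);
}
```

This is the C-flavored equivalent of the static_assert-on-is_always_lock_free habit mentioned downthread.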
I have no idea what you can possibly mean here.
Edit: Oh, you must have meant the stupid default for large atomic objects that just hashes them to an opaque mutex somewhere. An invisible performance cliff like this is not a useful feature, it's a useless footgun. I can't imagine anyone serious about performance using this thing (that's why I always static_assert() on is_always_lock_free() for my atomic types).
I still believe the FUD around spinlocks is overstated. For "normal" hpc code the number of threads should be <= the number of cores. In that scenario spinlocking will minimize the latency which likely is what you care about the most. It beats having your performance ruined by dvfs.
My bet is that its tricks are only always-faster for certain architectures, or certain CPU models, or certain types of workload / access patterns... and a proper benchmarking of varied workloads on all supported hardware would not show the same benefits.
Alternatively, maybe the semantics of the pthread API (that cosmopolitan is meant to be implementing) are somehow subtly different and this implementation isn't strictly compliant to the spec?
I can't imagine it's that the various libc authors aren't keeping up in state-of-the-art research on OS primitives...
is this sarcasm?
(I don't know any libc maintainers, but as a maintainer of a few thingies myself, I do not try to implement state-of-the-art research, I try to keep my thingies stable and ensure the performance is acceptable; implementing research is out of my budget for "maintenance")
If the rival thingies got a speed boost recently, and they were open source, you'd want to have a look at how they did it and maybe get a similar speed boost for yourself.
If in December you make a general purpose stable sort that's 25% faster than his, Orson Peters is probably going to read your code and try to apply ideas from it. But sorting is the thing Peters really cares about, the people who wrote the stable sort in say Microsoft's STL (their C++ standard library implementation) even if they still work there won't care enough to go back and change that unless told to do so.
It takes forever to change something in glibc, or the C++ equivalent.
There are many kinds of synchronization primitives. pthreads only supports a subset. If you are limiting yourself to them, you are most likely leaving performance on the table, but you gain portability.
glibc's malloc is tolerable but loses handily to more modern alternatives in overall speed and scalability (it fragments badly and degrades over time, not to mention it has a dozen knobs that can deeply impact real life workloads like MALLOC_ARENA_MAX). musl malloc is completely awful in terms of performance at every level; in a multithreaded program, using musl's allocator will destroy your performance so badly that it nearly qualifies as malpractice, in my experience.
musl doesn't even have things like SIMD optimized string comparison routines. You would be shocked at how many CPU cycles in a non-trivial program are spent on those tasks, and yes it absolutely shows up in non-trivial profiles, and yes improving this improves all programs nearly universally. glibc's optimized routines are good, but they can always, it seems, become faster.
These specific things aren't "oh, they're hyper specific optimizations for one architecture that don't generalize". These two things in particular -- we're talking 2-5x wall clock reduction, and drastically improved long-term working set utilization, in nearly all workloads for any given program. These are well explored and understood spaces with good known approaches. So why didn't they take them? Because, as always, they probably had other things to do (or conflicting priorities like musl prioritizing simplicity over peak performance, even when that philosophy is actively detrimental to users.)
I'm not blaming these projects or anything. Nobody sets out and says "My program is slow as shit and does nothing right, and I designed it that way and I'm proud of it." But the idea that the people working on them have made only the perfect pareto frontier of design choices just isn't realistic in the slightest and doesn't capture the actual dynamics of how most of these projects are run.
Building GNU Make with Cosmo or glibc makes cold startup go 2x faster for me on large repos compared to building it with Musl, due to vectorized strlen() alone (since SIMD is 2x faster than SWAR). I sent Rich a patch last decade adding SSE to strlen(), since I love Musl, and Cosmo is based on it. But alas, he didn't want it, even though he seems perfectly comfortable using ARM's strlen() assembly.
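For illustration, here's a minimal sketch (not musl's, glibc's, or cosmo's actual code) of what a SWAR strlen() looks like: it scans eight bytes per iteration using the classic zero-byte-detection trick, which real SIMD (SSE/NEON) widens to 16+ bytes per iteration:

```rust
use std::convert::TryInto;

// SWAR ("SIMD within a register") strlen: scan 8 bytes per iteration for a
// zero byte with the classic (v - 0x01..) & !v & 0x80.. trick.
fn swar_strlen(s: &[u8]) -> usize {
    const LO: u64 = 0x0101_0101_0101_0101;
    const HI: u64 = 0x8080_8080_8080_8080;
    let mut i = 0;
    while i + 8 <= s.len() {
        let v = u64::from_le_bytes(s[i..i + 8].try_into().unwrap());
        // Nonzero iff one of the 8 bytes in v is zero.
        if v.wrapping_sub(LO) & !v & HI != 0 {
            break;
        }
        i += 8;
    }
    // Finish byte-by-byte: the tail, or pinpointing the zero in the word.
    while i < s.len() && s[i] != 0 {
        i += 1;
    }
    i
}

fn main() {
    assert_eq!(swar_strlen(b"hello\0world"), 5);
    assert_eq!(swar_strlen(b"a longer string with no early nul\0"), 33);
    println!("ok");
}
```

A production version would additionally align the pointer first and read whole words unconditionally, which is part of why these routines live in hand-tuned libc code.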
> glibc's malloc is tolerable but loses handily to more modern alternatives in overall speed and scalability
The focus and attention I put into cosmo mutexes isn't unique. I put that care into everything else too, and malloc() is no exception. Cosmo does very well at multi-threaded memory allocation. I can pick benchmark parameters where it outperforms glibc and jemalloc by 100x. I can also pick params where jemalloc wins by 100x. But I'm reasonably certain cosmo can go faster than glibc and musl in most cases while using less memory too. You have Doug Lea to thank for that.
Every day is a good day working on cosmo, because I can always find an opportunity to dive into another rabbit hole. Even ones as seemingly unimportant as clocks: https://github.com/jart/cosmopolitan/commit/dd8544c3bd7899ad...
Wait, you don't mean your allocator is based on dlmalloc, do you?
(Why not upstream? ABI compatibility which is apparently a sufficient veto reason for anything in cpp)
Design of your hash-map, or any other algorithm or data-structure, is always and only dictated by your workload. Microbenchmark suite from xyz company/person will hardly represent that reality.
So the more generous answer would be that it's not because of the "ABI" meme, but because std::unordered_map is just fine for most cases out there. And that is what a general-purpose standard library ought to be able to support.
The whole point of switching every single use of unordered_map to a node/flat hash map is that it is always better. It always uses less memory (much less memory per node).
Edit: what did I say that was disrespectful? I called you out for being wrong because you're very wrong
Synchronizing multiple CPU cores together is fundamentally slow; there's no way around it. They are far apart on the chip, and sometimes even on different chips with some link between them. Measured in CPU cycles, that latency is rather high.
It's possible to avoid with good old software engineering, and over time, people who wanted to extract performance from their multi-core CPUs became good at it.
When you’re computing something parallel that takes minutes, you’ll do great if you update the progress bar at a laughable 5 Hz. Synchronizing cores 5 times each second costs nothing, regardless of how efficient the mutex is.
When you’re computing something interactive like a videogame, it’s often enough to synchronize cores once per rendered frame, which typically happens at 60 Hz.
Another notable example is multimedia frameworks. They handle realtime data arriving at high frequencies, like 48 kHz for audio, and they do non-trivial compute in effect transforms and codecs, so they need multiple cores. But they can tolerate a bit of latency, so they batch the samples. This dramatically saves on IPC costs, because you only need to lock these mutexes at 100 Hz when batching 480 samples = 10 ms of audio.
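The batching idea can be sketched roughly like this (illustrative numbers and a made-up helper, not any particular framework's API): samples accumulate in a thread-local buffer, and the shared lock is taken once per 480-sample batch instead of once per sample.

```rust
use std::sync::Mutex;

// Batch samples locally and take the shared lock once per batch instead of
// once per sample. 480 samples at 48 kHz = 10 ms, i.e. 100 locks per second.
fn batched_copy(samples: &[i32], batch: usize, sink: &Mutex<Vec<i32>>) -> usize {
    let mut locks_taken = 0;
    let mut local: Vec<i32> = Vec::with_capacity(batch);
    for &s in samples {
        local.push(s);
        if local.len() == batch {
            // One lock per 10 ms batch, not one per sample.
            sink.lock().unwrap().extend_from_slice(&local);
            local.clear();
            locks_taken += 1;
        }
    }
    // (Tail handling for a partial final batch is omitted for brevity.)
    locks_taken
}

fn main() {
    let sink = Mutex::new(Vec::new());
    let samples: Vec<i32> = (0..48_000).collect(); // one second of audio
    let locks = batched_copy(&samples, 480, &sink);
    assert_eq!(locks, 100); // 100 Hz of locking instead of 48 kHz
    assert_eq!(sink.lock().unwrap().len(), 48_000);
    println!("ok");
}
```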
They aren't always. On a NUMA machine some are closer than others; M-series is an example. The cores are in clusters where some of the cache is shared, so atomics are cheap as long as it doesn't leave the cluster.
That's not really the right way to do it. If you're actually using multiple cores in your code, you want to build a data dependency graph and let the cores walk that graph. Max node size ends up being something loosely like 10k items to be processed. You'll typically see hundreds of synchronization points per frame.
This is the base architecture of libraries like Threading Building Blocks and Rust's rayon.
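A minimal sketch of the granularity idea, using only std scoped threads rather than rayon or TBB (the node size and workload here are made up; a real system builds an actual dependency graph with work stealing):

```rust
use std::thread;

// Coarse-grained task decomposition: split the work into "nodes" of roughly
// 10k items and let the cores walk them, synchronizing once per node rather
// than once per item.
fn parallel_sum(data: &[u64], node_size: usize) -> u64 {
    thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(node_size)
            .map(|node| s.spawn(move || node.iter().sum::<u64>()))
            .collect();
        // One join (synchronization point) per node, not per element.
        handles.into_iter().map(|h| h.join().unwrap()).sum()
    })
}

fn main() {
    let data: Vec<u64> = (1..=100_000).collect();
    assert_eq!(parallel_sum(&data, 10_000), 5_000_050_000);
    println!("ok");
}
```

rayon's `par_iter()` does essentially this behind the scenes, with a thread pool instead of spawning fresh threads per call.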
A man and a statistician are walking down the street when they see a 50€ bill. The statistician keeps walking, but the man stops and says, "Hey, look at this cash on the floor!" The statistician, unimpressed, says, "Must be fake, or someone would have already picked it up," and continues walking. The other man grabs the cash and takes it.
So I’ve gotta ask: Is there a catch? Are these tools doing something evil to achieve what they’re achieving? Is the whole thing a tom7-esque joke/troll that I don’t get because I’m not as deep into compilers/runtimes? Or are these really just ingenious tools that haven’t caught on yet?
it's not that complicated; they're fat binaries (plus i guess a lot of papering over the differences between the platforms) that exploit a quirk of the Thompson shell:
> One day, while studying old code, I found out that it's possible to encode Windows Portable Executable files as a UNIX Sixth Edition shell script, due to the fact that the Thompson Shell didn't use a shebang line.
(https://justine.lol/ape.html)
so the answer is simple: i can't think of anyone that wants to ship fat binaries.
did you miss the part where they transitioned from x86 to arm64 in 2020?
So now you're saying because apple finished the transition fat binaries are useless again? What about other platforms?
yup that's exactly what i'm saying.
Is the transition really finished? I'm writing this on an x86_64 MacBook with a browser that is distributed as an x86_64/arm64 universal binary.
Most people producing cross-platform software don't want a single executable that runs on every platform, they want a single codebase that works correctly on each platform they support.
In that respect, languages like Go letting you cross-compile for all your targets (provided you avoid CGO) are delightful... but the 3-ways-executable magic trick of APE, while really clever, doesn't inspire confidence that it'll work forever, and for the most part it doesn't gain you anything. Each platform has its own packaging/signing requirements. You might as well compile a different target for each platform.
The more difficult problems with cross-platform apps are always native look and feel, and calling native APIs in idiomatic and performant ways.
> This is an idea whose time has come; POSIX even changed their rules about binary in shell scripts specifically to let us do it.
> Jilles Tjoelker from the FreeBSD project played an instrumental role in helping me to get the POSIX rules changed to allow binary in shell scripts, which is what made this project possible.
but that's not true, as recently discussed here: https://news.ycombinator.com/item?id=41636569
A Win32 program will not run out of the box on either Linux or macOS. Neither will a Java app.
The nice thing about Cosmopolitan is that it "just works" as far as the end user is concerned. But without firm support from the OSes involved, it is inevitably a hack with questionable long-term stability prospects.
What we really need is some kind of standardized low-level (think LLVM bitcode or wasm) architecture- and platform-agnostic binary format that can be JIT-compiled to the same native code that a C compiler would have produced from the source. And that is supported by all major OSes out of the box.
And stuff like https://github.com/jart/cosmopolitan/issues/1263 shows that just because it works today, it doesn't mean that it'll work tomorrow. What's the guarantee that the same won't happen on Linux or macOS eventually?
Don't get me wrong, APE is a hell of a hack, and I love it for that. But it can't be a truly stable platform without guarantees from the underlying OSes to keep all of that machinery working the same way. And if we could actually get them to agree on something like that, why not get them to agree on an actual portable executable format that is future-proof also wrt new CPU architectures?
(Note, by the way, that this is orthogonal to the whole notion of fat binaries that contain arch-specific code. That is still a useful optimization to have in practice even if one has portable bytecode, alongside said portable bytecode.)
- With Windows, APE is just PE and Win32, so those are already guaranteed by Microsoft.
- With Linux, I've been working with kernel maintainers to build consensus around a patch https://justine.lol/ape.patch that will let your APE binaries be loaded by binfmt_elf. This will provide you with a guarantee that they'll still work, even if someone has, for instance, registered a binfmt_misc entry like WINE that's intercepting MZ executables.
- With FreeBSD, I'm also working to build consensus here around another patch, that will help the kernel load APE natively as a normal ELF executable. That means you won't need to rely on the system shell to load your program into memory.
- I've been writing a specification for APE that documents this file format, its ABI, and its historical encodings. https://github.com/jart/cosmopolitan/blob/master/ape/specifi...
One day in the future, when we have the support of major OSes for the native loading of portable binaries, we'll look back at the APE shell script bootstrapping process similar to how we look back at self-extracting zip archives. When Phil Katz first invented PKZIP, it was useful to prepend the software to each archive. But nowadays it isn't necessary, since all OSes come with support for PKZIP built in.
That said, a fat binary format that works for both Windows and Linux is already very useful.
Case in point:
https://github.com/jart/cosmopolitan/blob/master/libc/sysv/s...
The system call numbers of all the unixlikes are bitwise packed into a single number. There is exactly one of those columns which is stable: the Linux one. Everything else is not part of the binary interface of their respective operating systems.
I've written about how Linux is special in this regard:
https://www.matheusmoreira.com/articles/linux-system-calls
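The packing idea can be illustrated roughly like this (the field widths and the XNU/FreeBSD ordinals below are made up for the sketch; Cosmopolitan's real encoding and the actual non-Linux numbers differ):

```rust
// Hypothetical sketch: give each OS a 12-bit field in one integer and
// extract the right field for the OS detected at runtime.
fn pack(linux: u64, xnu: u64, freebsd: u64) -> u64 {
    linux | (xnu << 12) | (freebsd << 24)
}

fn ordinal(packed: u64, os: usize) -> u64 {
    (packed >> (12 * os)) & 0xfff
}

fn main() {
    // write(2) is 1 on Linux; the other columns are illustrative.
    let sys_write = pack(1, 4, 4);
    assert_eq!(ordinal(sys_write, 0), 1); // Linux: the only stable column
    assert_eq!(ordinal(sys_write, 1), 4); // XNU
    assert_eq!(ordinal(sys_write, 2), 4); // FreeBSD
    println!("ok");
}
```

The point of the article stands either way: only the Linux field can be baked into a binary with any guarantee; the others are liable to change under you.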
It's a neat hack but I'm afraid it's in grave danger of falling victim to the Darth Vader of OS ABI stability.
https://lwn.net/Articles/806870/
Program to the API rather than the ABI.
When we see benefits, we change the ABI
more often than the API.
I have altered the ABI.
Pray I do not alter it further.
-- Theo de Raadt
They don't? Having one file to download instead of a maze of "okay so what do you have" is way easier than the current mess. It would be very nice not to have to ask users what platform they're on, they just click a link and get the thing.
Well, I'm used to not being most people, but I'd much rather be able to produce a single identical binary for my users that works everywhere than go through the platform-specific nonsense I have to deal with right now. Having to maintain different special build processes for different platforms is a stupid waste of time.
Frankly this is how it always should have worked except for the monopolistic behavior of various platforms in the past.
This whole discussion is about pointing out that it kinda doesn't.
Even that is not a big deal in most cases, if you use Zig to wrap CC: https://dev.to/kristoff/zig-makes-go-cross-compilation-just-...
https://github.com/elastic/golang-crossbuild docker images for macos and windows (not linux though)
https://github.com/rust-cross/rust-musl-cross docker images for linux (yes, it's possible to use the same one for go, you just need to put go binary there) so it's static with musl libc and sure to not have some weird issues with old/new libc on old/new/whatever linuxes
(it might be over the top and the zig thing might effectively do the same thing, but I never got it to compile)
oh, and the apple-codesign Rust crate to get through the signing nonsense
"Cosmopolitan has basically always felt like the interesting sort of technical loophole that makes for a fun blog post which is almost guaranteed to make it to the front page of HN (or similar) purely based in ingenuity & dedication to the bit.
as a piece of foundational technology, in the way that `libc` necessarily is, it seems primarily useful for fun toys and small personal projects.
with that context, it always feels a little strange to see it presented as a serious alternative to something like `glibc`, `musl`, or `msvcrt`; it’s a very cute hack, but if i were to find it in something i seriously depend on i think i’d be a little taken aback."
Logically, having one thing instead of multiple makes sense, as it simplifies distribution and the bytes stored/transmitted. I think the issue here is that it is not time-tested. I believe multiple people in various places around the globe are thinking about it and will try to test it. With time, this will either bubble up or go stale.
I am only speaking for myself here. While cosmo and ape do seem very clever, I do not need this type of clever stuff in my work if the ordinary stuff already works fine.
Like for example if I can already cross-compile my project to other OSes and platforms or if I've got the infra to build my project for other OSes and platforms, I've no reason to look for a solution that lets me build one binary that works everywhere.
There's also the thing that ape uses clever hacks to be able to run on multiple OSes. What if those hacks break someday due to how executable formats evolve? What if nobody has the time to patch APE to make it compatible with those changes?
But my boring tools like gcc, clang, go, rust, etc. will continue to get updated and they will continue to work with evolving OSes. So I just tend to stick with the boring. That's why I don't bother with the clever because the boring just works for me.
I am worried about the scenario where the executable format changes just enough to break APE binaries. Like I said, APE uses very clever hacks to make itself work on multiple OSes. It is conceivable that executable formats change just enough that it breaks the clever hacks in APE binaries without breaking old ordinary binaries.
I don't know whether RedBean is production-ready now, but a year and a half ago, that was the catch.
Although in fairness, the Mozilla org choosing Justine/cosmo is basically the kind of sign-off from a 3rd party that OP was looking for.
We generally don't care if some mutex is 3x faster than some other mutex. Most of the time if I'm even using a mutex which is rare in itself, the performance of the mutex is generally the least of my concerns.
I'm sure it matters to someone, but most people couldn't give two shits if they know what they're doing. We're not writing code where it's going to make a noticeable difference. There are thousands of things in our code we could optimize that would make a greater impact than a faster mutex, but we're not looking at those either because it's fast enough the way it is.
Whether that in turn has any practical use beyond quickly trying out small models is another question.
If I went up to you in 2008 and said, hey, let's build a database that doesn't do schemas, isn't relational, doesn't do SQL, isn't ACID compliant, doesn't do joins, has transactions as an afterthought, and only does indexing sometimes, you'd think I was trolling you. And then in 2009, MongoDB came out and caught on in various places. So only time will tell if these are ingenious tools that haven't caught on yet. There's definitely a good amount of genius behind it, though only time will tell if it's remembered in the vein of tom7's harder drives or if it sees wider production use. I'll say that if golang supported it as a platform, I'd switch my build pipelines at work over to it, since it makes their output less complicated to manage: there's only a single binary to deal with instead of 3-4.
I see a stream of improvements to the vendored in nsync inside the cosmopolitan project [1]. Are you planning on upstreaming most of those too?
A separate question -- is using the upstream nsync as safe as using your fork?
[1] https://github.com/jart/cosmopolitan/commits/master/third_pa...
At least for what I use locks for, the uncontended case is like 10000x more common. I actually don't think I have any super heavy contention like the case shown in the post, as that is simply something to be avoided: no matter how good the mutex, heavy contention won't play well with the memory system.
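A toy sketch of why the uncontended path is so cheap: acquiring a free lock is a single compare-and-swap with no kernel involvement. (The contended fallback that a real mutex needs, spinning and then parking the thread on a futex/__ulock, is deliberately elided here.)

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Minimal lock showing only the uncontended fast path.
struct TinyLock {
    locked: AtomicBool,
}

impl TinyLock {
    fn new() -> Self {
        TinyLock { locked: AtomicBool::new(false) }
    }
    // Uncontended acquire: one CAS, no syscall.
    fn try_lock(&self) -> bool {
        self.locked
            .compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
    }
    fn unlock(&self) {
        self.locked.store(false, Ordering::Release);
    }
}

fn main() {
    let l = TinyLock::new();
    assert!(l.try_lock());  // free lock: single atomic instruction
    assert!(!l.try_lock()); // held lock: a real mutex would wait here
    l.unlock();
    assert!(l.try_lock());
    println!("ok");
}
```

This is why mutex benchmarks that hammer one lock from many threads measure something most programs rarely do; the fast path above dominates in practice.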
If I have to sacrifice 50% of my efficiency to ensure that I never get called on Sunday at 3am to fix a broken system, no kidding, I'll make that trade every time.
Production is about _reliability_. And writing reliable code is 10x harder than writing "fast" code.
Interesting, I remember reading a detailed article where they found that there's a lot of severe contention in the Windows kernel, compared to Linux. I think it was when they were trying to parallelize Chrome builds?
(And I wouldn't characterize myself as being overly impressed... for my particular scenario I wrote, "if you have a poorly written app that's bottlenecked on a lock, then consider targeting Windows to make the best of a bad situation." A better approach, of course, would be to just improve your code!)
As a gamedev, I came to love slow mutexes that do a lot of debug things in all 'developer' builds: they have debug names/IDs, track owners, report time spent in contention to the profiler, report ownership changes to the profiler...
People tend to structure concurrency differently, and games came to some patterns to avoid locks. But those are hard to use and require the programmer to restructure things. Most code starts as 'let's slap a lock here and try to pass the milestone'. Even fast locks will be unpredictably slow and will destroy realtime guarantees if there were any. They can be fast on average, but the tail end is never going to go away. I don't want to be the guy who comes back to it chasing 'oh, our game has hitches', but I am usually that guy.
Use slow locks, people. The ones that show big red in the profiler. Refactor them out when you see them being hit.
I know it's a tall order. I can count the people who know how to use a profiler on a AAA production on my fingers. And it's always like that, no matter how many productions I see :)
ps. sorry for a rant. Please, continue good research into fast concurrency primitives and algorithms.
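The kind of instrumented "slow lock" described above might look roughly like this (a toy sketch: a real one would report into the profiler rather than to stderr, and would also track owner IDs and ownership changes):

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

// A named mutex wrapper that measures time spent waiting for the lock,
// so contention shows up big and red instead of hiding.
struct DebugMutex<T> {
    name: &'static str,
    inner: Mutex<T>,
}

impl<T> DebugMutex<T> {
    fn new(name: &'static str, value: T) -> Self {
        DebugMutex { name, inner: Mutex::new(value) }
    }

    fn lock<'a>(&'a self) -> std::sync::MutexGuard<'a, T> {
        let start = Instant::now();
        let guard = self.inner.lock().unwrap();
        let waited = start.elapsed();
        if waited > Duration::from_millis(1) {
            // In a real build this would be a profiler event, not a print.
            eprintln!("contention on '{}': waited {:?}", self.name, waited);
        }
        guard
    }
}

fn main() {
    let m = DebugMutex::new("render_queue", 0u32);
    *m.lock() += 1;
    assert_eq!(*m.lock(), 1);
    println!("ok");
}
```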
You never want lock contention in a game if at all avoidable, and in a lot of cases taking a lock is provably unnecessary. For example: each frame is divided into phases, and mutable access to some shared resource only needs to happen in one specific phase (like the `update()` function before `render()`, or when hot-reloading assets). With scoped threads and Rust's borrowing rules, you can structure things to not even need a mutex, and be completely certain that you will receive a stern compiler error if the code changes in a way that means you do.
When possible, I'd always love to take a compiler error over a spike in the profiler.
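A minimal sketch of that phase structure: during the update phase, scoped threads get disjoint mutable borrows of the shared state, so there is no mutex to contend on, and the borrow checker enforces the phase boundary at compile time (names here are illustrative).

```rust
use std::thread;

// Update phase: each thread mutates its own disjoint slice; no lock exists.
fn update_phase(entities: &mut [u32]) {
    thread::scope(|s| {
        for chunk in entities.chunks_mut(4) {
            s.spawn(move || {
                for e in chunk {
                    *e += 1;
                }
            });
        }
    }); // all update threads are joined when the scope ends
}

fn main() {
    let mut entities = vec![0u32; 8];
    update_phase(&mut entities);
    // "Render" phase: exclusive access is statically guaranteed, because the
    // scoped borrows ended with the scope. Sneaking a mutation in from another
    // thread here would be a compile error, not a profiler spike.
    assert!(entities.iter().all(|&e| e == 1));
    println!("ok");
}
```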
https://people.csail.mit.edu/mareko/spaa09-scalablerwlocks.p...
A bit over the top…