https://github.com/torvalds/linux/tree/master/tools/include/...
A sample use-case? I was developing an Erlang-like actor platform that should operate under Linux as well as a bare-metal microkernel, and all I needed was a light layer over syscalls instead of pulling in the entire glibc. It also provides simple implementations of standard C functions (memcpy, printf) so I don't have to write them myself.
Maybe bootstrapping a new language with no dependencies.
Is it faster? More stable?
OpenBSD allows making syscalls from static binaries as well. If Go binaries are static, it shouldn't cause any problems.
Do you have a source for this? My Google searches and personal recollections say that OpenBSD does not have a stable syscall ABI in the way that Linux does and the proper/supported way to make syscalls on OpenBSD is through dynamically linked libc; statically linking libc, or invoking the syscall mechanism it uses directly, results in binaries that can be broken on future OpenBSD versions.
> Do you have a source for this?
One article from 2019 about this can be found at https://lwn.net/Articles/806776/ (later updates https://lwn.net/Articles/949078/ and https://lwn.net/Articles/959562/). Yes, it does not have a stable system call ABI, but as long as your program was statically compiled with the libc from the same OpenBSD release, AFAIK it should work.
You don’t have to deal with C ABI requirements with respect to stack or register management. You also don’t need to do dynamic linking.
On the other hand, all of that comes back to bite you if you’re trying to benefit from the vDSO without going through a libc.
This is a big one. Linking against libc on many platforms also means making your binaries relocatable. It's a lot of unnecessary, incidental complexity.
ASLR is a weak defense. It's akin to randomizing which of the kitchen drawers you'll put your jewelry in. Not the same level of security as say, a locked safe.
Attacks are increasingly sophisticated, composed of multiple exploits in a chain, one of which is some form of ASLR bypass. It's usually one of the easiest links in the chain.
At least the vDSO functions really don't need much in the way of stack space: generally there's nothing much there but clock_gettime() and gettimeofday(), which just read some values from the vvar area.
The bigger pain, of course, is actually looking up the symbols in the vDSO, which takes at least a minimal ELF parser.
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
And yet that’s exactly one of the things Go fucked up in the past: https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/
1. No overhead from libc; minimizes syscall cost
2. No dependency on libc and C language ABI/toolchains
3. Reduced attack surface. libc can and does have bugs and potentially ROP or Spectre gadgets.
4. Bootstrapping other languages, e.g. Virgil
The few nanoseconds of a straight function call are absolutely irrelevant next to the tens of microseconds a syscall costs. You also lose out on any of the optimizations a libc has that you might not have thought about (like memoization of getpid()), and you need to take on keeping up with syscall evolution and best practices, which a libc generally has a good handle on.
> No dependency on libc and C language ABI/toolchains
This obviously doesn't apply to a C syscall header, though, such as the case in OP :)
Not much of a big deal. These "optimizations" caused enough bugs that they actually got reverted.
https://www.man7.org/linux/man-pages/man2/getpid.2.html
Because of the aforementioned problems, since glibc 2.25,
the PID cache is removed: calls to getpid() always invoke
the actual system call, rather than returning a cached value.
Get rid of libc and you gain the ability to have zero global state in exchange. Freestanding C actually makes sense and is a very fun language to program in. No legacy nonsense to worry about. Even errno is gone.

As a result of that brilliant design choice, every single language can make Linux system calls natively. It should be simple for JIT compilers to generate Linux system call code. No need to pull in some huge C library just for this. AOT compilers could have a linux_system_call builtin that just generates the required instructions. I actually posted this proposal to the GCC mailing list.
That's not what Linux is, though. It's a kernel. libc is a userspace library. The Linux developers could also make their own libpng and put their stable interface in there, but that's not in scope for their project.
> As a result of that brilliant design choice, every single language can make Linux system calls natively.
That is like saying it's a brilliant design choice for an artist to paint the sky blue on a sunny day. If Linux is a kernel, and if a kernel's interface with userspace is syscalls, and if Linux wants to avoid breaking userspace with kernel updates, then it needs a stable syscall interface.
> No need to pull in some huge C library just for this.
Again, I'm not sure why the Linux project would invent this "huge C library" to use as their stable kernel interface.
https://old.reddit.com/r/linux/comments/fx5e4v/im_greg_kroah...
The importance of this design should not be underestimated. It's not really an obvious thing to realize. If it were, every other operating system and kernel out there would be doing it as well. They aren't. They all make people link against some library.
So Linux is actually pretty special. It's the only system where you actually can trash the entire userspace and rewrite the world in Rust. Don't need to link against any "core" system libraries. People usually do but it's not forced upon them.
> if Linux wants to avoid breaking userspace with kernel updates, then it needs a stable syscall interface
Every kernel and operating system wants to maximize backwards compatibility and minimize user space breakage. Most of them simply stabilize the system libraries instead. The core libraries are stable, the kernel interfaces used by those core libraries are not.
So it doesn't follow that it needs a stable syscall interface. They could have solved it via user space impositions. The fact they chose a better solution is one of many things that makes Linux special.
No, they would not. I can say this with confidence because at any point in the last several decades, any OS vendor could have started to do so, and they have not. They have uniformly decided that having a userspace library as their stable kernel interface is easier to maintain, so that's what they do. The idea that the rest of the world hasn't "realized" that, in addition to maintaining binary compatibility in their libc, they could also maintain binary compatible syscalls is nonsensical.
The Linux kernel, on the other hand, doesn't ship a userspace. If they wanted their stable interface to be a userspace library, they'd need to invent one! And that would be more work than providing stable syscalls.
> So Linux is actually pretty special. It's the only system where you actually can trash the entire userspace and rewrite the world in Rust.
That's not rewriting the world, that would be a new userspace for the Linux kernel. You're still calling down into C, there's just one fewer indirection along the way.
> So it doesn't follow that it needs a stable syscall interface. They could have solved it via user space impositions.
They could have, but as Greg Kroah-Hartman pointed out, that would have just shifted the complexity around. Stability at the syscall level is the simplest solution to the problem that the Linux project has, so that's what they do.
It would be pretty funny if the kernel's stability strategy was in service of allowing userspace to avoid linking a C library, considering it's been 30+ years and the Linux userspace is almost entirely C and C++ anyway.
You must be thinking of https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/ which was about the vDSO (a virtual dynamically linked library which is automatically mapped by the kernel on every process), not system calls. You normally call into the vDSO instead of doing direct system calls, because the vDSO can do some things (like reading the clock) in an optimized way without entering the kernel, but you can always bypass it and do the system calls directly; doing the system calls directly will not use any of the userspace stack (it immediately switches to a kernel stack and does all the work there).
Golang has CGO_ENABLED=1 as the default for this reason.
[0]: https://github.com/golang/go/issues/61917 [1]: https://github.com/golang/go/issues/60797
So far I only knew about PHP undeprecating "is_a" function, so I guess this puts Go in good company ^^
[0] https://github.com/ziglang/zig/blob/ee9f00d673f2bccddc2751c3...
https://github.com/lone-lang/lone
It's a lisp interpreter with a built in system-call primitive. The plan is to implement everything else from inside the language. Completely freestanding, no libc needed. In the future I expect to be able to boot Linux directly into this thing.
The only major feature still needed for kernel support is a binary structure parser for the C structures. I've already implemented and tested the primitives for it. I even added support for unaligned memory accesses.
Iteration is the only major language feature that's still missing. I'm working on implementing continuations in the interpreter so that I can have elegant Ruby style iteration. This is taking longer than expected.
This interpreter can make the Linux kernel load lisp modules before its code even runs. I invented a self-contained ELF loading system that allows embedding arbitrary data into a loadable ELF segment that the kernel automatically maps into memory. Then it's just a matter of reaching it via the auxiliary vector. The interpreter uses this to automatically run code, allowing it to become a freestanding lisp executable.
I wrote an article about how it works here:
https://www.matheusmoreira.com/articles/self-contained-lone-...
Kind of an understatement. The existence of an official interface obsoletes 3rd party projects like the one posted.
What you don't understand, because you don't work on Chrome or Chrome-sized projects, is that generic, lowest-common-denominator implementations cannot be optimal for all use cases, and at scale (a Chrome-sized project) those inefficiencies matter. That's why this exists, that's why folly exists, that's why abseil exists, and that's why not everyone can just use boost, etc etc etc
/* SPDX-License-Identifier: LGPL-2.1 OR MIT */
It's technically not part of Linux's headers either. It's published under the tools subdirectory, so it's something that ships along with the kernel but isn't used by the kernel itself. Basically, it's there because some people might find it useful, but it could just as well have been a separate repo.
Linux "C" code's hard dependency on gcc/clang ("ultra complex compilers") is getting worse by the day. It should (very easy to say, I know) have stayed very simple and plain C99+, with smart macro definitions to be replaced with pure assembly for the missing bits of modern hardware programming (atomics/memory barriers, explicit unaligned access, etc.). But abominations like _Generic (or _Static_assert, __thread, etc.) are toxic additions to the C standard (there are too many toxic additions and not enough removal/simplification/hardening in ISO C; yes, we will have to define a "C profile" which breaks backward compatibility with hardening and simplifications).
I'm not saying all extensions are bad, but I think they need a more "smart and pertinent pick and choose" (and I know this is a tough call), just because they "cost". For instance, for a kernel, we know it must have fine-grained control of ELF object sections... or we would get many more source files (one per pertinent section) or "many more source configuration macros" (...but there I start to wonder if that was not actually the "right" way, instead of requiring a C compiler to support such an extension; it moves everything to the linker script, which is "required" for a kernel anyway).
Linus T. is not omnipotent and can only do so much, and a lot of "official" Linux devs are putting really nasty SDK dependency requirements into everyday/everywhere kernels.
That said, on my side, many of my user apps are now directly using Linux syscalls... but they are written in RISC-V assembly interpreted on x86_64 (I have a super lean interpreter/syscall translator written in x86_64 assembly, and a super lean executable format wrapped in the ELF executable format), or in very plain and simple C99+ (legacy, or because I want some apps to be a bit more 'platform-crossy'... for now).
Yes, that amount of complexity is obviously toxic... and saying otherwise is what will make it hard for you to be taken seriously, come on dude...
The non-arch-specific callers which use this are here, which also look relatively straightforward: https://github.com/torvalds/linux/blob/master/tools/include/...
I don't see any complex stack alignment or anything which reads to me like it would require "niche C compiler options", so I'm curious if I'm missing something?
Most libc functions, including the syscall wrappers and all pthreads functions, aren't safe to call in threads created by raw clone(). Anything that reads or writes errno, for example, is not safe.
I've had to do this a couple of times. One, a long time ago, was an audio mixing real-time thread for a video game, which had to keep the audio device fed with low-latency frames for sound effects. In those days, pthreads wasn't good enough. For talking to the audio device, we had to use the Linux syscall wrapper macros, which have since been replaced by nolibc. More recently, a thread pool for high-performance storage I/O, which ran slightly faster than io_uring, and ran well on older kernels and on kernels with io_uring disabled for security.
They all do the same thing (take you from A to B), but offer different levels of comfort, efficiency and utility :)
As one of my class projects, I built a Linux compatibility layer for the toy OS we had built, by adding a proper ELF loader and emulating syscalls. I really struggled to get glibc or even musl working, and so I ended up hand-coding some `-nostdlib` programs instead of being able to use coreutils. If nolibc really works as a minimal libc, it would have been incredibly cool to be able to run coreutils on my OS!
For example, on a 64-bit arch, this code would be sus.
syscall(__NR_syscall_taking_6_args, 1, 2, 3, 4, 5, 6);
Quiz: why?
PS: it's a common mistake, so I thought I'd save you a trip down the debugging rabbit hole.
This is a huge edge case, but is 8(%rsp) guaranteed to be readable memory?
#include <sys/syscall.h>
#include <unistd.h>
/* Empty function whose only purpose is to spill a 7th argument,
   0xFFFFFFFFFFFFFFFF, onto the stack at (%rsp). */
void s(long a, long b, long c, long d, long e, long f, long g) {
}
int main(void) {
long a = 0xFFFFFFFFFFFFFFFF;
s(a, a, a, a, a, a, a);
syscall(310 /* process_vm_readv */, 1, 2, 3, 4, 5, 6);
return 0;
}
Now, strace shows:
$ strace -e process_vm_readv ./a
process_vm_readv(1, 0x2, 3, 0x4, 5, 18446744069414584326) = -1 EINVAL (Invalid argument)
objdump -d a
117f: 48 c7 45 f0 ff ff ff  movq   $0xffffffffffffffff,-0x10(%rbp)
1186: ff
1187: 48 8b 7d f0 mov -0x10(%rbp),%rdi
118b: 48 8b 75 f0 mov -0x10(%rbp),%rsi
118f: 48 8b 55 f0 mov -0x10(%rbp),%rdx
1193: 48 8b 4d f0 mov -0x10(%rbp),%rcx
1197: 4c 8b 45 f0 mov -0x10(%rbp),%r8
119b: 4c 8b 4d f0 mov -0x10(%rbp),%r9
119f: 48 8b 45 f0 mov -0x10(%rbp),%rax
11a3: 48 89 04 24 mov %rax,(%rsp)
11a7: e8 94 ff ff ff call 1140 <s>
11ac: bf 36 01 00 00 mov $0x136,%edi
11b1: be 01 00 00 00 mov $0x1,%esi
11b6: ba 02 00 00 00 mov $0x2,%edx
11bb: b9 03 00 00 00 mov $0x3,%ecx
11c0: 41 b8 04 00 00 00 mov $0x4,%r8d
11c6: 41 b9 05 00 00 00 mov $0x5,%r9d
11cc: c7 04 24 06 00 00 00 movl $0x6,(%rsp)
11d3: b0 00 mov $0x0,%al
11d5: e8 56 fe ff ff call 1030 <syscall@plt>
Only 4 bytes are put on the stack, but syscall() will read 8. The earlier call to s() left 0xffffffffffffffff at (%rsp), and the movl only overwrites the low 4 bytes, so syscall() sees 0xffffffff00000006 (= 18446744069414584326, the last argument strace printed). It's tricky if one doesn't control the types of the arguments passed through varargs.
They didn't claim to save work, they claimed to save hitting a bug, and having to debug it.
They said the word "vararg". They gave you everything.
They gave me everything to dismiss their claim.
Here's a free dollar. "Only one?"
They said the word "vararg". That is everything. You take that, and you say "oh shit, right, thanks for the heads up" or if you don't already know what's so special about that, you do know they obviously said that for some reason so you fucking google it.
Either way, they pointed you in the right direction, and that is helpful.
The further reading that you find so unbearable takes you exactly the same time to read something that has already been written and is just sitting out there to look up for free, as to read something you demand they write again on the spot bespoke for you.
And since, as you say, they aren't a professor or colleague you personally know and respect, why do you care if they write out a full article or just a pointer? You just said you don't trust a rando. You wouldn't trust their full article anyway.
Once again, you assume the conclusion that their comment is helpful and correct and meaningful, and you work backwards to excuse their poor explanation that they justified with "let's say it's a quiz".
And if you don't like my reply, take your own advice and go away. Why do you care what I think of their phrasing? You're not going to get me to stop anyway, or the dozens of people who upvoted me.
Or keep swearing at me and getting downvoted, whatever floats your boat.
Need to cast them to long or size_t or whatever to prevent this.
The kernel actually signals errors by returning a negative error code (on most arches), which seems like a better calling convention. Storing errors in something like `errno` opens a whole can of worms around thread safety and signal safety, while seemingly providing very little benefit beyond following tradition.
Yes, we can do better. Yes, we probably should do better. But in some cases you really have to think through every edge case and in the end someone has to do it. So just be grateful for what we have.
The branch that actually touches errno is unlikely to be executed. However, I did experience a puzzling crash with a cross-compiled libc, because the compiler was smart enough to hoist a speculative load of errno out of the branch. Fun times.
Sometimes you actually want to make sure that the exact syscall is called; e.g. you're writing a little program protected by strict seccomp rules. If the layer can magically call some other syscall under the hood this won't work anymore.
[0]: https://ziglang.org/documentation/master/std/#std.os.linux