Then again...
Frozen pipes are no joke.
I personally would.
I've read that sometimes wordy articles are mostly fluff for SEO.
In this case, the library that buffers in userspace should set appropriate timers when it first buffers the data. Good choices of timeout parameter are: passed in as an argument; slightly below human scale (e.g. 1-100 ms); proportional to {threshold / bandwidth} (i.e. some multiple of the time it would take to reach the flush threshold at a given access rate); or proportional to a target flushing overhead (e.g. spend no more than 0.1% of time in syscalls).
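For concreteness, here is a minimal sketch of that idea on Linux (the struct, the field names, and the 8 KiB threshold are all made up for illustration), assuming the process already has an event loop that can watch a timerfd and call flush_now() when it becomes readable:

#include <string.h>
#include <sys/timerfd.h>
#include <unistd.h>

/* Minimal sketch, not a real library: arm a one-shot timer when the first
   byte is buffered; the caller's event loop watches w->timer_fd and calls
   flush_now() when it fires. Error handling omitted. */
struct coalesced_writer {
    int out_fd;        /* destination pipe/socket/file */
    int timer_fd;      /* from timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK) */
    long timeout_ms;   /* e.g. roughly threshold/bandwidth, or a few ms */
    size_t used;
    char buf[8192];
};

static void flush_now(struct coalesced_writer *w) {
    if (w->used) {
        write(w->out_fd, w->buf, w->used);
        w->used = 0;
    }
    struct itimerspec off = {0};             /* disarm the timer */
    timerfd_settime(w->timer_fd, 0, &off, NULL);
}

static void buffered_write(struct coalesced_writer *w, const void *p, size_t n) {
    if (w->used + n > sizeof w->buf)
        flush_now(w);
    if (n >= sizeof w->buf) {                /* oversized write: bypass the buffer */
        write(w->out_fd, p, n);
        return;
    }
    if (w->used == 0) {                      /* first byte buffered: start the clock */
        struct itimerspec t = {0};
        t.it_value.tv_sec = w->timeout_ms / 1000;
        t.it_value.tv_nsec = (w->timeout_ms % 1000) * 1000000L;
        timerfd_settime(w->timer_fd, 0, &t, NULL);
    }
    memcpy(w->buf + w->used, p, n);
    w->used += n;
}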
Also note this applies to both writes and reads: if you do batched/coalesced reads then you likely want to do something similar. Though this is usually more dependent on your data channel, since you need some way to query or be notified of “pending data” efficiently, which your channel may not have if it was not designed for this use case. Again, it's pretty common in hardware to do interrupt coalescing and the like.
Timeouts are usually done with signals (a safety nightmare, so no thanks) or an event loop. Hence my thought that you can't do it really transparently while keeping current interfaces.
And no, creating threads to solve this fringe problem in a spin loop with a sleep is not what I'd call "smart". It's unnecessary complexity and in most cases, totally wasted work.
This is a very bad reason to justify something, especially introducing threads. Your response here is like saying "I don't know why people say it's so hard to write multi-threaded programs, the thread-create API is so simple." It completely misses the point of why added complexity can be harmful.
> without rarely occurring bugs.
Except for a glaring thing like "what if fflush gets an I/O error in this background thread"?
> Granted, it would rub me the wrong way if libc did this by default,
This is exactly my point. It needs cooperation from the application layer. It wouldn't make sense to be transparent.
Complexity, readability, etc. is the argument people make when they've run out of arguments.
I/O errors could occur at any point, instead of only when you write. Syscalls everywhere could be interrupted by a timer, instead of only where the program set timers, or when a signal arrives. There's also a reasonable chance of confusion when the application and libc both set timers, depending on how the timer is set (although maybe this isn't relevant anymore... kernel timer APIs look better than I remember). If the application specifically pauses signals for critical sections, that impacts the I/O timers, etc.
There's also a need to be more careful in accessing I/O structures because of when and how signals get handled.
On the reading end, the error may occur at the attempt to read the pipe.
On the writing end, the error may be signaled at the next attempt to write to or close the pipe.
In either case, a SIGPIPE can be sent asynchronously.
What scenario am I missing?
My expectation (and I think this is an accurate expectation) is that: a) read does not cause a SIGPIPE; read on a widowed pipe returns a zero-count read as an indication of EOF. b) write on a widowed pipe raises SIGPIPE before the write returns. c) a write to a pipe that is valid at the time will not raise SIGPIPE if the pipe is later widowed without being read from.
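A tiny self-contained demo of point (b), as I understand it (a sketch, error handling omitted): the signal only shows up at the write to the widowed pipe, never earlier.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_sigpipe(int sig) {
    (void)sig;
    write(STDERR_FILENO, "got SIGPIPE\n", 12);   /* async-signal-safe */
    _exit(1);
}

int main(void) {
    int fds[2];
    pipe(fds);
    signal(SIGPIPE, on_sigpipe);
    close(fds[0]);                /* widow the pipe: no readers remain */
    puts("about to write");       /* nothing has gone wrong yet */
    fflush(stdout);
    write(fds[1], "x", 1);        /* SIGPIPE is raised by this write */
    puts("not reached");
    return 0;
}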
Yes, you could get a SIGPIPE from anywhere at anytime, but unless someone is having fun on your system with random kills, you won't actually get one except immediately after a write to a pipe. With a timer based asynchronous write, this changes to potentially happening any time.
This could be fine if it was well documented and expected, but it would be a mess to add it into the libcs at this point. Probably a mess to add it to basic output buffering in most languages.
You could also relax the guarantee and set a timeout that is only checked during your next write. This still allows unbounded latency, but as long as you do one more write it will flush.
If neither of these works, then your program issues a write and then gets into an unbounded or unreasonably long loop/computation. At which point you can manually flush what is likely the last write your program is ever going to make, which would be trivial overhead since that is a single write compared to a ridiculously long computation. That, or you probably have bigger problems.
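Here's a minimal sketch of that relaxed variant (timeout checked only on the next write): the names are hypothetical, there is no real timer involved, and error handling is omitted.

#include <string.h>
#include <time.h>
#include <unistd.h>

/* Sketch only: the deadline is checked lazily, so latency is bounded only if
   another write (or an explicit flush) eventually happens. */
static char buf[8192];
static size_t used;
static struct timespec deadline;

static int past(const struct timespec *t) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return now.tv_sec > t->tv_sec ||
           (now.tv_sec == t->tv_sec && now.tv_nsec >= t->tv_nsec);
}

void lazy_write(int fd, const char *p, size_t n, long timeout_ms) {
    if (used && past(&deadline)) { write(fd, buf, used); used = 0; }  /* overdue: flush old batch */
    if (used + n > sizeof buf)   { write(fd, buf, used); used = 0; }  /* would overflow: flush */
    if (n >= sizeof buf)         { write(fd, p, n); return; }         /* oversized: write through */
    if (used == 0) {             /* first byte of a new batch: set its deadline */
        clock_gettime(CLOCK_MONOTONIC, &deadline);
        deadline.tv_sec  += timeout_ms / 1000;
        deadline.tv_nsec += (timeout_ms % 1000) * 1000000L;
        if (deadline.tv_nsec >= 1000000000L) { deadline.tv_sec++; deadline.tv_nsec -= 1000000000L; }
    }
    memcpy(buf + used, p, n);
    used += n;
}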
If you're already using an event loop library, I think it's reasonable for that to manage flushing outputs while waiting for reads, but I don't think any of the utilities in this example do; maybe tcpdump does, but I don't know why grep would.
grep buffers writes with no flush timeout resulting in the problem in the article.
grep should probably not suffer from the problem and can use a write primitive/library/design that avoids such problems with relatively minimal extra complexity and dependencies while retaining the performance advantages of userspace buffering.
Most programs that are minimizing dependencies and so cannot pull in a large framework (like grep or other simple utilities) would benefit from using such modestly more complex primitives instead of bare buffered writes/reads. Such primitives are relatively easy to use and understand, being largely a drop-in replacement in most common use cases, and they resolve most remaining problems with buffered accesses.
Essentially, this sort of primitive should be your default and you should only reach for lower level primitives in your application if you have a good reason for it and understand the problems the layers were designed to solve.
Yes, but you said
> In this case, the library that buffers in userspace should set appropriate timers when it first buffers the data
The library that buffers in userspace for grep and tcpdump is almost certainly libc.
It did not even occur to me that anybody would even think this was some sort of statement about whatever libc they use on Linux given that I said just “buffered accesses” with no reference to platform or transport channel.
I thought somebody might think I was talking about just writes, so I deliberately wrote accesses.
I thought somebody would make some sort of pedantic statement if I just said “should” so I wrote “should almost always”.
I thought somebody might think I was talking about write() in particular so I deliberately avoided talking about any specific API to head that off.
In my reply I deliberately said “blocking read/wait” instead of select() or epoll() or io_uring or whatever other thing they use these days to avoid such confusion that it was a specific remedy for a specific library or API.
But, alas, here we are. My pedantry was no match for first contact. You will just have to forgive my inability to consider the dire implications of minor ambiguities.
The point is to guarantee data gets flushed promptly which only fails when not enough data gets buffered. The timeout is a fallback to bound the flush latency.
If you flush before the buffer is full, you’re sacrificing throughput. Additionally, the timer firing causes further performance degradation, especially if you’re in libc land and only have SIGALRM available.
So when an additional write is added, you want to push out the timer. But arming the timer requires reading the current time among other things, and at rates of 10-20 MHz and up, reading the current wall clock gets expensive. Even rdtsc approaches start to struggle at 20-40 MHz. You obviously don’t want to do it on every write, but you want to make sure that you never actually trigger the timer if you’re producing data at a fast enough clip to fill the buffer within a reasonable time anyway.
Source: I implemented write coalescing in my NoSQL database, which can sustain 8-byte writes into an in-memory buffer at a rate of a few gigahertz. Once the buffer is full or a timeout occurs, a flush to disk is triggered, and I net out at around 100M writes/s (sorting the data for the LSM is one of the main bottlenecks). By comparison, DBs like RocksDB can do ~2M writes/s and SQLite can do ~800k.
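One way to sidestep the "don't read the clock on every write" problem (a sketch of a general trade-off, not how any particular database does it) is to not push the deadline out at all: arm a single one-shot timer when a batch starts, and when it fires only flush if that same batch is still pending. arm_one_shot_timer() and flush_buffer() below are stubs standing in for your real timer and flush code.

/* Sketch: the hot write path never reads the clock. One timer arm per batch,
   not per write; a generation counter tells the timer whether the batch it
   was armed for has already been flushed by fullness. Single-threaded. */
static unsigned long flush_gen;   /* bumped on every flush */
static unsigned long armed_gen;   /* value of flush_gen when the timer was armed */

static void arm_one_shot_timer(long ms) { (void)ms; /* timerfd/timer_create in real code */ }
static void flush_buffer(void) { flush_gen++; /* write out the batch in real code */ }

void on_first_buffered_byte(long timeout_ms) {
    armed_gen = flush_gen;            /* remember which batch the timer belongs to */
    arm_one_shot_timer(timeout_ms);   /* once per batch, not once per write */
}

void on_timer_fired(void) {
    if (flush_gen == armed_gen)       /* batch still pending: producer stalled, flush it */
        flush_buffer();
    /* else: the buffer filled and was flushed before the timer fired; nothing to do */
}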
You only lose throughput in proportion to the handling cost of a single potentially spurious timeout/timeout clear per timeout duration. You should then tune your buffering and threshold to cap that at an acceptable overhead.
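To put rough, made-up numbers on that: if a spurious timer fire plus re-arm costs on the order of 1 µs and the flush timeout is 10 ms, the worst-case overhead is about 1 µs / 10 ms = 0.01% of CPU time, which is usually lost in the noise next to the syscalls you were already making.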
You should only really have a problem if you want both high throughput and low latency, at which point general solutions are probably not fit for your use case, but you should remain aware of the general principle.
Yes, you’ve accurately summarized the end goal. Generally people want high throughput AND low latency, not just to cap the maximum latency.
The one-shot timer approach only solves a livelock risk. I’ll also note that your throughput does actually drop at the same time as the latency spike, because your buffer stays the same size but you took longer to flush to disk.
Tuning correctly turns out to be really difficult to accomplish in practice which is why you really want self healing/self adapting systems that behave consistently across all hardware and environments.
The proposed fix makes the contract a lot more complicated.
“The system is working as it was designed,” is always true but unhelpful.
So sure, it would maybe be a better UX to be able to combine things and have them work, but there is a fundamental tension between building something that's optimized for moving chunks of data and building something that's interactive. And trying to force one into the other, in my humble opinion, is not the solution.
I think this position is user-hostile.
`vim` and `rogue` are fully user-interactive programs. The same is not true of `tail -f`, which by default appears to users as a stream of lines.
I understand why, at a technical level, `tail -f | grep` doesn't work in the way that's expected here. But it should! At least, when invoked from a user-interactive shell session -- in that context, a "chunk of data" is clearly expected to be a newline-delimited line, not a buffer of some implicitly-defined size.
It’s hard to argue that grep isn’t supposed to work like this when grep tries to work like this. It’s not a fundamental tension, it’s just that isatty(stdout) doesn’t always tell you when you’re running in an interactive terminal.
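For reference, the heuristic such tools rely on (explicitly, or via stdio's defaults) looks roughly like the sketch below; when stdout is a pipe rather than a terminal the branch isn't taken, and output sits in a multi-kilobyte stdio buffer. This isn't grep's actual source, just the common pattern.

#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* stdio already line-buffers a terminal by default; tools that want the
       same behaviour whenever they "think" they're interactive do it explicitly. */
    if (isatty(STDOUT_FILENO))
        setvbuf(stdout, NULL, _IOLBF, BUFSIZ);   /* flush at every newline */
    /* ... rest of the program uses printf/fwrite as usual ... */
    printf("matched line\n");
    return 0;
}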
Or, even if it is true of pipes, then we need an alternate version of a pipe that signals not to buffer, and can be used in all the same places.
It's a real problem either way, it just changes the nature of the problem.
> The proposed fix makes the contract a lot more complicated.
How so? Considering programs already have to deal with terminals, I'm skeptical a way to force one aspect of them would be a big deal.
It's one of those things you get used to when you've used Unix-like systems long enough. Yes, it's better if things just work the way someone who is not a power user expects, but that's not always possible, and I'd say it's not worth trying to always meet that goal, especially if it leads to more complexity.
I would say that the platonic ideal of the pipe Unix mechanism has no buffering, and the buffer is only there as a performance optimization.
> What's `less | grep` or `vim | grep`... do we need to send input back through the pipe now?
Well, this is "interactive" in the timing sense. It still has one-way data flow. That's how I interpreted you and how I used the word.
If you meant truly interactive, then I think you're talking about something unrelated to the post.
The buffer in Unix (or rather C?) file output goes back to the beginning of time. It's not the pipe that's buffering.
Anyways, as soon as your mental model of these command line utilities includes the buffering then the behavior makes sense. How friendly it is can be debated. Trying to make it work with timers feels wrong and would introduce more complexity and deviate from some people's mental model.
I don't see any particular reason for a pipe to be more like a file than a terminal.
And I don't see why my mental model should be file-like and only file-like.
> Trying to make it work with timers feels wrong and would introduce more complexity and deviate from some people's mental model.
Oh, that's the specific proposed fix you meant. Okay, I can see why you'd dislike that, but I would say that forcing line mode doesn't have those downsides.
About the commands that don't buffer, this is either implementation-dependent or even wrong in the case of cat (cf. https://pubs.opengroup.org/onlinepubs/9799919799/utilities/c... and `-u`). It's a massive pain that POSIX never included an official way to manage this.
Not mentioned is input buffering, which would give you this strange result:
$ seq 5 | { v1=$(head -1); v2=$(head -1); printf '%s=%s\n' v1 "$v1" v2 "$v2"; }
v1=1
v2=
The fix is to use `stdbuf -i0 head -1` in this case. In any case, stdbuf doesn't seem to help with this:
$ ./a | stdbuf -i0 -- cat
#include <stdio.h>
#include <unistd.h>
int main(void) {
for (;;) {
printf("n");
usleep(100000);
}
}
E.g. if I want to test out my greps on a static file and then switch to grepping the output of a tail -f command
Once I have the final command, if I’m moving it into a shell script, then _maybe_ I’ll switch to file redirection.
cat foo.txt | bar | blah > out.log
vs. bar < foo.txt | blah > out.log
It looks more like what it is. Also, with cat you can add another file or use a glob; that's come in handy more than once. Furthermore, it means the first command isn't special: if I decide I want something else as the first command I just add it. Pure... concatenation. heh.
It's useful to know both ways, I suppose. But "don't use trivial cat" is just one of those self-perpetuating doctrines, there's no actual reason not to do things that way if you want.
< foo.txt bar | blah > out.log
I like that more, in a way, and less, in a way. The angle bracket is pointing off into nothing but throws `foo.txt` into `bar` anyway, so the control flow seems more messed up than in `bar < foo.txt`.
On the other hand it's structurally a bit more useful, because I can insert a different first stage very easily. But I still can't add a filename or change it to a glob, so cat is still more flexible.
So I'm going to stick to my trivial-catting ways, but thanks for the heads-up.
I believe quite a few utilities actually do try to flush their stdout on receiving SIGINT... but as you've said, the other side of the pipe may also very well have received a SIGINT, and nobody does a short-timed wait on stdin on SIGINT: after all, the whole reason you've been sent SIGINT is because the user wants your program to stop working now.
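The pattern those utilities use typically looks something like the sketch below (hypothetical program, not taken from any particular tool): flush stdio, restore the default handler, and re-raise so the exit status still reflects death by SIGINT. Note that fflush is not strictly async-signal-safe, so careful programs often just set a flag and flush from the main loop instead.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_sigint(int sig) {
    fflush(stdout);        /* push out whatever is sitting in the stdio buffer */
    signal(sig, SIG_DFL);  /* restore default disposition */
    raise(sig);            /* die by SIGINT as usual once the handler returns */
}

int main(void) {
    signal(SIGINT, on_sigint);
    for (;;) {
        printf("tick\n");  /* sits in a full-sized stdio buffer when piped */
        sleep(1);
    }
}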
> this post is only about buffering that happens inside the program, your operating system’s TTY driver also does a little bit of buffering sometimes
and if the TTY is remote, so do the network switches! it's buffering all the way down.
But also, the example is not a great one; grepping tcpdump output doesn't make sense given its extensive and well-documented expression syntax. It's obviously just used as an example here to demonstrate buffering.
I dunno. It doesn't make sense in the world where everyone makes the most efficient pipelines for what they want; but in that world, they also always remember to use --line-buffered on grep when needed, and the line-buffered output option for tcpdump.
In reality, for a short term thing, grepping on the grepable parts of the output can be easier than reviewing the docs to get the right filter to do what you really want. Ex, if you're dumping http requests and you want to see only lines that match some url, you can use grep. Might not catch everything, but usually I don't need to see everything.
Well. Personally, every time I've tried to learn its expression syntax from its extensive documentation my eyes would start to glaze over after about 60 seconds; so I just stick with grep — at worst, I have to put the forgotten "-E" in front of the pattern and re-run the command.
By the way, and slightly off-tangent: if anyone ever wanted grep to output only some part of the captured pattern, like -o but only for the part inside the parentheses, then one way to do it is to use a wrapper like this:
#!/bin/sh -e
GREP_PATTERN="$1"
SED_PATTERN="$(printf '%s\n' "$GREP_PATTERN" | sed 's;/;\\/;g')"
shift
grep -E "$GREP_PATTERN" --line-buffered "$@" | sed -r 's/^.*'"$SED_PATTERN"'.*$/\1/g'
Not the most efficient way, I imagine, but it works fine for my use cases (in which I never need more than one capturing group anyway). Example invocation:
$ xgrep '(^[^:]+):.*:/nonexistent:' /etc/passwd
nobody
messagebus
_apt
tcpdump
whoopsie
~ $ echo "foo bar1 baz foo bar2" | grep -oP 'foo \Kbar\d'
bar1
bar2
~ $ echo "foo bar1 baz foo bar2" | grep -oP 'foo \Kbar\d(?= baz)'
bar1
~ $ echo "foo bar1 baz foo bar2" | grep -oP 'foo \Kbar\d(?=$)'
bar2
Of course, it implies using a version of `grep` supporting the `-P` option. Notably, macOS doesn't by default, although if -P is utterly needed, there are ways to install gnu-grep or modify the command used to achieve the same result.
Your way is perhaps more cross-platform, but for my (very personal) use cases, mine is easier to remember and needs no setup.
Edit: worst case, piping to `cut` or `awk` can also be a solution.
> worst case, piping to `cut` or `awk` can also be a solution.
Yeah, I've used that too, and that's how I ended with writing the script down: constantly piping things through the second filter with yet another stupid regex that needs tinkering as well... isn't there a way to reuse the first regex, somehow? Hmm, don't the patterns in the sed's "substitute" use the same syntax as the grep does?.. They do! How convenient.
I think most programs will flush their buffers on SIGINT... But for that to work from a shell, you'd need to deliver SIGINT to only the first program in the pipeline, and I guess that's not how that works.
Otoh, do programs routinely flush if they get SIGINFO? dd(1) on FreeBSD will output progress if you hit it with SIGINFO and continue its work, which you can trigger with ctrl+T if you haven't set it differently. But that probably goes to the foreground process, so it probably doesn't help. And there's the whole thing where SIGINFO isn't POSIX and isn't really in Linux, so it's hard to use there...
This article [1] says tcpdump will output the packet counts, so it might also flush buffers, I'll try to check and report a little later today.
[1] https://freebsdfoundation.org/wp-content/uploads/2017/10/SIG...
I checked, tcpdump doesn't seem to flush stdout on siginfo, and hitting ctrl+T doesn't deliver it a siginfo in the tcpdump | grep case anyway. Killing tcpdump with sigint does work: tcpdump's output is flushed and it closes, and then the grep finishes too, but there's not a button to hit for that.
$ make pa re ci
cc -O2 -pipe -o pa pa.c
cc -O2 -pipe -o re re.c
cc -O2 -pipe -o ci ci.c
$ ./pa | ./re | ./ci > /dev/null
^Cci (2) 66241 55611 55611
pa (2) 55611 55611 55611
re (2) 63366 55611 55611
So with "pa" program that prints "y" to stdout, and "re" and "ci" that are basically cat(1) except that these programs all print some diagnostic information and then exit when a SIGPIPE or SIGINT is received, here showing that (on OpenBSD, with ksh, at least) a SIGINT is sent to each process in the foreground process group (55611, also being logged is the getpgrp which is also 55611). $ kill -l | grep INT
2 INT Interrupt 18 TSTP Suspended
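For anyone wanting to reproduce this, here is a guess at what such a diagnostic pass-through program might look like (this is not the poster's actual code; the output format and the meaning of the third column are assumptions):

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sig;

static void handler(int sig) { got_sig = sig; }

int main(void) {
    char buf[4096];
    ssize_t n;
    struct sigaction sa;
    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;                 /* deliberately no SA_RESTART, so read() returns on a signal */
    sigaction(SIGINT, &sa, NULL);
    sigaction(SIGPIPE, &sa, NULL);
    while (!got_sig && (n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        if (write(STDOUT_FILENO, buf, (size_t)n) < 0)
            break;
    /* name, signal number, pid, process group, terminal's foreground group (assumed columns) */
    fprintf(stderr, "re (%d) %d %d %d\n", (int)got_sig,
            (int)getpid(), (int)getpgrp(), (int)tcgetpgrp(STDERR_FILENO));
    return 0;
}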
https://www.gibney.org/the_output_of_linux_pipes_can_be_inde...
This is an old problem, I encounter it often when working with UART, and there's a variety of possible solutions:
Use a special character, like a new line, to signal the end of output (line-based).
Use a length-based approach, such as waiting for 8KB of data.
Use a time-based approach, and print the output every X milliseconds.
Each approach has its own strengths and weaknesses; which one works best depends upon the application. I believe the article is incorrect when mentioning certain programs that don't use buffering; they just don't use an obvious length-based approach.
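A sketch combining all three approaches (made-up threshold and timeout values, error handling omitted), in the style of a serial-port reader that forwards data to stdout:

#include <poll.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define FLUSH_BYTES 4096          /* length-based threshold (hypothetical) */
#define FLUSH_MS    50            /* time-based limit (hypothetical) */

void pump(int uart_fd) {
    char buf[FLUSH_BYTES];
    size_t used = 0;
    for (;;) {
        struct pollfd p = { .fd = uart_fd, .events = POLLIN };
        /* Only start the clock once something is pending; otherwise wait forever.
           (A trickle of bytes can keep resetting this timeout; a stricter bound
           would track an absolute deadline instead.) */
        int r = poll(&p, 1, used ? FLUSH_MS : -1);
        if (r < 0)
            break;
        int timed_out = (r == 0);
        if (!timed_out) {
            ssize_t n = read(uart_fd, buf + used, sizeof buf - used);
            if (n <= 0)
                break;
            used += (size_t)n;
        }
        if (timed_out ||                       /* time-based */
            used == sizeof buf ||              /* length-based */
            memchr(buf, '\n', used)) {         /* line-based */
            fwrite(buf, 1, used, stdout);
            fflush(stdout);
            used = 0;
        }
    }
    if (used) { fwrite(buf, 1, used, stdout); fflush(stdout); }
}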
It's also worth mentioning a recent improvement we made (in coreutils 8.28) to the operation of the `tail | grep` example in the article. tail now notices if the pipe goes away, so one could wait for something to appear in a log, like:
tail -f /log/file | grep -q match
then_do_something
There are lots of gotchas to pipe handling really.
See also: https://www.pixelbeat.org/programming/sigpipe_handling.html
I knew it had something to do with buffers and it drove me nuts, but I couldn't find a fix; none of the solutions I tried really worked.
(Problem got solved when we got rid of ruby in CI - it was legacy).
Node.js sets stdin to nonblocking. This is great because it means copying and pasting a shell script containing an npm install into your shell will work, since the descriptor is reset between each program by your terminal. But when those same lines are executed by the bash interpreter directly, processes after npm will randomly fail by failing to read from stdin with a return value they never expected to see. Ask me how I know.
sed -e '/pattern1/!d' -e '/pattern2/!d'
which generalizes to more terms. Easier to remember and just as portable is
awk '/pattern1/ && /pattern2/'
but now you need to launch a full awk. For more ways see https://unix.stackexchange.com/questions/55359/how-to-run-gr...
tail -f /some/log/file | grep -E 'thing1.*thing2'
This will only match if the subpatterns, i.e. thing1 & thing2, appear in this order, and it also requires that the patterns do not overlap.
Ultimately even “no buffer” still has a buffer, which is the number of bits it reads at a time. Maybe that’s 1, or 64, but it still needs some boundary between iterations.
The latency of a 'syscall' is on the order of a few hundred instructions. You're switching to a different privilege mode, with a different memory map, and where your data ultimately has to leave the chip to reach hardware.
It's absurdly important and it will never not be.
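As a rough, back-of-the-envelope illustration (assuming, say, ~300 ns per write(2) round trip): writing 1 GB one byte at a time is about a billion syscalls, i.e. on the order of 300 seconds of pure syscall overhead, while writing it through an 8 KiB buffer is about 122,000 syscalls, i.e. roughly 40 ms.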
Buffering generally is a CPU-saving technique. If we had infinite CPU, all buffers would be 1 byte. Buffers are a way of collecting together data to process in a batch for efficiency.
However, when the CPU becomes idle, we shouldn't have any work "waiting to be done". As soon as the kernel scheduler becomes idle, all processes should be sent a "flush your buffers" signal.
You'd only send signals to processes who had run any code since the CPU was last idle (if the process hasn't executed, it can't have buffered anything).
There could also be some kind of "I have buffered something" flag a process could set on itself, for example at a well-known memory address.
Unbuffered will gratuitously give you worse performance, and can create incorrect output if multiple sources are writing to the same pipe. (sufficiently-long lines will intermingle anyway, but most real-world output lines are less than 4096 bytes, even including formatting/control characters and characters from supplementary planes)
Line-buffering is the default for terminals, and usually what you want for pipes. Run each command under `stdbuf -oL -eL` to get this. The rare programs that want to do in-line updates already have to do manual flushing so will work correctly here too.
You can see what `stdbuf` is actually doing by running:
env -i `command -v stdbuf` -oL -eL `command -v env`
curl ... | grep -q
was giving me a "Failed write body error". I knew "grep -q" would close stdin and exit as soon as a match is found, and therefore I needed a buffer in front of grep but I was on a Mac, which to this day still doesn't come with "sponge" (or stdbuf and unbuffer for that matter), so I had to find a cross-platform command that does a little buffering but not too much, and could handle stdout being closed. So I settled on: curl ... | uniq | grep -q
To this day people are still confused about why there's no "sort" in front of uniq, and by the comment about this cross-platform buffering trick that I put in the script.
So we are on top of a large pile of optimizations that even makes "grep | grep | grep" possible, but we are so spoiled we don't even notice until the pile of abstractions runs too deep and the optimizations don't work anymore!
In this specific case, it is however a strange thing to do. Why would you want to start two processes in order to search for two words? Each with its own memory management, scheduling, buffering etc.? Even if the buffering can be turned off, the other overhead will be noticeable.
If you wanted to search for ten words, would you start ten separate grep processes, each responsible for their own word?
No, you would ask grep to search for lines containing both of these two words in any order, like so:
grep 'thing1.*thing2\|thing2.*thing1'
This is probably what the article alludes to in the "grep -E" example (the -E is there in order to not have to escape the pipe character), but the author forgot to write the whole argument, so the example is wrong.
It would be practical to have a special syntax for "these things, in any order" in grep, but sadly that's one of the things missing from original grep. This makes it unnecessarily difficult to construct the argument programmatically with a script. This was one of the things Perl solved with lookahead and lookbehind assertions; you can use these to look ahead from the start of the line to see if all words are present:
grep -P '^(?=.*?thing1)(?=.*?thing2)'
This syntax is hard to read and understand, but it's the least bad option when needed.
(Related: How to search for any one of several words? That's much easier because each match is independent of the others; just give grep several expressions:
grep -e 'thing1' -e 'thing2'
which has none of the above problems.)