The Success and Failure of Ninja - https://news.ycombinator.com/item?id=23157783 - May 2020 (38 comments)
(Reposts are fine after a year or so! Links to past threads are just to satisfy extra-curious readers)
A thousand times this! This puts into words something that's been lurking in the back of my mind for a very long time.
> The first chapter of the book claims, "The major problems of our work are not so much technological as sociological in nature". The book approaches sociological or 'political' problems such as group chemistry and team jelling, "flow time" and quiet in the work environment, and the high cost of turnover
[1] https://en.wikipedia.org/wiki/Peopleware:_Productive_Project...
I feel that the development of psychology and sociology has been lost on the workplace and isn't well applied. Executives want everyone to be widgets except themselves, even when study after study shows that for companies to perform optimally their workers must feel well compensated and well valued, and have balanced freedom in the workplace, chances for advancement, etc.
In many respects you could apply psychology and sociology to how products should or could behave as well, which I'm sure some companies have taken seriously (given the monetary component) at least in some periods of their lifecycle, like Apple under Steve Jobs after his comeback.
This is true even of (theoretically simple) things like retail jobs, because even if you're proficient in the basic skill set on day one, coming up to speed on the rhythm of a specific workplace still takes time.
I'm buggered if I can remember where I saw it, but there was a study once showing that (in that specific instance; I have no clue whether or not it generalises) a minimum wage increase actually *saved* retail/service employers in the area money overall, simply because the reduced churn meant employees stayed with the company longer, and the extra value per hour they delivered over that longer tenure more than compensated for the higher cost per hour.
Of course the study could always have been wrong, but it didn't seem obviously so back when I looked at it and it at the very least seems plausible to me.
Of course. This maximizes their relative power within the company.
Some executives are focused on the health of a company as a whole but not many. To most of them the pie can be assumed to be a fixed size and their job is to take as much of it as possible.
Being able to overlay an 80x24 terminal over one of my eyes (and drive it with a bluetooth keyboard or whatever) would've been fantastic for me.
Unfortunately for me, this is enough of an outlier desire that it doesn't seem likely anybody will ever want to sell me that at a price point I can convince myself of.
Not sure I can convince myself to spend the money to find out, but they're definitely going in the direction I was thinking of.
Ta.
> Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations. — Melvin E. Conway, How Do Committees Invent?
ninja has ~26 kloc, ~3,100 commits, and only a quarter of them by the original author (although weighted by lines of code changed, his share is higher). Interesting!
https://github.com/ninja-build/ninja/graphs/contributors
### Bunch of other comments ###
> users of ninja ... all Meson projects, which appears to increasingly be the build system used in the free software world;
So, AFAICT, that hasn't turned out to be the case.
> the code ends up being less important than the architecture, and the architecture ends up being less important than social issues.
Well... sometimes. Other times, the fact that there's good code that does something goes a very long way, and people live with the architectural faults. And as for the social issues - they rarely stand in opposition to the code itself.
> Some pieces of Ninja took struggle to get to and then are obvious in retrospect. I think this is true of much of math
Yup. And some of the rest of math becomes obvious when someone re-derives it using alternative, more convenient/powerful techniques.
> I think the reason so few succeed at this is that it's just too tempting to mix the layers.
As an author of a library that also focuses on being a "layer" of sorts (https://github.com/eyalroz/cuda-api-wrappers/), I struggle with this temptation a lot! Especially when, like the author says, the boundaries of the layers are not as clear as one might imagine.
> I strongly believe that iteration time has a huge impact on programmer satisfaction
I'm pretty certain that the vast majority of developers perform 10x more incremental builds than full builds. So, not just satisfaction - it's just most of what we do. It's also those builds which we wait out rather than possibly going off to look for some distraction.
OTOH, the article doesn't mention interaction with build artifact caching schemes, which lessen the difference between building from scratch and building incrementally.
> Peter Collingbourne found Ninja and did the work to plug it into the much more popular CMake ... If anyone is responsible for making Ninja succeed out there in the real world, Peter is due the credit.
It is so gratifying when a person you didn't know makes your software project that much more impactful! Makes you really feel optimistic again about humanity and socialism and stuff.
I am excellent at finding such things either hilarious or grounds to say "well, if you're going to be like that, I can't say I care about your opinion, piss off" and moving on to the next complaint in the hopes I can get useful feedback out of that one.
But there's a fair swathe of newbies where I have to step back and let other people help them instead, because if I try I'll end up accidentally driving them off and feeling like a dickhead afterwards :D
(I have tried and failed repeatedly at "Not Being a Bastard," so I've settled for leveling up in "Being a Self Aware Bastard" instead; at least that reduces how often I end up causing *un*intentional offence ;)
Because blenders don’t turn things into an ice cream texture
But regardless, I think those kinds of build systems are just wrong. What I want from a build system is to hash the content of all the transitive inputs and look up whether the resulting artifact already exists in a registry.
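For the sake of illustration, here's roughly what I mean as a minimal Python sketch; the "registry" is just a local directory and the dependency graph is assumed to be given, so the names and layout are invented rather than taken from any real tool:

    import hashlib, os, shutil, subprocess

    CACHE = ".build-cache"  # stand-in for a real (possibly remote) registry

    def tree_hash(target, deps, cmds):
        # Hash the build command plus the content of all transitive inputs.
        h = hashlib.sha256(cmds[target].encode())
        for dep in sorted(deps.get(target, [])):
            if dep in cmds:                       # dep is itself a build step
                h.update(tree_hash(dep, deps, cmds).encode())
            else:                                 # dep is a source file
                with open(dep, "rb") as f:
                    h.update(f.read())
        return h.hexdigest()

    def build(target, deps, cmds):
        key = os.path.join(CACHE, tree_hash(target, deps, cmds))
        if os.path.exists(key):                   # hit: reuse the cached artifact
            shutil.copy(key, target)
            return
        for dep in deps.get(target, []):          # build dependencies first
            if dep in cmds:
                build(dep, deps, cmds)
        subprocess.run(cmds[target], shell=True, check=True)
        os.makedirs(CACHE, exist_ok=True)
        shutil.copy(target, key)                  # publish to the "registry"

A real implementation would want a remote store, memoization of the hashes, and proper capture of command outputs, but the lookup-by-content-hash is the whole trick.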
Good build systems have native support for these things.
I've built something similar, a Deno library called "TDAR"[1], and it works well, but it takes some work to wrap up all the command-line tools that expect to work in some mutable filesystem so that you can pretend you're calling pure functions.
[1] I haven't got around to pulling it out of the parent project[2], but I talked about it in this youtube video: https://youtu.be/sty29o8sUKI
[2] If you're interested in this kind of thing you could poke me to open up the source for that thing. togos zero zero at gee mail dot comb
Also, “not the thing I wanted” doesn’t mean “wrong”, simply because there are other people in the world with different preferences
They're all hugely complex though.
For local builds only, I think SCons and Waf both use hashes for change detection.
My opinion is that a build system should figure out on its own how to build files; that is its job. The last thing I want to do is define targets or dependencies. All of this is already implicit in the code itself and is useless busywork. I should just be able to point it at a file or directory and that is it.
I prefer to just build my own build systems, bespoke to each project or environment, that do just what they should, no more and no less, leveraging the conventions in place and neatly integrating with team workflows (debugging, sanitizers, continuous integration, release, packaging, deployment, etc.).
I find that when you do that, there isn't much value in using any of the tools; they just add noise or make things slow. Running a graph of compiler and linker commands in parallel is fairly trivial and can be done in 20 lines of Python. The hard part is figuring out where the dependencies live, which versions to pick, and how the code implies those dependencies - for which the tools do nothing.
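To make the "20 lines of Python" claim concrete, something like the following sketch does the job (the example graph at the bottom is made up; error handling and cycle detection are left out):

    import subprocess
    from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

    def run_graph(cmds, deps, jobs=8):
        """cmds: {target: shell command}, deps: {target: [targets it needs]}."""
        done, running = set(), {}
        with ThreadPoolExecutor(max_workers=jobs) as pool:
            while len(done) < len(cmds):
                # schedule every command whose dependencies are all built
                for t, cmd in cmds.items():
                    if t not in done and t not in running and \
                            all(d in done for d in deps.get(t, [])):
                        running[t] = pool.submit(subprocess.run, cmd,
                                                 shell=True, check=True)
                # block until at least one running command finishes
                finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
                for t in [t for t, f in running.items() if f in finished]:
                    running.pop(t).result()   # re-raises if the command failed
                    done.add(t)

    # Hypothetical example graph:
    run_graph(
        cmds={"a.o": "cc -c a.c -o a.o",
              "b.o": "cc -c b.c -o b.o",
              "app": "cc a.o b.o -o app"},
        deps={"app": ["a.o", "b.o"]},
    )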
I've been on both ends of this situation and would rather not do it again, so I'll use whatever is the de-facto standard, but you do you.
And if it's doing everything from scratch, it's more likely to be simple and self-contained, making it easier to maintain.
The name is great!
PS. It's possible to make it even faster if we implement this: https://github.com/ninja-build/ninja/issues/2157 But you explained in the article that the tool intentionally lacks state, even tiny hints from previous runs.
Does anyone know if it happened? Has the Google research on latency been published?
But it's a nice article. The idea that giving up on waiting for a delay follows a simple exponential distribution is something I had never thought of. (And now I'm fixated on understanding why... Something must have biased me against it.)
> You must often compromise between correctness and convenience or performance and you should be intentional when you choose a point along that continuum. I find some programmers are inflexible when considering this dynamic, where it's somehow obvious that one of those concerns dominates, but in my experience the interplay is pretty subtle; for example, a tool that trades off correctness for convenience might overall produce a more correct ecosystem than a more correct but less convenient alternative, if programmers end up avoiding the latter.
I was amused by this line:
> But Windows is still a huge platform in terms of developers, and those developers are starved for tools.
As a primarily Windows dev I feel that it is poor Linux devs who are starved for tools! Living life without a good debugger (Visual Studio) or profiler (Superluminal) is so tragic. ;(
It does feel like in recent years the gap between the two platforms has become increasingly minimal. I definitely like all the Rust utilities that generally work cross-platform, for example.
It’s certainly not perfect. But “seconds to step a single line” is not normal. Certainly not what I experience. Even when debugging very large code bases like Unreal Engine.
I've known UNIX pretty well since being introduced to Xenix in 1993, and have used plenty of variants, and yet my main use of WSL is to run Linux Docker containers and nothing else.
I can't come up with any other explanation. I can't think of any interaction with it that I would describe as "good". I can think of a few "minimally ok" ones, but debugging isn't one of them. (But at least in the 2022 version the debugger isn't full of bugs anymore. Maybe that's what this is about.)
I don’t doubt there are warts, but for you, what’s missing or sub-par in VS that is better elsewhere? What debuggers do you consider better? Gdb is also excellent, but in a different way. Gdb is programmable, and that maybe makes it more powerful. (I don’t know if VS debugging is scriptable; I think it wasn’t last time I tried.) But gdb’s learning curve, lack of UI (even with tui), and lack of discoverability are a major impediment to its use. You mentioned interaction, and interaction is what holds back gdb.
Also, it can fail loudly if it loses control of the target process or if the target process fails before it finishes connecting. Also, it should not keep running after the UI reports that it has finished.
What do you mean by "so slow there are semantic implications"? How does execution speed change meaning? Can you give an example? And are you talking about VS specifically, or just debugging in general? Gdb can be extremely slow when debugging too, and besides that, simply turning on symbols and turning off optimizations can be a major cause of slowdowns.
For the connection issues, I rarely if ever see that with VS. Usually I’m launching my executable from the debugger. I’m not generally doing remote debugging, or attach-to-process debugging — is that what you’re talking about? Certainly all debuggers have those kinds of issues with remote debugging or attaching to processes. Are these issues better in some other debugger you use? If so, I’m certainly curious to hear about it, I would love to learn & use something that’s superior.
Gdb is horrible; the VS debugger is quite good but has strange limitations.
Android, which uses it for some large component of the system that I've never quite understood
Ninja is really a huge part of AOSP; the build system initially used makefiles. Things got complex really fast, with a custom declarative build system (Soong) and a failed/aborted migration to Bazel. Google developed kati (https://github.com/google/kati), which converts Makefiles to ninja build files (or should I say file), which really is huge:

    λ wc -l out/build-qssi.ninja
    3035442 out/build-qssi.ninja

Going from makefiles/Soong to ninja is painful; it takes several minutes even on a modern machine, but it simply flies once ninja picks it up. The advantage in Android is that the different build systems will generate ninja, so they can interoperate.
OP said that ninja is small enough to be implemented in your favorite programming language. I wonder if there is a step-by-step tutorial on creating your own build system?
> ... Where other build systems are high-level languages, Ninja aims to be an assembler.
> ... Ninja is intended to be used with a separate program generating its input files.
> ... Ninja is pretty easy to implement for the fun 20% of it and the remaining 80% is "just" some fiddly details.
There are many ninja generators out there already [1] but writing a simple, custom one shouldn't be too hard [2] and could make sense for some projects.
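As a rough idea of scale, a toy generator for a directory of C files can be a handful of lines of Python; this is just a sketch (the paths and the "app" output name are invented for the example, and it only emits the standard rule/build/depfile syntax from the ninja manual):

    import glob, os

    def write_ninja(src_dir=".", out="build.ninja"):
        srcs = sorted(glob.glob(os.path.join(src_dir, "*.c")))
        objs = [os.path.splitext(s)[0] + ".o" for s in srcs]
        with open(out, "w") as f:
            # compile rule with gcc-style depfiles so ninja learns header deps
            f.write("rule cc\n"
                    "  command = cc -MD -MF $out.d -c $in -o $out\n"
                    "  depfile = $out.d\n"
                    "  deps = gcc\n\n")
            f.write("rule link\n  command = cc $in -o $out\n\n")
            for src, obj in zip(srcs, objs):
                f.write(f"build {obj}: cc {src}\n")
            f.write(f"build app: link {' '.join(objs)}\n")

    write_ninja()

Run ninja in the same directory afterwards and it picks up the generated file; everything interesting (configuration, dependency discovery, toolchain selection) lives in the generator, which is exactly the split the article describes.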
BTW, ninja is great but I wish the configuration file had used a more standard format, easier to parse and generate from any language. JSON would have been a better option I think, given the abundance of tooling around it.
--
1: https://github.com/ninja-build/ninja/wiki/List-of-generators...