This is written like a Jeopardy answer. I just don't know what the question is.
Can anyone enlighten me?
The key difference is that, in the Web platform's case, there's not actually a better alternative on offer. Even with these awkwardnesses, it's still a better app delivery platform than desktop or mobile OSes, because it's dramatically less fragmented, has a more convenient "installation" story (https://xkcd.com/1367/), and has a better security model (at least compared to desktop OSes). So people need to write rich web apps with arbitrary behavior on it, which requires the platform to be arbitrarily customizable.
Contrast an enterprise app, where the lesson of the "inner-platform effect" idea is that code changes to the outer platform aren't as costly as you think, compared to unmaintainable configuration that interacts in complex ways with the platform primitives. So it's best to allow only customization simple enough to not pose maintainability challenges, and eat the cost of an outer-platform code change whenever you need anything more complicated. But Web developers don't have the option of getting browsers to add new code every time they want to add a complex new feature to their app, so browsers need to support a rich enough set of primitives that those features are already possible.
The other way to resolve the tension would be to get rid of the document-library features and instead double down on being an operating system, perhaps based on WebAssembly and <canvas> instead of HTML+CSS+JavaScript, like Flutter for Web uses. But of course people are using the document library, and in some cases it's the easiest way to do something, even at the cost of a little bit of redundancy at intermediate levels of customization.
What SPA critics typically want, of course, is for most sites to be satisfied with less feature-richness so as to fit more easily into the document-library model. But the platform has to support everyone's use cases, not just those of people who like HN's minimalist style. (I can't find it now, but there was a great comment on HN a while ago that said something like: "A lot of HN users basically wish the internet was like how it was in the 90s, except with broadband. But in this respect, we're unusual; most users like features and slick UIs.")
We lack a mechanism for picking sane new features. Browsers add new stuff all the time. Most of it is horrible. Adding a JS assembly has to be the most stubborn way of tolerating new languages: you may do it, as long as the new language is JS!? You can have butter as long as it is yogurt.
I don't like Python. I wouldn't be upset by <script type="python">. Add some of the DOM tools to it and people will have a ton of fun. Might even be useful.
This gets even more extreme now that you can have wasm on a canvas... The language that you're compiling from doesn't understand the semantics of a back button either!
> "Our editor is written in C++ and cross-compiled to JavaScript using the emscripten cross-compiler... Pulling this off was really hard; we've basically ended up building a browser inside a browser."
Now sure, Figma's an exception, but it's an illustrative one. For most single-page apps, it's an interesting question. Is the web browser a monolithic platform where if you reimplement any of its layout engines etc. you're reinventing the wheel? Or, is it a set of libraries that can be chosen from at will, that of course happen to all work together to provide sane defaults, but by no means are required or expected to all be put into use simultaneously?
I tend to think of the web platform as the latter. Just because there's something in the "standard library", so to speak, doesn't mean I'm forced to use it - the real question is whether it's something stable that won't force me and the team to yak-shave to maintain it. Mature JS/TS libraries are no worse than the browser in this regard!
This at first had me rip out everything I used to be proud of. Now I have single documents that do only one thing. I put these in folders that, like the files, have titles that describe what they do.
The whole thing looks like I started coding a week ago.
It all just works. It will continue to work. If something ever breaks, it will be only that page. You will be able to see what is wrong instantly. If you paste the page into an LLM, it will most likely guess correctly what is wrong.
Doing anything in HTTP/1 would make you reload your whole page slowly. Then AJAX came along and allowed you to build things that were more interactive and responsive. Gmail was a game-changer.
Since then HTTP/2 came along, and I feel like the industry has blindly continued on the HTTP/1->AJAX trajectory, without stepping back and re-evaluating how much HTTP/2 (and later) can do for us.
The problem is that unless your use case is very limited and is guaranteed to stay that way, supporting more and more language constructs will quickly turn your code into a mess.
Compiler design as we learn it (lex/parse, syntax tree, semantic checks, transforms, lowering to codegen) is _the_ solution to the problem of dealing with computer programs as inputs and outputs. Trying to do something less is like solving a dynamic programming problem without knowing dynamic programming: it will only work for a restricted set of inputs.
It starts innocently, e.g. doing some template files and replacing some simple values, then you start to have to do more replacements and more "smart" parsing and then at some point it's too late, as the article suggests.
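To make the "this is a compiler" point concrete, here is a minimal sketch of the lex/parse/transform pipeline on a toy arithmetic language (Python; all names are illustrative). Even the tiny version has the distinct, separately testable stages that the ad-hoc string-replacement approach loses:

    import re

    TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

    def lex(src):
        # Tokenize: yield ("num", int) and ("op", char) tokens.
        for num, op in TOKEN.findall(src):
            yield ("num", int(num)) if num else ("op", op)

    def parse(tokens):
        # Recursive descent for: expr := num (("+" | "-") num)*
        toks = list(tokens)
        pos = 0

        def term():
            nonlocal pos
            kind, val = toks[pos]
            if kind != "num":
                raise SyntaxError(f"expected number, got {val!r}")
            pos += 1
            return ("num", val)

        node = term()
        while pos < len(toks) and toks[pos][1] in "+-":
            op = toks[pos][1]
            pos += 1
            node = (op, node, term())
        if pos != len(toks):
            raise SyntaxError("trailing input")
        return node

    def evaluate(node):
        # Walk the AST; in a real compiler this stage would be lowering/codegen.
        if node[0] == "num":
            return node[1]
        op, lhs, rhs = node
        return evaluate(lhs) + evaluate(rhs) if op == "+" else evaluate(lhs) - evaluate(rhs)

    assert evaluate(parse(lex("1 + 2 - 3"))) == 0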
TBF, I did put together a transpiler from PHP to JS, but I didn't build it, just found the different pieces that luckily fit together and hacked around it enough that it could run in the browser.
Keeping them locked up in the "scary CS" closet only ends up stunting your growth.
Racket at least provides most of the tooling required (e.g. dealing with details of namespacing, syntax, the runtime/compile-time/evaluate-time distinction, continuations and garbage collection), but if you've ever tried to introduce it into an organization of significant size, this very same result of yak-shaving will be considered a liability.
And you can step up a level in the same ecosystem as well: my current project is written in Scala and directly shares its data structures with the Graal Python interpreter that I've embedded inside it, so other people can write the stuff they want in Python.
Granted, if you've got a really simple language then maybe there's an upper bound on how bad it can be. But in practice people tend to also want to grow the language as they go.
Don't get me wrong; there's a lot that can go wrong if you over-engineer the compiler from the beginning, including ending up with a hot mess anyway.
BTW if you can't write down, even in three sentences, the requirements for said compiler from the beginning, congratulations, you have no idea what you are doing.
You can find the text at many websites. Here is one from 1997: https://www.inf.ed.ac.uk/teaching/courses/seoc2/1997_1998/ar... It's a fun read.
> Do not worry at this time about acquiring the resources to build the house itself. Your first priority is to develop detailed plans and specifications. Once I approve these plans, I would expect the house to be under roof within 48 hours.
...nowadays it would be more like:
> The MVP should be move-in-ready ASAP at which point we shall move into the house and live there while you complete the remaining requirements.
Then it goes on: you need additional tools like Helm or Kustomize to generate your YAML files. I am not aware whether anybody has ever tried to generate Kubernetes config files from Dhall[1] input files.
Binary optimization through stripped static-embedding and packing is a level of evil even I find upsetting. lol... =3
Debugging broken Yocto builds can be a nightmare.
You end up stepping into the build directory environment and manually running the generated code. You try some fixes in it, and then guess on which fragment in what bitbake file to backport that fix into.
direct link https://dx.tips/oops-database
Hardware people tend to say "Everything is a machine".
Compiler people tend to say "Everything is a compiler".
Database people tend to say "Everything is a database/relational database management system".
Operating system people tend to say "Everything is an operating system".
There are many broadly-applicable paradigms in modern computer science. Linguistic abstraction, i.e. the definition of a Domain Specific Language (DSL), is a very powerful technique that is often the right choice (but don't apply it to read an *.ini file! That is called overkill/distraction). Abelson and Sussman's "Structure and Interpretation of Computer Programs" book (SICP) is the gold standard book to teach you the various forms of abstraction.
Lots of developers will find it much more interesting, challenging, rewarding and just plain fun to develop something from scratch, even when there are better things that already exist.
They'll cleverly manipulate and convince the boss, against the better judgment of their elder developers, that they can do it, and if they're one of the better developers, the boss won't want to risk losing them, so they'll agree to the escapade.
Then said escapade turns into a shambles, as predicted by the elder devs, and the developer who created the mess simply quits and moves to some other job, in search of more fun and greener pastures. Any developer with decades of experience has probably seen this same pattern multiple times.
I'm not saying you're wrong, I don't doubt that many people have the opposite experience. It just makes me feel a bit alien when I read comments like this.
And I agree with you. The problems with third party dependencies are way worse than any in-house complete disaster.
But that happens almost certainly because everybody is severely biased into adding dependencies. Make people biased into NIH again, and the homebrew systems will become the largest problems again.
When I ask ChatGPT to show me an example of something with JavaScript and Node, it always brings me a solution that needs an external library. So I have to add "without external dependencies" - then it presents me a nice and clean solution without all the garbage I don't need.
So apparently adding yet another library seems normal to many people, otherwise this behavior would not replicate in an LLM.
So when I just need a simple connection to a web socket - I don't need that external baggage. But for high performance graphics, yeah, I won't write all the WebGL code by hand, but use Pixi (with the option of writing my own shader where needed).
I am gladly writing my own left-pad.
I am gladly using something like three.js if I need it.
The problem are the extreme stances. Don't add any dependency is as stupid as pulling in dependencies for every second line.
You are unaware of a core part of JavaScript history, which is why you don't understand why "I'm not importing a library to do left pad" is not only a proper example, but THE BEST example.
String.padStart was added in 2017.
This happened in 2016 https://en.m.wikipedia.org/wiki/Npm_left-pad_incident
It is, however, almost never justified to bring in a dependency for something that's in the standard library. The correct solution for that, in JavaScript-for-the-web, is a shim. left-pad is not a suitable example.
A better example would be https://www.npmjs.com/package/ansi-red:
    /*!
     * ansi-red <https://github.com/jonschlinkert/ansi-red>
     *
     * Copyright (c) 2015, Jon Schlinkert.
     * Licensed under the MIT License.
     */

    'use strict';

    var wrap = require('ansi-wrap');

    module.exports = function red(message) {
      return wrap(31, 39, message);
    };
But while this makes a point, does it really make the original point? This ansi-red library has at least two footguns, right off the bat.
(Unless the LLM is heavily biased towards certain code, maybe because blog entries promoting their library got too much weight. But looking at a random JS source of a webpage, they do mostly ship tons of garbage.)
Python perhaps isn’t quite as bad as JS in this regard, but people still have a tendency to pull in numpy for trivial problems that stdlib can easily solve, and requests to make a couple of simple HTTP calls.
If you need connection pooling, keep-alive, or any of the other features that requests adds, then sure, add it.
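For what it's worth, the stdlib version of "a couple of simple HTTP calls" is short enough that the extra dependency is hard to justify. A sketch (the endpoint is a placeholder):

    import json
    from urllib.request import Request, urlopen

    req = Request(
        "https://api.example.com/items",          # placeholder URL
        headers={"Accept": "application/json"},
    )
    with urlopen(req, timeout=10) as resp:        # stdlib only, no requests
        items = json.load(resp)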
We have used just mainstream technologies like Quarkus or Spring Boot and plain React with TypeScript, and the absolute bare minimum of dependencies.
I have worked with a number of good devs over the years but it is amazing how productive these teams have been. (Should probably also mention that we were also lucky to have great non technical people on those teams.)
To that effect: Rust (2015) is 9 years old, Go and Node are 15 years old, while Python (1991) is 33 years old. Just putting things in a different perspective.
Also for attrition to be the cause, you'd need a lot more seniors dropping off than juniors.
If you had asked me in 2010, I would have said that the median software developer lasts 5-10 years in the industry. A lot of people left the field after the dot-com bubble burst. The same happened again on a smaller scale in the late 2000s, at least in countries where the financial crisis was a real-world event (and not just something you heard about in the news). But now there has been ~15 years of sustained growth, and the ratio of mid-career developers to juniors may even be higher than usual.
There's a bunch of filters. Many people quickly realize they don't enjoy development; next is openings in management. One of the big ones is that at ~40 you're debt-free, have a sizable nest egg, and start thinking about whether you really want to do this for the next 20 years.
A part of this is the job just keeps getting easier over time. Good developers like a challenge, but realize that the best code is boring. Tooling is just more robust when you’re doing exactly the same things as everyone else using it, and people can more easily debug and maintain straightforward code. So a project that might seem crazy difficult at 30 starts to just feel like a slog through well worn ground.
Having significant experience in something also becomes a trap as you get rewarded for staying in that bubble until eventually the industry moves on to something else.
In fact I already have my retirement planned - a small flat in Argostoli, a walk down to the coffee bars on the harbour, and a few hours adding code and documentation to a FOSS project of my choice before heading to the beach with the grandkids.
Now, affording retirement might be interesting, but not having coding in it would be like not having reading and writing.
The first paragraph probably applies to 1-10% of developers worldwide...
I also don't mean early retirement. Still, combine minimal schooling, high demand, reasonable pay, and the basic financial literacy that comes from working with complex systems, and it adds up over time.
But you could say the same about Python pre-1995 or so.
My biggest problems with Rust, though, are Cargo and Crates.io, not the language.
There is a fundamental philosophical disagreement I have with the NPM style of package management and this method of handling dependencies. Like NPM, Crates.io is a chaotic wasteland, destined for a world of security & license problems and transitive dependency bloat.
But honestly I'm sick of having this out on this forum. You're welcome to your opinion. After 25 years of working, with various styles of build and dependency management: I have mine.
But I actually think it's more inspired by people coming from the NodeJS ecosystem than people coming from C++.
Rust, unlike Go, was largely developed in public. It also changed significantly between its initial design and 1.0, so it feels like "cheating" to count pre-release versions.
Still, a decade is a significant milestone.
We should banish the word “import” in favor of “take a dependency on someone else’s code, including the stability of the API, the support model, willingness to take patches, testing philosophy…”
Reputation is a rough proxy; inspecting the code can help. But when the thing you built your house of cards on falls over, you often can’t fix your house, and have to build a new house.
Obviously this applies more to utility code than it does to entire languages. But even there, Apple has broken their Swift syntax enough to release tools that upgrade your code for you…and that’s the best case scenario.
I'm not suggesting the earlier argument about NIH (not invented here) syndrome doesn't exist. But I've certainly never seen it on the scale that the earlier poster claimed. If anything, I see people getting less inclined to reinvent things because there's so much code already out there.
But maybe this is a domain specific problem? There does seem to be a new JavaScript frontend framework released every week.
I'm not sure what the proposal is:
- Don't use an OS. Write your own. Linux? Boring.
- Design your own CPU.
- ext3 or xfs? Nah write your own.
- Write your own database.
- Ethernet. Too boring. Create a new physical layer for networking.
- Naturally create your own programming language. That'll make it much easier to find people when you have to expand the team.
Seriously, build vs. buy and NIH has always been with us. There's a time to build and there's a time to buy/reuse. Where are you adding value? What's good enough for your project? What's strategic vs. tactical? How easy is it to change your mind down the road? What are the technical capabilities of the team? How do different options impact your schedule/costs? How do they impact quality? In the short term? In the medium term? In the long term.
Let's face it. Nobody builds everything from scratch. The closest is companies like Google, who due to sheer scale benefit from building everything from hardware to languages, and even for them it's not always clear whether that was the right thing for the business or something they could afford to do because they had lots of money.
Build the things that add value. Don't rebuild something that just works. That's why we have the old saying: don't reinvent the wheel. If you have a working wheel, while re-inventing it might be fun, it's usually not the best use of time. With the time you've saved, build cool things that add value.
that said, it was not easy!!
The stance for in-house built tools and software is a much more balanced act than that. One that prioritises self-reliance, and fosters institutional knowledge while assessing the risks of making that one more thing in-house. It promotes a culture where employees stay, because they know they might be able to create great impact. It also has the potential to cut down the fat of a lot of money being spent on third parties.
Let's be real, most companies have built Empire State Buildings out of cards. Their devs spend most of their time fixing obtuse problems they don't understand, and I'm not talking about programming, but in their build processes and dependency hell.
It's no wonder that the giants of today, who have survived multiple crises, are the ones who took the risk of listening to those "novice" enthusiastic engineers.
Don't kill the enthusiasm, tame it.
I'm not sure I agree the giants of today are built on the work of enthusiastic novices. Amazon and Microsoft have always had a ton of senior talent. Meta started with novices but then a lot got reworked by more experienced people.
You might get by with sheer enthusiasm and no experience but often that leads you to failure.
But let's take on the testing framework question. When I work in Go or in Python I use the testing frameworks that are part of that ecosystem. When I worked with C++ I'd use the Boost testing framework. Or Google's open source testing framework. Engineers I work with that do front-end development use Playwright (and I'm sure some other existing framework for unit tests).
Can't you do your own thing? Sure. You'd be solving a lot of problems that other people have already solved. Over time, the thing you did on your own will require more work. You need to weigh all of that vs. using something off the shelf. 9 times out of 10, most people should use tooling that exists and focus on the actual product they want to build.
That said I work for a large company where we build a lot of additional in-house tooling. It makes sense because: a) it's a specialized domain - there's nothing off the shelf that addresses our specific needs. b) we are very large and so building custom tools is an investment that can be justified over the entire engineering team. We still use a ton of stuff off the shelf.
I don't see what the parent was describing. I think most of the time people choose to use existing bits for good reasons. When I started my career you pretty much had to do most stuff yourself. These days you almost always can find a good off the shelf option for most "standard" things you're trying to do. If you want to write your own testing framework for fun, go for it. If you're trying to get something else done (for business or other reasons) it's not something that usually makes sense. That said, it's not like we have a shortage of people trying to do new things or revisit old things, for fun or profit. We have more than ever (simply because we have more people doing software in general than ever).
For example, most use of macros, especially when you invent your own language implemented as macros.
Me, I've eliminated nearly all the use of macros from my old C code that is still around. The code is better.
I suspect that heavy use of macros is why Lisp has never become mainstream.
Macros gave C efficient inline functions without anything having to be done in the compiler.
Doing things like "#define velocity(p) (p)->velocity" would instantly give a rudimentary C compiler with no inline functions a performance advantage over a crappy Pascal or Modula compiler with no inline functions, while keeping the code abstract.
And of course #if and #ifdef greatly help with situations where C does not live up to its poorly deserved portability reputation. In languages without #ifdef, you would have to clone an entire source file and write it differently for another platform, which would cause a proliferation due to minor differences (e.g. among Unixes).
Ah, speaking of which: C's #ifdef allowed everyone to have their own incompatible flavor of Unix with its own different APIs and header files, yet get the same programs working.
An operating system based on a language without preprocessing would have hopelessly fragmented if treated the same way, or else stagnated due to discouraging local development.
Thanks in part to macros, Lisp people were similarly able to use each other code (or at least ideas) in spite of working on different dialects at different sites.
Using the macro preprocessor to work around some fundamental issues with the language is not what I meant.
I meant devising one's own language using macros. The canonical example:
    #define BEGIN {
    #define END }
We laugh about that today, but in the '80s people actually did that. Today's macros are often just more complicated attempts at the same thing. The tales I hear about Lisp are that a team's code is not portable to another team, because they each invent their own macro language in order to be able to use Lisp at all.
Besides, what people wrote was:

    #define BEGIN {

not:

    #define BEGIN <<

The source is very clear and readable.
Macros which simply transliterate tokens to other tokens without performing a code transformation do not have a compelling technical benefit. Only a non-technical benefit to a peculiar minority of users.
In terms of cost, the readability and writeability are fine. What's not fine is that the macros will confuse tooling which processes C code without necessarily expanding it through the preprocessor. Tooling like text editing modes, identifier cross-referencers and whatnot.
I've used C macros to extend a language with constructs like exception handling. These have a syntax that harmonizes with the language, making them compatible with all the tooling I use.
There's a benefit because the macro expansions are too verbose and detailed to correctly repeat by hand, not to mention to correctly update if the implementation is adjusted.
If anything software seems to be unusually favourable to experience because the first 5 years of learning how to think like a computer is so punishing.
The worst trash fires were the homebrewed systems, but maybe that's because I could dig in and see how bad they were.
But I'd actually agree with you - as bad as those were, I'd rather have them than a shitty 3rd-party something. At least I can theoretically do something about the in-house one, and all the ones I've seen were smaller in scope than any SaaS product.
You have unskilled, sloppy developers? The homebrew project AND the third-party integration will turn out a mess.
Also, your mileage may vary based on the niche you are working in - in the case of, say, Java, the initial setup of the build system may not be "fun", but it will just work from then on.
Shelling out to an AST parsing library that happens to be slow? Well, shit, that sucks. Guess your compiler's just slow now.
In my experience, younger developers will push both (in-house and external) directions at once, actually. Building out complex edifices with sharp corners over a maze of transitive dependencies that few understand.
It's the same thing: A fantasy that a framework will solve the problem, combined with a fantasy that they can develop said framework. It's an urge we all suffer from but some of us have learned the hard way to be careful about. (And others who are great at self-promotion have been rewarded for it by naive investors and managers.)
Finding simple solutions takes humility and time.
This attitude is by far the most common among "enterprise developers", and one of the big differences between people building things from preexisting building blocks vs -- as witnessed from my 8 years at Google later -- people who think they're smart enough to build everything from the ground up, and do so, using primitive blocks and custom compilers created by similarly hubristic engineers who came before them.
Ymmv, but this has been my experience over the past 25 years.
It’s just an uncommon attitude to most enterprise teams, it has less to do with the language and more to do with the part of the industry. I wish more teams knew the tools they already have at their disposal.
My experience is that in the pursuit of not reinventing the wheel, I am frequently told to use a dependency that doesn't allow us to solve the whole problem, or prevents us from making the user experience fast, or cannot be made to understand our data model. It's all well and good to use a tool that exists, but using the wrong tool just because it exists is madness. Even worse is when dependencies are deprecated or our use cases become unsupported. Honestly, I would prefer to just build everything above the database layer in house; that way we at least know what we can and can't deliver, and have some chance of fixing things when they break.
If upstream won't cooperate with you, then fork. It's still usually better to start from a battle-hardened codebase than it is from complete scratch.
I feel like the collapse of the tech bro coincided with the masses going to programming, which changed the culture from promoting innovation and development, into simply following whatever best practices someone had already written, turning programming from a creative job into yet another repetitive office job. This is also, in my opinion, the true reason why salaries collapsed. Most business don't need creative specialists, they need code monkeys, and most people aren't creative specialists, they're code monkeys. So why would the salary worthy of a creative specialist even be talked about over here?
And your second paragraph sounds like sour grapes. I have no idea what "the collapse of the tech bro coincided with..." means. Most programmers are working on CRUD apps. How creative do you need to be?
I think a smart solution would be to teach the average programmers the new concepts. Many of them would probably be happy to learn, and the company would benefit from having everyone know a bit more and use better solutions. But for some reason, this usually doesn't happen.
No. Most people don't like being told that they're wrong, drama ensues. They say they want to learn, but in reality, they don't.
Because I can confirm what you said: Both experiences are real.
Brewing it up yourself can blow up in your face.
Reaching for external deps that solve the problem can blow up in your face.
Knowing which choice to make is _tricky_ and is hard to confirm. It doesn't sit well with programmers; either solution will _work_ (you can't write a unit test that 'fails' if you made the wrong choice here), and even if you're willing to accept highly suspect, Goodhart's law-susceptible metrics such as LOC, you still can't get anywhere because it's trading off more code you have to write and maintain without help from a larger community against having fewer lines _in total_ as part of the system.
I do not know of any way to do it right other than to apply a ton of experience. And it's really hard to keep yourself honest. Even if you're willing to wait 5 years and then spend some time looking back, how do you really know?
Anybody with a bunch of experience has seen enough homebrew stuff asplode in their face to be able to paint a picture with how utterly badly that choice could go. If you chose the 'build it on external deps' route you can easily tell yourself you did it right by painting a terrible picture of how it would have gone if you made the other choice.
But the reverse is just as true.
I think I'm really good at it. But, writing about it here, I don't have any real basis to make that claim. I look around at other dev shops that make products of similar complexity and it feels like they need 10x to 100x more resources, have more downtime, and have far larger dev teams. But no doubt bias is creeping in there too, and no 2 software products are 100% comparable in this sense.
I naturally trust homebrewers more because they tend to understand complex technical things better. Someone who can just glue libraries together is lost when I ask them to fire up a debugger and figure out why some interaction is not working. A hopeless NIH sufferer needs to be 'supervised' and their choices about what to write needs to be questioned, but, that's doable with supervision. "Just git gud and be technically proficient" not so much. But then maybe that's bias too - that leads to a codebase that is easier navigated when you're familiar with debuggers and reading code to understand it. Reaching for third party deps a lot leads to a codebase that is easier navigated when you're familiar with docs and tutorials. These are self fulfilling prophecies.
Vaunted is a vanity word.
I get what you mean based upon experience and I still prefer custom systems that eschew layers of syntax sugar. EE was way harder than managing a code base. The syntax patterns are of a finite set of values.
I won’t re-roll encryption libs but there’s a lot of “tooling” packages that just add syntax sugar to parse and cart around that came about in prior eras of sneakerware software that are baked into to development habits that no longer make sense.
I for one am excited about using ML to streamline the code base that comprised my preferred Linux system yet still builds to the usual runtime system. There’s a lot of duplication in code that models can help remove and a few tools can help unpack into machine state. EE brain informs me there’s no “code” in a running system. Just electricity. There’s way too much syntax sugar in the software ecosystem that’s just for parsing/marshaling/transpiling between syntax sugars. Bleh. It’s a big dumb monolith of glyph art that needs to be whacked back like a prairie that needs a controlled fire.
Why are the juniors calling the shots over the elder developers?
Developers should be measured by how many lines of great code they can delete, not how many lines of great code they create (<-- but don't take this literally of course, it's just making a point)
The caveat of, "off critical path" is a heavy lift, too.
My view was to give projects a form of risk budget. If possible, do things same way as last time. Any deviation is a risk. Can have rewards, sure. But if there was a known way to do it already, be budgeted to pivot back.
I’ve been that senior eng and you spend 130% of your time trying to find terrible decisions about to be made before it’s too late and reviewing 1,400-line PRs only to discover (and try to teach) that it could have been 40 lines. Enough junior devs without sufficient supervision can literally crank out endless quantities of negative-value work. And it’s a battle you’re constantly losing.
The safe, tried-and-true way to build typical web crap is to stick the one, blessed database in the middle, and then dangle all the dependencies off that. Everything is synchronous because it's simpler, and it's what the seniors grew up with.
And then one day you won't be able to run your new history feature, because it's locking up the database for too long and new transactions are timing out.
The juniors only get to run the show and introduce exotic, non-boring technology (asynchronicity, event-sourcing, eventual consistency, CQRS etc.) after the seniors have admitted defeat.
Those are decade-old technologies. So are most of the functional niceties which are somehow finally hyped nowadays, by the way.
The idea that juniors are somehow more capable is as old as the field, and every junior ends up seeing the light at some point.
It generally goes like this: a junior thinks they are smart and have a good solution for an apparently new complex problem. They build it and it fails because of some unexpected edge case. They reluctantly go to see the senior expert with the problem, who immediately understands the issue, tells them not to do what they just did - the seniors tried this apparently good idea ten years ago - and proceeds to give them a solution which will work. The junior dev finally realises that seniors are just people who have been there long enough to have already made the obvious mistakes, and that they can gain considerable time by just learning from them.
Obviously it only works if your seniors are actually senior and not slightly older juniors with ego-boosting titles.
lol. This is definitely not my experience. Most problems are pretty boring and can be solved with a judicious use of boring tech. It isn't until you get to a sufficiently large scale where you need to introduce these techniques. And only if they actually further the goal.
Many times the existing system is doing something in a really dumb way because the original authors were organically growing it as they explored the problem domain, and that puts the system into a box of thinking (similar to your LLM spitting out trash because the context got polluted), or there is simply inertia. So the exotic tech is introduced to solve a symptom of the underlying issue without the crucial step of reconsidering the box the system sits in. The requirements at scale are now fundamentally different, and therefore the solution should be reconsidered.
If the seniors involved completely miss this step then I question the breadth of their experience because this is common when crossing a scale threshold. Being old or having worked at the same company for 10 years doesn't automatically mean someone is truly skilled and could actually mean their experience is extremely limited.
Which part doesn't vibe with you? Are the seniors not stuck in their way of doing things? Or they are stuck in their way of doing things, but never admit defeat?
> So the exotic tech is introduced to solve a symptom of the underlying issue without the crucial step of reconsidering the box the system sits in.
What does this even mean?
> Being old or having worked at the same company for 10 years doesn't automatically mean someone is truly skilled and could actually mean their experience is extremely limited.
Right. And it's really hard to have a discussion with them, because they'll bring the discussion back to seniority, or popularity, or boringness of the technology, or say that the scale issue probably won't happen, or the race condition probably won't happen.
This is true; but after enough years in the industry you learn to correlate success with laziness. This is well-discussed and arguably obvious, but on an emotional level it takes a long time to fully sink in. We were all once developers with outsized ambitions and the awareness that we can flee to greener pastures.
But at the same time, since I spend almost every waking hour of my spare time coding, the word lazy still isn't quite accurate in every way either.
I take it in its form from "lazy loading" - "don't do it until absolutely necessary" (which kinda devolves into YAGNI.)
That's a "big if".
Lots of times what's there is a nightmarish tangle of technical debt left by previous greenfield devs. The dev who gets to maintain and evolve this dreck is the sucker, scapegoated for ever slower development.
Canonical example: on-call AWS engineers working hellish overtime to close tickets on one of AWS's many terribad fragile codebases.
These senior devs often quit or are fired and leave the rest of the developers with their “good ideas”.
I have never seen a junior taking on something new that is inherently huge and complex. I have seen them go overboard with refactoring, because someone tricked them into thinking the Boy Scout rule is good, or that DRY is important, or that they need to think ahead and abstract/generalize for the future. Inevitably that is something they were “taught” by senior colleagues or teachers.
A corollary to this is the pandemic of phobia of NIH. A lot of developers really seem to prefer janky, undermaintained third-party libraries with huge APIs over a quick home-made hack that solves exactly the problem your team has, and that you can maintain and test and just know everything there is to know about. Building your own stuff is good. It is the business we are in.
To drop an anecdatum into the fray, I am an older dev and I tend to see this from younger devs because, frankly, the elder devs are too tired and semi-burnt out from trying to stop the younger devs creating nonsensically pure houses of glass built on the Wise Words of The Bloggers using Technique du Jour where everything takes 4x as long and results in the most fragile and complex sugar-spun castles which collapse into unmaintainable slop after the first contact with enemy (customer) fire.
(I may or may not be bitter about this from previous jobs)
It takes much more energy to stop a bad idea / framework from taking root in a large org than to start one.
Sometimes it’s not even that they’re janky and undermaintained, it’s just the huge and unnecessary API. A good example is watching for file changes. inotify has been around forever, and is easy to reason about. The Python library inotify_simple [0] just wraps that. That’s it. It works extremely well, has no dependencies of its own, and provides nothing else. I once needed this functionality for a project, and had another teammate argue we should use watchdog [1] instead, because it had more stars, and more frequent commits. It took me longer than I thought it would to explain that sometimes, projects are complete and don’t need commits, and that we didn’t need or want any of the additional complexity provided by watchdog.
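For reference, this is more or less the entire surface of inotify_simple (Linux-only, like inotify itself; the watched path is just an example):

    from inotify_simple import INotify, flags

    inotify = INotify()
    wd = inotify.add_watch("/tmp/watched", flags.CREATE | flags.MODIFY)
    for event in inotify.read():                  # blocks until events arrive
        print(event.name, flags.from_mask(event.mask))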
Another example is UUID generation. Python doesn’t yet natively do UUIDv7 generation, but if you read their source code and the RFC for UUIDv7, it’s fairly easy to write your own implementation. This was met with “please don’t write your own UUID implementation; use a library.” Baffling.
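For anyone wondering what "write your own" involves here, a sketch following the RFC 9562 layout (48-bit millisecond timestamp, 4 version bits, then 12 + 62 random bits with 2 variant bits in between). Illustrative only, not a drop-in replacement for a vetted library:

    import os
    import time
    import uuid

    def uuid7() -> uuid.UUID:
        ts_ms = time.time_ns() // 1_000_000                          # 48-bit ms timestamp
        rand_a = int.from_bytes(os.urandom(2), "big") & 0x0FFF       # 12 random bits
        rand_b = int.from_bytes(os.urandom(8), "big") & (2**62 - 1)  # 62 random bits
        value = (ts_ms & 0xFFFF_FFFF_FFFF) << 80                     # unix_ts_ms field
        value |= 0x7 << 76                                           # version 7
        value |= rand_a << 64
        value |= 0b10 << 62                                          # RFC variant bits
        value |= rand_b
        return uuid.UUID(int=value)

    assert uuid7().version == 7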
I agree with this one and I’d push back on it too.
In my experience, what will happen is you write your own UUIDv7 implementation because the language or stdlib doesn’t support it yet. Then the language eventually supports it, but this project is still using your homegrown implementation that is slightly different than the language’s implementation in an incompatible way. Lots of code is bound to your custom implementation and now it would be too much effort or cost to swap out your custom implementation for the standard one.
So now this project has a caveat: yes, it uses UUIDv7 but not the official UUIDv7 implementation, and you have to work around this land mine for the rest of the life of the project. All of these small “innocent” caveats like this add up and make working in projects like this a miserable experience.
For larger or more complex problems, especially those that don’t have an RFC behind them, I can definitely see the unwillingness to do so.
Building some brownfield CRUD for a run of the mill org? Starting from scratch will almost always go horribly, just pick whatever enterprise'y solution fits the task at hand and be done with it.
Working for one of the big orgs on something interesting, and have the backing needed for being able to throw person-years at a problem until it crumbles under the collective engineering effort? Building from scratch might be a good choice sometimes.
Personal learning projects, side projects and the like? If you won't have to maintain it long term or at least don't think you'll have significant amounts of time or effort you can spare for that, then from scratch is okay (your own game engine to learn about the internals? your own implementation of something S3 compatible? maybe your own CMS for the hell of it?), otherwise consider treating it as a brownfield project (e.g. if you want to make and finish a game, or just store some files, or maybe just run a blog where the focus is on the content not how you made the thing it's running on).
What's my reasoning for this? Code is typically written to solve a particular problem. In business context, that typically means finishing some Jira issues and having deliverables. In large enough open source projects that typically also means having instructions on how to run and administer the thing, proper test coverage given the larger amount of various people working on it. Thus, the bus factor becomes larger and it won't be as much of a miserable experience of code archaeology as when the dev who wrote some custom CMS for a project at work leaves and literally only some code without even proper CI/CD is left behind, no proper comments, no ADRs, no code examples that aren't coupled to the logic, no documentation or even summary of the project, maybe an empty template for a README, no decoupling between the technical bits and the business rules (or just tight coupling in general), because again, they only wanted to ship. And even if they had better intentions, there were still deadlines and they were still one person (or a small team) that can't compete with any of the large multi-year projects out there.
Often in shops where just cranking out new features and/or bug fixing is the goal of management, the software continues to degrade endlessly due to all the things you mentioned, because spending time in those areas isn't something the boss finds a justifiable expenditure of developer time. Once all the developers who originally wrote the code have left or been fired, code quality can start to go down rapidly, until some kind of "cleanup" effort is undertaken, where ZERO new features are created and things are just cleaned up.
In projects with millions of lines of spaghetti code sometimes this cleanup is completely impossible, because a total rewrite would be easier.
The mark of a 10x engineer, IMO, is getting the build-vs-buy question right consistently. My experience is that teams get it wrong often, in both directions.
https://www.joelonsoftware.com/2001/10/14/in-defense-of-not-...
This reveals the costs of dependencies (which are often ignored).
I hope that in the future we will have a more nuanced discussion about when it's okay to add a dependency and when you should write from scratch.
Example: For a parallel data processing pipeline they wanted to build a REST interface for submitting "jobs" to a cluster which would parallelize with MPI, instead of just using xarray+dask.
Another example: they wanted to store tabular data product metadata in Postgres, with URIs pointing to NetCDF files on disk, instead of just putting everything inside NetCDF.
That's precisely it. A motivated engineer is almost always going to outperform a bored engineer/one who quits. Morale is miles more important than chasing after efficiency.
"Boss" here is likely making the correct decision after weighing the ups and downs.
A common thing to hear in frontend these days is “you’re just rebuilding X” “you mean you want to rebuild X” where X is some trivial flavor of the month library.
All of the famous tools, databases and frameworks you see today were built by someone who said they could do it better, and then they built a community around it.
For matters where my professional work needs a healthy long-term life, I usually select an on-top development style, and sometimes, if I have really good reasons, from scratch is the option. Of course, for learning and personal projects, from scratch is always a very fun choice!
The template language intentionally only handles static chunks of HTML, escaping of values, and a few safety guards.
Everything else (including the usual template language behavior like iterating over a collection/stream, such as from a database query result) is done with arbitrary normal Racket language, which the template feature's implementation doesn't have to know about nor handle specially.
https://www.neilvandyke.org/racket/html-template/
More recently (for employability reasons, or under-resourced startup pragmatics), doing Python with Flask, JavaScript with SvelteKit, and Swift with SwiftUI, I still miss the clean simplicity and available power that I had with Scheme/Racket.
https://github.com/CharlieDigital/js2c
Takes an input snippet of JSON and builds up classes as serialization targets.
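I haven't read the repo's internals, but the general shape of the idea is small enough to sketch: walk a JSON snippet depth-first and emit a class per object (illustrative Python, not what js2c actually emits):

    import json

    def emit_classes(name, obj, out):
        # Nested objects become their own classes first, so dependencies
        # are emitted before the classes that reference them.
        fields = []
        for key, value in obj.items():
            if isinstance(value, dict):
                emit_classes(key.title(), value, out)
                fields.append(f"    {key}: {key.title()}")
            else:
                fields.append(f"    {key}: {type(value).__name__}")
        out.append(f"@dataclass\nclass {name}:\n" + "\n".join(fields))

    snippet = json.loads('{"id": 1, "author": {"name": "ada"}}')
    classes = []
    emit_classes("Post", snippet, classes)
    print("\n\n".join(classes))   # Author first, then Post referencing it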
Don't get me wrong. I think many language design points should be used more. But starting from scratch makes a ton of sense. Skip the parsing stage and build up supported AST style constructs of your own.
Done simply, this is basically the command pattern. Keep execution separate from declaration and you should be fine?
Sure, you may want a parser for a dedicated serialization language some day. Hard to think you need to start there?
But starting with the full AST of an existing language feels like a terrible idea. In any world.
- https://arcan-fe.com/2022/10/15/whipping-up-a-new-shell-lash...
- https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...
I am not using it as a daily driver, because, emacs, but I keep an eye on it because, well, emacs.
It's not perfect, but it's clean and moves in this lua direction.
Looking back it was the VBScript/JScript functionality that caused me the most problems. Especially when I migrated the whole app from C++ to .Net.
1. We have a python application
2. We need a configuration format, we pick one of the usual (ini/toml/yaml/...)
3. We want to allow more than usual to be done in this config, so let's build some more complex stuff based on special strings etc.
Now the thing they should have considered in step 3 is: why not just use a Python file for configuration? Sure, this comes with pitfalls, as you now allow people who write the config to do similar things to the application, but you are already using a programming language - why not just use it for your overly complex configuration? For in-house stuff this could certainly be more viable than writing your own parser.
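A minimal sketch of that option, where config.py is a hypothetical file shipped next to the application:

    # Contents of the hypothetical config.py -- full Python, so control
    # flow and computed values come for free:
    #     workers = 4
    #     hosts = [f"node{i}.internal" for i in range(workers)]

    import importlib.util

    spec = importlib.util.spec_from_file_location("config", "config.py")
    config = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(config)               # executes the config file

    print(config.workers, config.hosts)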
Declarative configs are preferable for keeping your options open as to who or what consumes them. For cases where config as code is truly necessary, the best option is to pick something that's built for exactly that, like Lua (or some other embedded scripting language+runtime with bindings for every language).
Simply being tied to one language is rarely a bad thing - at a certain point in a company's growth, having a common language and set of tools (logging, database wrappers, etc.) acts as a force multiplier beyond individual team leads' preferences.
I would be interested in exactly what scaling issues you hit, but I would ask, if I were financing the company, whether overcoming the scaling problems in Python would cost less and lead to better cadence than a migration to C++.
It’s been a while, so take the numbers with a grain of salt, but where we might have needed 10 pods across several nodes to process a measly 100 req/s, we can easily handle that with a single pod running a web application written in rust, with plenty of room to spare. I suspect some of it is due to the GIL: you need to scale instances rather than threads to get more performance in Python.
Anyway, at some point the cost of all those extra nodes adds up, or your database can’t handle the absurd number of concurrent connections all your pods are establishing, or whatever.
It isn't a magic solution.
I never had any project where a TOML config wasn't enough.
One of the most unhinged pieces of software I have ever seen was the one from a fintech I worked with. Visual programming, used by business specialists. Zero abstraction support, so lots of forced repetition. No synchronous function call, so lots of duplications or partitioning to simulate it. Since there were two failed versions, there are three incompatible versions of this system running in parallel and migration from one to the other must be done manually.
The problem is about 90% of the business rules were encoded into this system, because business people were in a hurry. People wanted a report but didn't want to wait for Business Intelligence? Let's add "tags" to records so they appear on certain screens, and then remove them when they shouldn't anymore.
In the end the solution was adding "experts" to use it, but the ones who actually knew or learned any programming would just end up escaping to other companies.
Imagine a config file with type checking and control flow. You have it: it's your programming language. You just need to load the code at runtime, like Erlang.
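A sketch of the "load the code at runtime" part, with a hypothetical settings.py module; re-executing it in place is a poor man's version of Erlang's hot code loading:

    import importlib

    import settings                  # hypothetical config-as-code module

    def refresh():
        # Re-executes settings.py; existing references to the module
        # then see the updated values.
        importlib.reload(settings)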
I've written a dozen different programs that might be considered compilers; some very simple, others very complex and whose life continued once I left the organization. Writing a functional compiler that provides the needs of the organization where existing tooling doesn't takes discipline and focus on what you actually want to accomplish. I don't know what "defining a struct inside a loop" might mean and this strikes me as, very obviously, having no clue what you actually want to build.
Perhaps the issue is not building a compiler but rather the lack of focus to begin with.
"Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
Reminds me of the pain of intentionally building a Java 2 (subset) to MIPS compiler by writing out each AST node class by hand. And I did it twice: once in C++03 with bison and flex, and again in Java 2 with CUP and JFlex. Each was developed to build and run portably as a host across Solaris (SPARC), Linux (x86), HP-UX (68k), SGI (MIPS), and Windows (x86), with compiled targets run on the SPIM emulator. It did have dead code, dead string, and dead variable elimination, but that was as far as my optimization passes went. I recall the only build tool I used for each was the portable subset of make without GNU extensions.
Speaking of reinventing the wheel: in 1998, I built a flexible almost-framework for a "portable" generic installer using Java 2, AWT (native GUI controls), and JNI on Windows to create a program group and desktop shortcut icon. The hilarious part was shipping a full JRE on a CD. It took forever to load, but the additional time seemed impressive for expensive, niche software, in a way similar to the now-fake "loading..." delayed progress bar.
</old-guy-high-school-glory-days-and-nobody-today>
SSA is a nice-ish format for representing program code, but it's not the only choice and may or may not be appropriate for your domain. For example, if your language describes data instead of control flow, IMO SSA is a bad choice.
I have done this and if you take care to do things right, you won't need to bother with these hacky corner cases.
Long story short, we thought we were making a new type of N-dimensional spreadsheet, but after 3 semesters of work one of the advisors at MIT told us we need to meet his colleague, and that guy informed us we had a working compiler for a hybrid of Lisp and C.
Anyone here know where I can find it?
I find ChatGPT to be of great help to explore the area, find relevant keywords, or the name of the research domain. Sometimes you really need to know exactly what you are looking for before you can find the link to that one super helpful GitHub library that solves your problem. Then of course the next step is figuring out if you want to take on the dependency or not...
I have wasted hours searching for an (analytical) inverse kinematics library for robotic arms. There are tons of slow non-analytical libraries out there, and some horrible ones like ikfast, which is effectively a code generator that spits out C that can be compiled with Python bindings. I eventually did find https://github.com/Jmeyer1292/opw_kinematics, which someone ported to Rust (for which it was easy to create Python bindings).
Compilers are interesting, but there is literally no proof that they are optimal for any of their popular applications. Which is what I think you are trying to imply by this narrative you have constructed of people constantly reinventing compilers. This is just the same propagandist argument Lisp weenies make to claim that their language is special.
But generically, a compiler is the exact kind of thing you want when you're doing "Take this data structure and transform it into this other data structure". In a traditional compiler, we usually deserialize the first data structure from a string (parsing), call that data structure a CST, validate the data structure (syntax & type checking), do the transform, then serialize the output.
This kind of validate and transform pattern is all over programming though. And it's pretty easy to test with things like property tests. So yeah, we should build little compilers more as abstraction boundaries in our code.
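A toy instance of that deserialize/validate/transform/serialize shape (the field-renaming rule is made up), small enough to property-test exactly as described:

    import json

    RENAMES = {"user_name": "username"}   # made-up transform rule

    def transform(raw):
        data = json.loads(raw)                     # deserialize ("parse")
        if not isinstance(data, dict):             # validate
            raise ValueError("expected a JSON object")
        out = {RENAMES.get(k, k): v for k, v in data.items()}  # transform
        return json.dumps(out)                     # serialize

    assert transform('{"user_name": "ada"}') == '{"username": "ada"}'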
WORK-n: write an interpreter
WORK-n+1: add indirection
...