Other than in trivial academic samples, the odds that a program will need to change over its lifetime are large, and its current apparent correctness says little about how easily someone else can adapt it to an ever-changing environment.
The number of times I've heard "it seems to work and we don't dare change it" is far too many.
What they mean is: "we don't understand it and we don't have good tests, so there is a high probability that it doesn't work and that even the most trivial and seemingly harmless modification would cause an issue to surface; so we don't dare change it, or else we wouldn't be able to pretend that it works anymore and might have to fix a lot of issues that we would have a hard time even understanding"
But _hard_? No.
I think you (and many software developers) are using the word "hard" to mean "intellectually challenging", as in "Leetcode Hard". But things that require a lot of effort, time, and coordination of people are also hard, just in a different way.
Imagine a codebase with a wart. And yes, without enough tests. Let's say the wart annoys you and you want to fix it. But first you have to convince your employer to let you spend 6 months backfilling missing tests. In the meantime they will pay your salary but you will not work on the features they want. You will be working on fixing that wart. Convincing management: easy or hard?
OK, so you got them convinced! Now you can't just fix the wart. First you have to slog through a big refactor and write a bunch of tests. Staying positive while doing this for 6 months: easy or hard?
Do you stop other teams from writing more code in the meantime? No, so does the new code come with tests? How do you make sure it doesn't depend on the old "warty" interface? You need a compatibility layer. You need to convince other managers to spend their teams' cycles on this. Easy or hard?
OK, the refactoring is done. You release the new software. But, despite all your efforts you overlooked something. There's a bug in production, and when a post mortem is done - fingers point at you. The bug wasn't introduced in pursuit of a new feature. It was part of an effort to solve an obscure problem most people at the company don't even understand. To them, the software worked before, and it doesn't work now, and it's always those nerds tinkering with stuff and breaking things. Convincing these people to let you keep your job: easy or hard?
Perf review time. Your colleague shipped a new feature. You shipped... that thing that broke prod and nobody understands. Getting a raise: easy or hard?
And that is why these warts fester. The end.
At some point I was working on a piece of software we knew inside out, had good tests for, and often ran through hand-curated stress tests for benchmarks, analysis, or just exploratory testing, so we had high confidence in it and in our ability to modify it quickly and successfully.
One day executives were visiting and we had to do a demo of our system interacting with another one. Until the last minute we were happily modifying our code to make the demo better. A guy from the other system's team saw that, freaked out, and went straight to our boss, who then laughed with us at how scared the guy was. It turned out his team was not at all that confident in their system.
If it's in a professional setting, it's most likely not a hard problem, but actually an impossible one.
Some people take that as being scared, but it's more like "you have to have made this work and tested it before putting it in."
So John is missing the role of software architect here. Science, art, and development - 3 roles. Not all visits to the stratosphere are misadventures.
I frankly don't believe in the "software architect" as a separate role. I've worked with "architects" who are clearly just BS artists because they know the jargon but have no skill to back it up and make difficult technical decisions regarding tradeoffs.
I think it's absolutely insane that we live in a world where many people with an "architect" title don't write code, and sometimes have never written code in their life! That would be like a world full of chess coaches who don't play chess! They just read BS articles like "Skewers are the new forks" or whatever.
The ones with “architect” in the title move on to the next project as soon as the previous is started…
There was a curriculum correction in the years afterwards I think, but so many students had zero concept of version control, of how to start working on a piece of software (sans an assignment specification or scaffold), or how to learn and apply libraries or frameworks in general. It was unreal.
I’m working on a history project which has reconstructed the version history of the US constitution based on the secretarial records and various commentaries written during the drafting process. At the moment we’re working on some US state constitutions, the Indian constitution, the Irish peace process, and the Australian constitutional process. We only have so many historical records of the committee processes, but it turns out to be more than enough to reconstruct the version history of the text.
- they were manually tracking the "versions" of documents by copy-pasting them into some folders, and even this was only done for each "release" of each document; in between two releases all the changes were made to files shared on OneDrive (possibly concurrently by two people, sometimes leading to conflicts with the loss of days of work)
- at every release, the changes since the last release had to be looked up manually every time and included in a document; this was very time-consuming
- information was duplicated in multiple documents, with no way to relate it; every change to one of them had to be manually replicated to the others
I would argue that a correctly versioned document should not have these issues. Dedicated software should track all the changes, including individual ones in between releases. It should also provide a way to list them, possibly relative to some milestone (like the last release). Data should be kept in a format that's easy to automatically compare for changes, and hopefully in a deduplicated way so that changes need to be made in only one place. If that's not possible, I would argue there should be software that checks for inconsistent information and prompts for it to be synchronized.
In the software development world this has mostly been solved by version control systems like git and continuous integration to ensure that the codebase is in a consistent state after every change.
I was already pretty disillusioned with my undergrad program but that was really the icing on the cake.
I had never used any vcs before and neither had anyone on any team, but man was it worth it. The ability to have one place with the latest code and not emailing zip files around was great, but so was being able to easily roll back to a known good version if we caused an issue, compare changes, etc. By the end of it we all agreed it would have been impossible to do as well as we did if we didn’t do version control.
(This was a cross disciplinary engineering curriculum with ME/CE/CS, ours was slightly more software-heavy than other teams but everyone had some amount of software. Version control wasn’t taught and most teams just didn’t even consider it. It was a very different time from today.)
Even for one-off scripts, I'll often throw them into a VCS because why not!
Nowadays version control is so easy that it’s easy to forget how good we have it. Not just in getting started locally but in pushing to a public service that everyone can share (even private repos are free on GitHub nowadays; it’s a complete no-brainer).
The key that gave my team an advantage was the humble ASSERT. If the robot got off track, it would stop in place, blink a light, and show a line number on a digital display.
I've been working in and around Windows for a long time, and I'd say asserts and crash dumps are the two things that allow us to improve our quality given that we're still mostly using C/C++.
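Not the robot team's actual code, but a minimal C++ sketch of that kind of assert, with a hypothetical halt-and-report routine standing in for the motors, light, and display:

    // Halt-and-report assert in the spirit of the robot example above: on
    // failure, stop, signal, and show where it happened. robot_halt_and_blink
    // is a hypothetical placeholder for the real stop/blink/display logic.
    #include <cstdio>
    #include <cstdlib>

    static void robot_halt_and_blink(const char* file, int line, const char* expr) {
        // On the robot this would stop the motors, blink the light, and put
        // the line number on the display; here we just print and abort.
        std::fprintf(stderr, "ASSERT failed: %s (%s:%d)\n", expr, file, line);
        std::abort();  // also a natural point to write a crash dump
    }

    #define ASSERT(expr) \
        do { if (!(expr)) robot_halt_and_blink(__FILE__, __LINE__, #expr); } while (0)

    int main() {
        int expected_track = 3;
        int actual_track = 3;
        ASSERT(actual_track == expected_track);  // passes; change actual_track to trip it
        return 0;
    }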
Funny enough, there was no version control at our uni (a pretty good one, but not primarily technical), and that OS we tweaked for the course was the current version of Tanenbaum's Minix that Linus transformed into Linux. 20 minutes for a recompile and test loop to fix that stupid mistake in the semaphore logic was painful, but that's life on a 286.
It took real passion to want to bang through that learning curve. It really weeded out the folks who were just looking for an engineering job, at least for the handful (4) of people I knew in the program.
Wanting an engineering job means that engineering is such an important part of your life that you desire that your job (i.e. many hours each day) centers around it.
The breed of people that you mentioned being weeded out were not looking for an engineering job, but for some well-paid (often management) job that formally requires engineering qualifications, but where the daily business has barely anything to do with engineering.
The talented and hard working folks got in and found that studying algorithms at the beginning of the 3rd year from a textbook was doable, but designing and implementing a significant software system (or tweaking an operating system) in the 4th year is a whole other level.
It's just that software design and engineering is really a unique beast. I mean, it is the most difficult engineering on the planet, because every single other industry and discipline depends upon it.
Watching Practical Engineering on YouTube is a pretty illuminating experience for me, as it shows the extreme care that goes into projects from the outset, how much planning is involved, how much we’ve learned from centuries of experience building things, and how, even despite all of this, things can still fail spectacularly. And when they do, there are independent reports done by thorough investigators, who find the real root causes and carry the knowledge forward for future projects. It makes me sad that software isn’t treated this way. Yes, we get things off the ground fast, but we don’t learn from our mistakes, we don’t conduct thorough investigations into failures, and often people just release things and move on to the next job.
Software may be a more complicated and “difficult” discipline but it sure isn’t treated like it.
I was summer programming in an IT department in the late 80's; it was under the auspices of the comptroller, simply because the org didn't know where else to put it. Management was still figuring out which department would have the budget/cost stuff allocated to it. You can forget about engineering excellence or even semblance of IT knowledge.
Everything since then has been the (in)organic growth of IT in the situation that, for the vast majority of companies, IT is simply a cost whose benefit is hard to quantify, especially to the money guys, who are always the ones in charge.
And those build times were nothing compared to waiting for a long SQL process.
It's all really just determining whether the data flowed properly or not. From a C64 BASIC program to an OS component to a database's tables, it's all just data flowing from one place/form to another.
But there is a corollary, I think. A sign of good software development is that the program hasn't been extended in "unnatural" ways. That speaks to the developer's discipline and vision to create something that was fundamentally relevant to begin with.
Everyone in my PhD cohort who couldn't write code worth a shit stayed in academia, and everyone who could went into industry because the money was way better. So it's quite natural.
// See thread above about UBI vs. tech nonsense jobs. Any industry driving seven figures per head has room for quite a few before people notice.
Then periodically there is a discussion on Hacker News that boils down to "all of the other engineering disciplines can make reliable predictions and deadlines; why can't software?" or "why is this company's code so shoddy?" or "why are we drowning in technical debt?".
Perhaps these are all related?
To me, a computer scientist is someone who studies computation. They probably have the skills to figure out the run times of algorithms, and probably develop algorithms for solving arbitrary problems.
A software engineer is what I would call someone who can estimate and deliver a large software application fit for purpose.
CS programs have gotten better at teaching real SWE skills, but the median CS grad still has ~zero real SWE experience.
However, as the terms are currently used, I see Computer Scientist as analogous to Electrical Engineer. On the other hand, it seems to me that Software Engineer is used to suggest that developers don't need to know the theory behind computation.
Therefore, I currently think that the way "Software Engineer" is used represents a lot of what's wrong with current software development.
Software engineers are analogous to the other engineers. Computer scientists are analogous to the physicists.
Or take chemicals. When the question is "how are the outer-shell electrons distributed", you hire a chemist. When the question is "how do we make the stuff in multi-ton quantities without blowing up downtown", you hire a chemical engineer.
Part of the answer to your question is that schools are producing computer scientists and not software engineers. (It's not the whole answer, but it's part of it.)
Effectively, companies treat Software Engineers like Electricians not like Electrical Engineers.
Hacking together an internal tool with Laravel? Doing vanilla CRUD for a client’s web app? Probably not! No amount of comp sci knowledge will help you configure the millionth nested layer of WordPress plugins.
So much “software engineering” is just plumbing. Connecting things to other things with a little bit of business logic in between. Honestly my job is plumbing, most of the time.
You'd find those disciplines also have more trouble with predictions when the machinery needed for the goal hasn't been built before.
While most software jobs today are a bespoke configuration of a solved problem, the practice of software development is new enough to remember when most software was written to solve a physical problem in a new digital way. Discovering and inventing are predicated on the search space being unknown in advance, making discovery and invention unlikely to be estimated accurately.
Note, though, that most software today has already been written, and the lack of predictable delivery is because the process doesn't rigorously enforce a "first, apply known/solved software" approach.
If, in software, materials use were third-party inspected and certified as it is in physical or electrical engineering, you'd find software getting more predictable.
That ideal can be attained and you can be in control in application development, but often we are not. When you are in control, the conventional ideas about project management apply.
As you get very big you start having new categories of problems, for instance a growing social system will have problem behaviors and you’d wish it was out of scope to control it but no, it is not out of scope.
Then there are projects which have a research component whether it is market research (iterate on ideas quickly) or research to develop a machine learning system or develop the framework for that big application above or radically improved tools.
A compiler book makes some of those problems look more regular, like application programs, but the project management model for research problems involves a run-break-fix (RBF) trial of trying one thing and then another, which you will be doing even if you are planning to do something else.
Livingston, in Have Fun at Work, says to go along with the practices in the ocean you swim in (play planning poker if you must), but understand that you will RBF. There are two knobs on run-break-fix: (a) how fast you can cycle and (b) the probability distribution of how many cycles it will take. Be gentle in schooling your manager, and someday you might run your own team that speaks the language of RBF.
Unit tests put a ratchet in RBF and will be your compass in the darkest days. They enable the transition to routine operation (RBF in operations is the devil’s antipattern!)
They are not a religion. You do not write them because a book told you so; you write them for the same reason a mountain climber wears a rope, and if you don't feel that, your tests are muda: waste, as they say in Japan.
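A small illustration of that ratchet, assuming C++ and a made-up parse_port function (not from any project mentioned here): once a run-break-fix cycle fixes a bug, a test pins the fix in place so a later cycle can't quietly undo it.

    // Tiny regression-test ratchet, no framework needed: each fixed bug gets an
    // assert, so the next run-break-fix cycle can't silently reintroduce it.
    // parse_port and its empty-input bug are hypothetical examples.
    #include <cassert>
    #include <string>

    int parse_port(const std::string& s) {
        // An earlier cycle's bug: empty input crashed std::stoi; now it returns -1.
        if (s.empty()) return -1;
        return std::stoi(s);
    }

    int main() {
        assert(parse_port("8080") == 8080);  // the happy path
        assert(parse_port("") == -1);        // the ratchet: the bug we already fixed
        return 0;
    }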
When I was in physics grad school I had a job writing Java applets for education, and I did a successful demo of two applications at a CS conference in Syracuse and was congratulated for my bravery. I was not so brave; I expected these programs to work every time for people who came to our web site. (Geoff Fox, the organizer of the conference, did an unsuccessful demo where two Eastern European twins tried to make an SGI supercomputer show some graphics, and said “never buy a gigabyte of cheap RAM!”)
Rather: computer scientists advance in their careers by writing papers; software developers advance in their careers by becoming managers.
:-(
Anyway, by far I think the biggest hurdle in our industry right now is pseudo-jobbers like project managers, business process owners, scrum masters, various architects, and what not. Not everyone is a waste of time; some of them do excellent work and function as exponential productivity catalysts. The vast majority of them, however, spend so much time engineering the process, architecture, whatever, that their teams never ship on time or within budget. In this sense I think “correct programs” are hard to value, because often the “incorrect large program” that doesn’t scale will be much more valuable for a business than the “correct program” which never even gets the chance because it took too long to get out there.
Plus they're useful for sabotaging your competitors' TTM.
Even if you oppose consumerism and building stuff, that wasted effort could be directed towards making products more sustainable or making better recycling supply chains or building nuclear power plants or whatnot.
If someone can get by on $500/month, and not feel the need to make the effort to live better, it would be a service to all to give that to them.
And on the other hand, a small fallback that makes it easier for individuals going through a rough spot to come back also helps the overall economy and individual businesses.
Finally, if we had a basic program like this started, the number could go up with overall economic productivity. Slow at first, but as the AI/machine economy takes off, especially with practical accessibility to off planet resources, a tiny fraction of the economic output would make everyone rich by today's standards, just at the time when human labor's economic value heads to zero.
There definitely exist some people who do these sorts of jobs in a way that provides a negative, roadblock-only type of contribution. This isn’t to say nobody produces a positive contribution in these jobs. And it isn’t to say these people who produce negative value are, like, bad (everybody needs to eat and they didn’t ask to be born into a capitalist society).
But, there are definitely some folks who we’d be better off paying to not do anything.
An even better idea: simply don't hire such people.
They aren’t dumber than the folks who really want to do engineering and make neat stuff, just differently motivated. And if those two groups get in an office politics battle, the group that isn’t distracted as much by engineering wins, right?
I am also very motivated to convince you to give me, say, 10,000 EUR or USD, will you hand me over the money? ;-)
Seriously: if you see no value in giving me this money, you clearly won't do that. Also: if the person you could hire does not bring more value to the company than he/she costs you, you won't hire the person, no matter whether he/she is motivated or not.
Even worse, they may in some way be compensated based on the number of people or teams they manage, in which case the incentives of useless hires and hiring managers are unfortunately all too aligned.
Wiser than the average wisecrack.
Just want to share my experience at the large, well-known tech company where I work: good product/project managers are worth their weight in gold, and I've never worked with a bad one.
Ours work at the intersection of backend, frontend, design, and product and help those of us who work in just one of those areas to coordinate, cooperate, and collaborate. I can't imagine trying to build such an enormous product or suite of products without them.
God bless them, every one.
As a matter of fact, there was only a single lecture I took (and which I didn't need to take) where we needed to use computers for the weekly exercises.
I taught myself C++ by writing games with SDL2.
The first game, Snake, took about a couple of hundred lines, and I put everything in one .cpp file. And I felt pretty good.
The second game, well, I forget what it was, not Tetris nor Breakout, but it was complex enough that I realized I needed to split code into header files and source files.
The last game of that project was an Ultima spinoff. I even used the same sprite sheet. The complexity completely drowned me. I was constantly asking myself how I should arrange things properly so I wouldn't need a lot of global variables that every file can see, because naturally the Game class needs to see and know all other classes, and the Renderer class needs to see and know many other classes too, etc.
Eventually I dropped the project. A few years ago I picked it up again and figured out something that is close to Entity-System (not ECS, just ES). I didn't complete the project, but I firmly believe it was the right architecture, and I was simply too burnt out to finish it.
This year I learned about ECS from pikuma. I think it's overcomplicated for small-to-medium games. Anyway, I'm trying to say that I agree that writing a 10,000-line project is way more complicated than writing ten 1,000-line projects.
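Not the parent's code, but a minimal C++ sketch of the plain Entity-System shape being described, with made-up component and system names: entities are just data, and each system owns one concern, so nothing has to "see and know" every other class the way the old Game/Renderer design did.

    // Minimal Entity-System sketch (ES, not full ECS). Position, Velocity,
    // Sprite, World, MovementSystem, and RenderSystem are illustrative names,
    // not taken from the original project.
    #include <vector>

    struct Position { float x = 0, y = 0; };
    struct Velocity { float dx = 0, dy = 0; };
    struct Sprite   { int sheet_index = 0; };

    struct Entity {
        Position pos;
        Velocity vel;
        Sprite   sprite;
    };

    struct World {
        std::vector<Entity> entities;
    };

    // Each system touches only the data it needs.
    struct MovementSystem {
        void update(World& w, float dt) {
            for (auto& e : w.entities) {
                e.pos.x += e.vel.dx * dt;
                e.pos.y += e.vel.dy * dt;
            }
        }
    };

    struct RenderSystem {
        void draw(const World& w) const {
            for (const auto& e : w.entities) {
                // here you would blit e.sprite.sheet_index at (e.pos.x, e.pos.y) with SDL2
                (void)e;
            }
        }
    };

    int main() {
        World world;
        world.entities.push_back(Entity{{0, 0}, {1, 0}, {7}});
        MovementSystem movement;
        RenderSystem renderer;
        movement.update(world, 1.0f / 60.0f);
        renderer.draw(world);
    }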
In the technically superior Fabric ecosystem, you can add fields directly to the entity class using bytecode modification instead of having HashMap overhead on everything. Accessing them is a bit roundabout, but in a way the JIT compiler can optimize.
Designing and implementing large and correct systems is a matter of growing them, from small, trusted pieces into larger interconnected systems of systems, with ever greater care, knowing that the entire thing can collapse at any time if the wrong decisions are made or have been made.
I was fortunate enough to have figured this out for myself, and whenever I met a CS grad in my early career it was obvious that the production of actual software terrified them.
Meanwhile I'd learned how to build (and how not to build), working programs in C including a simple OS on an M68k chip on a VME bus. I struggled with my final year project because it became too theoretical and CS-ish (trying to write a Prolog to SQL interpreter), so my grade took a hit, but I am really glad I entered industry with useful, practical skills that employers valued.
There's always going to be a place for pure CS, and I'm glad it exists as a discipline, but more kids should understand the difference, and more colleges should be offering degrees that teach people how software is built (and how to build software yourself), not just how to write papers.
Software Engineering is to programming as Civil Engineering is to construction.
In construction, the random guy pulled in to swing a hammer may cut something wrong once or twice and waste a little time and money, but he's never going to be trusted to design the way the support beams hold up the whole house, so the damage is very limited. Software is not like that, we expect everyone to do some amount of design and engineering, whether or not they have any ability to do so. If as much software dev was driven by very low-level grunt work as is the case in construction, LLMs would already have revolutionized the field a lot more than they have; as it is, we've probably got another couple years to wait.
Software Engineers are usually quite capable programmers, while construction needs a lot more people _doing_ the construction than working as Civil Engineers, even if Civil Engineers wanted to join in.
That's not quite my point, though: we don't expect physicists to be good at Civil Engineering or construction, and we ought not expect Computer Scientists to be good at Software Engineering or programming. Having some understanding of physics makes a Civil Engineer better at their job, similarly for Computer Science and Software Engineering. And both construction and programming can be undertaken in isolation, but you're unlikely to be successful in your larger project unless you have Civil Engineering or Software Engineering expertise.
He can write quite complex logic, but he tries to make everything as generic and flexible as it can possibly be, so his code is very hard to read as it invents a lot of concepts. He had to refactor it many times due to bugs or unforeseen technical limitations. It took him months to write. On the other hand, I wrote a script which does a similar thing to his but for a different environment (over a network with added latency, bandwidth limits, network instability) and only covers the essential cases, but it only took me about a day to write and has been working without significant flaws since the beginning. It only had like 1 bug for a minor edge case. Also, my code is very short relative to his.
This experience reinforces a rule for coding that I've had for the past 5 years or so. It's basically:
"If you can't explain what your code does, from start to finish, in simple language to a non-technical person (who has the required business domain knowledge), and in a way which answers all questions they might have, then your code is not optimal."
Abstractions should always move you closer to the business domain, not towards some technical domain which you invented and only exists in your head.
The part about "start to finish" is important but doesn't mean "every line". You should have abstractions which may be beyond the non-technical person's understanding, but you should be able to explain what each abstraction does at a high level without jumping into each function. You should be able to walk through any major feature by walking through your code, without having to jump around many files and without having to pause constantly to explain abstractions as you go.
However, your code should anticipate a range of possible future requirements changes... But then it shouldn't try to be a silver bullet either. Experience helps a lot as it allows you to see hurdles and limitations ahead of time so you know exactly how much silver you can put in your bullet for any given part of your code.
Writes large correct programs - https://news.ycombinator.com/item?id=2556270 - May 2011 (32 comments)