haha
If AI can do a job, awesome. I saved energy.
But if I fail to check properly... that's on me, as it will cost me later.
So really the problem is a lack of code review... but I seem to recall that AI is decent at code review too. It won't spot state machine bugs, but SQL injections, no problem.
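To make that concrete, here's a hypothetical sketch (TypeScript; all names and logic invented for illustration): the injection is a textual pattern, so a reviewer that only sees patterns catches it reliably, while the state machine bug only surfaces if you hold the legal transitions in your head.

```typescript
// Hypothetical sketch: two bugs with very different "reviewability".

// Bug 1: SQL injection. String-built queries are a textual pattern,
// so an AI reviewer flags them easily.
function findUser(db: { query: (sql: string) => unknown }, name: string) {
  // BAD: user input concatenated straight into the SQL string.
  return db.query(`SELECT * FROM users WHERE name = '${name}'`);
}

// Bug 2: a state machine bug. Spotting it requires a mental model of
// which transitions the business actually allows.
type OrderState = "created" | "paid" | "shipped" | "cancelled";
type OrderEvent = "pay" | "ship" | "cancel";

function transition(state: OrderState, event: OrderEvent): OrderState {
  switch (state) {
    case "created":
      if (event === "pay") return "paid";
      if (event === "cancel") return "cancelled";
      return state;
    case "paid":
      if (event === "ship") return "shipped";
      // Bug: cancelling a paid order skips the refund step entirely.
      // Every line looks fine locally; the error lives in the model.
      if (event === "cancel") return "cancelled";
      return state;
    default:
      return state;
  }
}
```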
As with all things "AI", it's a great assistive tool for helping make decisions but it's not something to outsource those decisions to entirely.
To keep the metaphor, it's an M60: ammo hungry, easily damaged or jammed, still very popular.
Coding, just like woodworking, is about creating products and solutions. Craftsmanship, precision, and knowing your tools are arguably how you make better software: more elegant, easier to maintain, etc.
Not everyone needs a rocking chair that can hold up for 40 years, sometimes an upside down bucket works just fine.
There's a name for seeing the past through rose-colored glasses, isn't there?
"In the old days" developers had various degrees of skill and care, as they do "in the new days".
A related point: MIT dropped the SICP curriculum in 1997, the reasoning being:[1]
> Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems.
>
> Today, this is no longer the case. Sussman pointed out that engineers now routinely write code for complicated hardware that they don’t fully understand (and often can’t understand because of trade secrecy.) The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?'”. The “analysis-by-synthesis” view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant. Nowadays, we do programming by poking.
As software becomes more powerful, it must become more complex. And thanks to the internet, we have tons of pre-built solutions out there. Now, much of the problem is combining them. When doing this, you can't know or care about every line. I totally agree: we must treat a lot more systems scientifically, like objects under observation.
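As a minimal sketch of what "programming by poking" looks like in practice (TypeScript; `normalize` and "some-library" are invented stand-ins for whatever black box you're probing):

```typescript
// Poke an opaque library function with varied inputs and observe the
// outputs, instead of reading source you may not even have access to.
import { normalize } from "some-library"; // invented module, for illustration

const probes = ["", "  hello  ", "HÉLLO", "a\u0301"];

for (const input of probes) {
  try {
    console.log(JSON.stringify(input), "->", JSON.stringify(normalize(input)));
  } catch (err) {
    console.log(JSON.stringify(input), "-> throws:", (err as Error).message);
  }
}
// From the observed behavior you form a working theory of the function,
// then ask: "can I tweak it to do the thing I want?"
```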
> "In the old days" developers had various degrees of skill and care, as they do "in the new days".
Yes, but "various" isn't a fixed ratio. While there have always been unskilled and careless developers, the ratio in the past could have been better and the ratio now could be worse.
Personally I've seen the decline over my career, in my little corner. But I wouldn't attribute it to generative AI, rather:
1. The rise of offshoring ("Why hire a skilled American developer when you can hire ten of the cheapest possible offshore developers instead?"). Skilled and careful developers don't want to get paid the cheapest wages, so this increases the proportion of unskilled and careless developers that you have to deal with every day.
2. Programming being perceived as a well-paying, desirable job. That changes the kind of people who try to pursue it. When it was seen as mostly the domain of looked-down-on nerds, you got a greater proportion of people who were passionate about it. If it's seen as the last of the good-paying jobs, you'll get more people doing it for the money who don't really like it. This is also a factor in the offshoring situation above.
It's totally plausible that generative AI will accelerate the trends and make them even worse, but the trends didn't start with it.
Folks, if you're smart, keep your AI usage secret and use it to reclaim time for yourself by completing your objectives early. The only reward for a job well done is more work.
you've lost me here.
Caring and attention to quality have always been in short supply. Or supply _and_ demand when I think of some of the startups I've worked for.
That this comes at the cost of "understanding" needs supporting evidence. Most devs I know only know 1 or 2 programming languages and their own special silo corner of the tech stack.
You're not paid to be not lazy or to learn in the most fulfilling way possible. You're paid to ship software that works.
Not only can I corroborate this experience already in my workplace, even in my personal projects I can feel the effect.
The most striking example: I was learning a new language (Fennel) and wanted to translate some existing code (Lua) as an initial exercise. It got the first few files fine, and I did feel like I double-checked them to make sure I understood, but only when it failed to produce a conversion that worked properly did I realize I hadn't really been learning and still had no idea how to fix the code. I just had to write it by hand to really get it in my head, and that one hand-written file gave me more insight than ten AI-translated ones.
Oh, it looked better and had better design too, because the AI just took the style of the existing file and transposed it in, whereas I was able to rethink the other files using the strengths of the new language instead.
> Coding used to be about craftsmanship, precision, and knowing your tools inside and out.
It still is. Assistance from LLM tools helps me know my tools inside out better.
AI is really good at CSS and nesting React components, but it fails at proper state management most of the time. The issue is that it lacks mental models of the application state and cannot really reconstruct them just by glancing at a bit of context.
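As a hedged illustration (React with TypeScript, hypothetical component), this is the kind of state bug I mean: the markup and styling are fine, and every line looks plausible unless you hold the render/closure model in your head.

```tsx
import { useState, useEffect } from "react";

// Plausible-looking AI output: the interval callback closes over the
// initial `count`, so the ticker gets stuck at 1 instead of counting up.
function Ticker() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    const id = setInterval(() => setCount(count + 1), 1000); // stale closure
    return () => clearInterval(id);
  }, []); // the empty dependency array hides the bug from a quick read

  return <span className="ticker">{count}</span>;
}

// Fix: the functional updater doesn't depend on the stale closure:
//   setInterval(() => setCount((c) => c + 1), 1000);
```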
But I do love AI generated React components + CSS. Saves me a lot of time.
Also, “lazy” in coding often means you’ve automated something and made it efficient. So I don’t view lazy as bad.
Less careful is a concern. Not everyone is great at reviewing code. However, we'll be using AI for code reviews and security audits soon (obviously some are already). I suspect code quality will improve with AI use in many domains.
So the solution to the problems that genAI brings is more genAI? I'm very skeptical about that.
Yes and no. It depends. People are different. Good devs always care because they have totally different values than those who don't care. Humans 101. What else is new?
See, I can also write opinionated garbage.
They vote up each other's projects, downvote and ban opposition.
These people are now attracted to the new "AI" tools, which are a further asset in their arsenal.
i.e., the average age of a Linux kernel maintainer is rising to, what, the 50s now?
There are too many factors to judge this properly, but my feeling is that a combination of lack of work ethic, depression, and media BS has led people to believe that they don't need to be competent in what they're doing, for whatever reason.
The answer couldn't be farther from the truth. Actually implementing AI requires you to understand and troubleshoot hard problems anywhere on the stack, from the emotions of user experience to the scale of electricity.
I'm just not interested in contributing to Linux. I understand that quality matters and that Linux is an important piece of software, but I'd rather spend my time studying/contributing to seL4 and Genode or Plan9 or Inferno, one of those silly alt-kernel operating systems, or even Haiku. Even for Linux projects like NixOS, I'm more interested in ports to a BSD than working on the main project.
I've seen similar sentiments among a lot of people my age and younger. The problem isn't that we're stupid; we just have different ideas about what's worth contributing to.
The primary danger here is a substitution effect in which developers who would previously have thought carefully about a given bit of code no longer do, because the AI takes care of it for them. Both anecdotally and in my own experience, developers can still discern between situations where they are the expert, in which case it is more efficient to rely on their own code than AI suggestions because it lowers the chances of error, and situations where they are operating outside their expertise or simply writing boilerplate, in which case AI is the optimal choice.
I challenge anyone to produce a well-documented study in which the average quality of a large codebase declines as the correlates of AI usage rise. Until I see that, I will continue to read "decline of care" posts as "kids these days."
To the point I've seen people here argue against striving for quality workmanship, in favour of efficiency.
But like any other trade, some of us are craftsmen who really care about our work.
If you're on the outside looking in, it's easy to get the impression that AI is eating the industry, but the truth is much more nuanced. Yes, devs use AI tools to help them code, but as other commenters have pointed out, it just breaks down when you're deep in the trenches with large and complex codebases.
If you're starting a greenfield application then AI can most certainly help cut down on the repetitive tasks but most code out there is not greenfield code and implements lots of business logic for which AI is next to useless unless it's trained on that specific problem.
Where I personally see AI eating up tech is at the lower end, with many folks getting into tech increasingly relying on AI to help them. I feel like, long term, this will only exacerbate the issue with the JR -> SR pipeline. Senior folks will only be more and more sought after, and it will be hard for many juniors who grew up on AI assistants to make that jump.
Which is fine - developing the feeling of what to say and when takes time and experience. And sometimes it can help, after all - or spark a useful learning opportunity about why a particular llm recommendation looks useful but isn't. Though it takes a bit of energy to walk the line of teaching and encouraging growth without dampening enthusiasm, and spending that energy is its own opportunity cost.
But it does feel a bit different than before - I don't recall seeing as many less-helpful "here's what a stack overflow post said" messages in threads years ago. It did and does happen, but I think it is much more common with LLMs.
Thankfully, that is just a case of someone actively trying to not be lazy; trying to help, using the resources at hand. Decades ago that would have meant looking in some old docs or textbooks or specs, years ago that would have been googling, and now it's something else.
I think the accessibility and availability of answers from LLMs for such "quick research" situations is the culprit, rather than any particular decline in developer behaviors themselves. At least, in this case. I'm sure I would have seen the same rate of "trying to help but getting in the way" posting years ago had stack overflow or google had a way of conjuring answers to questions that simply didn't exist in its corpus.
I think the "in the old days" sentiment from the author somewhat divides things into a before/after AI situation, but IMO it has been a gradual gradient as new tooling and information sources have become available. AI is another one of those, and a big jump, but it doesn't feel like a novel change in "laziness" yet.
Though, there's been some big pushes towards relying more and more on various AI code reviewers and fuzzers in my area. I feel like that is an area where the author's concerns will come more and more into play - laziness at the boundary layers, at the places of oversight; essentially, laziness by senior devs and leadership.
Look at the election; this is not a country of intelligent people.
If you think the upcoming programmers aren't outsourcing all their work to ChatGPT, I got a bridge to sell you.