• Cheer2171 3 hours ago |
    sed -i 's/Copilot/outdated Stack Overflow answers and random github repos/g'
  • drops 3 hours ago |
    >becoming

    haha

    • lukan 3 hours ago |
      Humans are lazy by 'design'. It is called being efficient: only do what is necessary, don't waste energy doing more than what's needed.

      If AI can do a job, awesome. I saved energy.

      But if I fail to check properly... that's on me, as it will cost me later.

  • Mathnerd314 3 hours ago |
    > Always Review AI-Suggested Code

    So really the problem is a lack of code review... but I seem to recall that AI is decent at code review too. It won't spot state machine bugs, but SQL injections, no problem.
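
    A minimal sketch of the kind of thing it catches. This is hypothetical code, not from the article: `db` stands in for any parameterized client (node-postgres style) and `userInput` is untrusted request data.

        async function findUser(
          db: { query: (sql: string, params?: unknown[]) => Promise<unknown> },
          userInput: string
        ) {
          // What an AI reviewer reliably flags: SQL built by string interpolation.
          // const rows = await db.query(`SELECT * FROM users WHERE name = '${userInput}'`);

          // The fix it suggests: a parameterized query.
          return db.query("SELECT * FROM users WHERE name = $1", [userInput]);
        }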

    • simonw 3 hours ago |
      AI code review can help spot potential problems, but it won't do the most important part of code review for you, which is evaluating whether the thing actually works and whether the architectural approach is right for the project.

      As with all things "AI", it's a great assistive tool for helping make decisions but it's not something to outsource those decisions to entirely.

    • ben_w 3 hours ago |
      AI code review is better than no code review, and it's also better than a brief glance at the git diff of a codebase too big for the human equivalent of a context window, where we all know we shrug and type "lgtm" before clicking accept… but it's definitely not a silver bullet either.

      To keep the metaphor, it's an M60: ammo hungry, easily damaged or jammed, still very popular.

  • mbesto 3 hours ago |
    > Coding used to be about craftsmanship, precision, and knowing your tools inside and out.

    Coding, just like woodworking, is about creating products and solutions. Craftsmanship, precision, and knowing your tools are possibly how you make better software: more elegant, easier to maintain, etc.

    Not everyone needs a rocking chair that can hold up for 40 years; sometimes an upside-down bucket works just fine.

  • shortrounddev2 3 hours ago |
    The quality control practices are not up to the devs; they're up to management. Management wants to cut corners or run incomplete, automated tests rather than human-driven QA. It's about cutting costs by cutting corners.
  • lucianbr 3 hours ago |
    > In the old days, developers had to really know their stuff. Coding wasn’t just a checklist—it was a craft, and every line was written with care.

    There's a name for seeing the past through rose-colored glasses, isn't there?

    "In the old days" developers had various degrees of skill and care, as they do "in the new days".

    • dbmikus 3 hours ago |
      We could complain about Stackoverflow, Google, automatic memory management, compiler optimization, the fact that a broken program doesn't crash your entire OS, ...

      A related point: MIT dropped the SICP curriculum in 1997, the reasoning being:[1]

      > Sussman said that in the 80s and 90s, engineers built complex systems by combining simple and well-understood parts. The goal of SICP was to provide the abstraction language for reasoning about such systems.

      > Today, this is no longer the case. Sussman pointed out that engineers now routinely write code for complicated hardware that they don’t fully understand (and often can’t understand because of trade secrecy.) The same is true at the software level, since programming environments consist of gigantic libraries with enormous functionality. According to Sussman, his students spend most of their time reading manuals for these libraries to figure out how to stitch them together to get a job done. He said that programming today is “More like science. You grab this piece of library and you poke at it. You write programs that poke it and see what it does. And you say, ‘Can I tweak it to do the thing I want?'”. The “analysis-by-synthesis” view of SICP — where you build a larger system out of smaller, simple parts — became irrelevant. Nowadays, we do programming by poking.

      As software becomes more powerful, it must become more complex. And thanks to the internet, we have tons of pre-built solutions out there. Now, much of the problem is combining them. When doing this, you can't know or care about every line. I totally agree: we must treat a lot more systems scientifically, like objects under observation.

      [1]: http://lambda-the-ultimate.org/node/5335

    • tivert 34 minutes ago |
      > There's a name for seeing the past through rose-colored glasses, isn't there?

      > "In the old days" developers had various degrees of skill and care, as they do "in the new days".

      Yes, but "various" isn't a fixed ratio. While there have always been unskilled and careless developers, the ratio in the past could have been better and the ratio now could be worse.

      Personally I've seen the decline over my career, in my little corner. But I wouldn't attribute it to generative AI, rather:

      1. The rise of offshoring ("Why hire a skilled American developer when you can hire ten of the cheapest possible offshore developers instead?"). Skilled and careful developers don't want to be paid the cheapest wages, so this increases the proportion of unskilled and careless developers you have to deal with every day.

      2. Programming being perceived as a well-paying, desirable job. That changes the kind of people who pursue it. When it was seen as mostly the domain of looked-down-on nerds, you got a greater proportion of people who were passionate about it. If it's seen as the last of the good-paying jobs, you'll get more people doing it for the money who don't really like it. This is also a factor in the offshoring situation above.

      It's totally plausible that generative AI will accelerate the trends and make them even worse, but the trends didn't start with it.

  • warpeggio 3 hours ago |
    Capitalism incentivizes the lowest cost implementation to maximize margin. This isn't surprising through that lens.

    Folks, if you're smart, keep your AI usage secret and use it to reclaim time for yourself by completing your objectives early. The only reward for a job well done is more work.

  • rsanheim 3 hours ago |
    > In the old days, developers had to really know their stuff. Coding wasn’t just a checklist—it was a craft, and every line was written with care.

    you've lost me here.

    Caring and attention to quality have always been in short supply. Or supply _and_ demand when I think of some of the startups I've worked for.

    • disgruntledphd2 3 hours ago |
      I mean, given that Copilot is just regurgitating common patterns, it's getting the bad practices from its training data.
  • 8338550bff96 3 hours ago |
    I would rather have lazy devs that actually ship shit to production than slow devs who drag what I could get done in one day out for entire sprints because they only communicate with each other for 30 minutes tops per day to unblock each other.

    That this comes at the cost of "understanding" needs supporting evidence. Most devs I know only know 1 or 2 programming languages and their own special silo corner of the tech stack.

    You're not paid to be not lazy or to learn in the most fulfilling way possible. You're paid to ship software that works.

  • htrp 3 hours ago |
    You could've made the same arguments about OOP, linters, and DevOps tools.
  • MantisShrimp90 3 hours ago |
    This.

    Not only can I corroborate this experience in my workplace already; even in my personal projects I can feel the effect.

    The most striking example: I was learning a new language (Fennel) and wanted to translate some existing code (Lua) as an initial exercise. The AI got the first few files fine, and I did feel like I double-checked them to make sure I understood, but only when it failed to make a conversion that worked properly did I realize I hadn't really been learning and still had no idea how to fix the code. I just had to write it by hand to really get it in my head, and that one hand-written file gave me more insight than ten AI-translated ones.

    Oh, and it looked better and had better design too, because the AI just took the style of the existing file and transposed it, whereas I was able to rethink the other files using the strengths of the new language instead.

  • simonw 3 hours ago |
    I was ready to disagree with this article - most of the first half is a rehashing of a paper about the first release of GitHub Copilot from 2021! - and then I got to the recommendations, and "Always Review AI-Suggested Code" and "Stay Sharp on Core Skills" are both good principles to hold on to when working with this stuff.

    > Coding used to be about craftsmanship, precision, and knowing your tools inside and out.

    It still is. Assistance from LLM tools helps me know my tools inside out better.

  • bhouston 3 hours ago |
    I am super pro-AI writing code, but honestly, I have had so many bugs in AI-generated code once it gets even a little complex. And then you need to understand all of the code anyhow.

    AI is really good at CSS and nesting React components, but it fails at proper state management most of the time. The issue is that it lacks a mental model of the application state and cannot really reconstruct one just by looking quickly at a bit of context.
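
    A minimal sketch of the failure mode I mean (a hypothetical component, not from any real codebase): the generated code copies a derived value into local state, so it silently goes stale when props change.

        import { useState } from "react";

        type Item = { price: number };

        // Typical AI-generated bug: the total is computed once on mount and
        // stored in state, so it never updates when `items` changes.
        function CartTotalBuggy({ items }: { items: Item[] }) {
          const [total] = useState(items.reduce((sum, i) => sum + i.price, 0));
          return <span>{total}</span>;
        }

        // Fix: derive the value during render instead of storing it.
        function CartTotal({ items }: { items: Item[] }) {
          const total = items.reduce((sum, i) => sum + i.price, 0);
          return <span>{total}</span>;
        }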

    But I do love AI-generated React components + CSS. Saves me a lot of time.

  • dwabyick 3 hours ago |
    There are good points in this article, especially for new engineers who may not understand what the AI is writing.

    Also, “lazy” in coding often means you’ve automated something and made it efficient. So I don’t view lazy as bad.

    Less careful is a concern. Not everyone is great at reviewing code. However, we'll be using AI for code reviews and security audits soon (obviously some are already). I suspect code quality will improve with AI use in many domains.

    • JohnFen 3 hours ago |
      > However we’ll be using AI for code reviews and security audits soon

      So the solution to the problems that genAI brings is more genAI? I'm very skeptical about that.

  • danielovichdk 3 hours ago |
    These subjective posts are moronic, and any author should be required to state the sources on which they speculate. They are opinionated, unvalidated, and often poorly written thoughts that are never backed up by any evidence at all.

    Yes and no. It depends. People are different. Good devs always care because they have totally different values than those who don't. Humans 101. What else is new?

    See I can also write opinionated garbage.

  • LkGah 3 hours ago |
    The problem started with GitHub. GitHub is optimal for mediocre people who know how to game the system, flood projects with trivial and useless PRs, give LGTMs to other developers in their friend circles and generally know how to maintain the illusion of progress and useful activity.

    They vote up each other's projects, downvote and ban opposition.

    These people are now attracted to the new "AI" tools, which are a further asset in their arsenal.

  • nisten 3 hours ago |
    No, they're just becoming dumber.

    E.g., the average age of a Linux kernel maintainer is rising to, what, the 50s now?

    There are too many factors to judge this properly, but my feeling is that a combination of lack of work ethic, depression, and media BS has led people to believe that they don't need to be competent in what they're doing because of reason X.

    That belief couldn't be farther from the truth. Actually implementing AI requires you to understand and troubleshoot hard problems anywhere on the stack, from the emotions of user experience to the scale of electricity.

    • r14c an hour ago |
      I'm 33 and I have a competitive skill set that includes osdev and application programming, among other things.

      I'm just not interested in contributing to Linux. I understand that quality matters and that Linux is an important piece of software, but I'd rather spend my time studying/contributing to seL4 and Genode or Plan9 or Inferno, one of those silly alt-kernel operating systems, or even Haiku. Even for Linux projects like NixOS, I'm more interested in ports to a BSD than working on the main project.

      I've seen similar sentiments among a lot of people my age and younger. The problem isn't that we're stupid; we just have different ideas about what's worth contributing to.

  • xianshou 3 hours ago |
    Thoughtless reliance on AI is a concern, but this post also hearkens back to a halcyon age that never existed. The places where developers use tab-complete now are exactly those where they would have previously copied from Stack Overflow, which suffers from the same issue of convenient but insecure code becoming widely adopted. If anything, LLMs are able to exercise some degree of quality control due to post-training improvements rather than sampling only from the middle of the training distribution, so the average suggestion should be better and less stale than the SO equivalent.

    The primary danger here is a substitution effect in which developers who would previously have thought carefully about a given bit of code no longer do, because the AI takes care of it for them. Both anecdotally and in my own experience, developers can still discern between situations where they are the expert, in which case it is more efficient to rely on their own code than AI suggestions because it lowers the chances of error, and situations where they are operating outside their expertise or simply writing boilerplate, in which case AI is the optimal choice.

    I challenge anyone to produce a well-documented study in which the average quality of a large codebase declines as the correlates of AI usage rise. Until I see that, I will continue to read "decline of care" posts as "kids these days."

  • rkagerer 3 hours ago |
    We've always been lazy.

    To the point I've seen people here argue against striving for quality workmanship, in favour of efficiency.

    But like any other trade, some of us are craftsmen who really care about our work.

  • _fat_santa 3 hours ago |
    I feel like this article was written by someone who doesn't code every day but instead looks at tech industry trends.

    If you're on the outside looking in, it's easy to get the impression that AI is eating the industry, but the truth is much more nuanced. Yes, devs use AI tools to help them code, but as other commenters have pointed out, it just breaks down when you're deep in the trenches with large and complex codebases.

    If you're starting a greenfield application, AI can most certainly help cut down on the repetitive tasks, but most code out there is not greenfield and implements lots of business logic for which AI is next to useless unless it's trained on that specific problem.

    Where I personally see AI eating up tech is at the lower end, with many folks getting into tech increasingly relying on AI to help them. I feel like long term this will only exacerbate the issue with the JR -> SR pipeline. Senior folks will be more and more sought after, and it will be hard for many juniors who grew up on AI assistants to make that jump.

  • anonymousab 3 hours ago |
    One thing I have noticed with some newer coworkers / fresh grads is that they seem much more willing to copy someone's issue from Slack into GPT and regurgitate a "here's what GPT says", which muddies the water a bit or is at least a bit unhelpful.

    Which is fine - developing the feeling of what to say and when takes time and experience. And sometimes it can help, after all - or spark a useful learning opportunity about why a particular LLM recommendation looks useful but isn't. Though it takes a bit of energy to walk the line of teaching and encouraging growth without dampening enthusiasm, and spending that energy is its own opportunity cost.

    But it does feel a bit different than before. I don't recall seeing as many less-helpful "here's what a Stack Overflow post said" messages in threads years ago. It did and does happen, but I think it is much more common with LLMs.

    Thankfully, that is just a case of someone actively trying not to be lazy: trying to help, using the resources at hand. Decades ago that would have meant looking in some old docs or textbooks or specs, years ago it would have been googling, and now it's something else.

    I think the accessibility and availability of answers from LLMs for such "quick research" situations is the culprit, rather than any particular decline in developer behavior itself. At least, in this case. I'm sure I would have seen the same rate of "trying to help but getting in the way" posting years ago if Stack Overflow or Google had had a way of conjuring answers to questions that simply didn't exist in their corpus.

    I think the "in the old days" sentiment from the author somewhat divides things into a before/after AI situation, but IMO it has been a gradual gradient as new tooling and information sources have become available. AI is another one of those, and a big jump, but it doesn't feel like a novel change in "laziness" yet.

    Though, there's been some big pushes towards relying more and more on various AI code reviewers and fuzzers in my area. I feel like that is an area where the author's concerns will come more and more into play - laziness at the boundary layers, at the places of oversight; essentially, laziness by senior devs and leadership.

  • StarterPro an hour ago |
    They are, and the glut of A.I. products will only make it worse.

    Look at the election; this is not a country of intelligent people.

    If you think the upcoming programmers aren't outsourcing all their work to ChatGPT, I got a bridge to sell you.