Some of that extra time was a result of learning the craft of using LLMs in this manner. With each new technology stack, that excess amount of time decreased.
Now, let's be honest, you still have to have excellent development skills, and it helps immensely to have a good knowledge of software architecture and design. I say that to point out that non-developers could not achieve these results. In the right hands though, it can make you a lot more productive.
The problem with this is that you don't know what you don't know. I've actually used Claude to build a project in a framework I hadn't worked with before, and it got me pretty far. But once I got it reviewed by someone who actually knew the framework, it turned out to be terribly structured: it didn't use higher-level constructs to organize things and wasn't using idiomatic approaches, all stuff I could easily confirm myself after going through the official docs.
I think using LLMs to skip learning from official sources is a bad move. You're bound to hit a wall with the LLM (at the very least, the training-data cutoff could predate a change in the framework), and then you'll have to deep-dive into the docs and figure it out anyway.
That is to say: sure, you can pass the interview with an LLM alone, but you'll suck at the job. If you invest some effort upfront to learn from traditional sources and then use an LLM to speed you up, it's going to pay off really fast IMO. The more I know about what the output should be, the more useful LLMs are.
You say the docs contain guidance on structuring, high-level constructs, and how to make things idiomatic. It would be an interesting test to hand the unfixed revision of the code to an LLM while also giving it the docs, and say “make any fixes to make this conform to standards of the framework and libraries”.
If it picks up the same things that’s great news for novice programmers and anyone new to a framework!
LLMs will improve at making these fixes over time - even if they’re currently bad at it that won’t last.
Also, without reading the docs, would I be able to spot from a single use case why the idiomatic approach is better?
You're basically depending on the LLM to make choices for you, and in my experience so far it makes very suboptimal ones.
The focus on "terrible structure" misses the point - what matters is whether the system meets its requirements efficiently and can be maintained effectively. Have you measured any actual negative impacts on system performance, maintainability, or customer value? My experience suggests that starting with working code and iteratively improving it as patterns emerge often leads to better outcomes than delaying development for complete framework mastery.
The interview analogy is particularly misleading - success in a software engineering role isn't measured by framework knowledge, but by the ability to deliver value to customers effectively. Learning framework idioms can happen in parallel with productive development, especially when using LLMs to accelerate the initial learning curve.
Emphasis on maintainable.
The truth is that if you are not following framework idioms, you are very likely not delivering maintainable software
Except that using the original approach would make it hard to navigate the project after a few weeks of development - duplicate logic all over, inconsistency in duplicates, bugs. Rewriting the code to actually use the framework as intended let me progress much faster and reuse the code.
And as someone who has had gigs with 6 different languages/stacks at this point, and played with probably as many on the side: that's a nice sentiment in theory, but it doesn't reproduce in practice. There's a definite learning curve when using a new stack/language. Sure, your experience makes that curve different from a newbie's, but in my experience it's going to be a few months until you can move as quickly as you can with a stack you're familiar with.
To a first approximation, you will never be the person maintaining the code you wrote. So if that person who takes over from you, who is well versed in the idioms of the framework in question, cannot easily understand what you have done and why, then it isn't maintainable software.
I can't help but feel that while these tools speed everyone up they actually increase the difference between an expert and a novice rather than leveling the playing field.
He also has some good content out there on how to program using LLMs. He calls it Chat Oriented Programming, aka CHOP. That's a good term to Google for.
My previous company's lawyers had real success writing complex contracts with LLMs.
The writer one really hits home: a friend who's a writer is able to make amazing stuff with ChatGPT, because he knows how to do it himself. I can only make stupid shit, because while I can recognise quality writing, I don't really know why it's good.
Similarly, I see the same thing happening with junior devs: a lot of them just can't get the answers I get out of Copilot or ChatGPT. I have to sit next to them and tell them what to ask the LLM, or when to say no to a proposed solution.
Similarly, I can get started myself on Computer Vision and Machine Learning projects with LLMs, but I need to hand over to a proper CV or ML Engineer to get the last 10% done.
I too am using AI to play with new languages and problem domains, and to write tools I've long wanted but never found time to implement. I find it helpful to ask other LLMs about best practices, since there's no conflict of interest when they're not the ones writing me the code.
In my initial experiences with the Windsurf Editor, the AI got ahead of me and then got into trouble it couldn't resolve, thrashing in ways my daughter learned to avoid in her first semester of programming. I've found Claude 3.5 Sonnet understands the problems I work on best, so I'll use Cursor for access to that model. Workflow enhancements are secondary to the underlying intelligence.
I think junior devs could easily get lost in the output if they're not familiar with what it's doing and why it's doing it.
HN discussion of original a few days ago: https://news.ycombinator.com/item?id=42617645
The rationale being that it tries to reduce click-baity content by making titles a bit more serious/boring.
Really puts the whole thing in a different perspective, knowing he's FUCKING SELLING ME SOMETHING.
I'm so sick of this!
The very act of issuing a disclaimer can create a sense of distrust. It suggests that the author anticipates skepticism or potential backlash, raising questions about their motives and the integrity of their work. This is particularly problematic in academic settings, where disclaimers can undermine the authority and rigor of research findings. While acknowledging limitations is crucial for responsible scholarship, the way these limitations are framed can significantly impact how the research is received and interpreted. A carefully worded disclaimer can acknowledge potential weaknesses without undermining the overall strength of the argument, while a poorly constructed one can create unnecessary barriers to engagement and understanding.
For this reason, I created Craft Your Disclaimer, an AI-powered service to help creators craft disclaimers tailored to their goals and audience preferences. You can register for free at: CraftYourDisclaimerOrSomeOtherBS.ai