When it comes to React itself, the only breaking change I ever experienced was 17->18, and that was such a simple fix it's not worth talking about.
I'm dealing with a massive migration at my job where the schism caused by libraries moving from node-sass to dart-sass turned something as simple as updating Bootstrap versions into a year-long effort of moving a dozen core libraries to their latest versions simultaneously. And the result? It's not like our product will be significantly faster or gain some new features; all we get is our app not breaking because the deps shit the bed.
I think the lesson we can learn from the frontend community is that not having a robust standard library is a failure of JavaScript, and this problem will persist until we stop caring about backwards compatibility.
Will we as software engineers continue having discussions about supporting browsers from 1990 in the year 3025? I hope not, because bad decisions were made then and they have been compounded since.
edit: clarity
I don't think "not caring about backwards compatibility" is the issue, though? Quite the contrary, constantly forcing migrations because things have changed hurts far more.
It really is WSDL all over again.
Also not sure how OpenAPI is supposed to support AWS as a target. Do you mean that AWS doesn't support OpenAPI specs for their JSON HTTP services? Pretty sure that's because they use Smithy: https://smithy.io/2.0/index.html
As for supporting AWS, API Gateway has made some effort to support OpenAPI. It is a lot like the documentation problem: close enough to be a pain in the neck.
So the question is ultimately: what did it help me do? I had a pleasant feeling of following standards at the start, and that is about it. Nothing disastrous, as far as I recall; just a lot of paper cuts from things not working as the documentation promised. I grew to call it aspirational documentation.
I have never worked on a project where the only dependency was React (let's ignore build and testing tools for the sake of argument). What I mostly see are projects that captured the React zeitgeist of the time with regard to which "popular" libraries were recommended, and people just copied them willy-nilly.
Maybe this is more an indictment of software development in general, where professionals are not allowed to design and engineer robust solutions because the alternative is getting fired when John Dev completes more tickets in a sprint by downloading a bunch of bloated npm libs that will break in two years, by which point he'll have job-hopped to the next place to continue the cycle.
Just because you haven't seen it doesn't mean everyone who has must be new to the field.
You can't have an earnest discussion about react if you're going to argue that no one pulls in a myriad of other dependencies. Even the react docs recommend you use frameworks when starting out:
https://react.dev/learn/start-a-new-react-project
Your usage, while quite admirable (I earnestly mean that too, I wish I was on a team that was disciplined enough to only use react and nothing else), isn't the common experience.
It was presented as a challenge to the status quo. As you point out, a majority of developers don't ever think twice about including everything and the kitchen sink. The idea that you don't have to do that may not be novel information to you, but is to a large number of developers. If they don't hear it here, where are they going to hear it?
Last week I shipped another new web app[1] I got an idea for in <24h to closed alpha testers and in <7d to a public beta with no issues or fanfare.
Perhaps part of the increased velocity is my having gone deep and learned the ins and outs of this stack over multiple years and projects, but it is important to note that I probably would never have gained this level of mastery over these tools if I were constantly being hit with what the author calls "dependency management fatigue".
[1]: https://blucerne.app
The high level of API stability and lack of churn in the Actix ecosystem makes the book a particularly good investment for someone looking to settle on this stack in my opinion. In keeping with the topic of this submission, I doubt I'd be comfortable spending money on a similar book about building web apps with React.
[1]: https://www.lpalmieri.com/posts/2020-08-09-zero-to-productio...
My finger is hovering on the "Buy" button but I have to ask: how up to date is the book to the current Actix version / API?
EDIT: Frak it, I just bought it. :D
> Roughly every three months, the book is updated to keep up with the latest developments in the Rust ecosystem. In particular, we make sure to update all the crates we use in the book to their latest released version. If you bought a copy of the ebook, you can get the latest book revision at any time by redownloading the content from here.
I do have to work with React from time to time. But it isn't my main focus. I usually work implementing backend systems (with Go, SQL [Postgres], Redis, etc.) and infrastructure as code with Terraform.
I was very lucky to get referrals from a few folks that I social dance with in Seattle after that layoff, and I ended up with a job offer in a product area I knew very little about (networking, routing, programmable packet processing middleware with eBPF etc.)
Despite my lack of domain knowledge, I was told I received the offer largely because of my demonstrated proficiency in Rust (I did all the interview whiteboarding sessions in Rust) and, although this isn't something measured "officially" in the interview process, because there are many hours of me live programming online[2], which let people feel confident that they weren't hiring a dud who had hyper-specialized in passing interviews that aren't representative of real-world workloads.
I generally advise people in your situation to start looking further down the world's dependency tree, where things churn less frequently, and where the skills you acquire will last longer. This can be easier said than done, but since my very first job was as a React developer, I can at least share my path down the dependency tree:
Frontend (React etc.) -> Backend (web APIs) -> Infrastructure / Platform / DevOps (started with a cloud automation focus, moved gradually towards bare metal) -> Networking (I'm in ur VPCs, directing ur packets)
All of this being said, the job market right now is very tough. I doubt I could walk out of this job and into another within 3 months like I did this time last year.
[1]: A lot of my technical blog posts on https://lgug2z.com/articles around that time refer to this layoff
I actually don't agree with going "further down" the dependency tree; higher up, you're more likely to be exposed to new concepts and stay agile as a developer. I moved from a backend .NET background into frontend and have found the faster pace more refreshing with the evolving web and mobile platforms. Staying with .NET would have had me writing the same EF code over and over, almost in a time capsule, at least judging by conversations with old colleagues.
I feel like I'm more valuable now, because I've been through a few tech stacks and understand the benefits and drawbacks of each.
Obviously, managing complexity in templates is something people solved years and years ago, but of course now, with 99% of frontend resources (blog posts, video content, courses, etc.) being SPA-focused, resources on doing it "the old school way" are scant.
Do you know of any resources for designing web-apps of intermediate complexity with SSR templates? Or even just good resources from the HTMX + SSR camp, aside from the HTMX fellas?
90%+ of what you need for a web app that you're building as a one-person show can be handled by storing state in the URL - people these days can be quick to forget that this is how large and complex web apps like GitHub have run for many, many years.
With regards to complexity in templates, I find that you can cut down on a lot of that complexity if you enforce type constraints in your templating engine's context object.
Getting in a habit of doing this early gets you thinking about ways to shift things that require conditional logic to the pre-templating stage, and more often than not it leads you to breaking down bigger templates into smaller partials and using them to compose different template variations for different views. I like this and it makes a lot of sense to me, but I'm not sure how it would feel to people who have only ever written JS SPA web apps.
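One framework-free way to read the "type constraints on the context object" idea: validate and precompute the context before it reaches the template, so conditionals move to the pre-templating stage. The shape below is invented for illustration, not taken from the commenter's project.

```javascript
// Build a validated, precomputed context for a hypothetical search template.
function buildSearchContext({ query, results }) {
  if (typeof query !== "string") throw new TypeError("query must be a string");
  if (!Array.isArray(results)) throw new TypeError("results must be an array");
  return {
    query,
    results,
    // Computed up front so the template never branches on results.length.
    hasResults: results.length > 0,
  };
}
```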
[1]: https://notado.app
[2]: Example of the only place I use HTMX in Notado (to show search results as you type): https://www.youtube.com/watch?v=KMxmf132-8k
The downside is that Maud isn't proper HTML, but the benefit is that I can use normal Rust to format my variables etc. into whatever string format I need, rather than deal with a constrained templating language. It feels like writing an API that happens to serve HTML instead of JSON.
I've debated using Askama to avoid the extra requests, but there's something nice about just serving static HTML files.
The only downside I see is that the binary gets big pretty fast, especially with Askama, which is basically a templating engine like Tera but compiles templates into the binary so you don't have to copy templates around on your server.
I haven't worked with SSR much before, but it seems it's harder to cache pages too.
[0]: https://ulry.app
I'd say caching can go into a different layer entirely.
Things may be better today, but that doesn't mean they're necessarily good. Even after climbing a couple circles away from the very bottom of dependency management hell, we're still condemned to endless suffering.
As someone who does a good amount of frontend dev, I feel like the "dependency management fatigue" sentiment, which is very common on this forum, is way overinflated. Just keep your dependencies to a reasonable number, pick solid dependencies that don't break their APIs every year, and don't upgrade just for the sake of it. Like you surely do with your backend environment too.
That is not my experience. Maybe you could share with us some examples?
My most important dependencies, after React itself and TypeScript, are react-router (which released v6 in 2021) and react-query (which released v5 in 2023). I don't remember other major breaking changes in recent years, at least with the dependencies I'm using.
And, more power to them, but at that point I'm not going to willingly rely on the project for anything.
Did they ever? Wouldn't we just do <form onSubmit={handleSubmit}> and then in the handler we just grab the FormData, send it to the server, get back the response, then update whatever? Doesn't seem like it should be that hard?
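A sketch of that exact pattern; `handleSubmit` and the `/api/items` endpoint are hypothetical names, not from any real codebase:

```javascript
// Collapse a FormData instance into a plain object (repeated keys would
// overwrite each other here; good enough for simple forms).
function formDataToObject(formData) {
  return Object.fromEntries(formData.entries());
}

// The submit handler: stop the full-page navigation, grab the form's data,
// send it, and update whatever state needs the response.
async function handleSubmit(event) {
  event.preventDefault();
  const body = JSON.stringify(formDataToObject(new FormData(event.target)));
  const response = await fetch("/api/items", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body,
  });
  return response.json(); // ...then update state with the result
}
```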
Yet so often I have these thoughts that (feel free to replace react with anything):
- Yeah why was that in react anyway?
- Is the fact that the project grew into a mess a React thing, or just the typical story of a project developed over time turning into a mess because humans?
- Are these problems "react" problems or ... choices?
- Is the new system better fundamentally, or better because everything got re-written after the fact / all the lessons learned were applied to it from the start?
Sometimes there are answers in the articles, sometimes not.
And the root of the problem is peer dependencies and the JS community's lack of backwards compatibility and maintenance.
Take any decently-sized JS application, React or otherwise. Put it on GitHub. Turn on Dependabot. Watch your pull requests go up by 5-10 per week just to bump minor versions, and then watch how one of those PRs, every single time, fails because of a peer dependency on a lower version.
This has been a problem forever in the community, and there's no good solution. There's also just no feasible way to make a solution due to the nature of the language and the platform itself. You just have to absorb that problem when you decide to use eg Node for your backend code or React/etc for your frontend code.
If anything, ECMA should just absorb more lodash functions into the standard lib, like they've gradually done with some of the array functions. But common things like that shouldn't be up to each individual programmer & team to reinvent all the time. It just needlessly expands the maintenance surface and causes subtle bugs across teams & projects.
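For reference, a few lodash staples the standard library has since absorbed (the lodash equivalents in the comments are from memory):

```javascript
// _.flattenDeep
console.log([1, [2, [3]]].flat(Infinity)); // [ 1, 2, 3 ]
// _.last
console.log([1, 2, 3].at(-1)); // 3
// _.fromPairs
console.log(Object.fromEntries([["a", 1]])); // { a: 1 }
```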
JS Date is in an even worse place. If you ever need to work across time zones on both the server and the client/browser, native JS date is totally unusable because it "loses" the original time zone string and just coerces everything into (basically) utc milliseconds. The Temporal API is supposed to fix that, but I've been waiting for that for nearly a decade: https://tc39.es/proposal-temporal/docs/. That proposal links to https://maggiepint.com/2017/04/09/fixing-javascript-date-get..., which explains some of the weaknesses of the current JS date system.
As you say, some of them are built in and those should just be used instead in most cases. The problem is that when you leave the choice of library to use, then the choice isn't always obvious, especially for niche use cases. A library that's well-maintained today may not be so tomorrow when the maintainer falls ill, gets burnt out on work + open source work or simply gets bored of the project.
Deno has the right approach in this regard where they are creating standard libraries to go with their runtime which are expected to be maintained in the long term, but even then I'd still prefer built-in APIs in most cases.
While Angular feels big and heavy because it's a batteries included framework, React feels simple and quick on the surface, but as the article and the discussion show, it comes with its own price.
Angular and React are not directly comparable.
It's easy to think that a new tech stack is somehow more complete because there are fewer add-ons and no vulnerabilities have been discovered yet.
It doesn't matter!
If I'm building a personal project, I don't have the same time to curate a full ecosystem stack, and nobody in the React ecosystem is maintaining those for applications that are put aside for weeks or months at a time.
As for me, I just restarted a personal project on rails because of its batteries included mentality - it means I can limit the number of dependencies, and they have gotten very good at migration paths and deprecations.
https://github.com/rails/rails/blob/main/Gemfile.lock
Just a playful comment - not challenging your experience
In contrast, I have work apps made in React that need regular piecemeal updating — routers, form libraries, query managers, CSS — because we’ve chosen to cobble that together ourselves. That’s fine, that’s the path we chose knowingly when we picked the tech we picked, but the point isn’t that frameworks don’t have dependencies — it’s that they take on more of the burden of managing them for you.
It replaces "React soup of the day" with a more standard "recipe" shared by most Next projects – like "Grandma Vercel's secret React minestrone", I guess. But yes, projects would typically still add their own "spices" on top of those basics.
I'd say the Clojure and Java ecosystems are miles ahead of Rails as well, but for this project I don't want to play in that garden.
As a user of modules, if you can detect such a module, you can choose not to use it and save yourself all that future trouble.
Now. Let's see. How many times has react's major version number changed?...
Yes, it's not only react, but boy are they an enthusiastic leader of this approach.
but it's still pleasant imo
Left-pad wasn't a problem because of browser constraints, it was a problem because of culture and to some extent discipline.
react-router might be one of the best examples (or the worst, depends on how you look at it), and it's unfortunately very popular, even though sane and stable alternatives exist (like wouter).
Should a library become compromised with a vulnerability, fine (if said vulnerability is relevant to your usage). If you need a feature only available in a newer version, fine (I’m counting better performance as a feature).
What I’m seeing far too much of is upgrading for the sake of it. It feels like such a waste of dev time. Pinning dependencies should be absolutely fine.
Frequent updates allow you to address the breaks gradually rather than all at once.
JS is just awful, though, because of the sprawling dep tree. I get why devs would prefer pinning as any one of the 1000 deps that get brought in could need an update and code changes on any given day. A sticky static version requires less daily maintenance.
You could make the same argument for any kind of code quality efforts. Frankly I think this site probably leans too far into a high-quality mindset, but apart from anything else good programmers won't want to work on a codebase that isn't seen as valuable and treated as such.
As a web developer I see so many CVEs in mature stacks, and every so often they really do apply to our work. It is hard to avoid updating, unless you kind of pretend those vulnerabilities don't exist or don't apply (honestly, the vast majority of devs and small orgs do just that). Even monitoring and deciding which vulnerabilities apply is a recurrent 'waste' of time; sometimes you might as well just do regular updates instead.
One issue I often see is that if you do your job well, any time sunk into security can, by definition, be seen as wasted. Until that rare moment comes when it is not, and it suddenly transforms from wasted time into a business-critical or even business-ending death crunch.
1. Process. As a guiding principle, it is easier to make frequent small steps rather than one big step. There are many reasons for this, and the benefit of frequent small chunks of work apply beyond updates. 2. Security. Frequent updates can improve security posture, for different reasons: you apply undisclosed security fixes without knowing it (not everything is a CVE), prevent unnoticed vulnerabilities (this can be fixed by automated monitoring) and when there is a time-critical upgrade, the work is faster and less risky (see previous reason).
Pinning and updating reactively would be fine, and sometimes is. However: there will be security issues, and you will have to update. Given that the task is hard to avoid, for any product that is actively maintained and developed I think the better choice is to do regular updates regardless of security issues. Maybe with good monitoring, and for products that are really not being developed further, reacting only to security issues is the better choice; it's often a pain too, though.
It's not much of a problem when you use it yourself, but it is when your dependencies do.
The result is what TFA describes.
Settled on using Go HTML templates, Starlark and HTMX. Go has a great track record of not breaking backward compatibility. Go templates are widely used by ops teams; any breaking change there would cause ops teams to revolt. Starlark is fairly widely used by build systems (like Bazel); any breaking change there would cause build engineers to rise up in arms. The HTMX 1.9 to 2.0 upgrade was also painless: the only change required in my test apps was updating the way the websocket extension is resolved.
This is also why it’s slow and memory hungry: it’s not just the inherent inefficiency of the virtual DOM but also that having such a deep tree makes it hard to simplify - and since interoperability makes it cheaper to switch away, framework developers have conflicting incentives about making it easier.
I was around when vDOM was being called optimized.
I was also around when they called it DHTML.
Get off my lawn!
I even made my first website on that monitor (complete with animated gifs and <blink>, of course) - and seeing it finally on a color monitor was... interesting.
To this day I wonder if this particularly strange choice of a serif font that is very clearly intended primarily for printed documents rather than on-screen legibility is why this entire notion of using user-selected fonts for web pages has largely withered. What if they went with, say, Verdana instead?
I was around for DHTML days, and as I recall, it was just a generic term for the ability to manipulate the actual (not virtual) DOM programmatically from JS.
It is unsurprising to me if the router library is the first accused. When I was starting with a new project where I am using React, I went through a bunch of router libraries. There are tons, it seems like a low-hanging fruit with many implementations and many people trying to make a living off theirs (can’t blame them for it, unless they make changes for the sake of making changes and to incentivise people to pay for support). Ultimately, I found something off in every one, so I… just decided to not use any!
That is the thing, React is a small rendering library[0] and you are free to build whatever you want around it with as many or as few dependencies as you want. If the ecosystem is popular enough, there will be dependency tree monsters (simply because the ecosystem is extensive and using many dependencies allows package authors to make something impressive with less effort); switching to a less popular ecosystem as a way of dealing with that seems like a solution but a bit of a heavy-handed one.
[0] Though under Vercel it does seem to suffer from a bit of feature creep, RSC and all that, it is still pretty lean and as pointed out has two packages total in its dependency tree (some might say it’s two too many, but it is a far cry from dependency hell).
Personally, I avoid React because I don't want a compile step. I do everything I can to avoid one. And if I do need to use a framework like React, I prefer to isolate it to exactly where I need it instead of using it to build a whole site.
For the second part: a couple of times, when I had to add a bit of purely client-side reactivity to something pre-existing but did not want to introduce any build step, I simply aliased createElement() to el(). That said, I personally prefer TypeScript for a project of any size, so a build step is implied and I can simply not think about converting JSX. Webpack triggers bad memories, but esbuild is reasonable.
And yeah, inline types are more verbose, but I prefer to use .d.ts files for definitions and then declare with a comment (vim lets me move to definitions with ctrl-], which is nice).
I also come from a Go background so I actively don't like using the more esoteric and complex types that typescript provides.
Sorry, can't agree. React is a state management library that also implements efficient rendering on top of the DOM diff it computes as it propagates the state changes.
This allows React apps to remain so simple (one mostly linear function per component) and so composable without turning into an unmanageable dish of callback / future spaghetti.
There are a number of other VDOM libraries, but what sets React apart is the data / state flow going strictly in one direction. This allows you to reap many of the benefits of functional programming along the way, like everything the developer sees being immutable; not a coincidence.
Regarding the size, preact [1] is mostly API-compatible, but also absurdly small (3-4 kB minified), actually smaller than HTMX (10 kB). But with preact you likely also want preact-iso, so the size grows a little bit.
I suspect HTMX also does not come with every possible battery included, judging by the proliferation of libraries for HTMX-based projects. Modularity is a strength.
Lately I have started removing these libraries where possible, and maintainability has improved a lot.
Thankfully, at my latest place there was no one when I arrived, and I haven't seen a more productive team doing frontend.
I found this really noticeable while traveling over the summer with limited bandwidth: the sites which took 5 minutes to fail to load completely all used React or Angular along with many, many other things posturing at being an SPA, while the fast sites were the classic server-side rendered PHP with a couple orders of magnitude less JavaScript. It really made me wonder how we've gotten to the point where the "modern" web is basically unusable without a late-model iPhone and fast Wi-Fi or LTE, even when you're talking about a form with a dozen controls.
A designer who has a solid understanding of hypermedia and puts its principles first would be worth their weight in gold to a team who wanted to move away from the React ecosystem.
At one of my previous jobs, the main product was 100% pure JavaScript (using AngularJS), with a few (vendored) third-party scripts, and it was very nice to work on.
No package.json, no dependency issues, and above all, the workload we had was always related to business, and almost never to an external technical constraint such as a deprecated dependency.
The lack of pre-processing steps, combined with good CSS and a well-formed DOM, made it one of the rare projects in my work history that didn't create any rewrite envy.
AngularJS or not, the main point is that avoiding piling layers of tooling that might force you to an upgrade for purely technical reasons was a nice experience.
Every single footgun from PHP, but wrapped in nice syntax so people don't realize they're in danger.
I'm deeply impressed by the JavaScript community. What they have managed to create using that language is amazing.
But please people, do yourself and everyone who ever has to touch your code base a favor and use TypeScript.
If you are one programmer it may work, but many of us work in teams. JS is horrible for that as you need a lot of discipline (which often does not carry over well from team to team) in order to write "good JS".
We use Elm now. Elm compiles quickly and is designed to map well to JS. We use Elm libs, but not nearly as many as in the unholy React+jQuery (yes, that's a bad idea) code it replaces.
All is compiled into one bundle. For the browsers the result is much less to download. For us devs it is a very different development flow: once the compile errors (shown in the IDE) are gone, it just works.
Compared to the loads of runtime bugs in JS, we are confident this is a huge step forward and a good foundation to build on top of.
Reduce your update frequency. A lot of the updates to these libraries are trivial, which is both good (fast updates and releases are good, many open source contributors are good) and the cause of the high update frequency. But it's fine to run a month behind; actually critical issues are few and far between. If these projects have their semantic versioning right, you should be able to see whether updating them once a month requires a lot of work.
The fear, which is justified, is that waiting too long with updates means these compatibility problems add up. Especially when the ecosystem was still figuring itself out and did major backwards-incompatible rewrites (remember Angular 2?) this was a major issue, but it seems to have eased off a bit. Last big one I've run into was when eslint decided to change its config format, and given ESLint's old config could get pretty convoluted already (especially in a monorepo with partially shared configuration and many plugins), changing that was effectively rebuilding the configuration from scratch.
Anyway. I frequently look to the Go ecosystem and attitude for things like this. And it's had an impact on the JS ecosystem too: it was only after Go came out and said "use gofmt, fuck your opinion on formatting and fuck spending time on trivial shit like that" that the JS and other ecosystems followed suit with e.g. Prettier and Biome. I unfondly remember peppering code reviews with dozens of "this single quote should be a double quote" and "there should be a newline there". Such a waste. Anyway, the Go ecosystem mindset is a healthy one. Go the language gets a lot of justified criticism and it's not for everyone / everything, but Go the mindset does a lot of things right or better, for less developer frustration, better future proofing, and more maintainable software.
htmx 2.0 has been released!
there is no escape! ;-)
React Table and React Query are powerful but end up simultaneously doing too much and not doing enough, because their boundaries are in the wrong place.
What’s wonderful about React is that it’s _not_ a framework. It does one thing well, and then stops at a well thought out, well documented, well tested boundary.
I try to only adopt libraries that also meet that standard. It means you have a lot fewer libraries you can lean on, but it means the API surface you build on will be more stable for longer.
I'd love to know what that is and I'm a long term React user myself.
Go play around with Angular 1, or BackboneJS, or try building a working SPA with jQuery, and you'll get a sense of the breakthrough that react represented in 2013.
I have used these in production (and MooTools, Prototype, and many more), and they were novel breakthroughs at the time too.
My point is that React is no longer a simple transform from state -> UI. With fibers, concurrent rendering, suspense, server components, hooks and actions, it is a much wider framework than the one you remember from 2012.
Clearly not long enough.
Hooks, actions, server vs. client components, the switch from class components to functional ones, the fad of HOCs and render functions.
React today is not the same React I started with.
- UI rendering
- State management
- Component lifecycle
- Event handling
- Data flow
- JSX templating
In that sense React truly is a comprehensive UI framework, not a single-purpose library. Counter-examples of "do one thing well" are Lodash and Axios.
I’m willing to hear arguments about the merits of how React approaches these issues, but I would want any frontend UI library for generating and updating DOM trees to address them in some way.
Its only true feature is view = func(state); everything else should be outsourced to something else.
To anyone reading, mobx is a more generic tool for any type of state management.
And starfx seems to be a data-fetching + state-holding library that provides hooks for that data. Very unique, and it looks like someone cared to make something nice for that kind of problem.
Can you share what those libraries are? Not to critique your choice but to actually use.
React Laag
Apollo client+server
Vanilla Extract
Visx
Downshift
React Router
It's not whether or not it's responsible. It's whether you suffer this pain if you choose a different framework/platform/language.
Coming from a Django/Python perspective - the Javascript ecosystem post-npm just feels pathological in this regard.
My only issue with Go is getting the old bootstrap C version compiled for a new port... so it can build a modern release. In that area, the whole paradigm of needing only Go hits hard as a dependency reality check in some use-cases.
JavaScript frameworks just follow a well known trajectory... =3
Genuinely curious — how often have you had to do that? What platforms are you running that Node has binaries for but Go doesn't?
I've been writing Go for 12 years and it's never taken more than a couple minutes to download and install the latest version.
If you are porting Go, then you may find it has a legacy dependency on the deprecated version (rarely used except for this porting use-case), which is a problem on some platforms.
To list Node issues would be pointless, as most already re-discover them within minutes. =3
I also ditched react on my side projects but for a whole different set of reasons.
I think that is a very valid criticism. I wanted to keep the article concise, but I should have elaborated more on why Go+HTMX+Templ solves the dependency management fatigue.
As I said in the article, it is mainly anecdotal evidence, i.e. the experience of having to maintain projects with either React or Go+HTMX. For example, in the Go+HTMX project I handle state management and routing solely with the Go stdlib (which is very, very stable IMHO), so I never have to worry that a dependency update will force me to perform painful refactoring work.
Maybe in a future article I can expand on these points, thank you for the feedback :)
In fact, it reminds me of Joseph Tainter's theories in Collapse of Complex Societies. Additional civilizational complexity adds value until it starts producing negative marginal returns and then the complexity collapses and reverts to simpler forms.
It feels like a massive weight off my shoulders. The SPA felt like a ticking time bomb.
Now with the SPA it's very possible to have a Vue2 -> Vue3 situation, or just someone pulling a left-pad. Not to mention the build system requiring specific versions of nodejs, etc. And this is just to keep things running, not to speak of adding new stuff.
It's not even really comparable.
Wow, they reinvented PHP!
This looks more like .NET Razor, but for Go.
HTMX v0 to v2 in 5 years:
https://www.npmjs.com/package/htmx.org?activeTab=versions
Enough of a breaking change that they have a v1 to v2 migration guide: https://htmx.org/migration-guide-htmx-1/
Templ is still on v0 with no breaking changes yet, but lots of minor and patch releases:
https://pkg.go.dev/github.com/a-h/templ?tab=versions
To compare with the original packages they were complaining about:
Wouter: v1 to v3 over 6 years https://www.npmjs.com/package/wouter?activeTab=versions
TanStack/react-query:
as npm package: 'react-query': v0 to v3 over 5 years
as npm package: '@tanstack/react-query': v4 to v5 over 2 years
https://www.npmjs.com/package/react-query?activeTab=versions
https://www.npmjs.com/package/@tanstack/react-query?activeTa...
2 Examples:
1. "Convert any hx-on attributes to their hx-on: equivalent: [..] hx-on="htmx:beforeRequest: alert('Making a request!') [..] becomes: [..] hx-on:htmx:before-request="alert('Making a request!')" Note that you must use the kebab-case of the event name due to the fact that attributes are case-insensitive in HTML.
2. The htmx.makeFragment() method now always returns a DocumentFragment rather than either an Element or DocumentFragment
> So your brain must've just ignored the code diffs above when responding, because it likes htmlx so much.
It's less about whether someone likes HTMX or not and more has to do with none of those points in the upgrade guide being relevant to, or impacting, the author or most other people.
I don't know how many times I've had to deal with breaking changes introduced for trivial reasons: making an API prettier, renaming a few functions or parameters here and there because it suits the author's aesthetic sensibilities.
They're of course perfectly free to do this and being open source they don't owe anything to anybody, but I still wish that there was some degree of responsibility towards the end user. Or else why even release the code publicly? End users don't care *at all* how pretty the API is, we just want things to work.
So folks who would rather not have to write the functionality themselves can use it (like myself). Nobody is forcing you to use any of this.
Routing, state management, auth, components, theming, API access, and more are all still problems that people add libraries for and those problems don't go away just because you've abandoned the ecosystem with the most libraries.
> Some of the worst offenders in this respect were wouter (a React router package) and TanStackQuery (which I was using to fetch, cache and manage state from the backend).
Ok so don't use wouter and TanStackQuery...?
https://htmx.org/essays/a-real-world-react-to-htmx-port/
It's mainly an interactivity/simplicity tradeoff, sometimes the right trade other times not. A lot of people are using JSX on the server side w/htmx because it's a good and familiar templating option on the server side.
Though my comment belies it I'm a big fan of the work - I started following Intercooler a decade ago and I'm happy to see how far things have come.
I'm curious about HTMX + JSX, any recommended examples there?
A lot of this goes away if you choose a server-side framework that handles its own routing, auth, api, templates etc. And the state management also goes away if you don't need complex stateful widgets on the frontend.
Browsers come with a very limited selection of widgets, almost everything I make requires at least one custom widget, usually significantly more than one. How can you possibly know when you start a project that you won't need to make any complex stateful widgets?
Islands Architecture is really not complicated. The bulk of your app can be very simple hypermedia exchanges and components and when you need a really fancy widget, load it and mount it where it needs to be.
> JSX is a really elegant way to avoid templating
JSX is still a templating language. It's just an "inverted one" where the templates are embedded in scripts rather than the other way around. That said, I do think it is a very elegant templating system, especially because it can be type checked with Typescript. TSX is a massive improvement on most template compilers in part because it has such a massive types ecosystem today.
(My own efforts in "post-React"/"post-Angular" have been TSX-based. I've got a Knockout-inspired view engine with a single runtime dependency on RxJS. It has a developer experience similar to React, but isn't a virtual DOM, and has some interesting tricks up its sleeve. I'm really happy with TSX as the template language for it.)
No, it isn't a templating language, it's still just JavaScript - calls to createElement or _jsx or other function with some syntactic sugar to make it look like HTML.
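That desugaring can be sketched without React at all. This is a toy illustration, not the real `_jsx` runtime; the names `createElement` and `renderToString` here are stand-ins:

```javascript
// Minimal sketch of what a JSX compiler emits: the markup-looking syntax
// becomes plain function calls, producing ordinary JS objects.
function createElement(type, props, ...children) {
  return { type, props: props ?? {}, children };
}

// <a href="/docs">Read <b>this</b></a> compiles to roughly:
const tree = createElement(
  "a",
  { href: "/docs" },
  "Read ",
  createElement("b", null, "this")
);

// A toy renderer, just to show the result is "just JavaScript" all the way down.
function renderToString(node) {
  if (typeof node === "string") return node;
  const attrs = Object.entries(node.props)
    .map(([k, v]) => ` ${k}="${v}"`)
    .join("");
  const kids = node.children.map(renderToString).join("");
  return `<${node.type}${attrs}>${kids}</${node.type}>`;
}

console.log(renderToString(tree)); // <a href="/docs">Read <b>this</b></a>
```

Whether you call that a template language or syntactic sugar is exactly the argument in this thread.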
isn't that what a templating language is?
[1] https://vuejs.org/guide/essentials/template-syntax [2] https://ejs.co/
From where I'm sitting it looks like maybe you're describing a difference in complexity levels? Vue compiles to a representation that is less obviously related to the template? Or what?
I, like OP, am genuinely unsure what distinction you're drawing. It seems like you might just feel that templates are icky and jsx isn't icky.
JSX is a language that takes XML-influenced templates embedded in JS files and compiles that to JS files. EJS is a template language that can embed JS snippets and compiles that to JS files (or interprets it at runtime, though the distinction between compilers and interpreters is largely irrelevant here). They both have the same general compilation target, and they both perform a similar transformation from an original document to executable code. The biggest difference I mentioned is an "inversion" of what people think of as a template language (the template being the "focus" and scripting it being secondary/embedded), but I don't think that disqualifies it as a template language.
Subjective feelings of "syntax sugar" or not, JSX is a language intended to write templates in. That's a "template language" by tautology, if not also by definition.
https://htmx.org/essays/a-real-world-react-to-htmx-port/
htmx triggers and responds to events, which can be used to integrate it into more interactive experiences, e.g.
After 4 years of mature devs taking it for a spin and reporting back with a thumb up, maybe it's best to try to actually use it in a project of your own and see how close your predictions match reality.
Perhaps you even have experience with it of course. In which case, it'd be interesting/useful to voice your objections more specifically.
Not necessarily. There are libraries in all mainstream languages that let you embed HTML generation directly in your backend server itself, without using a templating engine. Some examples:
Python: https://htpy.dev/
Scala: https://com-lihaoyi.github.io/scalatags/
OCaml: https://yawaramin.github.io/dream-html/ (that's mine)
> Routing, state management, auth, components, theming, API access, and more are all still problems that people add libraries for and those problems don't go away
Actually they kinda do go away. Have you ever tried Ruby on Rails? It does all this out of the box.
Then why upgrade in the first place? Clearly there were no security issues (and those hardly play a role in the frontend world, especially for libraries like TanStack Query and wouter). It seems people just want to upgrade to the latest version for its own sake, without any benefit.
I do .NET development, and I skip major versions all the time, instead of upgrading every year.
.NET has a security support policy that LTS versions (currently even version numbers) are supported for a couple of years and non-LTS versions (currently odd version numbers) for a year after they've been released. A lot of frontend packages don't have the maintenance budget to offer support plans on anything but the most recent major version (in part because many of them are open source and low contributor count; their own problems for the ecosystem).
Don't discount security/support maintenance concerns in the frontend. Also, yes, it is a problem that many frontend packages in the ecosystem don't have maintenance policies as strong as the best backends.
Citation needed.
> Especially in SPA designs where the frontend runs a massive in-memory database of an entire application state
How is that more relevant for security issues? Common frontend security issues are traditionally XSS, CSRF, token/session theft, etc. A lot of those attack vectors have been severely weakened by modern browser security features (Content-Security-Policy, HttpOnly cookies, SameSite=Strict, etc.). Eager to hear how a view router (wouter) or a server-state-management system (react-query) is likely to have security holes in that field.
- Query injection (always a threat to any query library)
- RegEx DDoS (always a threat to routers because all most routers are is a miserable pile of RegExes and because route writers don't like RegExes directly, often include DSL compilers to RegExes, which can lead to their own exploits)
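To make the router point concrete, here is an illustrative sketch (not a pattern from any real router) of the difference between a ReDoS-prone route regex and a safe one:

```javascript
// Nested quantifiers like (x+)+ can backtrack exponentially on near-miss
// input, which is the ReDoS risk. The flat alternative matches each path
// segment in a single linear pass.
const risky = /^\/(?:([a-z]+)*)+\/?$/;      // nested quantifiers: avoid
const safer = /^\/[a-z]+(?:\/[a-z]+)*\/?$/; // one pass per segment

console.log(safer.test("/users/settings")); // true
console.log(safer.test("/users/!"));        // false
```

A route-DSL compiler that emits patterns shaped like `risky` turns every attacker-controlled URL into a potential CPU exhaustion vector.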
JS is a Turing complete language and though it is often run in a sometimes strict sandbox mathematicians have now proven there's a "0-Day" sandbox break in the Universal Turing Machine and that it is likely a corollary/relative of the Halting Problem. Sure, modern browsers absolutely have a ton of security settings and improve every day on that, but browsers aren't perfect (mathematically can't be perfect, according to our best understanding). JS is still a complete programming language with everything that implies about exploits and bugs and timing attacks and disclosure leaks. "severely weakened" is not "the threat doesn't exist" and certainly not "the threat isn't worth worrying about, it is fine to leave bugs unpatched".
It certainly means you can take a measured approach to how you prioritize unpatched bugs, but a general sense that frontend issues are low priority is part of why frontend libraries don't have the same backwards-compatibility rules or long-term security maintenance habits as many backend systems. As an industry we are really bad about looking down on frontend as a second-class environment when it is one of the largest percentages of all program code running on the average user's machine this decade.
I could keep listing CVEs for days, and then still have weeks of lecture topics about how npm has one of the biggest ecosystems for supply chain attacks right now (and those are an active and ongoing threat). I understand the lack of perceived priority and I appreciate that not everyone has the same level of "full stack paranoia" that I do.
Adding dependencies should be something you consider carefully. Every line of code has a maintenance cost - a dependency has it times 1000. Effectively you are adding technical debt in many cases.
For instance I just developed a new react app with just react and react-router. My colleagues suggested react-query but why add this when you can do all you need with a few lines of code and fetch?
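The "few lines of code and fetch" approach can be sketched roughly like this. The names (`cachedFetch`, the TTL default, the `fetcher` parameter) are mine, not from any library:

```javascript
// A "good enough" request cache in place of react-query for simple apps.
// Caches JSON responses by URL with a time-to-live; the injectable `fetcher`
// exists so this can be exercised without a network.
const cache = new Map();

async function cachedFetch(url, { ttlMs = 30_000, fetcher = fetch } = {}) {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.at < ttlMs) return hit.data; // fresh: reuse
  const res = await fetcher(url);
  const data = await res.json();
  cache.set(url, { data, at: Date.now() });
  return data;
}
```

A component then calls it from an effect, and "invalidation" is just deleting the Map entry. It obviously lacks react-query's retries, deduping, and devtools, which is exactly the trade-off being weighed.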
I can't speak to TanStack specifics, but just FYI for the general HN audience: it is very normal to bump a major version just because a major dependency bumped a major version (e.g. TypeScript or React), and often it's just a sign of deprecating legacy APIs rather than breaking anything core.
Additionally, bumping a major version because a dependency changed isn't a common practice to my knowledge. In fact I'd say it's incorrect. You bump a major version if _your_ API has breaking changes. If one of your dependencies changed but you've adapted in a way that is transparent to your users, that is a patch not a major.
When we shipped React-Redux v8, we rewrote our internals to drop our own subscription management logic, and switched to React's new `useSyncExternalStore` hook instead. However, `uSES` was only added in React 18, and we still wanted to support earlier versions of React that had hooks (16.8+, 17). So, we defaulted to using React's "shim" package that provided a backwards-compatible implementation of `uSES`, at the cost of a bit of extra bundle size.
Once React 18 reached sufficient adoption, we wanted to drop the use of the `uSES` shim package and use the built-in version, but that required React 18 as a minimum dependency. So, we did that in a major, React-Redux v9.
Code _using_ React-Redux never changed in the slightest - it's still the same `useSelector` calls either way. But given that anyone attempting to mix React-Redux v9 and React <=17 would have it break, that was clearly a major version bump for us.
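For readers unfamiliar with `uSES`: the contract it consumes is just a subscribe function and a snapshot function, which can be sketched without React at all. This is illustrative, not React-Redux's actual internals:

```javascript
// The store shape useSyncExternalStore expects: a component would call
// useSyncExternalStore(store.subscribe, store.getSnapshot).
function createStore(initial) {
  let state = initial;
  const listeners = new Set();
  return {
    getSnapshot: () => state,
    setState(next) {
      state = next;
      listeners.forEach((l) => l()); // notify subscribers of the change
    },
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe function
    },
  };
}

const store = createStore(0);
let notified = 0;
const unsubscribe = store.subscribe(() => notified++);
store.setState(1);
console.log(store.getSnapshot(), notified); // 1 1
unsubscribe();
```

The v8/v9 story above is about swapping who implements the React-facing half of this contract, not about changing the store side at all.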
gotta love hyrums law
This type of dependency is different to what I wrote about. I was talking about dependencies of your library that are fully internal and hidden from its users, whereas this example is much wider and closer to a dependency on the environment in which your library is being used.
Look for example at how the people at Remix do it: breaking changes are hidden behind future flags [2], so a user can turn them on one by one and adapt their code gradually, without surprises. Another solution is shipping codemods for upgrades. But how many open-source package developers are willing to do this extra work?
Same story with peer dependencies - they're completely fine, if package developers know how to use them.
As always, don't be mad at React, don't curse Npm, it's not their fault. There is no great package without great effort.
[1] https://semver.org/ [2] https://remix.run/docs/en/main/guides/api-development-strate...
No, I don't think that's the problem here. The author completely understands and accepts that a new major version will break their code. They're asking whether there's actually a benefit to these breaking changes.
> seamless updates […] That requires a lot of work from the package developers.
Well, the lower something is in the stack, the more likely developers seem to be to put in that work. The Linux kernel syscall API is sacred, so are most Win32 base interfaces. libc/VCRT almost as much. Python versions a little bit less. GUI toolkits and SSL libraries break a bit more frequently but tend to just be parallel install. But the more you move up the stack, the more frequent breakage you get.
Same in the browser. Basic DOM is backwards compatible to the stone age, but the more things you pile on the more frequent API/update breakages become.
It's really a kind of obvious logic, since the lower something is in the stack, the more things above it indirectly snake down dependencies, and the more pressure to not break things there is.
Tailwind, Rust, Axum/Askama, SQL storage, HTML frontend
If anyone is looking for IoC + async dependency injection in React components, I've made a tiny lib that does wonders for me in an `iti+mobx` combo
https://itijs.org/docs/with-react/react-full#async-request-f... https://stackblitz.com/edit/github-3g8pzp?file=src%2FApp.tsx
Plus, Claude generates some pretty nice code out of the box!
There are kinds of applications though where React is indispensable, and HTMX would become unmanageable spaghetti. Stuff like Facebook (the original authors of React), GMail, Jira, etc. Such complex applications (not "websites") are relatively rare. If yours is not of this class, do explore simpler solutions unless you enjoy React and would write it for fun (and even then).
examples of things you couldn't do well w/htmx are google sheets and google maps, see:
npm init
npm install react react-dom webpack @babel/core @babel/preset-env @babel/preset-react
alias npm="echo do not use"
that's it; you don't need them; they can be replaced with a few hooks in 100 lines of code to fit your needs; just write JavaScript code, React hooks and components; don't install weird dependencies, they won't make it faster or more convenient in the long run; if there is any other dependency everybody wants and needs, you will see it was last updated 9 years ago on GH and still works great to this day
Use raw esbuild or swc; or be hassle-free with Vite... or something else less cursed. I am grateful for Babel; it opened up JS development to new syntax, but it's a beast from a past era. (The same applies to webpack.)
Transitive dependencies of those are exactly the thing Dependabot will nag you about day and night.
Why should I care about 500KiB of development dependencies? They won't end up inside the build anyway. I don't see any value in Vite or other build tools since I can write the webpack config I need in 3 minutes, and it has been the same process for almost 10 years now: just `npx webpack init`, adjust the config slightly, and never touch it again. There is no option that is too complex or hard to grasp, just the typical output/input/modules/plugins, and you never need to update it without a good reason. Dependabot nagging is never a good reason to start manically updating your build dependencies.
CVEs aside, core-js is a liability on itself. Sad personal story, sad that the world still thanklessly depends on it.
I hope the author will write a new post once he has used the new solution on a large project (not a personal one), preferably with several devs working on the same code.