• satyanash 7 days ago |
    Full title is "You Can't Build Interactive Web Apps Except as Single Page Applications... And Other Myths"

    Omission of trailing part changes the meaning and makes it clickbaity.

    • gherkinnn 7 days ago |
      You are right. The original title was too long for HN. I have since edited it to fit inside the requirements while keeping the spirit.
      • notjoemama 7 days ago |
        It’s fine. I don’t see the clickbait the other person is talking about, especially since I’ve run into the title length limit before. Some people have bad days and are more (overly?) critical.
        • bbor 7 days ago |
          “You Can't Build Interactive Web Apps Except as Single Page Applications” is false, which would make that title clickbait, specifically of the “ragebait” family.

          You obviously shouldn't build interactive pure-HTML apps, but that’s a talk for another day ;)

  • LeanderK 7 days ago |
    I am no frontend guy, so I don't understand why, in the age of node.js web servers, this dichotomy exists between server-side and client-side (SPA). Can't you initialise/pre-render most of your stuff on the server, serialise it, and push it to the client, which then acts as an SPA that's already initialised and updates itself on its own? After all, both are JS. Why is the decision of where to run code not more flexible, depending on latency, compute intensity, etc.? Maybe someone can enlighten me, as this is often left out, probably because it is obvious to someone working with these technologies.
    • the__alchemist 7 days ago |
      I'm answering your initial question directly, but there is more going on that may be more informative.

      This is a distinction between computation that is performed on a server, using arbitrary tools, and computation run on a user's machine locally. The former is much more flexible, but comes at a cost of latency; all user input and output must be serialized and transmitted a long distance. Front end code can result in much lower latency to the user by omitting this transmission.

      It makes sense to apply a suitable mix of these two tools that varies based on an application's details. For example, an immediate UI response that can be performed using only information already present on the client makes sense to be handled exclusively there, as computation time is minimal compared to transmission and serialization.

      I believe that JS being able to be used on a server is not relevant to the core distinction.

      • Terretta 5 days ago |
        Though, as most apps are built, only the code run on the server can be trusted, and only the response processing performed on the server can be trusted.
    • do_not_redeem 7 days ago |
      You can! The terms to search for are "isomorphic web apps" and "hydration". It's definitely not a panacea though.

      https://react.dev/reference/react-dom/client/hydrateRoot
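
      For anyone curious what "render on the server, then continue on the client" looks like mechanically, here is a framework-free sketch. renderPage and hydrate are illustrative names, not React's API; hydrateRoot (linked above) does the real version of this for a React tree:

      ```javascript
      // Server side: render HTML and embed the initial state as JSON,
      // so the client doesn't have to re-fetch it.
      function renderPage(state) {
        return [
          '<div id="app">',
          `  <span id="count">${state.count}</span>`,
          '</div>',
          // The state travels alongside the markup in a script tag.
          `<script id="__STATE__" type="application/json">${JSON.stringify(state)}</script>`,
        ].join('\n');
      }

      // Client side: read the embedded state and continue as an
      // already-initialised app instead of fetching everything again.
      function hydrate(html) {
        const m = html.match(/<script id="__STATE__"[^>]*>(.*?)<\/script>/s);
        return m ? JSON.parse(m[1]) : null;
      }

      const page = renderPage({ count: 42 });
      console.log(hydrate(page).count); // 42
      ```

      The non-panacea part is keeping the two sides in sync: the client code has to produce exactly the markup the server produced, or the hydration mismatch becomes a bug.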

      • eddd-ddde 7 days ago |
        Hydration is not needed tho, frameworks like Qwik allow isomorphic apps without hydration.
      • LeanderK 7 days ago |
        why? what are the main drawbacks? I imagine the complexity, but can you go a little bit into the details for someone with only a little frontend experience?
    • austin-cheney 7 days ago |
      You don’t need a SPA or to pre-render anything. The reason these things occur is that most of the people doing this work cannot program and cannot measure things.
    • kfajdsl 7 days ago |
      This is what frameworks like SvelteKit and NextJS do
    • robertlagrant 7 days ago |
      One thing might be that you can build an SPA into a mobile app, which maybe would have a harder time passing review with app stores if half the code is running somewhere else? Having said that, of course backends do already exist, but I wonder if it might be viewed slightly differently.
    • ec109685 7 days ago |
      On the server, applications are generally super close to their backend dependencies, so multiple round trips to data stores, caches and whatnot are no problem. On the client, this would be deadly.

      So it’s not as simple as taking code that runs on the server and running it on the client. Anytime the client needs to do more than one round trip, it would have been faster to render the data completely on the server, HTML included.

      Additionally, with SPA’s there’s a lot of nuance around back/forward handling, page transitions, etc. that make a page based application awkward to turn into a purely client side one.

      • jiggawatts 7 days ago |
        > it would have been faster to render the data completely on the server, html included.

        OData batch, GraphQL, and similar technologies exist to reinvent this wheel:

        "What if instead of a hundred tiny calls we made one high-level request that returned a composite formatted response that includes everything the client needs to show the next state."

        Also known as... server-side rendered HTML, like we've done since the 1990s.
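
        The shape of that composite endpoint, sketched. fetchUser and fetchOrders are made-up stand-ins for data stores sitting next to the server:

        ```javascript
        // Stand-ins for data sources that are microseconds away from the server.
        async function fetchUser(id) { return { id, name: 'Ada' }; }
        async function fetchOrders(id) { return [{ id: 1, total: 30 }]; }

        // One client round trip replaces several: the fan-out happens
        // server-side, where latency to the data stores is tiny.
        async function viewState(userId) {
          const [user, orders] = await Promise.all([
            fetchUser(userId),
            fetchOrders(userId),
          ]);
          return { user, orders };
        }

        viewState(7).then(s => console.log(s.user.name, s.orders.length)); // Ada 1
        ```

        Whether the composite response is JSON (GraphQL, OData batch) or finished HTML (classic SSR) is the part everyone keeps reinventing.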

    • kcrwfrd_ 6 days ago |
      This is what Next.js does.
    • WorldMaker 6 days ago |
      > then acts as an SPA already initialised and then updates itself on its own

      A lot of the trouble is how that data is "initialized". State management and state ownership are old complications in every client/server application back to the dawn of time. In the old "Progressive Enhancement" days all of the state was owned by the HTML DOM and JS would pick that up and reflect that. In a lot of the modern SPAs the state is owned by the JS code and the DOM updated to reflect it. The DOM never has a full picture of the "state". So state has to be passed in some other channel (often referred to as "hydration" by data/state as water analogy).

      Also, in most of the SPA frameworks, state management is deeply entangled with component management and the internals of how everything is built. It may not be deterministic that the server running the same code gets the same data, produces the same results. It may not be easily statically analyzable which components in the system have which data, which outputs given which data are repeatable enough to ship across the wire.

      (I just released a low level server-side rendering tool for Butterfloat, and one of the things that kept it easy to build was that Butterfloat was built with an architecture that more clearly separates static DOM from dynamic updating state bindings.)

  • nsonha 7 days ago |
    Do I have to point out that "you can do X without Y" is never an interesting or insightful statement? No shit! So? Does that mean you should do X without Y?
    • zdragnar 7 days ago |
      This is largely code for "I used to slap shit together and nobody told me I did a bad job. Now I slap shit together and people tell me it's just shit slapped together and I don't like that."

      The real gradient is more like:

      1) a SPA is needed because it might have to run offline

      2) A SPA is extremely beneficial because the state of complex interactions is difficult to maintain on the server

      3) A SPA helps because we have complex interactions and want to keep the server stateless

      4) A SPA is used because we only have developers who haven't done it any other way and won't be fast enough trying something new

      5) A SPA is strictly detrimental and ought not be used because there aren't any complex interactions, and the added weight to processing / network traffic / etc overwhelms whatever justification we had.

      This is not really novel, newsworthy or even worth yet another rambling blog post.

    • danaris 7 days ago |
      It's interesting when the common wisdom is that you cannot do X without Y.
      • nsonha 7 days ago |
        No the common wisdom is that you should not...
    • itronitron 7 days ago |
      Doing X without Y seems very one-dimensional though.
  • flappyeagle 7 days ago |
    This kind of stuff misses the entire point of frameworks like React, or Rails for that matter.

    There might be some technical advantages you can argue, but there's an undeniable economic advantage to not needing to make a bunch of disparate choices and wire things up in a way that you hope won't bite you in the ass later.

    • austin-cheney 7 days ago |
      Test automation.
  • pier25 7 days ago |
    > I like to argue that some of the most productive days of the web were the PHP and JQuery spaghetti days

    I've wondered whether going back to that paradigm would be more or less productive than using React et al.

    Plenty of big sites like Amazon or Steam are still made this way. Not exactly PHP + jQuery, but rendering HTML on the server and sprinkling some JS on top of it.

    Has anyone gone back to working like that?

    • the__alchemist 7 days ago |
      I use this general approach in most cases.
    • WD-42 7 days ago |
      I just finished migrating a fairly large legacy vue2 app to server side rendering and HTMX. It's thousands of lines less code and also hundreds if not thousands fewer dependencies. Most importantly, I'm not worried about the tech stack becoming abandoned and un-updatable in 5 years like vue2 was.

      There are some pages that require more local state. Alpine.js has been great. It's like a Vue.js that you can initialize directly on your page, and it doesn't require a build step.

      • Jaygles 7 days ago |
        How much of that code and dependency reduction is due to having the entire app to use as a spec? How can you be so sure this new stack won't be "abandoned"? (Vue has received regular updates for 11 years)
        • WD-42 7 days ago |
          Vue 2.x is NOT receiving updates. Not even security updates. It's abandonware.

          I had to ask myself if it was worth the hassle to update to 3.x and risk the same thing happening again. The answer was no.

          The new stack is Django (which the backend was already written in). Will it stop receiving updates? Extremely unlikely, considering they have preserved upgrade paths for the last 20 years and it has a solid foundation supporting it.

          The supporting ui libraries like htmx and alpine could conceivably become abandoned. The big difference is that they can be vendored easily.

          I checked the vue project and it has 1500 transitive dependencies. The new “stack” has a whopping total of 7.

          On top of that there is no build step to maintain. Also it’s straight up way faster.

          • pier25 7 days ago |
            > Vue 2.x is NOT receiving updates. Not even security updates. Its abandonware.

            I also got burned by Vue 2 at one point. Pretty amazing considering Vue is quite popular. It recently went over 1M daily downloads.

            https://npm-stat.com/charts.html?package=vue

            I don't think even jQuery had that many downloads back in its heyday, yet it has maintained its API and methodology after all these years.

            • WD-42 6 days ago |
              I don't think npm stats are comparable to jQuery download numbers. I bet 99% of downloads off npm are automated jobs like CI.
          • recursivedoubts 7 days ago |
            htmx will not be abandoned as long as I'm alive, and the API will not change significantly either

            i am hoping my oldest son gets interested in computer programming and can take over as the lead eventually

            • kelnos 7 days ago |
              I haven't done any real web development in over 20 years, but will soon have to build some sort of dynamic web site. I toyed with React 8 or 9 years ago (though never did anything with it, really), and found everything out there to be large and clunky and difficult to work with.

              I came across htmx a while back and have kept it in the back of my mind as something to potentially use if I ever had to build something. I'm glad this article came up on HN, and your comment here... makes me really want to build something with htmx!

            • WD-42 7 days ago |
              Hey! Whether your son takes over or not, the larger point I was trying to make is that worst case I can vendor htmx.js with my app and keep it going for a long, long time.

              Same can’t be said for the vue app and its 1500 dependencies + web pack build chain. At least not as easily.

            • grugroyal 6 days ago |
              Primogeniture is an underrated project leadership policy.

              You will need a Latin motto on your coat of arms. Something like "Simplex sigillum veri".

              • recursivedoubts 5 days ago |
                nemo codeo appendium lacessit
            • throwup238 6 days ago |
              > i am hoping my oldest son gets interested in computer programming and can take over as the lead eventually

              Sacrificing the first born as god intended.

        • ecshafer 6 days ago |
          Not OP, and this was Hotwire Rails with Stimulus, but I also saw a similar reduction moving a page from React to Hotwire. It was actually a new page with significant changes, so it can't be chalked up to just a rewrite. But it was easily a 10x or greater reduction in LOC compared to the similar React page it was replacing, with more features, and something like a 90% increase in performance.
    • tobinfekkes 7 days ago |
      Yes, this is 100% of my work.
    • RadiozRadioz 7 days ago |
      I've found that to be the case for my personal projects. Not least because I don't have to spend any time googling anything. I can just write code. I've used web tech for so many years that I don't need to learn anything anymore if I'm not in a framework. Outside a framework, it's all just the same stuff that's always been there.

      Even today's LLM-assisted programming doesn't give me that fluidity. If I use LLMs to assist me in writing a big framework, it'll autocomplete stuff I haven't learned yet, so I need to retroactively research it to understand it.

      JS soup on a webpage is a mess, but it's all using the same tools that I know well, and that to me is productive.

    • gausswho 7 days ago |
      Steam has largely abandoned it in favor of React in their big facelift a couple years ago.
      • pier25 7 days ago |
        You mean the Steam client or the store?

        I was referring to the store. Just checked it and there's tons of jQuery and vanilla stuff (at least on the homepage).

        • tpxl 6 days ago |
          The steam client is a web browser displaying the web page, with a few additional things (like installing games and such).
    • educasean 7 days ago |
      I think the old way works well for a smaller scale app that doesn't need to change often. Otherwise, I find the components-based code reuse to be a pretty valuable pattern, especially when working as a team.
      • wild_egg 7 days ago |
        There's no reason at all why you can't also organise code into components under this model. Orthogonal concepts
      • pier25 7 days ago |
        Most (if not all) backend frameworks provide components these days.

        See Laravel, Dotnet, Rails, etc.

    • bitnasty 7 days ago |
      Not sure if this counts as “going back” but I have been managing a legacy project that is built like this for the past year or two. I hated it at first, but now I’m starting to appreciate it. My approach is to try to put as much as possible in php, but for parts of the page that are going to be manipulated by js/jquery, just have php pass the data as json and build that portion of the dom on the front end.
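
      A minimal sketch of that pattern (rowsHtml is an illustrative helper, not from the project, and real code should also HTML-escape the values):

      ```javascript
      // The server passes data as JSON; a small bit of JS builds just the
      // dynamic portion of the DOM from it.
      function rowsHtml(items) {
        return items
          .map(item => `<tr><td>${item.name}</td><td>${item.qty}</td></tr>`)
          .join('');
      }

      // On the page: document.querySelector('tbody').innerHTML = rowsHtml(data);
      const data = [{ name: 'Widget', qty: 3 }, { name: 'Gadget', qty: 5 }];
      console.log(rowsHtml(data));
      ```
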
    • scrollaway 7 days ago |
      It’s really not more productive.

      Working with react has a higher barrier to entry (something which is becoming less true over time, given that many templates etc. exist for it), but if you want to be producing pages and functionality, a good react dev will run leagues around even an exceptionally good jquery dev.

      I’m solid at both, and we are talking a dev cycle that is orders of magnitude faster once you are set up reasonably well. There’s a reason it’s popular. Typescript makes a lot of testing unnecessary and gives many guarantees on robustness. React makes it a breeze to create complex stateful components. Nextjs makes it super easy to deploy all this to a web app and quickly create new pages.

      • pier25 7 days ago |
        > a good react dev will run leagues around even an a exceptionally good jquery dev

        It probably depends on the use case, no?

        If you only need links, forms, and data tables what advantage does React have over SSR + jQuery?

        • morbicer 7 days ago |
          Non-spaghetti dynamic forms and their validation (yes you need to validate on server as well).

          Reusable components that can be tested in isolation. Type support. That leads to easier evolution and refactoring.

          With good architecture you can go mobile with react-native.

          • pier25 7 days ago |
            Most modern backend frameworks provide components, types, and validation.

            Laravel, Dotnet, Rails, etc.

        • klysm 7 days ago |
          Composabilty and abstraction. I can bang out a react form in our app in minutes just by defining the shape of the fields. All the loading states are handled automatically. Mutations and refreshing data works out of the box. Navigating between pages is instant. Data is cached even when navigating between pages. Some pages that require many user inputs utilize optimistic updates to afford very low latency feedback.

          React makes all of that easy to compose. I tell it how to render my state, and I write code that updates state.

          • thunky 7 days ago |
            > Composabilty and abstraction

            Also possible without react with far less complexity.

            • lmm 7 days ago |
              It really isn't, unless you go full server-side session (with something like Wicket), and the latency of that is usually unacceptable these days.
              • thunky 6 days ago |
                I think you and the sibling may have missed the requirements up thread:

                If you only need links, forms, and data tables

                • lmm 6 days ago |
                  In my experience even something you thought was a basic form will usually have some kind of UI state (e.g. one input that then enables/disables another input).
                  • thunky 6 days ago |
                    Can't that be done in one or two lines of jQuery?
                    • lmm 5 days ago |
                      Yes, of course. And then you add one or two more lines of jQuery. And then another few lines to deal with the client-side and server-side state getting out of sync. And then...

                      Is putting in a couple of lines of JS that has no knowledge of the backend-managed state good enough for small one-off pages? Maybe. But it's definitely not composable and abstractable, which was the original claim.

                      • thunky 5 days ago |
                        I assume we're still talking about links, forms, and data tables. If the forms are so complex that we need React to manage them then it may be worth trying to simplify them a bit first.

                        Or if the forms have to sync with the backend then there are other React-less options, for example:

                        https://htmx.org/examples/value-select/

                        • lmm 5 days ago |
                          > If the forms are so complex that we need React to manage them then it may be worth trying to simplify them a bit first.

                          Some forms are simple enough that you don't need composition and abstraction, yes, no-one's denying that.

                          > Or if the forms have to sync with the backend then there are other React-less options, for example:

                          > https://htmx.org/examples/value-select/

                          Ok, now try composing that. What happens when you want to nest one of those inside another (i.e. have 3 levels of selection)?

                          • thunky 4 days ago |
                            The forms/inputs are composable on the backend where the state is managed and the html is produced.

                            Here's the most dynamic form I can think of: https://www.globalgolf.com/golf-clubs/1064148-titleist-tsr2-...

                            Any input change triggers any/all other inputs to change.

                            Looks like they send the entire page with each action, no React, yet the user experience is much better than the typical competitor site where they've overengineered the whole front end.

                            Their faceted search is nice too: https://www.globalgolf.com/golf-clubs/used/titleist/fairway-...

                            Notice how when you add a filter it updates the URL. Breath of fresh air.

                            And this is way beyond what I had in mind when I read links, forms, and data tables.

                            • lmm 4 days ago |
                              I clicked a couple of inputs and it's now stuck on a loading animation of a glass getting filled with golf balls (been running for about two minutes now). So you've pretty much proven my point.
                              • thunky 4 days ago |
                                Oh no...

                                Welp guess I'll have to take my business elsewhere. Thanks for the heads up.

                                Now I'm off to find a golf site that uses React for guaranteed perfection.

                                • scrollaway 3 days ago |
                                  As a third party to this conversation, I think it's completely hilarious that you try really hard to make a point that ends up proven wrong, and your answer, rather than actual introspection about your beliefs, is sarcasm and an overall rude response.

                                  This stubbornly stupid mindset is why we have wars.

                                  Edit: I checked the site myself and was able to reproduce the issue mentioned in GP. This is a terrible website.

            • klysm 7 days ago |
              I’ll believe it when I see it. I’ve used many many different UI frameworks, and react works the best out of all of them that I’ve tried.
          • Zanfa 6 days ago |
            On the other hand, using React means you'll have a 2nd source of state (form & navigation at the very least) that needs to be maintained and tested for edge cases. Everything needs to be validated 2x, since you can't trust anything coming from a client anyway. Correct state management between page navigations is definitely not something you get for free judging from the number of broken React SPAs where you can't properly use multiple tabs or other browser mechanisms.
            • klysm 5 days ago |
              The libraries I use give it to me for free
      • swatcoder 7 days ago |
        For as much as you're right, you're taking a very narrow, short-term view of productivity.

        The maintenance demands of using a heavy, evolving framework and a large graph of churning dependencies are tremendous and perpetual. While you can make meaningful wins on delivery time for a new feature, once "you are set up reasonably well", you've implicitly introduced new costs that don't apply to "vanilla" code that stays closer to established standards and only includes code it actually uses.

        That's not to argue that vanilla is strictly better (both have a place), but just to acknowledge a latent but significant tradeoff that your comment seems to ignore.

        • pier25 7 days ago |
          > and a large graph of churning dependencies are tremendous and perpetual

          Absolutely. This is the reason I'm moving on from JS in the backend. You're constantly stitching up dependencies which might be abandoned and/or become incompatible at any moment.

        • whstl 7 days ago |
          > The maintenance demands for using a heavy, evolving framework and a large graph of churning dependencies are tremendous and perpetual

          Counterpoint: If you replace a frontend framework with backend components, as has been suggested a few times in this thread, those frameworks are even larger, and a lot of the basic functionality provided by React (such as components) is often provided by third-party packages.

          Sure, if you're using something like jQuery, then it's smaller and more stable, but the functionality is limited compared to both backend and frontend frameworks. Which of course might be 100% appropriate depending on the use case.

        • imtringued 6 days ago |
          I'm still working on a project that uses jQuery in 2024 and it is a horribly bad fit and maintenance nightmare. I would prefer predictable new costs over random global effects in combination with duplicated server-side and client-side code that can blow up at any time.
    • lelanthran 7 days ago |
      It depends.

      A shopping site or similar, then sure - the "one thing at a time " workflow works.

      For internal line of business apps? I'm not so sure anymore. From a comment of mine a few days ago: https://news.ycombinator.com/item?id=42148627

      > Which is a pity; I was watching a client do some crud work in a webapp.

      > Web - 1 form per page:

      > Click "back". Copy some text. Click forward. Paste it. Repeat for 3 different fields. Click submit.

      > Native apps (VB/Delphi/etc) used to be:

      > Open both forms. Copy and paste from one to the other. Open another one on the side to lookup some information, etc.

      > Webapps, even moreso with MPA, force a wizard-style interface - you only go forward through the form. This is not how people used to work; they would frequently have multiple forms in the same app open at the same time.

      > With SPA and html "windows" made out of movable divs you can probably support multiple forms open at the same time, but who does that?

      • tacticus 7 days ago |
        IMO MPAs that don't break the back button and do support multiple tabs and windows make the side-by-side comparison model far easier than SPAs, which get into fun DOM nonsense the moment you try to do something the dev didn't precisely expect.
        • lelanthran 6 days ago |
          Broadly, I'm in agreement.

          But the folks using the company's line-of-business apps mostly aren't even aware that the browser can open a particular part of the app in a new tab, and even when they are, they aren't aware that the tabs can be torn off into separate windows, and of those that do, there are still some who wouldn't figure out that both browser windows can be tiled side-by-side.

          And even when you pass all those hurdles, it's still disruptive enough to the normal workflow that most people who can do that won't do it at all anyway.

    • dacryn 7 days ago |
      we more or less do for all the applications in my team, which don't follow the corporate standard.

      They're all Django applications, and the limited dynamic elements are just simple jquery. We have some bootstrap stuff and elements like form elements in javascript, but that's about it.

      We are extremely productive, especially compared to our official apps following the .NET/Angular stack, which run into all kinds of API versioning issues and errors; it's not even a faster user experience. The problem with such a stack is that you need a few highly skilled architects/system designers. We just have regular programmers piecing it all together; most of them learned these frameworks on the job and come from a regular app dev background, not web.

      Granted, we only serve something like 20-30 concurrent users for each of the Django apps (as in, page requests/second), but still...

    • kukkeliskuu 7 days ago |
      I have several side projects, and they mostly follow this pattern. For one app, I developed around 100 views (plus the admin views) in Django in just a few months; I could never have done it if I was using a "modern" stack. In many applications most pages (login, registration, logout, entering data etc.) can be built using traditional server-side rendered HTML forms with a little Javascript sprinkled on top, and most of the JS can be handled by Alpine.js. For the pages that need more interactivity, I use HTMX and Alpine.js. It works really well.
    • wwweston 6 days ago |
      I tacked that direction after my work with early Angular and React sites 2014-2016. It isn’t that I’d never work on an SPA or a site using front-end frameworks like that, it’s that it became very clear to me quickly that they weren’t necessary or particularly helpful for the majority of the sites they were used on (definitely including the sites I was working on, one of which was thrown away entirely months after we delivered it to the client), and how much adoption was resume driven.

      I used to assume React solved genuine problems for FB but given ways in which the UX+performance has gotten worse, I’m not even sure about that.

      Meanwhile html plus progressive enhancement works fine for the majority of projects I’ve been involved in. Componentization can still be a win but there’s other ways to do it.

    • nicbou 6 days ago |
      This is how I work. My website is content enhanced with calculators and other widgets. The widgets are individual Vue apps contained within bits of Jinja templates.

      The backend is an API for the contact forms and the feedback collection forms.

  • KronisLV 7 days ago |
    To me it seems that if you have decent cache control headers, then SSR can be decent even without very specific optimizations.

    When choosing a tech stack, normally I'd also look for which one rots the slowest. Writing an SPA typically means that if the libraries/frameworks become untenable, at least you have an API that you can use when writing a new client.

    I have this PrimeFaces/JSF project at work - it’s unpleasant to work with and feels brittle (especially when you have to update components, in addition to nested table row updates being pretty difficult). I’ve also helped migrate an AngularJS project to Vue, the former was less pleasant to use than the latter but the migration itself was also unpleasant, especially when you wanted to get close to 1:1 the functionality. I like how Angular feels more batteries included than the other options, but I’ve seen some overabstracted codebases. React seems like it has the largest ecosystem but the codebases seem to rot pretty quickly (lots of separate libraries for the typical project, lots of updates, things sometimes break). The likes of Laravel and Rails let you be pretty productive but also have pretty tight coupling to the rest of the codebase, same as with PrimeFaces/JSF. I’ve also seen attempts at putting as much logic in the DB as possible and using the back end as more or less only the view layer, it was blazing fast but debugging was a nightmare.

    Honestly, just pick whatever technology you think will make the people working with the project in 5 years the least miserable. For me, often that is something for a SPA, a RESTful web API, some boring back end technology that connects to a relational database (sometimes SQLite, sometimes PostgreSQL, sometimes MariaDB; hopefully not Oracle). Whatever you do, try not to end up in circumstances where you can only run a front end with Node.js 10 or where you can't update your Spring or ASP.NET version due to there being breaking changes in your coupled front end technology.

  • mh-cx 7 days ago |
    I wonder why the article doesn't mention utilizing the browser cache for your static CSS and JS assets instead of introducing a service worker as the first measure.

    A few years ago I built a shopping site as an MPA this way and the page transitions were barely noticeable.

    • jdsleppy 7 days ago |
      Same (but not a shopping site). Bundle the JS and CSS into one file each and cache it forever (hash in the filename to bust the cache). Then with each page transition there's exactly one HTTP request to fetch a small amount of HTML and it's done. So fast and simple.
      • mh-cx 7 days ago |
        Exactly this. In our case we went so far as to cache all static assets. Putting them into a directory with a hash in the name kept them easy to bust.

          # Cache static files
          location ~* \.(ico|css|js|gif|jpe?g|png|svg|woff|woff2)$ {
            expires 365d;
            add_header Pragma public;
            add_header Cache-Control "public";
            proxy_pass http://127.0.0.1:9000;
          }
      • wmfiv 7 days ago |
        This has been the common/best practice for so long I don't understand why TFA is proposing something different.
        • youngtaff 6 days ago |
          Cache control directives indicate how long a browser, proxy etc can store a resource for… they don’t guarantee they will store it for that long though

          Control over Service Workers cache lifetime is more explicit

          I’d still specify ‘good’ cache lifetimes though

          • wmfiv 6 days ago |
            Makes sense as a theoretical problem. Have you ever seen data that suggests it's a practical problem? Seems like one could identify "should be cached forever but wasn't" using etag data in logs.
            • youngtaff 6 days ago |
              Facebook did a study about ten years or so back where they placed an image in the browser cache and then they checked how long it was available for… for something like 50% of users it had been evicted within 12hrs

              If one of the most popular sites on the web couldn’t keep a resource in cache for long then most other sites have no hope, and that’s before we consider that more people are on mobile these days and so have smaller browser caches than on desktop

              • jeeeb 6 days ago |
                From the discussion above it seems that browsers have changed their behaviour in the last 10 years based on that study.

                See: https://news.ycombinator.com/item?id=42166914

                • youngtaff 5 days ago |
                  Browsers have finite cache sizes… once they’re full the only way to make space to cache more is to evict something even if that entry is marked as immutable or cacheable for ten years
      • exceptione 7 days ago |
        + you need to pair that with immutable, otherwise you are still sending validation requests on each reload, so you are doing more than one HTTP request.
    • thangngoc89 7 days ago |
      Edit: below isn’t true. You could set immutable in cache header and the browser wouldn’t verify.

      —— Original comment:

      With the browser cache, the browser still needs to send a conditional request to see if the content was modified. These requests are noticeable when networks are spotty (train, weak mobile signals…)

      A Service Worker can cache the response and skip those revalidation requests altogether

      • 01HNNWZ0MV43FF 7 days ago |
        Not if you tell the browser it's guaranteed fresh for 10 minutes and then use cache-busting in the URL
        • thangngoc89 7 days ago |
          Oh yeah. The immutable tag, right? Totally forgot about that
      • cuu508 7 days ago |
        Set max-age in the Cache-Control header to a high value and the browser will not need to revalidate. When deploying a new version, "invalidate" the browser cache by using a different filename.
        • austin-cheney 7 days ago |
          Or keep the file name and send the file with a query string of some changed numeric value
    • palsecam 7 days ago |
      Especially since the `stale-while-revalidate` and `immutable` Cache-Control directives are well supported nowadays.

      Stale-while-revalidate: see https://web.dev/articles/stale-while-revalidate & https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ca...

      Immutable: https://datatracker.ietf.org/doc/html/rfc8246 & https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ca...

      And if using a CDN, `s-maxage` (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Ca...) is quite useful. Set it to a long time, and purge the CDN cache on deploy.
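      For concreteness, a few typical combinations of those directives (the values are illustrative, not recommendations):

```http
# Fingerprinted assets (hash in the filename): cache for a year, never revalidate
Cache-Control: public, max-age=31536000, immutable

# HTML documents: serve a slightly stale copy while refetching in the background
Cache-Control: max-age=60, stale-while-revalidate=3600

# Let a CDN cache for a day while browsers revalidate; purge the CDN on deploy
Cache-Control: max-age=0, s-maxage=86400
```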

      • exceptione 7 days ago |

        `immutable` is what you need. Crazily enough, Chrome has not implemented it. Bug open since 2016: https://issues.chromium.org/issues/41253661

        ------------------

        EDIT: it appears that Chrome decided in 2017 to stop revalidating on reload entirely, after Facebook complained to the Chrome devs that Chrome was more of a drag on their servers than other browsers.

        • palsecam 7 days ago |
          To be fair, it’s because Chrome handling of a soft reload is different from Firefox or Safari, and does not lead to revalidating (let alone refetching) assets files. See https://blog.chromium.org/2017/01/reload-reloaded-faster-and...

          Quoting https://engineering.fb.com/2017/01/26/web/this-browser-tweak...:

          > We began to discuss changing the behavior of the reload button with the Chrome team. […] we proposed a compromise where resources with a long max-age would never get revalidated, but that for resources with a shorter max-age the old behavior would apply. The Chrome team thought about this and decided to apply the change for all cached resources, not just the long-lived ones.

          > Firefox was quick in implementing the cache-control: immutable change and rolled it out just around when Chrome was fully launching their final fixes to reload.

          > Chrome and Firefox’s measures have effectively eliminated revalidation requests to us from modern version of those browsers.

          • exceptione 7 days ago |
            I just wrote my edit before I saw yours. Must have been a cache problem :)
            • palsecam 7 days ago |
              “There are only two hard things in Computer Science: cache invalidation and naming things.” — Phil Karlton (cf. https://www.Karlton.org/2017/12/naming-things-hard/ or https://MartinFowler.com/bliki/TwoHardThings.html)

              ;-)

              • traverseda 7 days ago |
                And off by one errors
                • ohthatsnotright 7 days ago |
                  And dates and times. And currency.
                  • yndoendo 7 days ago |
                    Manage dates and times in UTC/Zulu. Append the time zone if that metadata is needed, and store both on the client and the server. That way you don't have to deal with time travel and can handle time-zone shifts.

                    I would say that concurrency is a caching issue once proper locking has been set.
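                    A JavaScript sketch of that convention (the `stamp` shape is my own, just for illustration): persist the unambiguous UTC instant plus the zone it happened in, and reconstruct local wall-clock time only when rendering.

```javascript
// Store the instant in UTC plus the IANA zone it occurred in.
const stamp = {
  utc: new Date('2024-03-10T06:30:00Z').toISOString(), // unambiguous instant
  zone: 'America/Denver',                              // where it happened
};

// Render back in the original zone only at display time
// (Intl.DateTimeFormat ships with modern browsers and Node).
const local = new Intl.DateTimeFormat('en-US', {
  timeZone: stamp.zone,
  dateStyle: 'medium',
  timeStyle: 'short',
}).format(new Date(stamp.utc));
```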

                  • zbentley 6 days ago |
                    And ce conditions rainduced by concurrency.
                • CBarkleyU 7 days ago |
                  That's a nice addendum to the quote, actually. Will use it in future.

                  > There are only two hard things in computer science: Cache invalidation, naming things and off-by-one errors

    • henriquegogo 7 days ago |
      I was just wondering the same. Browser cache is old and effective, but unfortunately nobody cares nowadays.
      • leptons 7 days ago |
        People care when the browser cache is holding on to something - "try clearing your cache" is still a very common tech support solution.
    • Onavo 6 days ago |
      Because cache invalidation is one of the two hardest problems in computer science.
    • youngtaff 6 days ago |
      Within a session browser caches are effective, but from session to session it’s likely the content will get evicted as resources from other sites get cached - there’s an oldish Facebook study where they placed an image in the cache and after 12hrs it had been evicted for 50% of visitors

      Service Workers offer more controls over cache lifetime, and in Chromium (at least) JS gets stored in an intermediate form (which reduces the re-compile overhead on reuse)

    • VoodooJuJu 6 days ago |
      Well that would just make it too simple and not blog-worthy.
  • dnndev 7 days ago |
    The only reason to use an SPA, as far as I am concerned, is that that’s the way the industry was going 10 years ago… so the community, controls, etc. were / are built for SPAs. Any other reason to me was just chasing new tech.

    I made the switch and the community is stronger than ever for Vue.js and React

    • em-bee 7 days ago |
      when i discovered angularjs i thought it was a revelation. finally i could write webapps without having to track UI state in the backend, only treat the backend like a database. all the UI logic was dramatically simplified. it went so far as making my backend so reusable that i don't have to do any backend coding at all anymore. creating a complex webapp is now as simple as writing a desktop client.

      sure, i could do the same thing with a traditional fullstack framework. with discipline i would be able to keep frontend and backend code separate. but i have yet to work on a project where that is the case.

      i don't build SPAs because the industry demands it. i build SPAs since before the industry even heard about it. and i build them because they make for a cleaner architecture and give me more flexibility than any fullstack framework would.

  • recursivedoubts 7 days ago |
    The HN headline is more combative than the actual headline:

    You Can't Build Interactive Web Apps Except as Single Page Applications... And Other Myths

    This is an essay contributed by Tony Alaribe, based on his talk at BigSkyDevCon this past year, discussing techniques to make a non-SPA based web application feel fast and slick. He mentioned some techniques that were new to me and I thought it was the best talk at the conference.

  • justinko 7 days ago |
    The most dangerous thing in programming is developer boredom with a dash of hubris and ignorance of the past.
    • bbor 7 days ago |
      What if we took these micro services and grouped them into a single block so they can talk to each other faster? Call it… a macroservice!
      • intelVISA 6 days ago |
        Perhaps even transpiled to wasm, served on the edge, a 'compiled monoservice' if you will.
  • pino82 7 days ago |
    My feeling nowadays (i.e. for around a decade now) is that there is a lot of technical complexity going on that should imho either be an internal part of the browser itself, or should not be used so widely.

    It's probably all good for something... But I would love to just make my web app out of what the browser itself can do, without a tech stack as high as a skyscraper for me to handle.

    I know, this way the ecosystem can develop more rapidly (compared to waiting for improvements in the official web standards), and it's also fun to play with toys, and everyone can raise his/her value by learning more and more of these tools.

    On the other hand, the web was imho in a better shape before all that began. From user perspective and from developer perspective.

    I could be wrong... I'm not primarily a web developer at all...

  • 0xbadcafebee 7 days ago |
    I dream of the day that we are finally free of the iron grip the web browser has on the minds of those that would create the future of technology.
  • jonahx 7 days ago |
    Can someone explain what the service worker strategy accomplishes that plain old http Cache headers don't? It saves a (almost zero weight) network roundtrip, but feels like it's re-inventing the entire wheel for that small (I think) optimization? Am I missing something?
    • ysofunny 7 days ago |
      that they can be used to compute stuff locally

      I imagine ideally we want user choice of where the computation is happening. if on a mobile device I'd save battery in exchange for network latency

      but on a desktop computer I'd rather do as much local computation as I can

      • bathtub365 7 days ago |
        Radio is one of the biggest users of battery on mobile devices.
      • jonahx 7 days ago |
        I'm not asking about web workers generally. I'm specifically asking about their use as a client side cache as described in the article.
      • plorkyeran 7 days ago |
        For the sort of things that are fast enough that network latency is relevant, on a mobile device you save battery by doing them locally. The radio takes more power than the CPU.
    • alganet 7 days ago |
      It was designed for apps, extensions and pages that behave like apps (stuff that might not have a server anywhere, just a manifest and some static HTML/JS). The cache is only one of the use cases.

      I think some pages still use them for running background stuff. My browser is set up to clear all of them upon closing the tab.

      This whole direction is being silently discontinued anyway. Running browser apps has become harder, not easier.

      • imbnwa 7 days ago |
        >This whole direction is being silently discontinued anyway. Running browser apps has become harder, not easier.

        I'm outta the loop, can you expand on how this is the case?

        • alganet 7 days ago |
          When these things appeared, both Mozilla and Google were signaling the intention of distributing some kind of standard webapp. At that time, via FirefoxOS and ChromeOS. Even MS was signaling web with Windows 8 (WinJS apps, even for Windows Phone).

          So, there is some piece of infrastructure for this future here and there. Service Workers is one of those pieces. But the apps only achieved some success in closed markets (extension stores). It never became a standard (visit a page, pin it, becomes a fully fledged app).

          Instead, the web moved to mobile and desktop apps through other means (super-Cordova/Electron-like apps, little JS/HTML insertions in traditional apps, other inventive web ways that do not involve a collaborative standard).

          The leftovers of this imagined distribution mechanism are being pushed aside (hidden in weird menus or options). Tech is still there because it is a standard, but the counterpoint UI and market decisions are pointing in other directions.

          For example, both in Chrome and Firefox, the ability to invoke the browser "chromeless" that was a part of this whole thing has been removed or muted in some way. It was never a standard, so it was removed as soon as possible (probably few people working on it).

          Does that make sense?

          • imbnwa 5 days ago |
            Ah, yes, didn't know that service workers belonged to such a larger business plan.
    • thfuran 7 days ago |
      A minimal network roundtrip is pretty minor only so long as you're on a reliable connection to a nearby server. Add even a little packet loss or moderate latency jitter and 5,000 miles and suddenly any roundtrip avoided is a good thing.
    • slibhb 7 days ago |
      You program service workers on the client whereas headers are controlled by the server. Among other things, this means that service workers work when you have no internet access.
    • EionRobb 7 days ago |
      For a multi-page app, one of the important uses of service workers is pre-loading and caching resources that aren't on the first page, e.g. you might have a background image that's only displayed three pages deep but can be downloaded and cached as soon as the site is opened.

      You can potentially use http2 push to send down files from the server early but I've seen the browser drop the files if they're unneeded for that page load.

      Yes, there are other hacks you could do to make the browser download the resources early, like invisible images or preload audio files, but if you're going down that path why not put in a service worker instead and have it download in the background.
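      A minimal sketch of that precaching idea (the URLs and cache name are made up for illustration; a real service worker would also need versioning and cleanup of stale caches):

```javascript
// Service worker: on install, fetch and cache assets needed on deeper pages;
// on fetch, answer from the cache first and fall back to the network.
const PRECACHE = 'precache-v1';
const PRECACHE_URLS = ['/css/app.css', '/js/app.js', '/img/deep-page-bg.jpg'];

// Guarded so the snippet is inert outside a worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(PRECACHE).then((cache) => cache.addAll(PRECACHE_URLS))
    );
  });

  self.addEventListener('fetch', (event) => {
    event.respondWith(
      caches.match(event.request).then((hit) => hit || fetch(event.request))
    );
  });
}
```

      The page registers it once with `navigator.serviceWorker.register('/sw.js')`, and the background image is already local by the time the user reaches the third page.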

      • jeroenhd 7 days ago |
        Unfortunately HTTP/2 push is basically dead. Very few sites made use of it and Chrome removed it: https://developer.chrome.com/blog/removing-push
      • xnx 7 days ago |
        If preloading is the goal, would hidden/off-screen loading be an option for images?
      • bastawhiz 7 days ago |
        Preload link tags work great and have been supported for over a decade.
  • easeout 7 days ago |
    The final spaghetti point really takes the wind out of this article's sails. A lot of it is good information, but too many parts make me think "That's a stretch."
  • ThalesX 7 days ago |
    I have some "internal" web apps that I use for myself, and while I do use Remix which is a framework that allows me to use React, I just use SSR and HTML default form controls as interpreted by the browsers, minimal client side processing and almost no styling. I love it so much compared to the "modern" cruft. It's responsive by default because I don't really style it. It has a high signal to noise ratio.

    I wouldn't change it for the world, but I've been told multiple times I'm very much in the minority.

  • amelius 7 days ago |
    > The myth that you can’t build interactive web apps except as single page app

    You can do it, but you might paint yourself into a corner. For example, your manager might at some point say: please load X, while Y is animating; that will not work if the entire page is reloading. A SPA will give you more control and will also reduce the probability of having to rework entire parts of the code.

  • peutetre 7 days ago |
    Just do it all in WebAssembly.

    WebAssembly is pretty great. Here's an example: https://bandysc.github.io/AvaloniaVisualBasic6/

  • xnx 7 days ago |
    Even preceding the tech stack decision, many devs err by misunderstanding what they are writing. GMail, Google Sheets, Google Docs, etc. are apps; if you don't need that level of interactivity, it's probably just a CRUD app (Airbnb, Craigslist, ecommerce, etc.) and you'd be fine with a mostly server-side framework.
  • lmm 7 days ago |
    Ok but what's the benefit? Yes, if you really want to, you can carve your validation logic in half, write two different kinds of JavaScript and CSS, so that you can change screens in two technically different ways. But the only thing it will accomplish is making your app harder to maintain, and probably slower to boot.
  • pahbloo 6 days ago |
    People treat SPAs and MPAs as opposing teams: one is the right way and the other is garbage. But that's not how it has to be seen.

    What we have is the natural way to do things with the web stack (the way it's meant to be used), and the "hacky way" (the way that lets us do what we want to do, even when the web stack doesn't support it yet).

    SPA is the hacky way today, but before it we had CGI, Java applets, Flash... And the web purists were always very vocal against the hacky way.

    But the hacky way is what pushes the envelope of what the natural way can do. Every feature cited in the article that makes an MPA competitive with an SPA today only exists because of SPAs.

    I'm on the side of preferentially using the web the way it's meant to be used whenever possible, but I love to see what can be done when we are a little hacky, and it's awesome to see the web stack adapting to do these things in a less hacky way.