• BiteCode_dev 5 days ago |
    Note that it's really DOM-centric and doesn't include AJAX.
    • ceejayoz 5 days ago |
      Isn’t AJAX fairly well supported via fetch now?
      • jitl 5 days ago |
        Yeah, at this point I’ve totally forgotten the $.ajax API, but fetch is pretty easy, just a single function call
        • amelius 5 days ago |
          Now we only need something that makes websockets more resilient against network errors and corporate firewalls.
      • BiteCode_dev 5 days ago |
        In the same way selectors and map replace jQuery. It depends on how much sugar you want.
      • _hyn3 5 days ago |
        ... unless you want to send a body with your HTTP GET. There is tons of utility value in this! For example, let's say you want to GET some data but also provide some client request statistics along with the request -- happens all the time in the real world.

        Fetch will reject your GET if it contains a body (a deliberate maintainer decision), even though it's entirely permissible by HTTP and done by many real-world AJAX APIs. Real AJAX will do what it's supposed to. (The HTTP 1.1 2014 Spec says that including a request body in a GET "might cause some implementations to reject the request." Guess which one!)
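        For the skeptical, the rejection is easy to reproduce in any environment with a WHATWG fetch (browsers, Node 18+); the Request constructor throws before anything touches the network (hypothetical URL):

```javascript
// fetch rejects a GET with a body at construction time:
let rejection = null;
try {
  new Request('https://example.com/stats', { method: 'GET', body: '{"clicks":3}' });
} catch (e) {
  rejection = e; // → TypeError (a GET/HEAD request cannot have a body)
}
console.log(rejection instanceof TypeError); // → true
```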

        Also, advanced features like upload progress are completely absent from Fetch.

        However, there are some fantastic libraries like Axios[1], SuperAgent (requires npm), and, yes, jQuery[2], that have really excellent APIs (far superior to Fetch's), or you could just write your own (or use an LLM) short wrapper around modern AJAX and call it a day. h/t to Claude:

            const xhr = ['GET','POST','PUT','PATCH','DELETE'].reduce((x,m) => (x[m.toLowerCase()] =
              (u,d,opt={}) => new Promise((r,j) => {
                const q = new XMLHttpRequest();
                q.open(m,u);
                q.responseType = opt.responseType || '';
                if(opt.headers) Object.entries(opt.headers).forEach(([k,v]) => q.setRequestHeader(k,v));
                if(opt.signal) opt.signal.addEventListener('abort', () => q.abort());
                q.withCredentials = opt.credentials === 'include';
                q.onload = () => r({
                  ok: q.status >= 200 && q.status < 300,
                  status: q.status,
                  // getAllResponseHeaders() returns one CRLF-separated string, so split it
                  // into [name, value] pairs before handing it to the Headers constructor:
                  headers: new Headers(q.getAllResponseHeaders().trim().split(/[\r\n]+/)
                    .filter(Boolean).map(h => h.split(/:\s+(.+)/).slice(0,2))),
                  text: () => Promise.resolve(q.responseText),
                  json: () => Promise.resolve(JSON.parse(q.responseText)),
                  blob: () => Promise.resolve(new Blob([q.response])),
                  response: q
                });
                q.onerror = () => j(new TypeError('Network request failed'));
                q.send(d instanceof FormData ? d : JSON.stringify(d));
              }), x), {});
        
        This gives you xhr methods with a fetch-style API, and you can still do the things fetch can't. It won't do real streaming or cache control like Fetch, but it handles 95% of common use cases in a tiny bit of code.

        Each method listed above returns a Promise that resolves with a fetch-style response object (the raw XMLHttpRequest is exposed as its `response` property) or rejects with an error. So you get both the Promise functionality and full access to the XHR object in the resolution.

        Usage:

            xhr.post('/api', { data: 123 }, {
              headers: { 'Content-Type': 'application/json' },
              credentials: 'include',
              signal: abortController.signal
            })
            .then(res => res.json())
            .then(data => console.log(data));
        
        For more advanced AJAX stuff, check out the very powerful and flexible Axios library[1].

        And, if you don't need AJAX but do want some of the jQuery features that Cash leaves out to save bytes (like some of the more unusual selectors), jQuery Slim excludes AJAX (and special effects), which brings the code down to only 69KB[3].

        1. Axios https://github.com/axios/axios (41kb)

        2. jQuery AJAX https://api.jquery.com/jQuery.ajax/ (87kb but includes ALL of jquery!)

        3. https://code.jquery.com/jquery-3.7.1.slim.min.js

        • erik_seaberg 5 days ago |
          Caching is the most important reason to consider GET for a non-hypertext API. A Vary header tells caches which request-header differences should cause cache misses, but there's no way to do that for an encoded body.
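          To illustrate the asymmetry (a hypothetical exchange): a response can declare which request headers it varies on, and every shared cache understands it:

```http
GET /search?q=shoes HTTP/1.1
Accept-Language: de

HTTP/1.1 200 OK
Vary: Accept-Language
Cache-Control: max-age=300
```

          A cache keeps one variant per Accept-Language value; there is no comparable response header that keys a cached entry on a GET request body.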
        • jdlshore 4 days ago |
          I believe providing a body with GET is non-standard, which could lead to problems with proxies. IETF is introducing the QUERY method to fill this gap.
          • _hyn3 4 days ago |
            It's not non-standard; it's actually in the standard: https://www.rfc-editor.org/rfc/rfc7231#page-24

            In standard HTTP/1.1, any method can have a request body. In Representational State Transfer (REST) as defined by Dr. Fielding, HTTP doesn't even come up, let alone "methods" per se, so there is no distinction between DELETE, POST, or GET from a REST standpoint, only within HTTP as an engine for hypertext. Further, in HTTP, any of these requests can contain a request body.

            But, because of this behavior by the WhatWG for Fetch, the IETF has added this paragraph to the specification for HTTP/1.1:

              "A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request."
            
            "Some existing implementations" really just means fetch. The p*ing contest between two groups resulted in a neutered and prescriptive fetch.

            In other words, it's fetch that is non-standard, and the actual HTTP standard had to be updated to let you know that.

            • chrismorgan 4 days ago |
              You've got the chronology and causality wrong. The Fetch API came after the RFC 7231 advice. Due to an arguably dubious interpretation of arguably poor wording in RFC 2616 (from 1999) that suggested you SHOULD ignore GET bodies, various caching and proxy servers would ignore or reject GET request bodies, so it became dangerous to use them.

              Since then, each iteration of the HTTP specs has strengthened the advice. The most recent 9110 family says you SHOULD NOT use GET request bodies unless you have confirmed in some way that they'll work, because otherwise you can't trust they'll work.

              Fetch was going along with this consensus, not causing the problem.

              The pool was muddied; nay, poisoned. And so the solution is the QUERY method. That's how things tend to work in such a space. See also 307 because of 302 being misimplemented.

  • yieldcrv 5 days ago |
    ah that’s what people were looking for

    a jquery alternative

    actually the native typescript is interesting

  • xg15 5 days ago |
    window.$ = document.querySelectorAll
    • pwdisswordfishz 5 days ago |
      Uncaught TypeError: 'querySelectorAll' called on an object that does not implement interface Document.
      • xg15 5 days ago |
        Damn, you're right, sorry.

        window.$ = (x => document.querySelectorAll(x))

        • Matheus28 5 days ago |
          I find this a little cleaner:

              window.$ = document.querySelectorAll.bind(document);
          
          Since it works properly for any function no matter the number of arguments it receives
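          The failure mode above comes down to `this`: `querySelectorAll` must be called with the document as its receiver, and a bare assignment to `window.$` drops that. A toy object (not the real DOM) shows the mechanics:

```javascript
// Stand-in for document.querySelectorAll's reliance on `this`:
const doc = {
  name: 'document',
  query(sel) { return `${this.name} matched ${sel}`; }
};

const unbound = doc.query;          // receiver lost: `this` is no longer doc
const bound = doc.query.bind(doc);  // receiver pinned, any argument count works

console.log(bound('.item')); // → "document matched .item"
// unbound('.item') reads `this.name` off the wrong object (or throws in strict mode)
```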
          • simonw 5 days ago |
            I like wrapping it in an Array.from() so you can use .map/.filter/etc.
    • dsego 5 days ago |
      But it doesn't do chaining and you have to loop through elements to do anything with them.
      • Spivak 5 days ago |
        I'm always surprised that an API that is defined by matching 0-n dom elements doesn't return a container that by default maps over them list monad style.
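        A sketch of such a container (a hypothetical `$List`, not any real library): subclass Array so every method fans out over the matched elements and returns the container for chaining:

```javascript
// Hypothetical jQuery-ish container: an Array subclass that maps operations over every match
class $List extends Array {
  css(prop, value) {
    this.forEach(el => { el.style[prop] = value; });
    return this; // chainable
  }
  on(type, handler) {
    this.forEach(el => el.addEventListener(type, handler));
    return this;
  }
}

// In a browser you'd construct it from a selector:
const $ = sel => $List.from(document.querySelectorAll(sel));
// e.g. $('.card').css('opacity', '0.5').on('click', onCardClick);
```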
        • freeone3000 5 days ago |
          There’s a fairly small polyfill that gives a NodeList the same methods as Array.
          • jasonjayr 5 days ago |
            Are the various browser JS implementations clever enough not to make a new object for Array.from(nodeList)?
    • roebk 5 days ago |
      window.$ = document.querySelectorAll.bind(document)
    • ComputerGuru 5 days ago |
      The wrong syntax notwithstanding, this doesn't let you recursively use querySelector(All), e.g. to find children of a node like document.querySelector("#foo").querySelectorAll(".bar")
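      One way to get that back while staying tiny (a sketch, not what the snippet above does): accept an optional root element, defaulting to `document`:

```javascript
// Scoped query helper: the optional second argument narrows the search root
const $ = (sel, root = document) => Array.from(root.querySelectorAll(sel));

// Usage in a browser: $('.bar', $('#foo')[0]) finds .bar descendants of #foo
```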
      • xg15 5 days ago |
        I know, it was a bit of a joke.

        But I think the OP's jQuery replacement is also dropping features in the service of a small footprint. So this was my 80/20 contribution to the "smallest jQuery replacement" problem ;)

  • wackget 5 days ago |
    But why? With mainstream websites pumping out literal megabytes of JavaScript, why spend time rewriting an entire library (with fewer features) to save 50KB?
    • simonw 5 days ago |
      Some of us still try to ship websites that use less than 50KB of JavaScript total.
    • happytoexplain 5 days ago |
      Maybe if we embraced small dependencies rather than saying "why bother?", then dependencies would become smaller?
      • szundi 5 days ago |
        This
    • w4 5 days ago |
      Not relevant to this package in particular, but this line of reasoning baffles me every time I see HN comments about JQuery. So many posters argue against the use of JQuery because of its package size and bandwidth constraints, while simultaneously advocating for SPA frameworks that use orders of magnitude more bandwidth. Absolutely ridiculous cargo cult reasoning.
      • happytoexplain 5 days ago |
        A. You're assuming they are largely the same people by extrapolating from your observations. It's impossible to actually know.

        B. Your two examples provide different things. This is like saying it's OK to include any old multi-megabyte dependency if a site loads a couple MB worth of images. There's no reason to stop considering the size of the small parts just because you decided you need some large parts. Things add up - that will never stop being a useful thing to remember, in any context.

      • nashashmi 5 days ago |
        Two different types of people. One wants to create lightweight applications. The other wants lightweight development.

        Lightweight development for lightweight applications is a bit of an oxymoron at this time.

        • hecanjog 5 days ago |
          IMHO the way to achieve this is to pay the upfront cost of building out a small framework for your application, which has lightweight abstractions for common patterns. With some design, a small internal API can be as nice to work with as the kitchen sink abstractions. (Much nicer, too, when it comes to maintenance and debugging.)
          • KronisLV 5 days ago |
            > IMHO the way to achieve this is to pay the upfront cost of building out a small framework for your application

            And then 5 years down the line it has grown into a worse version of the popular alternatives, the original developers are gone and the ones who currently maintain the mess have to pay the price. In corporate or professional contexts, you probably just should pick whatever is popular.

            Though that anecdote about risk management should also have this link alongside it: https://www.robinsloan.com/notes/home-cooked-app/

            When you’re working on something others won’t have to maintain years down the line, thankfully your hands aren’t tied then and you can have a bit more fun.

            For everything else? Svelte, HTMX, jQuery, Vue, React, Angular or whatever else makes sense.

            That said, sometimes I wonder what a world would look like, where the browser would have the most popular options pre-packaged in a way where you wouldn’t need to download hundreds of KB in each site you visit, but you’d get the packages with browser updates. It’d probably save petabytes of data.

            Except seems like we went in the opposite direction, with even CDNs being less efficient in some ways: https://httptoolkit.com/blog/public-cdn-risks/

            • skydhash 5 days ago |
              The thing is that while your application is working well, the library authors will have moved on, and it's up to you to upgrade your application and fix breaking changes. At least with an in-house framework, it's always morphing into something that the company needs. Not saying that there aren't nicer frameworks, but it's always someone else's agenda that happened to align with yours at the time of selection.
              • KronisLV 4 days ago |
                > The thing is that while your application is working well, the library authors would have moved on and it's up to you to upgrade your application and fix breaking changes.

                AngularJS is actually a pretty good argument to support your point, I had to migrate an app off of it (we picked Vue as the successor) and it was quite the pain, because a lot of the code was already a bit messy and the concepts don't carry over all that nicely, especially if you want something quite close to the old implementation, functionality wise.

                On the other hand, jQuery just seems to be trucking along throughout the years. There are cases like Vue 2 to Vue 3 migrations which can also have growing pains, but I think that the likes of Vue, React and Angular are generally unlikely to be abandoned, even with growing pains along the way.

                In that regard, your job as a developer is probably to pick whatever might have the least amount of surprises, the most longevity and the lowest chance of you having to maintain it yourself and instead being able to coast off of the work of others (and maybe contributing, if you have the time), with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.

                Sometimes that might even be reaching for something like SSR instead of making SPAs, depending on what you can get away with. One can probably talk about Boring Technology or Lindy effect here.

                • lelanthran 4 days ago |
                  I think, in view of my previous comment which was made prior to reading this refinement of yours, that it all very much depends on whether you are choosing something that is designed to be replaced vs something that is not.

                  Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).

                  > with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.

                  The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.

                  IOW, small efforts are two-way doors; large efforts (thousands of contributors over 5 years) are effectively one-way doors.

                  • KronisLV 4 days ago |
                    > Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).

                    I agree in principle and strive to do that myself, but it has almost never been my experience with code written by others across bunches of projects.

                    Anything developed in house without the explicit goal of being reusable across numerous other projects (e.g. having a framework team within the org) always ends up tightly coupled to the codebase to a degree where throwing it away is basically impossible. E.g. other people typically build bits of frameworks that infect the whole project, rather than decoupled libraries that can be swapped out.

                    > The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.

                    Because of the above, this also becomes really difficult - you end up with underdocumented and overly specific codebases vs community efforts that are basically forced to think about onboarding and being adaptable enough for all of the common use cases.

                    Instead, these codebases will often turn to shit, due to not enough people caring and not being exposed to enough eyes to make up for whatever shortcomings a small group of individuals might have on a technical level. This is especially common in 5-10 year old codebases that have been developed by multiple smaller orgs along the way (one at a time, then inherited by someone else).

                    Maybe it’s my fault for not working with the mythical staff engineers that’d get everything right, but neither do most people - they work with colleagues that are mostly concerned with shipping whatever works, not how things will be 5 years down the line, and I don’t blame them.

            • lelanthran 4 days ago |
              > And then 5 years down the line it has grown into a worse version of the popular alternatives, the original developers are gone and the ones who currently maintain the mess have to pay the price.

              Isn't that true for using the popular alternative too? At some point the original devs have moved on from $FRAMEWORK v1 to $FRAMEWORK v2 and now you're going to have to do a migration project and hope it doesn't break.

              > When you’re working on something others won’t have to maintain years down the line, thankfully your hands aren’t tied then and you can have a bit more fun.

              I think the implication is, with the in-house library, that the in-house library would be a lot easier to replace or update than a deprecated external alternative.

              IMO, it's all very contextual.

              • sensanaty 4 days ago |
                No one's forcing you to upgrade when the framework does. We still have a Vue 2.7 codebase chugging along just fine and won't upgrade it unless truly necessary.
                • 1dom 4 days ago |
                  > No one's forcing you to upgrade when the framework does.

                  Many large companies have entire departments dedicated to forcing you to keep your code up to date.

                  • Capricorn2481 4 days ago |
                    If you're working for that kind of company then you certainly aren't getting a choice whether to use JQuery or React.
                    • lelanthran 3 days ago |
                      > If you're working for that kind of company then you certainly aren't getting a choice whether to use JQuery or React.

                      Not necessarily. There is probably a tickbox for satisfying some regulation that says "Don't use versions that aren't getting security fixes anymore".

                      In which case, yes, you get the choice to choose between JQuery and $SOMETHING_ELSE but not the choice to remain on unsupported versions of anything.

                      • Capricorn2481 3 days ago |
                        > There is probably a tickbox for satisfying some regulation that says "Don't use versions that aren't getting security fixes anymore"

                        In theory, yes, that would be bad. But we're talking about JS frameworks here, not C++ libraries. Go look at the CVEs for React and you will find 2-3 in the past 10 years that were patched out in minor version upgrades.

                        There is a difference between updates due to security and updates due to wanting to use the newest shiny tool. JS is a slow moving language and browsers are excellent sandbox environments. This combo means browsers still support old versions of a lot of libraries and they are completely secure, save a few examples.

                        So if you're telling me a company is forcing everyone to upgrade to the latest Angular/React/Vue for security reasons, I would say they unfortunately don't know what they're talking about.

        • not_a_bot_4sho 5 days ago |
          > Lightweight development for lightweight applications is a bit of an oxymoron at this time.

          Apt description

      • leptons 5 days ago |
        We're using jQuery on our sites, which score 100% on all Google Lighthouse pagespeed tests. A smaller version of jQuery really wouldn't matter to us; our pages are already extremely fast to load and score amazingly well on any page speed/SEO test.

        About the only place I could see a benefit from this library is maybe in embedded, where space really is an issue. I've created a few IoT devices with web interfaces built into the tiny ROM of the device. A 6KB library is nice, but I'm using Preact with everything gzipped into one single .html file, and my very complex web app hosted on the IoT device is about 50KB total gzipped - including code, content, SVG images and everything - so jQuery or a jQuery substitute isn't going to be a better solution for me, but maybe it fits for someone who doesn't know how to set up the tooling for a React/Preact app.

        • Rapzid 4 days ago |
          To add, I really try to minimize external deps, but if first-load speed were absolutely critical, loading from the jQuery CDN would increase the odds of it already being cached.

          Meh for most places I've worked though.

          • leptons 4 days ago |
            We don't make any external HTTP requests for any library code. jQuery is embedded into the page HTML file, along with all other required library code necessary for the page to start functioning, in one bundle. Nothing that runs below the fold is executed until the page is scrolled. All scripts are deferred, except the required libraries, one of which is jQuery and is loaded in-line in a <script> block in the page <head>. There's a ton of tricks we use to get to a perfect Google Lighthouse score - we also score perfect 100% on mobile too. This isn't a complex web application but we do a lot of cool front-end stuff.
            • Rapzid 4 days ago |
              That's great and fair. Some places are NUTS about first page load speed (and I mean the first time someone has ever visited the site), though, and it really could matter across all deps depending on a ton of other factors.

              Serving super common libs, like jQuery, from the most likely CDN location could maximize the likelihood it's already cached.

              I have never personally worked anywhere this mattered.

              • leptons 4 days ago |
                We provide a website among many other services to our clients. Our clients are very SEO focused, and they will go to Google's Lighthouse (or another testing site) to test their site's page speed, and then they will put in the URL for their competition's website to see how their site compares to their competitors. If they see their page speed score is 1/2 as fast as their competition, they have a reason to leave us and find a better host (whoever their competition is using). We have thousands of clients, so I am managing thousands of individual customized websites based on core "white-label" template code. Page speed matters to us very much, because it matters to our clients.

                Google Lighthouse will complain about every HTTP request, and it doesn't care about CDN caching, because none of the external code will be cached when the test is run. It will tell you to minimize external HTTP requests. This is the same way every page speed test works, not just Google. So including any external dependency will cause the page speed score to go down a bit. Have enough of them and your page speed score ends up being very poor (many other factors can affect this, all of which are detailed in the Lighthouse report). It doesn't matter what the average site visitor experiences if their cache has jQuery in it from some random CDN. The only thing that really matters is that Google is telling our client that their site is performing badly compared to their competitor's site.

                So, my job is to make sure our clients never, ever think about leaving us because of page load speed as measured by Google or any other testing site. Our clients pay us hundreds of dollars every month, some of them pay 10s of thousands depending on their needs (we don't just provide websites). So there is a lot of money at stake. Page speed scores matter very much to us. When our client sees their site is scoring perfect 100% on all Lighthouse tests, and their competitor is scoring a 70%, then we win, and the client has one less reason to leave. We even use this as a selling point to bring on new clients, because we have an absolutely untouchable page speed score compared to our competitors in this space.

                • Rapzid 3 days ago |
                  I'm not sure what to say. I believe you, but you seem to be talking past my point that other companies may prefer to go a different route based on their needs and what they are optimizing for. There are real situations where a CDN may be preferred.
                  • leptons 3 days ago |
                    Companies that are using CDNs to load commonly used libraries aren't actually interested in page load speed scores. They're pursuing a tech trick that was always somewhat of a red herring, and frankly a bit risky. We've experimented with CDNs, and they have actually added stuff to the libraries that shouldn't be there. Trusting a 3rd party to serve your library code isn't great for security.
                    • Rapzid 3 days ago |
                      Right, I didn't say anything about scores. Just adding another point of view.
          • mnutt 4 days ago |
            Using jQuery CDN might have helped with cross-site caching in the past, but now all major browsers have cache partitioning by origin for privacy reasons.
    • oliwarner 5 days ago |
      Mainstream websites are advertising-delivery trash. Don't use them as a benchmark for what we should be doing.
    • karaterobot 5 days ago |
      This argument confuses me. It seems equivalent to saying "with mainstream fast food restaurants selling meals with 1600 calories, why are you making yourself a green salad for lunch?", or saying "with the national debt approaching $35 trillion dollars, why are you shopping around for the best rate on a mortgage?". One answer for all three cases is: I'm not the thing that's big, I'm a different thing that's smaller. Another answer is: if being too large is the problem, then being smaller sounds like a solution.

      But I guess you're really asking why the developer would spend time on rewriting a library. Is that really surprising? Most of programming is rewriting something that's been made before, either because you have to for your job, or because you need it to do something slightly different, or have different performance characteristics, or just want to learn how it's done.

    • EasyMark 5 days ago |
    Embedded system? Or “I don’t need all that stuff for my comic book collection manager” or “minimalism has its own rewards”?
    • knowitnone 5 days ago |
    oh, ok. Let's make things larger then.
    • NotAnOtter 5 days ago |
    Why rewrite an entire code base away from jQuery... and not to native implementations?

    The era of jQuery and its clones is over. People need to move on. If you're ever at the architecture level of your code base and think "What package should I use for DOM manipulation?", you're doing something wrong.

      • skydhash 5 days ago |
        jQuery's API is nice. And its abstractions reflect common sense more than technical implementations. It's another abstraction layer, all right, and not required, but it's so convenient.
      • hu3 4 days ago |
        for htmx, jQuery is amazing

        My current client has a web application written in a lightweight, strongly typed PHP framework, htmx, and sprinkled jQuery.

        Devs move very quickly, the website is blazing fast, and it makes around 140k MRR. It's not small: about 350 database tables and 200 CRUD pages. Business logic is well unit tested.

        You don't need to make jQuery the center of DOM manipulation if your application swaps dom with htmx with all the safety and comfort of a cozy backend.

        It feels magical. And the node_modules folder is smol. Icing on the cake.

        I look forward to jQuery 4 and 5.

        You don't see this kind of architecture in CVs because these people are too busy making money to bother.

        • hit8run 4 days ago |
          Sounds interesting from a tech perspective. What PHP framework is it, and at what abstraction level do you handle forms?
          • hu3 4 days ago |
            Thanks. It's a small custom framework built from libraries, some custom, some third party.

            - File based HTTP router running on top of https://frankenphp.dev/

            - ORM/SQL with: https://github.com/cycle/orm but this is preference. Anything works. From SQL builders to ORMs.

            I'll try to explain their form handling:

            Forms almost always POST to their own GET URL.

            If you GET /user/save you'll get back HTML and `<script>` to build the form.

            If you POST /user/save you're expected to pass the entire form data PLUS an "operation" parameter which is used by the backend to decide what should be done and returned.

            For example if user clicks [add new user] button, the "operation" parameter has value of "btnSubmit.click".

            Why pass operation parameter? Because business forms can have more than just a [submit] button.

            For example, there might be a datagrid filter value being changed (operation: "txtFilter.change"), or perhaps a dropdown search to select a city name from a large list (operation: "textCitySearch.change"), it can be a postal code to address lookup (operation: "txtPostalCode.change"), etc.

            On the backend, the pseudocode looks somewhat like this but it's cleaner/safer because of encapsulation, validation, error handling, data sanitization, model binding and csrf/xss protection:

               function user_save($operation) {
                  $form = new Form('/user/save');
                  $form->add($textName = new Component(...));
                  $form->add($textCitySearch = new Component(...));
                  $form->add($btnSubmit = new Component(...));

                  if ($_SERVER['REQUEST_METHOD'] === 'GET') return $form->getHtml();

                  try {

                     if ($operation === "btnSubmit.click") {
                        $newUser = UserService::createNewUser($_POST);
                        return '<script>' . makeJavaScriptSuccessDialog('New user created!') . '</script>';
                     }

                     if ($operation === "textCitySearch.change") {
                        $foundCities = UserService::searchCities($_POST);
                        return '<script>' . $textCitySearch->getJsToReplaceResultsWith($foundCities) . '</script>';
                     }
                  } catch (\Throwable $exception) {
                     // Services above throw ValidationException() for incorrect input; $form takes that
                     // and generates friendly HTML for users in a centralized way
                     if ($exception instanceof ValidationException) {
                        return '<script>' . $form->getValidationErrorJs($exception) . '</script>';
                     }
                     // code below is actually done by a middleware elsewhere that catches unhandled exceptions,
                     // but I put it here for brevity in this example.
                     logSystemException($exception);
                     return '<script>' . makeJavaScriptErrorDialog('Oops, something went wrong on our end. We will fix it!') . '</script>';
                  }
               }
            
            So the HTML generation and form processing for user creation is handled by a single HTTP endpoint and the code is very straightforward. The locality of behaviour is off the charts, and I don't need 10 template fragments for each form because everything is component-based.
            • hit8run 3 days ago |
              Thanks for the detailed response. Very interesting approach. I didn't know about FrankenPHP! You ever considered pure Go for the backend?
              • hu3 2 days ago |
                Pure Go is amazing. I worked with it in another client and can recommend since they were quite productive with it.

                Simple, predictable, boring tooling and great standard library. I love it.

            • pier25 3 days ago |
              Thanks for sharing FrankenPHP. This thing looks amazing.
  • moffkalast 5 days ago |
    Finally a name that is perfectly fitting and describes the library surprisingly well.
    • elaus 5 days ago |
      Assuming you mean that ironically. Unfortunately, the README doesn't reveal where the name comes from, but it is truly absurdly misleading, as if it came from a random generator...
      • luckylion 5 days ago |
        I assumed it comes from jQuery defaulting to $ as an alias for the jQuery function.
      • moffkalast 5 days ago |
        Not sarcastic at all actually, I take it you've missed the absolute horde of dollar signs it uses in its syntax?

        Reminds me of this old joke: "Why do greedy developers all learn PHP? Because there's a lot of dollars in that."

        • elaus 5 days ago |
          Oh wow, I really didn't make that connection. Thanks!

          Still not sure it really is a good name for a lib: someone who doesn't already know it will probably not think about jQuery when they see this name in a dependency list...

    • EasyMark 5 days ago |
      For some reason I would have preferred they called it “Cash Money”
  • mg 5 days ago |
    Browsers have become so nice to work with, that these days, I get away with just the following two lines of code to simplify DOM manipulation:

        dqs  = document.querySelector.bind(document);
        dqsA = document.querySelectorAll.bind(document);
    
    So instead of

        country = document.querySelector('#country');
        cities  = document.querySelectorAll('.city');
    
    I can write

        country = dqs('#country');
        cities  = dqsA('.city');
    
    For everything else, I am fine with just using the native browser functions.

    I usually import the two functions from a module like this:

        import { dqs, dqsA } from '/lib/js/dqs.js';

    This is the module:

    https://github.com/no-gravity/dqs.js

    • nashashmi 5 days ago |
      I really wish qs and qsa were native, rather than something I have to add.

      FYI: I know you meant to give an example, but elements with an id are global DOM variables as well.

    • wmanley 5 days ago |
      > Browsers have become so nice to work with, that these days, I get away with just the following two lines of code to simplify DOM manipulation:
      >
      >     dqs  = document.querySelector.bind(document);
      >     dqsA = document.querySelectorAll.bind(document);

      Sounds useful and reasonable.

      > I usually import the two functions from a module like this:
      >
      >     import { dqs, dqsA } from '/lib/js/dqs.js';

      Utterly absurd. Just copy and paste. It’s only two simple lines, how could it be worth a dependency?

      • brightball 5 days ago |
        Included in multiple places?
      • 0xCMP 5 days ago |
        Modern browsers support the import syntax natively, so it really shouldn't be a lot of overhead to import it.
        • notpushkin 4 days ago |
          The overhead here would be the need to make another request just for these two functions.

          On the other hand, with bundling though it’s totally fine to have a module just for these two helpers. (Even better if it can be inlined, but I haven’t seen anything supporting this since Prepack, which is still POC I think.)

          • mg 4 days ago |
            AFAIK modern HTTP versions like HTTP/3 can request multiple files in a single network packet. So "another request" is basically free: the request goes out, and the data comes back, in the same packets as other requests.
            • theandrewbailey 4 days ago |
              A network request isn't free, only less costly than it used to be. Even with HTTP/3, your JS execution is stalled for however long the RTT back to the server is. That could be 500+ ms if it's on the other side of the world and doesn't have a CDN.
              • mg 4 days ago |
                Depends on the import tree. The way I understand it, this:

                    import { x } from '/a.js';
                    import { y } from '/b.js';
                
                Does not take longer than this:

                    import { x } from '/a.js';
                
                Because the message to the server "Give me b.js" goes out in the same network packet as "Give me a.js" and the data of b.js comes back in the same packet(s) as the data of a.js.
      • Cyphase 5 days ago |
        It looks like it's intended to be copied and pasted into your codebase, not be an external dependency.
      • 0x457 5 days ago |
        I guess it's meant to be processed by some bundler later.
      • kfajdsl 5 days ago |
        /lib/js/dq.js is part of their codebase.
      • Kiro 5 days ago |
        Bizarre comment. Why would you copypaste this into every file when you can do it once and import it? What's the problem exactly?
        • croes 4 days ago |
          Left-pad anyone?
          • sadeshmukh 4 days ago |
            It's in their codebase.
          • Kiro 4 days ago |
            That's an external dependency. You don't install anything here. It's no different than making any other module you reuse in multiple places.
        • eastbound 4 days ago |
          Because using a JS module will set back your 30-minute project by an entire day.

          Of course, you are supposed to master which of the ES/interop/AMD/require incantations to use. I wish TypeScript had mandated one style of JS module and one style only!!

          And I've never succeeded in finding good guidelines on which kind of JS module I should use. Any advice on what is a very easy, stable, and worth-learning technique to master imports in 2024?

          • genuinelydang 4 days ago |
            What? If you don’t have external dependencies, just remove your bundler/transpiler and rely on browsers to import your code.
          • afavour 4 days ago |
            Your rant is several years out of date. You can use ES imports natively in Node and the browser. I have been very happy doing so.

            Besides, if you’re working with a codebase of non-zero complexity you need imports/require/whatever anyway.

          • jazzypants 4 days ago |
            Use ESM. It's built into the browser. CJS is legacy. Every major JS runtime has module interop built in now.

            Modules aren't hard anymore.

          • nsonha 3 days ago |
            > Because using (a JS) module will set back your 30-minutes project by an entire day

            Maybe learn to do the basics of YOUR JOB for once, some "software engineer".

          • WorldMaker 3 days ago |
            In 2024 the one and only answer is finally just ESM.

            If you need to support the browser use `<script type="module">`. That works natively in every current browser today. You may need an "importmap" for more easily handling dependencies. You may need to spot build with something like esbuild or rollup or rolldown for some of your dependencies if they were written in CJS or to add missing things a browser needs like ".js" file extensions.

            If you need to support Node use the package.json incantation `"type": "module"` and the `"exports"` key instead of `"main"` and get sane modern defaults in all LTS supported versions of Node (and a few past versions now, too). Most imports will "just work", your module files will be sensibly named ".js". If you publish a library, most people can consume it, even if some of them (CJS stalwarts/people trapped in legacy swamps) complain about having to work with Promises some of the time.
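            A minimal package.json sketch of that setup (name and paths illustrative):

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": "./dist/index.js"
  }
}
```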

            If you need to support Deno or Bun, they already have sane defaults and their documentation guides you pretty well, including Typescript setup.

            > I wish Typescript would have mandated one style and one style of JS module only!!

            The good news is that they kind of did: the import/export syntax that TypeScript has made familiar since around TS 1.0/1.5 is "surprise" ESM syntax. TypeScript has been preparing developers to use it all along. You can stop cross-compiling the ESM you've already become used to writing to older, worse formats like CJS and AMD, and let TypeScript do a lot less work on your behalf: just "strip types" rather than "transpile".

        • mrweasel 4 days ago |
          Every file? You have one .js file per project, if you're like me. So just throwing those two lines in the top and never having to worry about it ever again seems like a nice option.
          • NoahKAndrews 4 days ago |
            How big are your projects? It's very strange to me that you would want to have just a single JS file per project. Even if you want to avoid bundlers, ES modules make it easy to import code from other files.
            • mrweasel 3 days ago |
              > How big are your projects?

              In terms of Javascript, as little as I can possibly get away with. The web stuff that I do is mostly CRUD-type apps, which can be done entirely server side. The Javascript comes in only where it makes the user experience better, so basic form help or a modal, things like that.

      • fragmede 5 days ago |
        Because if it ever needs to change, you're in for a world of hurt. Because useful stuff like that is worth sharing elsewhere. It starts with 2 lines, but then there's another useful function you'd like in another file. So you just copy and paste those two lines. But then you want that in a third file. Pretty soon you have this almost-library spread across a bunch of files, and what started as two simple lines is now a mountain of tech debt.

        Maybe you'll never write enough JavaScript to have additional utility functions. You'll probably never need to modify those two lines. But copying and pasting like that makes for quite the code smell. Because if you're copying and pasting that, the question that someone may never actually verbalize to you is: what else in the code is copied and pasted instead of being turned into a shared function in a library?

    • kccqzy 5 days ago |
      I recently attempted to remove React as a dependency just to see what would happen. It turns out different browsers are still incredibly inconsistent when it comes to event handling. For example, the select event on an <input> element somehow doesn't fire at all on Safari in my test, and doesn't fire when the caret is merely moved on some browsers. Using just the native browser functions isn't always fine, even if you don't need React features like components or state or props. It turns out React DOM is valuable because it papers over browser differences.
      • divbzero 5 days ago |
        I haven’t tested myself, but according to MDN the select event on <input> elements should be supported by Safari?

        https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputEl...

        • kccqzy 5 days ago |
          The MDN page you linked to includes a nice selection logger example. It just doesn't work on my Safari (iOS).
          • chrismorgan 4 days ago |
            And does an equivalent in React work? Because I don't believe React does any of the papering-over you describe. My understanding (as a non-user) is that React does, logically, essentially nothing special around event handling.
            • kccqzy 4 days ago |
              I don't remember off the top of my head whether this specific example works in React as I'm not next to a computer. But I remember reading React source code and finding a whole lot of code to handle the select event. (Just found it by doing a GitHub code search on my phone https://github.com/facebook/react/blob/7c8e5e7ab8bb63de91163...)

              In general React has its own event handling code. For one in React the user doesn't even deal with the browser native DOM events but React synthetic events. React also readily creates brand new synthetic events from other browser events. React also sometimes gives different names or behaviors to browser events; the most famous example is that the React onChange event is roughly equivalent to the browser onInput event, but absolutely different from the browser onChange event.

              • chrismorgan 4 days ago |
                Good to know, thanks. I knew it made synthetic events, but thought it was all still 1:1. I see I was completely wrong. I gotta say, yuck. Don't like it, wish they'd taken a more polyfill-like approach.
                • Izkata 4 days ago |
                  IIRC they do that to deal with browser differences and be consistent with things like event bubbling. Probably other benefits as well but that's the one I'm fairly sure I remember from years ago.
            • fiddlerwoaroof 4 days ago |
              There are a couple of edge cases I forget at the moment where react event handlers intentionally behave differently from the DOM handlers with the same name.
    • theandrewbailey 5 days ago |
    • namanyayg 4 days ago |
      This is my favorite trick that I've been using for a long time

      ...I just checked and it turns out I first blogged[0] about it 12 years ago. Time flies.

      [0] https://nmn.gl/blog/javascript-shortcut-for-getelementbyid-a...

      • cr125rider 4 days ago |
        Your “ct” ligatures in your headings have a fun little loop connecting them!
        • namanyayg 4 days ago |
          Haha I'm glad you noticed :) I'm a huge typography nerd.

          The css to do that comes from the Normalize-OpenType.css [0] library

          [0] https://kennethormandy.com/journal/normalize-opentype-css/

          • efilife 3 days ago |
            > Just look at it’s size!

            Should have been "its". Now you are aware of a 12-year-old typo

    • gardenhedge 4 days ago |
      How do people you work with receive this? I'd imagine it could get messy if every dev has their own little things like this
    • nedt 4 days ago |
      querySelectorAll() isn't live. So you could do what I very often do and already convert the result to an array, i.e.

        dqsA = s => Array.from(document.querySelectorAll(s));
      
      The reason I do that so often is that it allows all array methods to be used on the result, like .map() or .filter(), which makes it feel very much like jQuery. YMMV
      • theandrewbailey 4 days ago |

            NodeList.prototype.__proto__ = Array.prototype;
        
        Problem solved without Array.from().
        • cr125rider 4 days ago |
          Does the underlying data structure work okay with that? I would assume there is some sort of lazy iterator involved that may not work with array methods, or only work once.

          This is JavaScript though…

        • jazzypants 4 days ago |
          Prototype pollution is bad. We learned this over a decade ago.
      • mg 4 days ago |
        Good point!

        I wonder what I would have to look for in my codebase in terms of what could break when dqsA starts returning an array instead of a NodeList?

      • WorldMaker 3 days ago |
        The versions of those methods (map/filter/reduce/etc.) that support any iterator (including upgrading NodeList "for free") have passed Stage 4 of the process, which means they will be in the next version of the standard and are already starting to show up in some browsers.

        https://github.com/tc39/proposal-iterator-helpers
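        A small sketch of what those helpers look like once available (requires an engine that ships iterator helpers, e.g. recent Chrome or Node 22+; the function name is made up for illustration):

```javascript
// Iterator.from() upgrades any iterable (arrays, NodeLists, generators)
// to an iterator carrying helper methods like map/filter/take, so no
// intermediate arrays are materialized along the way.
function firstSquaresOver(iterable, limit, count) {
  return Iterator.from(iterable)
    .map((x) => x * x)         // lazy: squares computed on demand
    .filter((x) => x > limit)  // lazy: small values dropped
    .take(count)               // stop early
    .toArray();                // realize the result
}
```

        In a browser the same chain could start from e.g. `document.querySelectorAll('li').values()`.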

    • sekLabs 4 days ago |
      You can use `$(queryGoesHere)` or `$$(queryGoesHere)` from the devtools console too.
  • openrisk 5 days ago |
    Fine as an exercise but for a range of use cases what you really want is the smallest alternative to the bloated reactive js frameworks and alpine.js seems to be occupying that sweet spot.
    • samdixon 5 days ago |
      This seems pretty different from the functionality alpine provides, no?
  • chmod775 5 days ago |
    Here's a stretch goal: use TypeScript template string magic to correctly infer the type of elements. For instance you can statically infer that $('div#name') will be an HTMLDivElement.
    • hinkley 5 days ago |
      Elixir and a few other languages have the pattern matching and type system that could pull that off but not a lot of languages do. Can you do that in typescript? I don’t see how.
      • dimava 5 days ago |
        You can, using an overload like `function $<S extends keyof HTMLElementTagNameMap>(sel: S | `${S}${ ' '|'#'|'.'|'[' }${string}`): HTMLElementTagNameMap[S];` or

            export type inferSelectorElementName<sel extends selector> =
            | string extends sel ? HTMLElement
              : sel extends `${infer A},${infer B}` ? inferSelectorElementName<A | B>
                : sel extends `${infer A}${'.' | '[' | '#'}${infer _}` ? inferSelectorElementName<A>
                  : sel
        
            export type inferElementFromSelector<sel extends selector> =
            | string extends sel ? HTMLElement
              : inferSelectorElementName<sel> extends infer S ?
                S extends '' ? HTMLElement
                  : S extends keyof HTMLElementTagNameMap ? HTMLElementTagNameMap[S]
                    : never
                : never
        
        TS types can go quite deep. Check the Arktype library [https://arktype.io/]; its type definitions are basically TypeScript written in JSON:

            const user = type({
                name: "string",
                platform: "'android' | 'ios'",
                "version?": "number | string"
            })
      • totallykvothe 5 days ago |
        You definitely can do that in TypeScript. The kinds of things you can do with generic inference and string literals are crazy
      • norskeld 5 days ago |
        TypeScript's type system is Turing complete, so you can not only do that, but also some insane stuff like:

        - A SQL database implemented purely in TypeScript type system (https://github.com/codemix/ts-sql)

        - Chess implemented entirely in TypeScript (and Rust) type systems (https://github.com/Dragon-Hatcher/type-system-chess)

        - Lambda calculus in TypeScript type system (https://ayazhafiz.com/articles/21/typescript-type-system-lam...)

    • hoten 4 days ago |
      The package is called `typed-query-selector`. Here it is in action: https://github.com/GoogleChrome/lighthouse/blob/main/types/i...
  • dr_kretyn 5 days ago |
    What does the "modern websites" mean? It honestly sounds like "this only works in the latest chrome, and only on the latest windows and macos".
    • fabiospampinato 5 days ago |
      "modern websites" means IE11+ for Cash; it's a fairly old library.
      • Cannabat 3 days ago |
        I remember using cash about 10 years ago. Was it under a different user back then? Ken wheeler maybe?

        Thanks for your continued work on it!

        • fabiospampinato 3 days ago |
          Yes exactly, at some point I asked to maintain it and kinda redid it. Now I kinda consider it "done", as in "maybe some more work will be put into it, but by and large I don't think it's going to change in the future".
  • hinkley 5 days ago |
    The way I see it once you’ve thinned the polyfills to next to nothing, the enduring feature of jQuery is the automatic list comprehensions. The ability to unselect all of the buttons in a form in a single call is still hard to match elsewhere. That and parent queries.

    The main problem I have with the implementation is that it chooses to fail silently when the list is empty. I’ve fixed too many bugs of this sort, often caused by someone refactoring a DOM tree to do some fancy layout trick after the fact. If I were implementing jquery again today, I’d make it error on empty set by default and add a call chain or flag to fail silently when you really don’t care. I’ve spent a few hours poking around at jQuery seeing what it would take to pull out sizzle and do this, but never took things any farther than that.
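    A minimal sketch of that default-strict behavior (the names and the opt-out flag are hypothetical):

```javascript
// Query helper that throws on an empty match by default, with an explicit
// opt-out for the cases where "no matches" is genuinely fine.
function $must(selector, { optional = false, root = document } = {}) {
  const els = root.querySelectorAll(selector);
  if (els.length === 0 && !optional) {
    throw new Error(`No elements match "${selector}"`);
  }
  return Array.from(els);
}

// usage sketch:
//   $must('#submit');                      // throws if the button is gone
//   $must('.banner', { optional: true });  // fire-and-forget, jQuery style
```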

    At the end of the day jquery is about the old debate of libraries versus frameworks. We’ve been doing SPAs with giant frameworks long enough now for the Trough of Disillusionment to be just around the corner again.

    • luckylion 5 days ago |
      Not having to worry whether some selector matches any elements is part of what makes jQuery attractive to many though. It's very "fire and forget", you send off your command to hide all .foo, and if there are any .foo they will be hidden, and if there are no .foo, nothing happens and you don't need to worry about it, much like CSS. If you write .foo { color: red; } and there isn't any .foo in the document it doesn't do anything but also has no negative side-effects (except that tiny overhead).
      • bussyfumes 5 days ago |
        Just had a script today that fires in 2 contexts, and I ran into an error where the element I attach a handler to doesn't exist in one of the contexts, which breaks JS on the page. Since I already had jQuery as a dependency in the project, in the moment it felt easier to replace the querySelector call with jQuery (instead of checking the querySelector result), which I did. So I second this: the 'fire and forget' part still holds up very well, even though the tree traversal pain points have mostly been solved by browsers.
      • wwweston 5 days ago |
        Exactly this. It's extremely useful to be able to say "with all document elements that match this selector (whether there are any or not), do this."

        It's possible it's also situationally useful to say "if there aren't any document elements that match this selector, error out; otherwise do this with all of them." I'm struggling to imagine a specific situation in which that has compelling advantages (and would be interested in elaboration), but let's say it exists. Then something like this:

            $.fn.mustMatch = function(onNoMatch) {
                if (!this.length) {
                    if(typeof onNoMatch == 'function') onNoMatch(this);
                    else throw new Error("mustMatch failed on selector " + this.selector);
                }
                return this;
            }
        
        would make it easy to explicitly add the guard condition with an invocation like `$("#selector .nonextant").mustMatch().each(function (i, el) { /*do this*/ })`, rather than having it invisibly ride along with every comprehension and making the "whether there are any or not" case harder.

        And if for some reason one were possessed of the conviction that implicit enforcement of this universally within their project outweighed the advantages of explicit options for both ways, it'd probably be better to patch the jQuery lib for that specific project than enforce it as a standard for everyone worldwide.

      • moritzwarhier 5 days ago |
        Optional chaining is widely supported and solves this problem for single-element queries. And querySelectorAll returns an empty NodeList when the selector matches no elements (as long as the string is a valid selector), so forEach needs no guard at all.
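        A sketch of both patterns, with `root` standing in for `document`:

```javascript
// Single element: optional chaining turns "not found" into a no-op.
function focusFirst(root, selector) {
  root.querySelector(selector)?.focus();
}

// Collection: an empty NodeList makes forEach a harmless no-op too.
function hideAll(root, selector) {
  root.querySelectorAll(selector).forEach((el) => { el.hidden = true; });
}
```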
      • almd 5 days ago |
        True, it is tedious, but I have a VS Code shortcut for doing the following (and the same goes for querySelectorAll):

            let foo = document.querySelector('.foo');

            if (foo) { // do thing }

      • hinkley 3 days ago |
        For every one of these we had ten where a button press definitely needed to update a DOM element.

        As I said above, you'd want a way to override the behavior in the few cases where it's inappropriate.

    • spankalee 5 days ago |
      You can modify every item in a query pretty nicely with a one-liner in modern browsers now:

          document.querySelectorAll('input[type=checkbox]').forEach((i) => i.checked = false);
      
      This takes advantage of NodeList.prototype.forEach, available in all modern browsers.

      Many parent queries can be done with element.closest()
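      For example, a jQuery-style `$(el).parents('.card')` lookup becomes (class name illustrative):

```javascript
// closest() walks up from the element itself through its ancestors and
// returns the first node matching the selector, or null.
function enclosingCard(el) {
  return el.closest('.card');
}
```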

    • tored 4 days ago |
      Fully agree, the default should be strict. jQuery-based code requires every developer to be aware of every selector used in the project and to remember to update it when the DOM changes. That is of course impossible.

      I think it is possible to replace the jQuery init function with your own implementation that enforces length.

      • tored 4 days ago |
        Enjoy!

            <html>
            <head>
                <script src="https://code.jquery.com/jquery-4.0.0-beta.2.js"></script>
            </head>
            <body>
            <ul id="list">
                <li>foo</li>
                <li>bar</li>
                <li>gnord</li>
            </ul>
            <script>
                (function ($) {
                    "use strict";
        
                    if ("development") { // stand-in for a real environment check
                        // jQuery strict mode, logs errors on empty selectors if not opting out
                        const getStackTrace = function (error) {
                            const stack = error.stack || '';
                            return stack
                                .split('\n')
                                .map(function (line) {
                                    return line.trim();
                                })
                                .filter(function (line) {
                                    return !!line;
                                });
                        };
        
                        // by calling try() we silence selectors that return empty
                        // problem is that this function runs after the selector
                        $.fn.try = function () {
                            if (this.__store) this.__store.try = true;
                            return this;
                        };
        
                        // thus we use the GC to check when the jQuery object is destroyed
                        const registry = new FinalizationRegistry((store) => {
                            if (!store.try) {
                                console.error(
                                    'Empty result for selector "' + store.selector + '"',
                                    getStackTrace(store.error)
                                );
                            }
                        });
        
                        // override the init method
                        const jQueryInit = $.fn.init;
                        $.fn.init = function (selector, context) {
                            const result = new jQueryInit(selector, context);
                            if (selector && result.length === 0) {
                                const store = {selector: selector, try: false, error: new Error()};
                                result.__store = store;
                                registry.register(result, store);
                            }
                            return result;
                        };
                    } else {
                        $.fn.try = function () {
                            return this;
                        };
                    }
                })(jQuery);
        
                // normal usage, have result, no error
                $("#list li").each(function (i, el) {
                    console.log(el);
                });
        
                // empty result, triggers error log
                $("#nolist li").each(function (i, el) {
                    console.log(el);
                });
        
                // empty result but with try, no error
                $("#trylist li").try().each(function (i, el) {
                    console.log(el);
                });
            </script>
            </body>
            </html>
  • robertoandred 5 days ago |
    Not sure I’d call IE11 a modern browser. Aren’t they leaving more size/speed improvements on the table by supporting it?
    • fabiospampinato 5 days ago |
      > Aren’t they leaving more size/speed improvements on the table by supporting it?

      Only tiny ones; I don't remember the details now, but IE11 ended up providing almost all the same APIs.

  • aargh_aargh 5 days ago |
    From the migration guide I learned a few things that jQuery can do (and Cash can't) that I didn't know about and will probably use sometime:

    https://github.com/fabiospampinato/cash/blob/master/docs/mig...

  • aleclarsoniv 5 days ago |
    I used this initially in a browser extension I'm building. Ended up migrating to a JSX library instead, because jQuery turns into hard-to-reason-about code pretty quickly once you're past “simple app” territory (and I say this as someone who wrote my own jQuery-inspired library[1]). Right tool for the job, as they say.

    [1]: https://github.com/aleclarson/dough

    P.S. If you can cope with jQuery in a medium/large app, good for you. But it's not my cup of tea.

  • AltruisticGapHN 5 days ago |
    I created a similar `$()` utility function for my projects albeit with 10 times less functionality.

    I used the same basic signature for the `$()` function. However, I found that 95% of the time I don't need to chain methods on a collection. There's almost no scenario in which I want to do <collection>.addClass() etc. There are practically ZERO situations in which I would use something like attaching an event to a collection of nodes, since event delegation is more elegant (attach a single event and check event.type and event.target).

    So TLDR I made $() always select a single element with `querySelector()`, which means I could remove the collection/loop from every chained method like addClass() or css() or toggle().

    Point is, unless you write bad code to begin with, you can probably make this significantly smaller by removing the collection handling. The 1% of the time it is warranted to do an addClass() or something else on a bunch of nodes, you can just go native, and if the collection is small enough just call $() on each element.

    PS: I guess the subtext also to my post is sometimes something looks logically elegant, like the ability for any chained method to act on the collection selected by $(), but it may not make any sense in the real world.
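    The delegation pattern described above, sketched as a helper (names hypothetical):

```javascript
// One listener on a container; each event is dispatched by checking
// whether event.target sits inside an element matching the selector.
function delegate(container, type, selector, handler) {
  container.addEventListener(type, (event) => {
    const match = event.target.closest(selector);
    if (match && container.contains(match)) handler(event, match);
  });
}

// usage sketch (browser):
//   delegate(document.body, 'click', 'button.delete', (e, btn) => btn.remove());
```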

  • PikachuEXE 5 days ago |
    I am using another one Umbrella JS https://umbrellajs.com
  • jgalt212 5 days ago |
    The primary reason we keep around and use jQuery is because most pages on our site rely upon datatables.net which relies upon jQuery.
    • exodust 4 days ago |
      Thanks. Didn't know about datatables.net; it looks very useful.

      Looks like it does a great job of dealing with tables on mobile, putting my own manual efforts for that task to shame. I would typically just enable horizontal scrolling on mobile and call it a day. Now I feel a bit guilty about that after seeing the much better ways datatables does it!

  • andai 5 days ago |
    I hear jQuery 4 is a jQuery alternative for modern browsers.
  • geenat 4 days ago |
    For those interested in jQuery alternatives- I've been waiting for jQuery 4.0 soooo long I ended up making my own jQuery with some key differences:

      * Animations, tweens, timelines use pure CSS, instead of jQuery's custom system.
      * Use one element or lists transparently.
      * Inline <script> Locality of Behavior. No more inventing unique "one time" names.
      * Vanilla first. Zero dependencies. 1 file. Under 340 lines.
    
    https://github.com/gnat/surreal
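    The "one element or lists transparently" point can be sketched with a small normalizer (an assumed helper, not surreal's actual implementation): every method converts its input to an array once, so callers never care whether they passed a single node or a NodeList.

```javascript
// Normalize a single element, an array, or a NodeList-like object
// into a plain array that every method can iterate over.
function asElements(x) {
  if (x == null) return [];
  if (Array.isArray(x)) return x;
  if (typeof x.length === 'number') return Array.from(x); // NodeList-like
  return [x]; // single element
}
```

    A method like `addClass` then becomes a one-liner over `asElements(target)` regardless of what the caller selected.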
    • hu3 4 days ago |
      This is great!

      Locality of Behaviour is of special interest to me.

      How is your experience with currentScript.parentElement?

      Last month I did some quick research and my impression was that it wasn't reliable in some probably niche case, but I can't remember which.

      But I didn't investigate much and I'm glad you made it work!

      If I load 3 consecutive scripts, currentScript.parentElement should still work in all browsers, right? As long as the script is not async or a module, which is fine with me.

      SvelteKit had this conversation and they ended up implementing random ids for elements to set their targets:

      https://github.com/sveltejs/kit/issues/2221
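      For reference, the core of the pattern being discussed can be reduced to a tiny helper (hypothetical name; the `doc` parameter exists only to make it testable). The known caveat is exactly the one raised above: `document.currentScript` is null inside async and module scripts, so the fallback path matters.

```javascript
// Resolve the element an inline <script> should act on:
// the parent of the currently executing script tag.
// Returns null for async/module scripts, where currentScript is null.
function scriptParent(doc) {
  const script = doc.currentScript;
  return (script && script.parentElement) || null;
}
```

      In a classic (non-async, non-module) inline script, `scriptParent(document)` gives the enclosing element, which is what makes the Locality-of-Behavior style work without inventing unique ids.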

    • Izkata 4 days ago |
      Conflicting documentation:

          me() is guaranteed to return 1 element (or first found, or null).
          any() is guaranteed to return an array (or empty array).
      
        Array methods
      
          any('button')?.forEach(...)
          any('button')?.map(...)
      
      So does any() always return an array as described near the top, or can it return null as implied by the example below?
  • slmjkdbtl 4 days ago |
    I'm confused, how is this helpful beyond having some aliases for already existing web APIs?
  • k__ 4 days ago |
    In theory, I love all those tiny libs and frameworks.

    In practice, I always need to import some huge a* library that makes the gains from these small alternatives minuscule.

    Framework -> 50KB

    Tiny version of framework -> 5KB

    Lib I need and can't replace -> 1MB

  • nedt 4 days ago |
    Back in the day, when trying to slim down JS, I used https://github.com/filamentgroup/shoestring The main reason was that they offered a custom build that only adds what you really need.

    It looks like cash has that as well, just a bit more hidden in the documentation: https://github.com/fabiospampinato/cash/blob/master/docs/par... If I'd use it, I'd give that a try.

    Somehow I still think going with what browsers offer nowadays is the better option - it's actually really good, and jQuery isn't really needed anymore. Especially when even the small jQuery alternative is still 6kB, while Preact, a React-like lib, is only half the size.

  • nsonha 4 days ago |
    Is it just me who doesn't need jQuery or anything like that anymore? What kind of crazy direct DOM query/manipulation do you need?

    The manipulation should happen on the backing state, and the DOM should just derive from that, such as with data binding.
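    The state-first idea can be sketched in a few lines (all names hypothetical): code mutates only the backing state, and the DOM, represented here by whatever render callback you pass in, is re-derived from it rather than edited in place.

```javascript
// Minimal state store: every set() re-runs render(state), so the
// view is always a function of the state, never mutated directly.
function createStore(initial, render) {
  let state = initial;
  render(state); // initial derivation
  return {
    get: () => state,
    set: (patch) => {
      state = { ...state, ...patch };
      render(state); // view re-derives from the new state
    },
  };
}
```

    In a browser, `render` would write `innerHTML` or update text nodes; the point is that no caller ever touches the DOM except through the render function.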

  • dzonga 2 days ago |
    glad sanity is returning to the world bit by bit

    a lot of apps just need a few reactive interactions, which htmx, Alpine, and cash, or other libraries like hyperscript, Stimulus, etc., fulfill. I mean your standard line-of-business apps. consumer apps might be different, but even those with use-once mechanics, such as Kashi, Polymarket, Reddit, etc., don't need to use React.