Fetch will reject your GET if it contains a body (a deliberate maintainer decision), even though it's entirely permissible by HTTP and done by many real-world AJAX APIs. Real AJAX will do what it's supposed to. (The 2014 HTTP/1.1 spec says that including a request body in a GET "might cause some implementations to reject the request." Guess which one!)
Advanced features like progress events are completely absent from Fetch as well.
However, there are some fantastic libraries like Axios[1], SuperAgent (requires npm), and, yes, jQuery[2], that have really excellent APIs (far superior to Fetch's), or you could just write your own short wrapper around modern AJAX (or have an LLM do it) and call it a day. h/t to Claude:
const xhr = ['GET','POST','PUT','PATCH','DELETE'].reduce((x, m) => (x[m.toLowerCase()] =
  (u, d, opt = {}) => new Promise((r, j) => {
    const q = new XMLHttpRequest();
    q.open(m, u);
    q.responseType = opt.responseType || '';
    if (opt.headers) Object.entries(opt.headers).forEach(([k, v]) => q.setRequestHeader(k, v));
    if (opt.signal) opt.signal.addEventListener('abort', () => q.abort());
    q.withCredentials = opt.credentials === 'include';
    q.onload = () => r({
      ok: q.status >= 200 && q.status < 300,
      status: q.status,
      // getAllResponseHeaders() returns one CRLF-separated string, which the
      // Headers constructor does not accept, so parse it into [name, value] pairs
      headers: new Headers(
        q.getAllResponseHeaders().trim().split(/[\r\n]+/).filter(Boolean).map(line => {
          const i = line.indexOf(':');
          return [line.slice(0, i), line.slice(i + 1).trim()];
        })
      ),
      text: () => Promise.resolve(q.responseText),
      json: () => Promise.resolve(JSON.parse(q.responseText)),
      blob: () => Promise.resolve(new Blob([q.response])),
      response: q
    });
    q.onerror = () => j(new TypeError('Network request failed'));
    q.send(d instanceof FormData ? d : JSON.stringify(d));
  }), x), {});
This gives you xhr methods with a fetch-style API, and you can still do all the things Fetch can't. It won't do real streaming or cache control like Fetch, but it covers 95% of common use cases in a tiny bit of code. Each method returns a Promise that resolves with a fetch-style response object, with the underlying XMLHttpRequest available as its `response` property, so you get both the Promise functionality and full access to the XHR object in the resolution.
Usage:
xhr.post('/api', { data: 123 }, {
headers: { 'Content-Type': 'application/json' },
credentials: 'include',
signal: abortController.signal
})
.then(res => res.json())
.then(data => console.log(data));
For more advanced AJAX stuff, check out the very powerful and flexible Axios library[1]. And if you don't need AJAX but do want some of the jQuery features (like some of the more unusual selectors) that aren't in Cash (to save bytes!), note that jQuery Slim excludes AJAX (and special effects), which brings the code down to only 69KB[3].
1. Axios https://github.com/axios/axios (41kb)
2. jQuery AJAX https://api.jquery.com/jQuery.ajax/ (87kb but includes ALL of jquery!)
In standard HTTP/1.1, any method can have a request body. In Representational State Transfer (REST) as defined by Dr. Fielding, HTTP doesn't even come up, let alone "methods" per se, so there is no distinction between DELETE, POST, or GET from a REST standpoint, only within HTTP as an engine for hypertext. Further, in HTTP, any of these requests can contain a request body.
But, because of this behavior by the WhatWG for Fetch, the IETF has added this paragraph to the specification for HTTP/1.1:
"A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request."
"Some existing implementations" really just means fetch. The p*ing contest between two groups resulted in a neutered and prescriptive fetch. In other words, it's fetch that is non-standard, and the actual HTTP standard had to be updated to let you know that.
Since then, each iteration of the HTTP specs has strengthened the advice. The most recent revision, the RFC 9110 family, says you SHOULD NOT send a body with a GET request unless you have confirmed in some way that it will be accepted, because otherwise you can't trust it will work.
Fetch was going along with this consensus, not causing the problem.
The pool was muddied; nay, poisoned. And so the solution is the QUERY method. That's how things tend to work in such a space. See also 307 because of 302 being misimplemented.
a jquery alternative
actually the native JavaScript is interesting
window.$ = (x => document.querySelectorAll(x))
window.$ = document.querySelectorAll.bind(document);
Since it works properly for any function no matter the number of arguments it receives. But I think the OP's jQuery replacement is also dropping features in the service of a small footprint. So this was my 80/20 contribution to the "smallest jQuery replacement" problem ;)
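The reason `.bind(document)` is needed at all: querySelectorAll uses `this` internally, and assigning the method to a plain variable detaches it from `document`. The same mechanics can be sketched with any plain object (the `counter` object here is purely illustrative, nothing to do with the DOM):

```javascript
const counter = {
  count: 3,
  // a method that relies on `this`, just like querySelectorAll does
  value(...extra) { return this.count + extra.length; }
};

const detached = counter.value;            // `this` is lost
const bound = counter.value.bind(counter); // `this` is locked to counter

console.log(bound());     // 3
console.log(bound(1, 2)); // 5 -- bind works regardless of argument count
// detached() would throw in strict mode, or silently misbehave otherwise,
// because `this` is no longer `counter`
```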
B. Your two examples provide different things. This is like saying it's OK to include any old multi-megabyte dependency if a site loads a couple of MB worth of images. There's no reason to stop considering the size of the small parts just because you decided you need some large parts. Things add up - that will never stop being a useful thing to remember, in any context.
Lightweight development for lightweight applications is a bit of an oxymoron at this time.
And then 5 years down the line it has grown into a worse version of the popular alternatives, the original developers are gone and the ones who currently maintain the mess have to pay the price. In corporate or professional contexts, you probably just should pick whatever is popular.
Though that anecdote about risk management should also have this link alongside it: https://www.robinsloan.com/notes/home-cooked-app/
When you’re working on something others won’t have to maintain years down the line, thankfully your hands aren’t tied then and you can have a bit more fun.
For everything else? Svelte, HTMX, jQuery, Vue, React, Angular or whatever else makes sense.
That said, sometimes I wonder what a world would look like, where the browser would have the most popular options pre-packaged in a way where you wouldn’t need to download hundreds of KB in each site you visit, but you’d get the packages with browser updates. It’d probably save petabytes of data.
Except seems like we went in the opposite direction, with even CDNs being less efficient in some ways: https://httptoolkit.com/blog/public-cdn-risks/
AngularJS is actually a pretty good argument to support your point, I had to migrate an app off of it (we picked Vue as the successor) and it was quite the pain, because a lot of the code was already a bit messy and the concepts don't carry over all that nicely, especially if you want something quite close to the old implementation, functionality wise.
On the other hand, jQuery just seems to be trucking along throughout the years. There are cases like Vue 2 to Vue 3 migrations which can also have growing pains, but I think that the likes of Vue, React and Angular are generally unlikely to be abandoned, even with growing pains along the way.
In that regard, your job as a developer is probably to pick whatever might have the least amount of surprises, the most longevity and the lowest chance of you having to maintain it yourself and instead being able to coast off of the work of others (and maybe contributing, if you have the time), with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.
Sometimes that might even be reaching for something like SSR instead of making SPAs, depending on what you can get away with. One can probably talk about Boring Technology or Lindy effect here.
Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).
> with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.
The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.
IOW, small efforts are two-way doors; large efforts (thousands of contributors over 5 years) are effectively one-way doors.
I agree in principle and strive to do that myself, but it has almost never been my experience with code written by others across bunches of projects.
Anything developed in house without the explicit goal of being reusable across numerous other projects (e.g. having a framework team within the org) always ends up tightly coupled to the codebase to a degree where throwing it away is basically impossible. E.g. other people typically build bits of frameworks that infect the whole project, rather than decoupled libraries that can be swapped out.
> The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.
Because of the above, this also becomes really difficult - you end up with underdocumented and overly specific codebases vs community efforts that are basically forced to think about onboarding and being adaptable enough for all of the common use cases.
Instead, these codebases will often turn to shit, due to not enough people caring and not being exposed to enough eyes to make up for whatever shortcomings a small group of individuals might have on a technical level. This is especially common in 5-10 year old codebases that have been developed by multiple smaller orgs along the way (one at a time, then inherited by someone else).
Maybe it’s my fault for not working with the mythical staff engineers that’d get everything right, but so don’t most people - they work with colleagues that are mostly concerned with shipping whatever works, not how things will be 5 years down the line and I don’t blame them.
Isn't that true for using the popular alternative too? At some point the original devs have moved on from $FRAMEWORK v1 to $FRAMEWORK v2 and now you're going to have to do a migration project and hope it doesn't break.
> When you’re working on something others won’t have to maintain years down the line, thankfully your hands aren’t tied then and you can have a bit more fun.
I think the implication is, with the in-house library, that the in-house library would be a lot easier to replace or update than a deprecated external alternative.
IMO, it's all very contextual.
Many large companies have entire departments dedicated to forcing you to keep your code up to date.
Not necessarily. There is probably a tickbox for satisfying some regulation that says "Don't use versions that aren't getting security fixes anymore".
In which case, yes, you get the choice to choose between JQuery and $SOMETHING_ELSE but not the choice to remain on unsupported versions of anything.
In theory, yes, that would be bad. But we're talking about JS frameworks here, not C++ libraries. Go look at the CVEs for React and you will find 2-3 in the past 10 years that were patched out in minor version upgrades.
There is a difference between updates due to security and updates due to wanting to use the newest shiny tool. JS is a slow moving language and browsers are excellent sandbox environments. This combo means browsers still support old versions of a lot of libraries and they are completely secure, save a few examples.
So if you're telling me a company is forcing everyone to upgrade to the latest Angular/React/Vue for security reasons, I would say they unfortunately don't know what they're talking about.
Apt description
About the only place I could see a benefit from this library is maybe in embedded, where space really is an issue. I've created a few IoT devices with web interfaces that are built-into the tiny ROM of the device. A 6KB library is nice, but I'm using Preact with everything gzipped in one single .html file and my very complex web app hosted in the IoT device is about 50KB total size gzipped - including code, content, SVG images and everything, so jQuery or a JQ substitute isn't going to be a better solution for me, but maybe it fits for someone that doesn't know how to set up the tooling for a react/preact app.
Meh for most places I've worked though.
Serving super common libs, like jQuery, from the most likely CDN location could maximize the likelihood it's already cached.
I have never personally worked anywhere this mattered.
Google Lighthouse will complain about every HTTP request, and it doesn't care about CDN caching, because none of the external code will be cached when the test is run. It will tell you to minimize external HTTP requests. This is the same way every page speed test works, not just Google. So including any external dependency will cause the page speed score to go down a bit. Have enough of them and your page speed score ends up being very poor (many other factors can affect this, all of which are detailed in the Lighthouse report). It doesn't matter what the average site visitor experiences if their cache has jQuery in it from some random CDN. The only thing that really matters is that Google is telling our client that their site is performing badly compared to their competitor's site.
So, my job is to make sure our clients never, ever think about leaving us because of page load speed as measured by Google or any other testing site. Our clients pay us hundreds of dollars every month, some of them pay 10s of thousands depending on their needs (we don't just provide websites). So there is a lot of money at stake. Page speed scores matter very much to us. When our client sees their site is scoring perfect 100% on all Lighthouse tests, and their competitor is scoring a 70%, then we win, and the client has one less reason to leave. We even use this as a selling point to bring on new clients, because we have an absolutely untouchable page speed score compared to our competitors in this space.
But I guess you're really asking why the developer would spend time on rewriting a library. Is that really surprising? Most of programming is rewriting something that's been made before, either because you have to for your job, or because you need it to do something slightly different, or have different performance characteristics, or just want to learn how it's done.
The era of jQuery and its clones is over. People need to move on. If you're ever at the architecture level of your code base and think "What package should I use for DOM manipulation?", you're doing something wrong.
My current client has a web application written in a lightweight strongly typed php framework, htmx and sprinkled jquery.
Devs move very quickly, the website is blazing fast, and it makes around $140k MRR. It's not small: about 350 database tables and 200 CRUD pages. Business logic is well unit tested.
You don't need to make jQuery the center of DOM manipulation if your application swaps dom with htmx with all the safety and comfort of a cozy backend.
It feels magical. And the node_modules folder is smol. Icing on the cake.
I look forward to jQuery 4 and 5.
You don't see this kind of architecture in CVs because these people are too busy making money to bother.
- File based HTTP router running on top of https://frankenphp.dev/
- ORM/SQL with: https://github.com/cycle/orm but this is preference. Anything works. From SQL builders to ORMs.
I'll try to explain their form handling:
Forms almost always POST to their own GET URL.
If you GET /user/save you'll get back HTML and `<script>` to build the form.
If you POST /user/save you're expected to pass the entire form data PLUS an "operation" parameter which is used by the backend to decide what should be done and returned.
For example if user clicks [add new user] button, the "operation" parameter has value of "btnSubmit.click".
Why pass operation parameter? Because business forms can have more than just a [submit] button.
For example, there might be a datagrid filter value being changed (operation: "txtFilter.change"), or perhaps a dropdown search to select a city name from a large list (operation: "textCitySearch.change"), it can be a postal code to address lookup (operation: "txtPostalCode.change"), etc.
On the backend, the pseudocode looks somewhat like this but it's cleaner/safer because of encapsulation, validation, error handling, data sanitization, model binding and csrf/xss protection:
function user_save($operation) {
    $form = new Form('/user/save');
    $form->add($textName = new Component(...));
    $form->add($textCitySearch = new Component(...));
    $form->add($btnSubmit = new Component(...));

    if ($_SERVER['REQUEST_METHOD'] === 'GET') return $form->getHtml();

    try {
        if ($operation == "btnSubmit.click") {
            $newUser = UserService::createNewUser($_POST);
            return '<script>' . makeJavaScriptSuccessDialog('New user created!') . '</script>';
        }
        if ($operation == "textCitySearch.change") {
            $foundCities = UserService::searchCities($_POST);
            return '<script>' . $textCitySearch->getJsToReplaceResultsWith($foundCities) . '</script>';
        }
    } catch (Exception $exception) {
        // Services above throw ValidationException() for incorrect input; $form takes
        // that and generates friendly HTML for users in a centralized way
        if ($exception instanceof ValidationException) {
            return '<script>' . $form->getValidationErrorJs($exception) . '</script>';
        }
        // code below is actually done by a middleware elsewhere that catches unhandled
        // exceptions, but I put it here for brevity in this example.
        logSystemException($exception);
        return '<script>' . makeJavaScriptErrorDialog('Oops, something went wrong on our end. We will fix it!') . '</script>';
    }
}
So the HTML generation and form processing for user creation are handled by a single HTTP endpoint, and the code is very straightforward. The locality of behaviour is off the charts and I don't need 10 template fragments for each form because everything is component based. Simple, predictable, boring tooling and a great standard library. I love it.
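For flavor, the client side of such an endpoint, assuming htmx, might be nothing more than a form whose controls post back with the operation name. This is a hedged sketch, not their actual markup: the attributes (hx-post, hx-vals, hx-trigger, hx-target) are htmx's documented API, but the field names and targets are invented to match the pseudocode above:

```html
<!-- submitting the form POSTs back to its own GET URL with the operation name -->
<form hx-post="/user/save" hx-vals='{"operation":"btnSubmit.click"}' hx-target="#dialog">
  <input name="name" placeholder="Name">
  <!-- a field that fires its own operation as the user types -->
  <input name="citySearch" hx-post="/user/save"
         hx-vals='{"operation":"textCitySearch.change"}'
         hx-trigger="keyup changed delay:300ms" hx-target="#cityResults">
  <div id="cityResults"></div>
  <button type="submit">Add new user</button>
</form>
<div id="dialog"></div>
```

The backend then swaps in the returned HTML or `<script>` fragment, which is what keeps all the decision-making server side.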
Reminds me of this old joke: "Why do greedy developers all learn PHP? Because there's a lot of dollars in that."
Still not sure it really is a good name for a lib: someone who doesn't already know it will probably not think about jQuery when they see this name in a dependency list...
dqs = document.querySelector.bind(document);
dqsA = document.querySelectorAll.bind(document);
So instead of:
country = document.querySelector('#country');
cities = document.querySelectorAll('.city');
I can write:
country = dqs('#country');
cities = dqsA('.city');
For everything else, I am fine with just using the native browser functions. I usually import the two functions from a module like this:
import { dqs, dqsA } from '/lib/js/dqs.js';
This is the module:
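(The module body didn't survive here; given the two definitions above, it is presumably nothing more than:)

```javascript
// /lib/js/dqs.js -- presumed content, reconstructed from the definitions above
export const dqs = document.querySelector.bind(document);
export const dqsA = document.querySelectorAll.bind(document);
```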
FYI: I know you meant to give an example, but element tags with ID are DOM variables as well.
Sounds useful and reasonable.
> I usually import the two functions from a module like this:
>
> import { dqs, dqsA } from '/lib/js/dqs.js';
Utterly absurd. Just copy and paste. It’s only two simple lines, how could it be worth a dependency?
On the other hand, with bundling though it’s totally fine to have a module just for these two helpers. (Even better if it can be inlined, but I haven’t seen anything supporting this since Prepack, which is still POC I think.)
import { x } from '/a.js';
import { y } from '/b.js';
does not take longer than this:
import { x } from '/a.js';
Because the message to the server "Give me b.js" goes out in the same network packet as "Give me a.js", and the data of b.js comes back in the same packet(s) as the data of a.js. Of course, you are supposed to master which of the ES/interop/AMD/require incantations you're supposed to use. I wish TypeScript had mandated one style of JS module only!!
And I've never succeeded in finding good guidelines on which kind of JS module I should use. Any advice on an easy, stable, worth-learning technique for mastering imports in 2024?
Besides, if you’re working with a codebase of non-zero complexity you need imports/require/whatever anyway.
Modules aren't hard anymore.
Maybe learn to do the basics of YOUR JOB for once, some "software engineer".
If you need to support the browser use `<script type="module">`. That works natively in every current browser today. You may need an "importmap" for more easily handling dependencies. You may need to spot build with something like esbuild or rollup or rolldown for some of your dependencies if they were written in CJS or to add missing things a browser needs like ".js" file extensions.
If you need to support Node use the package.json incantation `"type": "module"` and the `"exports"` key instead of `"main"` and get sane modern defaults in all LTS supported versions of Node (and a few past versions now, too). Most imports will "just work", your module files will be sensibly named ".js". If you publish a library, most people can consume it, even if some of them (CJS stalwarts/people trapped in legacy swamps) complain about having to work with Promises some of the time.
If you need to support Deno or Bun, they already have sane defaults and their documentation guides you pretty well, including Typescript setup.
> I wish Typescript would have mandated one style and one style of JS module only!!
The good news it that they kind of did: the import/export syntax that Typescript has made familiar since around TS 1.0/1.5 is "surprise" ESM syntax. Typescript has been preparing developers into using it all along. You can stop cross-compiling the ESM you've already become used to writing to older, worse formats like CJS and AMD, you can let Typescript do a lot less work on your behalf and just "strip types" more than "transpile".
In terms of Javascript, as little as I can possibly get away with. The web stuff that I do is mostly CRUD type apps, which can be done entirely server side. The Javascript is only where it make the user experience better, so basic form help or to do a modal, things like that.
Maybe you'll never write enough JavaScript to have additional utility functions. You'll probably never need to modify those two lines. But copying and pasting like that makes for quite the code smell. Because if you're copying and pasting that, the question that someone may never actually verbalize to you is: what else in the code is copied and pasted instead of being turned into a shared function in a library?
https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputEl...
In general, React has its own event handling code. For one, the user doesn't even deal with native browser DOM events but with React synthetic events. React readily creates brand new synthetic events from other browser events, and sometimes gives different names or behaviors to browser events; the most famous example is that the React onChange event is roughly equivalent to the browser onInput event, but absolutely different from the browser onChange event.
...I just checked and it turns out I first blogged[0] about it 12 years ago. Time flies.
[0] https://nmn.gl/blog/javascript-shortcut-for-getelementbyid-a...
The css to do that comes from the Normalize-OpenType.css [0] library
[0] https://kennethormandy.com/journal/normalize-opentype-css/
Should have been its. Now you are aware of a 12 year old typo
dqsA = s => Array.from(document.querySelectorAll(s));
The reason I do that very often is that it allows all array methods to be used on the result, like .map() or .filter(), which makes it feel very much like jQuery. YMMV.
NodeList.prototype.__proto__ = Array.prototype;
Problem solved without Array.from(). This is JavaScript, though…
I wonder what I would have to look for in my codebase in terms of what could break when dqsA starts returning an array instead of a NodeList?
type selector = string

export type inferSelectorElementName<sel extends selector> =
| string extends sel ? HTMLElement
: sel extends `${infer A},${infer B}` ? inferSelectorElementName<A | B>
: sel extends `${infer A}${'.' | '[' | '#'}${infer _}` ? inferSelectorElementName<A>
: sel
export type inferElementFromSelector<sel extends selector> =
| string extends sel ? HTMLElement
: inferSelectorElementName<sel> extends infer S ?
S extends '' ? HTMLElement
: S extends keyof HTMLElementTagNameMap ? HTMLElementTagNameMap[S]
: never
: never
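As a sketch of what those conditional types buy you (the `qs` wrapper is hypothetical; the inference happens entirely at compile time):

```typescript
// a querySelector wrapper whose return type is inferred from the selector string
declare function qs<S extends string>(sel: S): inferElementFromSelector<S>

const a = qs('input.city')         // typed as HTMLInputElement
const b = qs('form#signup,input')  // typed as HTMLFormElement | HTMLInputElement
const c = qs('.anything')          // falls back to HTMLElement
```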
TS types may go quite deep
Check out the Arktype library [https://arktype.io/]; its type definitions are basically TypeScript written in JSON:
const user = type({
  name: "string",
  platform: "'android' | 'ios'",
  "version?": "number | string"
})
- A SQL database implemented purely in TypeScript type system (https://github.com/codemix/ts-sql)
- Chess implemented entirely in TypeScript (and Rust) type systems (https://github.com/Dragon-Hatcher/type-system-chess)
- Lambda calculus in TypeScript type system (https://ayazhafiz.com/articles/21/typescript-type-system-lam...)
Thanks for your continued work on it!
The main problem I have with the implementation is that it chooses to fail silently when the list is empty. I’ve fixed too many bugs of this sort, often caused by someone refactoring a DOM tree to do some fancy layout trick after the fact. If I were implementing jquery again today, I’d make it error on empty set by default and add a call chain or flag to fail silently when you really don’t care. I’ve spent a few hours poking around at jQuery seeing what it would take to pull out sizzle and do this, but never took things any farther than that.
At the end of the day jquery is about the old debate of libraries versus frameworks. We’ve been doing SPAs with giant frameworks long enough now for the Trough of Disillusionment to be just around the corner again.
It's possible it's also situationally useful to say "if there aren't any document elements that match this selector, error out, if so do this with all of them." I'm struggling to imagine a specific situation in which that has compelling advantages (and would be interested in elaboration), but let's say it exists. Then something like this:
$.fn.mustMatch = function(onNoMatch) {
if (!this.length) {
if(typeof onNoMatch == 'function') onNoMatch(this);
else throw "mustMatch failed on selector " + this.selector;
}
return this;
}
would make it easy to explicitly add the guard condition with an invocation like `$("#selector .nonextant").mustMatch().each(function (emt) { /*do this*/ })`, rather than having it invisibly ride along with every comprehension and making the "whether there are any or not" case harder. And if for some reason one were possessed of the conviction that implicit enforcement of this universally within their project outweighed the advantages of explicit options for both ways, it'd probably be better to patch the jQuery lib for that specific project than to enforce it as a standard for everyone worldwide.
if (!!foo) { //do thing }
As I said above, you’d want a way to override the behavior in the few cases where it’s inappropriate
document.querySelectorAll('input[type=checkbox]').forEach((i) => i.checked = false);
This takes advantage of iterable NodeList and iterator helpers. Many parent queries can be done with element.closest().
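A hedged example of the closest() pattern (the table markup and class name are invented for illustration):

```javascript
// mark the row containing whatever cell was clicked,
// replacing a jQuery-style .parents('tr') walk
document.querySelector('table').addEventListener('click', (e) => {
  const row = e.target.closest('tr'); // nearest matching ancestor, or the element itself
  if (row) row.classList.toggle('selected');
});
```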
I think it is possible to replace the jQuery init function with your own implementation that enforces length.
<html>
<head>
<script src="https://code.jquery.com/jquery-4.0.0-beta.2.js"></script>
</head>
<body>
<ul id="list">
<li>foo</li>
<li>bar</li>
<li>gnord</li>
</ul>
<script>
(function ($) {
"use strict";
if ("development") {
// jQuery strict mode, logs errors on empty selectors if not opting out
const getStackTrace = function (error) {
const stack = error.stack || '';
return stack
.split('\n')
.map(function (line) {
return line.trim();
})
.filter(function (line) {
return !!line;
});
};
// by calling try() we silence selectors that return empty
// problem is that this function runs after the selector
$.fn.try = function () {
if (this.__store) this.__store.try = true;
return this;
};
// thus we use the GC to check when the jQuery object is destroyed
const registry = new FinalizationRegistry((store) => {
if (!store.try) {
console.error(
'Empty result for selector "' + store.selector + '"',
getStackTrace(store.error)
);
}
});
// override the init method
const jQueryInit = $.fn.init;
$.fn.init = function (selector, context) {
const result = new jQueryInit(selector, context);
if (selector && result.length === 0) {
const store = {selector: selector, try: false, error: new Error()};
result.__store = store;
registry.register(result, store);
}
return result;
};
} else {
$.fn.try = function () {
return this;
};
}
})(jQuery);
// normal usage, have result, no error
$("#list li").each(function (i, el) {
console.log(el);
});
// empty result, triggers error log
$("#nolist li").each(function (i, el) {
console.log(el);
});
// empty result but with try, no error
$("#trylist li").try().each(function (i, el) {
console.log(el);
});
</script>
</body>
</html>
Only tiny ones, I don't remember the details now, IE11 ended up providing almost all the same APIs.
https://github.com/fabiospampinato/cash/blob/master/docs/mig...
[1]: https://github.com/aleclarson/dough
P.S. If you can cope with jQuery in a medium/large app, good for you. But it's not my cup of tea.
I used the same basic signature for the `$()` function. However I found that 95% of the time I don't need to use the chain method on a collection. There's almost no scenario in which I want to do <collection>.addClass() etc. There's practically ZERO situations in which I would use something like attach an event to a collection of nodes, since event delegation is more elegant (attach a single event and check for event.type and event.target).
So TLDR I made $() always select a single element with `querySelector()`, which means I could remove the collection/loop from every chained method like addClass() or css() or toggle().
Point is, unless you write bad code to begin with, you can probably make this significantly smaller by removing the collection handling. For the 1% of the time it is warranted to do an addClass() or something else on a bunch of nodes, you can just go native, or if the collection is small enough, just call $() on each element.
PS: I guess the subtext also to my post is sometimes something looks logically elegant, like the ability for any chained method to act on the collection selected by $(), but it may not make any sense in the real world.
Looks like it does a great job of dealing with tables on mobile, putting my own manual efforts for that task to shame. I would typically just enable horizontal scrolling on mobile and call it a day. Now I feel a bit guilty about that after seeing the much better ways datatables does it!
* Animations, tweens, timelines use pure CSS, instead of jQuery's custom system.
* Use one element or lists transparently.
* Inline <script> Locality of Behavior. No more inventing unique "one time" names.
* Vanilla first. Zero dependencies. 1 file. Under 340 lines.
https://github.com/gnat/surreal
Locality of Behaviour is of special interest to me.
How is your experience with currentScript.parentElement?
Last month I did some quick research, and my impression is that it wasn't reliable in some (probably niche) case, but I can't remember which.
But I didn't investigate much and I'm glad you made it work!
If I load 3 consecutive scripts currentScript.parentElement should still work in all browsers right? As long as it is not async or module, which is fine with me.
SvelteKit had this conversation and they ended up implementing random ids for elements to set their targets:
me() is guaranteed to return 1 element (or first found, or null).
any() is guaranteed to return an array (or empty array).
Array methods
any('button')?.forEach(...)
any('button')?.map(...)
So does any() always return an array as described near the top, or can it return null as implied by the example below?
In practice, I always need to import some huge a* library that makes the gains from these small alternatives minuscule:
Framework -> 50KB
Tiny version of framework -> 5KB
Lib I need and can't replace -> 1MB
It looks like cash has that as well, just bit more hidden in the documentation https://github.com/fabiospampinato/cash/blob/master/docs/par... If I'd use it I'd give that a try.
Somehow I still think going with what the browsers have to offer nowadays is a better option - actually it's really good and jQuery isn't really needed anymore. Especially when even the small jQuery alternative is still 6kB, while Preact, a react like lib, is only half the size.
The manipulation should be on the backing state, and then the dom should just derive from that, such as with data binding.
A lot of apps just need a few reactive interactions, which htmx, Alpine, and Cash, or other libraries like hyperscript, Stimulus, etc., fulfill. I mean your standard line-of-business apps. Consumer apps might be different, but then even those with use-once mechanics such as Kashi, Polymarket, Reddit etc. don't need to use React.