I think it's an uphill battle, but I am hopeful.
Is anyone able to credibly comment on the likelihood that these make it into the standard, and what the timeline might look like?
I'll be first in line to try it out if it ever materializes, though!
It’s 2025; the client doesn’t need to be generic and able to surf and discover the internet like it’s 2005.
The database is consumed via APIs distributed in two parts: first the client (a library), second the data (JSON).
https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi...
Your client is already generic; you just aren't using that functionality:
Creating frameworks and standards to support "true" RESTful APIs is a noble goal, but for most people it's a matter of semantics as they aren't going to change how they are doing things.
A list of words whose meanings have changed, sometimes to the opposite of their original meaning:
https://ideas.ted.com/20-words-that-once-meant-something-ver...
It seems these two discussions should not be conflated: 1. What RESTful originally meant, and 2. The value of RESTful APIs and when they should and shouldn't be used.
Feel free to change the meaning of 'agile' to mean 'whatever' (which is how it's interpreted by 99.99% of the population), but leave things like RESTful alone.
Signed, CEO of htmx
Have you forgotten how XML was all the rage not that long ago?
Also, specific people might not change, but they do retire/die, and new generations might have different opinions...
I'm not suggesting that going back to the original meaning is a bad thing, in fact more power to those who are attempting this. I'm just suggesting that instead of moving the mountain, they could just go around it.
"People won't change" does not imply "people don't change"; "I observe change" does not imply "I cause change."
Dante's Paradiso XVII.37–42 (Hollander translation): "Contingent things [...] are all depicted in the Eternal Sight, / yet are by that no more enjoined / than is a ship, moved downstream on a river's flow, / by the eyes that mirror it."
> Also, specific people might not change, but they do retire/die, and new generations might have different opinions.
Yes, that's certainly the case. "Science advances one funeral at a time." https://en.wikipedia.org/wiki/Planck%27s_principle
This shouldn't be a "war" between "HTML is the original REST" and "JSON is what everyone means today by REST", this should be a celebration together that if these proposals pass we can do both better together. Let User Agents negotiate their content better again. It's good for JSON APIs if the Browser User Agents "catch up" to more features of REST. The JSON APIs can sometimes better specialize in the things their User Agents need/prefer if they aren't also doing all the heavy lifting for Browsers, too. It's good for the HTML APIs if they can do more of what they were originally intended to do and rely on JS less. Servers get a little more complicated again, if they are doing more Content Negotiation, but they were always that complicated before, too.
REST says "resources" it doesn't say what language those resources are described in and never has. REST means both HTML APIs and JSON APIs. (Also, XML APIs and ASN.1 APIs and Protobuf APIs and… There is no "one true" REST.)
Eventually, I would like that audience to be "everyone," but for the time being, the simplest and clearest way to build on the intellectual heritage that we're referencing is to use the term the same way they did. I benefited from Carson's refusal to let REST mean the opposite of REST, just as he benefited from Martin Fowler's usage of the term, who benefited from Leonard Richardson's, who benefited from Roy Fielding's.
The alternative is to have hypermedia for the UI on the one hand, and separately JSON/whatever for the API on the other. But now you have all this code duplication. You can cure that code duplication by just using the API from JavaScript on the user-agent to render the UI from data, and now you're essentially using something like a schema but with hand-compiled codecs to render the UI from data.
Even if you go with hypermedia, using that as your API is terribly inefficient in terms of bandwidth for bulk data, so devs invariably don't use HTML or XML or any hypermedia for bulk data. If you have a schema then you could "compress" (dehydrate) that data using something not too unlike FastInfoSet by essentially throwing away most of the hypermedia, and you can re-hydrate the hypermedia where you need it.
So I think GP is not too far off. If we defined schemas for "pages" and used codecs generated or interpreted from those schemas then we could get something close to ideal:
- compression (though the data might still be highly compressible with zlib/zstd/brotli/whatever, naturally)
- hypermedia
- structured data with programmatic access methods (think XPath, JSONPath, etc.)
The cost of this is: a) having to define a schema for every page, b) the user-agent having to GET the schema in order to "hydrate" or interpret the data. (a) is not a new cost, though a schema language understood by the user-agent is required, so we'd have to define such a language and start using it -- (a) is a migration cost. (b) is just part of implementing in the user-agent. This is not really all that crazy. After all XML namespaces and Schemas are already only referenced in each document, not in-lined.
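To make that concrete, here's a rough sketch of what a dehydrated exchange might look like. The rel="schema" link, the data-schema attribute, and the /schemas/ and /orders/ URLs are all made up for illustration; nothing like this is standardized today:

<!-- The page declares, once, which schema describes its bulk data -->
<link rel="schema" href="/schemas/order-list.v1">

<!-- The server sends only the compact data; the user-agent fetches the
     (cacheable) schema and re-hydrates it into full hypermedia locally -->
<script type="application/json" data-schema="/schemas/order-list.v1">
  [[1001, "pending", "/orders/1001"],
   [1002, "shipped", "/orders/1002"]]
</script>

The hypermedia controls (the /orders/... links) are still in the data; they just aren't spelled out as <a> tags until hydration.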
The insistence on purity (HTML, XHTML, XML) is not winning. Falling back on dehydration/hydration might be your best bet if you insist.
Me, I'm pragmatic. I don't mind the hydration codec being written in JS and SPAs. I mean, I agree that it would be better if we didn't need that -- after all I use NoScript still, every day. But in the absence of a suitable schema language I don't really see how to avoid JS and SPAs. Users want speed and responsiveness, and devs want structured data instead of hypermedia -- they want structure, which hypermedia doesn't really give them.
But I'd be ecstatic if we had such a schema language and lost all that JS. Then we could still have JS-less pages that are effectively SPAs if the pages wanted to incorporate re-hydrated content sent in response to a button that did a GET, say.
https://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.h...
That's the problem here. People need APIs, which means not-for-humans, and so to find an efficient way to get "pages" for humans and APIs for not-humans they invented SPAs that transfer data in not-for-humans encodings and generate or render it from/to UIs for humans. And then the intransigent HATEOAS boosters come and tell you "that's not RESTful!!" "you're misusing the term!!", etc.
Look at your response to my thoughtful comment: it's just a dismissive one-liner that helps no one and which implicitly says "thou shalt have an end-point that deals in HTML and another that deals in JSON, and thou shalt have to duplicate effort". It comes across as flippant -- as literally flipping the finger[0].
No wonder the devs ignore all this HATEOAS and REST purity.
[0] There's no etymological link between "flippant" and "flipping the finger", but the meanings are similar enough.
The essay I linked to somewhat agrees w/your general point, which is that hypermedia is (mostly) wasted on automated consumers of REST (in the original sense) APIs.
I don't think it's a bad thing to split your hypermedia API and your JSON API:
https://htmx.org/essays/splitting-your-apis/
(NB, some people recommend even splitting your JSON-for-app & JSON-for-integration APIs: https://max.engineer/server-informed-ui)
I also don't think it's hard to avoid duplicating your effort, assuming you have a decent model layer.
As far as efficiency goes, HTML is typically within spitting distance of JSON, particularly if you have compression enabled:
https://github.com/1cg/html-json-size-comparison
And it may also be more efficient to generate, because it isn't using reflection:
https://github.com/1cg/html-json-speed-comparison
(Those costs will typically be dwarfed by data store access anyway)
So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.
Hope that's more useful.
> So, all in all, I kind of agree with you on the pointlessness of REST purity when it comes to general purpose APIs, but disagree in that I think you can profitably split your application API (hypermedia) from your automation API (JSON) and get the best of both worlds, and not duplicate code too much if you have a proper model layer.
I've yet to see what I proposed, so I've no idea how it would work out. Given the current state of the world I think devs will continue to write JS-dependent SPAs that use JSON APIs. Grandstanding about the meaning of REST is not going to change that.
As far as the future, we'll see. htmx (and other hypermedia-oriented libraries, like unpoly, hotwire, data-star, etc) is getting some traction, but I think you are probably correct that fixed-format JSON APIs talking to react front-ends is going to be the most common approach for the foreseeable future.
the innovation of hypermedia was mixing presentation information w/control information (hypermedia controls) to produce a user interface (distributed control information, in the case of the web)
i think that's an interesting and crucial aspect of the REST network architecture
1) you write your web page in HTML
2) where you fetch data from a server and would normally use JS to render it you'd instead have an HTML attribute naming the "schema" to use to hydrate the data into HTML which would happen automatically, with the hydrated HTML incorporated into the page at some named location.
The schema would be something like XSLT/XPath, but perhaps simpler, and it would support addressing JSON/CBOR data.
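Roughly, something in this spirit. The attributes (data-src, data-hydrate) are invented for illustration, and the point is that the browser itself, not JS, would do the fetch-and-hydrate step:

<!-- Hypothetical markup: the user-agent fetches JSON/CBOR from data-src,
     applies the transform named by data-hydrate (something XSLT/XPath-like),
     and inserts the resulting HTML into this element -->
<section id="orders"
         data-src="/api/orders?user=42"
         data-hydrate="/schemas/orders.xform">
  <p>Loading orders…</p>
</section>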
if you have a schema then you are breaking the uniform interface of REST: the big idea with REST is that the client (that is, the browser) doesn't know or care what a given end point returns structurally: it just knows that it's hypermedia and it can render the content and all the hypermedia controls in that content to the user
the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas
REST (original sense) does couple your responses to your UI, however, in that your responses are your UI, see https://htmx.org/essays/two-approaches-to-decoupling/
I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.
This doesn't follow. Why is rendering one thing that consists of one document versus another thing that consists of two documents so different that one is RESTful and the other is not?
> this sounds like client side templating to me (some annotated HTML that is "hydrated" from a server) but attached directly to a JSON api rather than having a reactive model
I wouldn't call it templating. It resembles more a stylesheet -- that's why I referenced XSLT/XPath. Browsers already know how to apply XSLT even -- is that unRESTful?
> the necessity of a schema means you are coupling your client and server in a manner that REST (in the traditional sense) doesn't. See https://htmx.org/essays/hateoas
Nonsense. The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server", it's not coupling anything. It's a compression technique of sorts, and mainly one that allows one to reuse API end-points in the UI.
EDIT: Sending the data and the instructions for how to present it separately is no more non-RESTful than using CSS and XML namespaces and Schema and XSLT are.
I think you're twisting REST into pretzels.
> REST (original sense) does couple your responses to your UI, however, in that your responses are your UI, see https://htmx.org/essays/two-approaches-to-decoupling/
How is one response RESTful and two responses not RESTful when the user-agent performs the two requests from a loaded page?
> I may be misunderstanding what you are proposing, but I do strongly agree w/Fielding (https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...) that the uniform interface of REST is its most distinguishing feature, and the necessity of a shared schema between client and server indicates that it is not a property of the proposed system.
You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.
Two documents (requests) vs one request has nothing to do with anything: typical HTML documents make multiple requests to fully resolve w/images etc. What does bear on if a system is RESTful is if an API end point requires an API-specific schema to interact with.
> Browsers already know how to apply XSLT even -- is that unRESTful?
XSLT has nothing to do with REST. Neither does CSS. REST is a network architecture style.
> The schema is sent by the server like any other page. Splitting a thing into two pieces, one metadata and one data, is not "coupling [the] client and server"...
I guess I'd need to see where the hypermedia controls are located: if they are in the "data" request or in the "html" request. CSS doesn't carry any hypermedia control information, both display and control (hypermedia control) data is in the HTML itself, which is what makes HTML a hypermedia. I'd also need to see the relationship between the two end points, that is, how information in one is consumed/referenced from the other. (Your mention of the term 'schema' is why I'm skeptical, but a concrete example would help me understand.)
If the hypermedia controls are in the data then I'd call that potentially a RESTful system in the original sense of that term, i'd need to see how clients work as well in consuming it. (See https://htmx.org/essays/hypermedia-clients/)
> You don't have to link to Fielding's dissertation. That comes across as an appeal to authority.
When discussing REST i think it's reasonable to link to the paper that defined the term. With Fielding, who defined the term, I regard the uniform interface as the most distinguishing technical characteristic of REST. In as much as a proposed system satisfies that (and the other REST constraints) I'm happy to call it RESTful.
In any event, I think some concrete examples (maybe a gist?) would help me understand what you are proposing.
It's an API-specific schema, yes, but the browser doesn't have to know it because the API-to-HTML conversion is encoded in the second document (which rarely changes). I.e., notionally the browser only deals in the hydrated HTML and not in the API-specific schema. How does that make this not RESTful?
Why split them? Just support multiple representations: HTML and JSON (and perhaps other, saner representations than JSON …) and just let content negotiation sort it all out.
https://htmx.org/essays/why-tend-not-to-use-content-negotiat...
What code duplication? If both these APIs use the same data fetching layer, there's no code duplication; if they don't, then it's because the JSON API and the Hypermedia UI have different requirements, and can be more efficiently implemented if they don't reuse each other's querying logic (usually the case).
What you want is some universal way to write them both, and my general stance is that usually they have different requirements, and you'll end up writing so much on top of that universal layer that you might as well have just skipped it in the first place.
We already have the schema language; it’s HTML. Stylesheets and favicons are two examples of specific links that are automatically interpreted by user-agents. Applications are free to use their own link rels. If your point is that changing the name of those rels could break automation that used them, in a way that wouldn’t break humans…then the same is true of JSON APIs as well.
Like, the flaws you point out are legit—but they are properties of how devs are ab/using HTML, not the technology itself.
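To illustrate the rel point (the URLs are made up, and rel="payment" is just an example of an application choosing a rel for itself):

<link rel="stylesheet" href="/site.css">
<link rel="icon" href="/favicon.ico">
<a rel="payment" href="/invoices/42/pay">Pay this invoice</a>

The first two are interpreted automatically by the user-agent; the third is something an application, or automation that knows about it, can key off of, which is exactly the "schema" role HTML already plays.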
Yes.
> It’s 2025; the client doesn’t need to be generic and able to surf and discover the internet like it’s 2005.
No. Where the client is a user-agent browser sort of application then it has to be generic.
> The database is consumed via APIs distributed in two parts: first the client (a library), second the data (JSON).
Yes-ish. If instead of a hand-coded "re-hydrator" library you had a schema whose metaschema is supported by the user-agent, then everything would be better because:
a) you'd have less code,
b) need a lot less dev labor (because of (a), so I repeat myself),
c) you get to have structured data APIs that also satisfy the HATEOAS concept.
Idk if TFA will like or hate that, but hey.
<button action="/users/354" method="DELETE"></button>
over <button action="/users/delete?id=354"></button>?
The first has the advantage of being a little clearer at the HTTP level with `DELETE /users/354`.
Ok, but what is the advantage of being "clear at the HTTP level"?
Correctness is very rarely a bad goal to have.
Also, of course, different methods have different rules, which you know as an SE. For example, PUT, PATCH, and DELETE have very different semantics in terms of repeatability (idempotency) of requests.
[0] https://datatracker.ietf.org/doc/html/rfc7231#section-4.2.1
GET is defined to be safe by HTTP. There have been decades of software development that have happened with the understanding that GETs can take place without user approval. To abuse GET for unsafe actions like deleting things is a huge problem.
This has already happened before in big ways. 37Signals built a bunch of things this way and then the Google Web Accelerator came along, prefetching links, and their customers suffered data loss.
When they were told they were abusing HTTP, they ignored it and tried to detect GWA instead of fixing their bug. Same thing happened again, more things deleted because GET was misused.
GET is safe by definition. Don’t abuse it for unsafe actions.
In this context it meant "misuse"; there's no malicious actor involved. GET should have no side effects, which enables optimisations like prefetching and caching; they used it for an effectful operation (deletion), so prefetching caused a bug. It's the developers' fault for not respecting the guarantees expected from GET.
If they'd used POST, everything would have been fine. There's much less of an argument for using `POST /whatever/delete` rather than `DELETE /whatever`. At this point it's a debate on whether REST is a good fit or not for the application.
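To make the failure mode concrete (illustrative markup and URLs):

<!-- Unsafe: anything that prefetches links (a browser, an accelerator,
     a crawler) will "click" this and delete the account -->
<a href="/account/delete">Delete my account</a>

<!-- Fine: prefetchers don't submit forms, and POST is defined as unsafe,
     so nothing fires it without an explicit user action -->
<form method="post" action="/account/delete">
  <button type="submit">Delete my account</button>
</form>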
It's possible to protect against this using various techniques, but they all add some complexity.
Also, the former is more semantically correct in terms of HTTP and REST.
About 2007 or so there was a case where a site was using GET to delete user accounts. Of course, you had to be logged in to the site to do it, so what was the harm, the devs thought. However, a popular extension made by Google for Chrome started prefetching GET requests for users, so just visiting the account page where you could theoretically delete your account ended up deleting the account.
It was pretty funny, because I wasn't involved in either side of the fight that ensued.
I would provide more detail than that, but I'm finding it difficult to search for it, I guess Google has screwed up a lot of other stuff since then.
on edit: my memory must be playing tricks on me; I think it had to be more around 2010 or 2011 that this happened. At first I was thinking it happened before I started working at Thomson Reuters, but now I think it must have happened within the first couple of years there.
> Implementors should be aware that the software represents the user in their interactions over the Internet, and should be careful to allow the user to be aware of any actions they might take which may have an unexpected significance to themselves or others.
> In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval.
See the "GET scenario" section of https://owasp.org/www-community/attacks/csrf to learn why ignoring the HTTP spec can be dangerous.
Or this blog post: https://knasmueller.net/why-using-http-get-for-delete-action...
https://alexanderpetros.com/triptych/form-http-methods
This is going to be a long term effort, but Alex has the stubbornness to see it through.
My answer is: I'm pretty optimistic! The people on WHATWG have been responsive and offered great feedback. These things take a long time but we're making steady progress so far, and the webpage linked here will have all the status updates. So, stay tuned.
Similarly, any interesting ways you could see other libraries adopting these new options?
i do think it would reduce and/or eliminate the need for htmx in many cases, which is a good thing: the big idea w/htmx is to push the idea of hypermedia and hypermedia controls further, and if those ideas make it into the web platform so much the better
This is native htmx, or at least a good chunk of the basics.
But do we know if Google or Apple have shown any interest? In the end you could still end up with it in WHATWG and Chrome/Safari not supporting it.
Triptych could be it, and it’s particularly interesting that it’s being championed by the htmx developers.
This perspective seems to align closely with how the creator of htmx views the relationship between htmx and browser capabilities.
1. https://www.youtube.com/watch?v=WuipZMUch18&t=1036s 2. https://www.youtube.com/watch?v=WuipZMUch18&t=4995s
Might want to fix that. :)
<form><button type="submit" formaction="/session" formmethod="DELETE"></button></form>
<form action="/session" method="DELETE"><button type="submit"></button></form>
I wish the people behind this initiative luck and hope they succeed, but I don't think it'll go anywhere; the browser devs gave up on HTML years ago, and JavaScript is the primary language of the web.
The partial page replacement in particular sounds like it might be really interesting and useful to have as a feature of HTML, though ofc more details will emerge with time.
Unless it ended up like PrimeFaces/JSF where more often than not you have to finagle some reference to a particular table row in a larger component tree, inside of an update attribute for some AJAX action and still spend an hour or two debugging why nothing works.
1) Add this to your CSS:
@view-transition { navigation: auto; }
2) Profit. Well, not so fast, haha. There are a few details that you should know [1].
* Firefox has not implemented this yet, but it seems likely they are working on it.
* All your static assets need to be properly cached to make the best use of the browser cache.
Also, prefetching some links on hover, like those on a navbar, is helpful.
Add a css class "prefetch" to the links you want to prefetch, then use something like this:
// Lazily prefetch a link the first time the user hovers over it.
document.addEventListener("mouseover", ({ target }) => {
  if (target.tagName !== "A" || !target.classList.contains("prefetch")) return;
  // Remove the class so each link is only prefetched once.
  target.classList.remove("prefetch");
  const linkElement = document.createElement("link");
  linkElement.rel = "prefetch";
  linkElement.href = target.getAttribute("href");
  document.head.appendChild(linkElement);
});
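Then opt links in by giving them that class, e.g. (made-up hrefs):

<nav>
  <a class="prefetch" href="/pricing">Pricing</a>
  <a class="prefetch" href="/docs">Docs</a>
</nav>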
There's more work on prefetching/prerendering going on, but it is a lil green (experimental) at the moment [2].
--
1: https://developer.mozilla.org/en-US/docs/Web/CSS/@view-trans...
2: https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
One of the driving ideas behind Triptych is that, while HTML is insufficient in a couple key ways, it's a way better foundation for your website than JavaScript, and it gets better without any effort from you all the time. In the long run, that really matters. [1]
[0] https://developer.chrome.com/blog/paint-holding [1] https://unplannedobsolescence.com/blog/hard-page-load/
The idea of using PUT, DELETE, or PATCH here is entirely misguided. Maybe it was a good idea, but history has gone in a different direction so now it's irrelevant. About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems. This inconsistency led to unpredictable failures, varying by website, network, and the specific proxy or caching software in use.
The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.
Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing. The entire internet infrastructure operates on these semantics, with little to no consideration for other HTTP verbs. Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.
Please let's just use what already works. GET for reading, POST for writing. That’s all we need to define transport behavior. Any further differentiation—like what kind of read or write—is application-specific and should be decided by the endpoints themselves.
Even the <form> element’s "action" attribute is built for this simplicity. For example, if your resource is /tea/genmaicha/, you could use <form method="post" action="brew">. Voilà, relative URLs in action! This approach is powerful, practical, and aligned with the infrastructure we already rely on.
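Spelled out, with the page at /tea/genmaicha/ (the same made-up resource as above):

<!-- On the page /tea/genmaicha/ -->
<form method="post" action="brew">
  <button type="submit">Brew</button>
</form>
<!-- submitting this issues: POST /tea/genmaicha/brew -->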
Let’s not overcomplicate things for the sake of theoretical perfection. KISS.
This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]
> The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.
This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]
> Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.
What you're describing isn't the de facto standard; it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.
> Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.
You're not making an actual argument here, just asserting that it takes time—I agree—and that it has no value—I disagree, and wrote a really long document about why.
[0] https://alexanderpetros.com/triptych/form-http-methods#ref-6
[1] https://alexanderpetros.com/triptych/form-http-methods#rest-...
[2] https://alexanderpetros.com/triptych/form-http-methods#usage...
[3] https://alexanderpetros.com/triptych/form-http-methods#appli...
I see no such thing in the link you have there. #ref-6 starts with:
> [6] On 01/12/2011, at 9:57 PM, Julian Reschke wrote: "One thing I forgot earlier, and which was the reason
But the link you have there [1] does not contain any such comment. Wrong link?
[1] https://lists.w3.org/Archives/Public/public-html-comments/20...
(will reply to other points as time allows, but I wanted to point out this first)