Little tip: you might want to add `-webkit-user-select: none` on the container elements to prevent highlighting in Safari when dragging the cursor out of and back over them.
It's working great in Chrome and Firefox though.
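Something like this, presumably (a quick sketch; the container selector is whatever wraps the draggable cards):

/* keep Safari from selecting text while a card is dragged */
.drag-container {
  -webkit-user-select: none; /* prefixed form for older Safari */
  user-select: none;         /* unprefixed form, per the note below */
}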
I was testing on Safari iOS and it looks like the non-prefix version worked there.
// Written from a browser Apple calls Safari on iOS.
This one reminds me of this cool card effect
---
One challenge with a demo like this is that subtle effects may look better, but they are harder to see. So I balanced toward making them visible enough to appreciate, which is more intense than I’d otherwise want.
Would this be possible to achieve in CSS? I presume you'd have a larger box with the blur, but clip it to a smaller box, or something like that.
> This can be solved with CSS. Extend the background blur all the way through the element and then use CSS masks to cut out the actual shape you want.
> With this, you can remove the border (or inset box shadow), and the edge of the glass will look much, much more real.
I tried this and it works! One unfortunate impact is a loss in simplicity. In the example on the page you can apply the non-JavaScript version to pretty much any element and get a nice glass effect with `border-radius` and such still functioning as expected.
Using `clip-path` I'm able to consider background pixels more correctly but it looks like I'd need an extra div and/or some sizing tricks to get everything working exactly as expected.
I'll keep noodling on this and may add an update to the page if a simple solution comes to mind.
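For anyone following along, the shape of the fix looks roughly like this (a sketch with an assumed class name and sizes, not the article's code): oversize the element so the blur can sample real background pixels past the visible edge, then clip it back down to the intended card.

/* Oversize the glass so edge pixels blur against real background */
.glass-oversized {
  position: absolute;
  inset: -32px; /* extend 32px past the intended card on every side */
  backdrop-filter: blur(16px);
  -webkit-backdrop-filter: blur(16px);
  /* Clip back to the rounded rectangle we actually want to show */
  clip-path: inset(32px round 24px);
}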
The dragging works with another bit of JavaScript (the only other bit on the page), which uses a `data-click-drag-area` attribute to define an element that contains draggable children and a `data-click-drag-item` attribute to indicate that a child can be dragged.
The parent must be a 'positioned element' (it must have `position` set to something in CSS), and the children must have `position: absolute`.
I did this in TypeScript; the code is below. You have to call `initDataClickDrag` from another script... if you want to include this script directly, you can just remove the `export` keyword and call `initDataClickDrag()` at the bottom, after it is defined:
export const initDataClickDrag = () => {
  // Get all of the areas we can drag items in
  const dragAreas = document.querySelectorAll("[data-click-drag-area]");
  for (const dragArea of dragAreas) {
    // Only iterate `HTMLElement`s
    if (!(dragArea instanceof HTMLElement)) continue;
    // Get all of the items we can drag
    const dragItems = dragArea.querySelectorAll("[data-click-drag-item]");
    for (const dragItem of dragItems) {
      // Only iterate `HTMLElement`s
      if (!(dragItem instanceof HTMLElement)) continue;
      let isDragging = false;
      let lastCursorX: number | undefined = undefined;
      let lastCursorY: number | undefined = undefined;
      // Mouse down event to start dragging
      const downCallback = (obj: {
        readonly pageX: number;
        readonly pageY: number;
      }) => {
        isDragging = true;
        lastCursorX = obj.pageX;
        lastCursorY = obj.pageY;
      };
      dragItem.addEventListener("mousedown", (e) => {
        downCallback(e);
      });
      dragItem.addEventListener("touchstart", (e) => {
        const touches = e.touches;
        if (touches.length === 0) return;
        downCallback(touches[0]);
      });
      // Mouse move event to scroll while dragging
      const moveCallback = (obj: {
        readonly pageX: number;
        readonly pageY: number;
      }): boolean => {
        if (!isDragging) return false;
        if (lastCursorX === undefined) return false;
        if (lastCursorY === undefined) return false;
        const x = lastCursorX - obj.pageX;
        const y = lastCursorY - obj.pageY;
        const left = dragItem.offsetLeft - x;
        const top = dragItem.offsetTop - y;
        dragItem.style.left = `${left.toString()}px`;
        dragItem.style.top = `${top.toString()}px`;
        // Get dragArea dimensions
        const dragAreaRect = dragArea.getBoundingClientRect();
        // Get element dimensions
        const elementRect = dragItem.getBoundingClientRect();
        if (dragItem.offsetLeft < 0) dragItem.style.left = "0px";
        if (dragItem.offsetTop < 0) dragItem.style.top = "0px";
        if (left + elementRect.width > dragAreaRect.width) {
          // Right boundary
          const left = dragAreaRect.width - elementRect.width;
          dragItem.style.left = `${left.toString()}px`;
        }
        if (top + elementRect.height > dragAreaRect.height) {
          // Bottom boundary
          const top = dragAreaRect.height - elementRect.height;
          dragItem.style.top = `${top.toString()}px`;
        }
        lastCursorX = obj.pageX;
        lastCursorY = obj.pageY;
        return true;
      };
      document.addEventListener("mousemove", (e) => {
        moveCallback(e);
      });
      document.addEventListener(
        "touchmove",
        (e) => {
          const touches = e.touches;
          if (touches.length === 0) return;
          if (!moveCallback(touches[0])) return;
          e.preventDefault();
        },
        { passive: false },
      );
      // Mouse up event to stop dragging
      document.addEventListener("mouseup", () => {
        isDragging = false;
      });
      document.addEventListener("touchend", () => {
        isDragging = false;
      });
    }
  }
};
Forza used a custom UI system based on XAML, and the acrylic elements at the top of the article were largely implemented in a custom shader. For a custom system it was pretty solid. We also built a lot of tech into it related to 3D placement of elements to support diegetic UI.
A general drawback of using web UIs in games is the lack of support for performant 2D vfx. It's something I'd like to tackle in 2025 with [Spark2D](https://github.com/Singtaa/Spark2D).
Why not embed a browser directly, though?
How do you feel about the Flutter-based UI Widgets? (https://github.com/UIWidgets/com.unity.uiwidgets)
In my experience there are two kinds of games: UI is the game or UI supports the game. When UI is the game, the performance bottleneck is almost always text rendering. From a DX POV there are many solutions, but from a performance POV, how can anyone compete with Skia or the native text APIs?
Unity's UI Toolkit, for example, is mesh and shader-based, so it can be highly performant for complex UIs.
Interesting note: If you use OneJS with the NodeJS backend, you can use the wasm version of Skia and render all kinds of cool stuff onto a texture in Unity. Obviously the performance with that is not ideal for animation. But I was able to quickly and easily render music notations and charts on my deployed Unity app on Android using that method.
> Why not embed a browser directly, though? How do you feel about the Flutter-based UI Widgets?
Mostly to avoid overhead in both disk space and runtime performance. QuickJS is only ~20MB to embed. And I have no experience with Flutter, unfortunately. =/
I thought about playing with saturation and I saw some other examples do that. I decided against it. For my article anyways it wouldn’t have mattered as much given that the moon image I used doesn’t have much color. I’d encourage folks making their own glass to play with that though.
That being said, it has more of a place in games, especially in HUDs, where you don't want to occlude the background, since that's the main area the user is interacting with.
Thousands of years of adaptation have honed the brain and the eye to be optimized for this type of view, much more so than for the simple UX you see here on HN.
Not only does the blurred/frosted glass background look better, but it should also be clearer, because that's what we've been designed to process.
Every single article I've read on the matter says higher contrast is more readable. The debate is over how high is 'good enough'.
UX poses as a scientific field when really there is little evidence-based research; it’s all just technical procedure and big words. Technical procedures are illusory: they make you think it’s legit with a bunch of rules, but to be scientific you need evidence. As technical as the rules are, a lot of it is made-up BS.
UX design is one of those BS concepts that litter the world and pose as legitimate. It’s like the food pyramid from the USDA that says refined carbs are the most important part of every meal.
If the debate is over how much contrast, then run some science. Instead UX just debates, and as a result the entire field is made-up conjecture.
https://www.sciencedirect.com/science/article/abs/pii/S01698...
https://www.tandfonline.com/doi/abs/10.1080/0144929041000166...
https://jov.arvojournals.org/article.aspx?articleid=2121593
Though this lacks citations and evidence, it's by a generally accepted expert and authority in the field:
https://www.nngroup.com/articles/low-contrast/
I'm really struggling to understand the connections you're drawing to food.
The food pyramid is based on cherry-picked data and biased experiments influenced by the food industry. This is similar to your cherry-picked data.
Your data measures low contrast vs. high contrast, but really you need to measure high contrast vs. a blurred background.
The human body is designed to desire and consume the maximum amount of feel-good, tasty food for maximum energy, but we are finding that the human body has not evolved to actually accept such massive consumption, despite our desire for such food. Our bodies do not handle the highest-capacity consumption; instead they have narrowly evolved to fill a strangely specific niche.
Same with our eyes. It may seem easier to like high-contrast designs, but our eyes, through millions of years of evolution, are not optimized for high-contrast designs, since those things never existed in nature.
Again, we need evidence-based measurements, which the entire UX field lacks. It’s just a bunch of made-up concepts strung together with little scientific research.
And there’s a lot of research regarding UX, under the term Human-Computer Interaction. The thing is that it easily converges to something like Windows 2000 or macOS Leopard.
But this also gets into another gray area where looking at a design for a UI != using said design to perform important tasks. Hence why prototyping and user tests often run counter to “pretty” interfaces.
I can simply say you’re wrong and I disagree, and you’ve got nothing to move your argument forward.
Things happening in the background being distracting and disorienting is also very subjective. You can lower the translucency of the glass just as you would lower the opacity of a solid color.
My point is that your criticism is far from being objectively true. There are ways of integrating this design element without running into those issues. The screenshot from the Forza game looks fine, for example, and having seen it in action, the background is not distracting. And what you gain is a more interesting and visually pleasing UI than if you were using a solid background. This may be more important in a video game, but it can also be done tastefully in OS and web design.
A good integration would (a CSS sketch follows the list):
- consider how heavily to use the effect so as not to overload the UI
- query for feature support, to avoid adding all the additional decorations if blur isn't available
- query for display and video features, to avoid rendering the effect on devices that likely don't have a capable GPU
- query for user preferences, and serve solid colors with good contrast to users who might struggle with the busier background
- limit the extent of the effects and shadows depending on available space
- tweak the blurring and opacities to ensure good readability in the specific UI, depending on how complex or contrasted the background under the blurred areas will be
- ensure blurs and shadows scale adaptively with the typography, for consistent visuals
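A minimal CSS sketch of the query-related items (the `.glass` class and the specific values are assumptions; note that `prefers-reduced-transparency` still has limited browser support):

/* Fallback: solid color with good contrast */
.glass {
  background: rgb(30 30 30 / 0.95);
}

/* Layer the effect on only when blur is supported and the user
   hasn't asked for reduced transparency */
@supports (backdrop-filter: blur(1px)) or (-webkit-backdrop-filter: blur(1px)) {
  @media (prefers-reduced-transparency: no-preference) {
    .glass {
      background: rgb(30 30 30 / 0.5);
      backdrop-filter: blur(16px);
      -webkit-backdrop-filter: blur(16px);
    }
  }
}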
UX is by definition the design of how a user experiences the complete app and interaction; it's not made or broken by individual stylistic choices.
Light inside frosted glass just travels in a straight line. It will not behave like this: https://blendamator.com/wp-content/uploads/2023/09/schema-ra...
That being said, my example is not acrylic… and it’s not quite glass either, as you mention. It’s more like glass with some artistic license.
Not that this matters much anyway; the effect is cool nonetheless, albeit a little improperly named.
That said, I dislike the use of frosted glass in user interface design, and I feel it was a step backwards for Mac OS when it was added. Designers (hypocritically including myself; I use it too sometimes) are so seduced by it. But in practice it’s never ideal, especially if the content behind the panel is dynamic or user generated.
It’s the brushed metal of the 2010s; I’m surprised that it leaked out of Windows Vista into everything else!
It actually debuted back in 1999, in QuickTime Player, years before Mac OS X:
http://hallofshame.gp.co.at/qtime.htm
The rumor was that Steve Jobs liked this style a lot, so it spread to the operating system despite the widespread external criticism (like the link above).
If you ask me, skeuomorphism makes interfaces more pleasing to use than the minimalistic trend of the past decade+, where everything must be flat. It adds a visual flourish that replicates surfaces and materials we're used to from the real world, making interfaces more familiar and approachable. From a practical standpoint, flat design makes it hard to distinguish elements and their state, whereas when we had 3D buttons, brushed metal and frosted glass, this was very clear.
I think the pendulum is swinging back now, as more designers are shunning the flat look, so hopefully we'll see a resurgence of more human-friendly UIs.
Nowadays I have just one terminal program that is slightly transparent, and I have a nice desktop background that is fairly uniform and almost never gets in the way -- but I never use that one for my Neovim or for ssh-ing into remote machines. Only for fiddling around with stuff myself.
Transparency did look cool but once it prevents you from doing your job well once, it's out the door.
The first example looks beautiful to me, though; I might use it in my next UI.
It works by using a fixed-position pre-blurred (with glass effects) background image: https://webdev.andersriggelsen.dk/aero/bgl.jpg
This is a lot more performant than a live Gaussian blur, but of course it has the drawback of not allowing a dynamic background image.
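In other words, something like this (a sketch; the image URLs are placeholders). Both layers are fixed to the viewport, so the pre-blurred copy stays pixel-aligned with the sharp one:

/* Sharp photo behind everything */
body {
  background: url("bg-sharp.jpg") center / cover no-repeat fixed;
}

/* The same photo, blurred ahead of time */
.aero-window {
  background: url("bg-blurred.jpg") center / cover no-repeat fixed;
}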
This works as expected in Firefox.
https://www.rastergrid.com/blog/2010/09/efficient-gaussian-b...
I updated a few sentences in the article to reflect that uncertainty. Thanks!
This results in flickering when vertically scrolling over abrupt background-color borders, e.g. noticeable in the mobile Twitter UI.
I'm not seeing the "background-attachment: fixed" approach working at all, neither with the CSS nor with the JavaScript solution. The rays stay static, detached from the moving div, just as they were before applying that code.
In both Firefox and Vivaldi, on Windows.
As an embedded developer, I feel this is kind of wasteful. Every client computes an "expensive" blur filter, over and over again? Just for blending with a blurred version of the background image?
I know - this is using the GPU, this is optimized. In the end, this should not cost much (is it really?).
<rant> I feel the general trend with current web development is too much bloat. Simple sites take 5 seconds to load? Heavy lifting on the client? </rant>... but not the author's fault
I really wonder what the frame of reference for "quickly" is there. To me convolution is one of the last-resort techniques in signal processing, given how expensive it is (O(size of input data * size of convolution kernel)). It's of course still much faster than a Gaussian blur, which is still non-trivial to manage at a barely decent 120fps even on huge Nvidia GPUs, but still.
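To put rough numbers on that (my own back-of-envelope arithmetic, not from the thread): a direct 2D Gaussian of radius $r$ reads $(2r+1)^2$ samples per output pixel, while the standard separable implementation does two 1D passes at $2(2r+1)$:

\[
\underbrace{(2r+1)^2}_{\text{direct 2D}} \quad \text{vs} \quad \underbrace{2(2r+1)}_{\text{separable}}, \qquad r = 16:\ 1089 \ \text{vs}\ 66 \ \text{samples per pixel.}
\]

Which is part of why practical blurs are separable (or approximated with repeated box blurs and downsampling) rather than direct 2D convolution.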
Now for a bit of whimsy. It's been said that a picture is worth a thousand words. However, a thousand words uses far less bandwidth. What if we go full-tilt down the energy saving path, replace some images with prose to describe them? What would articles and blog posts look like then?
I know it's not practical, and sending actual images saves a lot of time and effort over trying to describe them, but I like the idea of imagining what that kind of web might look like.
We’d just be sending prompts lol. Styling, CSS, etc. could all receive similar treatment, using a standardized code-generating model and the prompt/seed that generates the desired code.
Just need to figure out how to feed code into a model and have it spit out the prompt and seed that would generate that code in its forward generation counterpart.
The interesting thing here is that the model wouldn’t have to be the one that produces the end result, just -an- end result deterministically produced from the specified seed.
That end result could then act as the input to the user custom model which would add the user specific adjustments, but presumably the input image would be a strong enough influence to guide the end product to be equivalent in meaning if not in style.
Effectively, this could be lossless compression, but only for data that could be produced by a model given a specific prompt and seed, or lossy compression for other data.
It’s a pretty weird idea, but it might make sense if thermodynamic computing or similar tech fulfills its potential to run huge models cheaply and quickly on several orders of magnitude less power (and physical size) than is currently required.
But that will require NAND-scale, room-temperature thermodynamic wells or die-scale micro-cryogenic coolers. Both are a bit of a stretch, but they are only engineering problems rather than out of bounds with known physics.
The real question is whether or not thermodynamic wells will be able to scale, and especially whether we can get them working at room temperature.
If you want to save energy, send less data.
Technically, yes, you could make some savings, but since the images were transferred over an HTTP/1.1 Keep-Alive connection, I don't feel it was such a waste.
Would love to get more data if you have it. It's just that, from the limited work I did in the area, it did not feel worth it to download only the high-res image and do the blur yourself, especially in scenarios where you just need the blurred image + dimensions first in order to prevent the constant annoying visual reflow as images download -- something _many_ websites suffer from even today.
I grew up in the era of 14.4k modems, so I'm used to thinking that network bandwidth is many, many orders of magnitude more scarce and valuable than CPU time.
To me, it's wasteful to download an entire image over the Internet if you can easily compute it on the client.
Think about all the systems you're activating along the way to download that image: routers, servers, even a disk somewhere far away (if it's not cached on the server)... All that just to avoid one pass of processing on data you already had in RAM on the client.
If the goal is to optimize for server bandwidth, wouldn't you still want to send the already-blurred photo? Surely that will be a smaller image than the full-res photo before blurring (while also reducing client-side CPU/OS requirements).
I feel like I am missing something important in your comment.
IMO it really depends on the numbers. I'd be OK if my client downloads 50KB extra data for the already-rendered image but I'll also agree that from 100KB and above it is kind of wasteful and should be computed.
With the modern computing devices we all have -- including 3rd world countries, where a cheap Android phone can still do a lot -- I'd say we should default to computation.
I like to consider myself a guest on a client CPU, GPU, and RAM. I should not eat all their food, leave an unflushed turd in their toilet, and hog the remote control. Be a thoughtful guest that encourages feelings of inviting me back in the future.
Load fast, even when cell coverage is marginal. Low memory so a system doesn't grind to a halt from swapping. Animate judiciously because it's polite. Good algorithms, because everyone notices when their cursor becomes jerky.
So either entertainment is wasteful, or if it's not, spending more compute to make the entertainment better is OK.
I did some WebGL nonsense like https://luduxia.com/showdown/ and https://luduxia.com/whichwayround/ . This is an experimental custom renderer with DoF, subsurface scattering, and lots of other oddities. You are not killed by calculation but by memory access, and how to reduce that in blur operations is well understood.
What there is not is semi-transparent objects occluding each other, because this becomes a sorting nightmare and you would end up having to resolve a whole lot of dependencies dynamically (unless you restrict blending modes). Implementing that in the context of widgets that move on a 2D plane with z-index sorting is enormously easier than in a 3D scene, though.
Redrawing anything that changes in your UI requires GPU computation anyway, and a simple blur is quite efficient to add. It's likely less expensive than any kind of animation of DOM objects that aren't optimized as GPU layers.
Additionally, seeing how nowadays the most simple sites tend to load 1+ MB of JS and trackers galore, all eating at your CPU resources, I'd put that bit of blur for aesthetics very far down on the "wasteful" list.
For reference, for every pixel in the input we need to average roughly 3x² pixels, where 3 is actually π and x is the radius.
This blows up quite quickly. Not enough that my $5K MacBook really breaks a sweat with this example, but GPU performance is one of the most insidious things a dev can accidentally forget isn't so great on other people's devices.
I put a lot of effort into minimizing content. The images are orders of magnitude larger than the page content but should be async. Other assets barely break 20 kB in total aside from the font (100 kB) which should also load async.
I might just be old, from back when this was done on the CPU.
I’ll think about it more this morning and see if I can come up with a UX for this that doesn’t interrupt the flow of the article as harshly.
But I'd be happy with a single one with all at the end :)
On a semi-related note, the best in-game UI I’ve ever seen was in Prey (2017). The little computers and terminals you interact with look amazing, and every single time I used one I was spellbound. The huge amount of effort that goes into small details, in games in particular, is incredible.
Just use the background color + blur + box shadow or border
With blur, shadow, and light rays alone you can already get _really close_ to that Forza image at the top.
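For reference, a minimal recipe along those lines (a sketch; the class name and values are guesses, not the article's):

.glass {
  background: rgb(255 255 255 / 0.1);        /* background color */
  backdrop-filter: blur(12px);               /* blur */
  -webkit-backdrop-filter: blur(12px);
  border: 1px solid rgb(255 255 255 / 0.25); /* border */
  box-shadow: 0 8px 32px rgb(0 0 0 / 0.35);  /* box shadow */
  border-radius: 16px;
}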
I think that's part of why everyone went to flat design: Windows Vista and its glass effects looked great, but they were expensive to compute! Flat designs are aesthetically controversial and can be more difficult to use than skeuomorphic ones, especially for older users [0][1].
Considering that realism can aid in usability, I think it's totally valid to use effects like this on the web.
[0]: https://www.tandfonline.com/doi/abs/10.1080/0144929X.2020.18...
[1]: https://www.sciencedirect.com/science/article/abs/pii/S01419...
CSS is by far my weakest skill in terms of development, so I am completely unaware of the best/worst practices.
With a quick google search, it looks like you can find some which mimic the 'coke bottle bottom' shape with shadow and light.
Aside from that I haven’t jammed ads or trackers into every nook and cranny of my site which helps a lot with perf.
I think I’ve finally cracked why it’s not supported. The official line is that it’s “too expensive” on the CPU, but that doesn’t hold water when the single-core performance of iPhones regularly outpaces Macs.
iOS Safari does one extra “abstraction” phase of an entire web page that allows for instant pinching and zooming of web pages. In order to get background-attachment: fixed working under such a paradigm, you would need to not only calculate where the background image is relative to the viewport, but also the size and placement of a zoomed document in real time. And on top of that, the browser designers would need to make some kind of decision on what a coherent implementation of the feature would even do under such circumstances.
I wish that iOS had a way to just turn off this extra abstraction in CSS altogether. It was a fine crib before responsive design was a thing, but for some pages, it just causes problems now. It’s not needed “everywhere” on the web any more than it’s necessary in all iOS apps.
Though, I feel like there is some level of understanding of HTML/CSS that I will never be able to grasp -- like this demonstration. This person is out here making frosted windows, and I can't even center a div.
The same sort of goes with many visual tricks, although this one is very clean, which makes it all the more impressive.
If I were to make serious use of this on a site, I’d probably opt for the non-JavaScript, one-class version anyway and optimize for simplicity.
However, the contrast between the glass background and the foreground depends on the background content by design, which is a serious issue for complying with various accessibility guidelines. For enterprise apps, if you want to pass the various accessibility reviews, there at least needs to be a user preference to disable this; or just don't use this technique, to guarantee a pass on contrast-related questions.
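One hedged way to honor such a preference in CSS (assumed class name; `prefers-contrast` support varies by browser) is to fall back to an opaque, high-contrast surface when the user asks for more contrast:

@media (prefers-contrast: more) {
  .glass {
    background: rgb(20 20 20); /* opaque, guaranteed contrast */
    backdrop-filter: none;
    -webkit-backdrop-filter: none;
  }
}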
spoiler: and so, I left frontend :D
In many cases this can be the right tradeoff to make. There is also a beauty to its simplicity.