it's -obvious- things are mostly "better"/can be less "annoying" when money/resources are not a concern. i too would like to spend all my time in a world with no scarcity.
the engineering challenge is finding alignments where "better for reader" overlaps with "better for writer" - as google did with doubleclick back in the day.
Besides, just google analytics or something like that wouldn't be that bad (I know the blog author would disagree). A lot of sites go nuts and have like 20 different trackers that probably track the same things. People just tack stuff on, YAGNI be damned, that's a big part of the problem and it's a net drain on both parties.
Google Analytics is the worst. Not on any individual website, but because it is almost everywhere. So Google has been getting everyone's web history for more than a decade.
Add Android, Gmail, the social "share" or "login with" integrations, and any Stasi member would have called you delirious for thinking this kind of surveillance apparatus was possible. Even more so that people would willingly accept it.
The bottom line is if you hate ad-based businesses, start paying for things.
No. If your business model requires you to do evil things, your business should not exist.
Anyway, I do pay for services that provide value. I was a paying Kagi customer until recently, for example (not thrilled with the direction things are going there now though).
https://old.reddit.com/r/ukraine/comments/1gvcqua/psa_the_ka...
Either way, thanks for sharing.
The mean value of adverts on a page is in the order of a tiny fraction of a cent per reader, which is presumably enough for the businesses that continue to exist online. If it was possible to pay this amount directly instead, and have an ad-free experience, I suspect many would do so, as the cumulative amount would usually be negligible. Yet so far, no-one’s figured it out.
(I should mention, there are very strong reasons why it’s difficult to figure out currently, but AIUI these are rooted in the current setup of global payments and risk management by credit card companies.)
Music and video streaming services are syndicating content from millions of creators into single subscription services. Why is it so impossible to make mega conglomerates for textual content? Why is nobody doing this?
Right now, creators are forced to make YouTube videos, because that's their most viable path to getting paid for their work. Why does it have to be this way, when a lot of what they do would be better as text instead of as a talking head?
I guess the truth is that the large subscription 'streaming' services (Spotify, YouTube, Netflix, etc.) are effectively micropayment systems, just not quite as transparent and/or direct as the concept I'd envisioned.
As to why there's no 'Spotify for newspapers/magazines/blogs', I don't know. We're definitely not the first to consider the question. Maybe the economics (too few customers?) doesn't make sense? Maybe there's resistance to it amongst socially- and politically-connected owners and journalists who like their position in society? Maybe because it would presumably require centralisation (in terms of where and how it was consumed, akin to using the Spotify app to listen to music) and ultimately commoditisation of the media? Maybe the modern drift away from reading and longer-form media makes it unattractive, leading to a quality drift to the bottom?
A better way to characterize what's happening is that there is a lot of material out there that no one would ever pay for, so those companies instead try to get people's attention and then sell it.
Their bait never was and never will be worth anything. People aren't "paying with ads"; they're being baited into receiving malware, and a different group of people pay for that malware delivery.
Don't they have enough money?
But no matter the cost of a thing, you can always "make more" by adding ads and keeping the cost as is. So eventually, every service seems to decide that, well, you DESERVE the ads, even if you pay.
Sure, competition could solve this, but often there isn't any.
The only problem to be solved here is the fact advertisers are the ones paying the people who make web pages. They're the ones distorting the web into engagement maximizing content consumption platforms like television.
I think a lot of people outside of HN would prefer that Internet way more than what we have now.
My first for pay project was enhancing a Gopher server in 1993.
Don’t romanticize the early internet.
The point being made here is that it wasn’t perfect before but for many it was better.
There were unscrupulous people posting on Usenet for monetary gain before the web
What did they disable exactly?
On the first or second page view of any particular blog, the platform likes to greet you with a modal dialog to subscribe to the newsletter, and you have to find and click the "No thanks" text to continue.
Once you're on a page with text content, the header bar disappears when you scroll downward but reappears when you scroll upward. I scroll a lot - in both directions - because I skim and jump around, not reading in a rigidly linear way. My scrolling behavior is perfectly fine on static/traditional pages. It interacts badly with Substack's "smart" header bar, whose animation constantly grabs my attention, and also it hides the text at the top of the page - which might be the very text I wanted to read if it wasn't being covered up by the "smart" header bar.
Your argument is that writers do this because of "economics", but to the detriment of readers. I don't see how this extends only to HN readers. It applies to all readers in general.
If you give great customer service, you get great customers – and they don't mind paying a premium.
If you're coercing customers, then you get bad customers – and they are much more likely to give you trouble later.
Most business owners are your run of the mill dimwits, because we live in a global feudal economic system – and owning a business doesn't mean you are great at sales or have any special knowledge in your business domain. It usually just means you got an inheritance or that you have the social standing to be granted a loan.
Where I think the post hits on something real is the horrible UI patterns. Those floating bars, weird scroll windows, moving elements that follow you around the site. I don't believe these have been AB tested and shown to increase engagement. Those things are going to lose you customers. I genuinely don't understand why people do this.
Or I guess at that point, you just don’t do styles?
Of all the things some people don’t do with their webpage, I’m the biggest fan of not doing visual complexity.
brave://settings/?search=Speedreader
https://support.brave.com/hc/en-us/articles/360045031392-Wha...
> Web page annoyances that I don't inflict on you here / I don't use visitor IP addresses outside of a context of filtering abuse.
This point bit me personally about 5 years ago. As I browsed HN at home, I found that links to her website would not load - I would get a connection timed out error. Sometimes I would bookmark those pages in the hopes of reading them later. By accident, I noticed that her website did load when I was using public Wi-Fi or visited other people's homes.
I assumed it was some kind of network routing error, so I emailed my Canadian ISP to ask why I couldn't load her site at my home. They got back to me quickly and said that there were no networking problems, so go email the site operator instead. I contacted Rachel and she said - and this is my poor paraphrasing from memory - that the IP ban was something she intentionally implemented but I got caught as a false positive. She quickly unbanned my IP or some range containing me, and I never experienced any problems again. And no, I never did anything that would warrant a ban; I clicked on pages as a human user and never botted her site or anything like that, so I'm 100% sure that I was collateral damage for someone else's behavior.
The situation I saw was a very rare one, where I'd observe different behaviors depending on which network I accessed her site from. Sure, I would occasionally see "verification" requests from megacorps like Google/CAPTCHA, banks, Cloudflare, etc. when I changed networks or countries, but I grew to expect that annoyance. I basically never see specific bans from small operators like her. I don't fault her for doing so, though, as I am aware of various forms of network and computer system abuse, and have implemented a few countermeasures in my work sporadically.
> I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
Agreed, but I would like HN users to submit the HTTPS version. I'm not doing this to virtue-signal or anything like that. I'm telling you, a number of years ago when going through Atlanta airport, I used their Wi-Fi and clicked on a bunch of HN links, and the pages that were delivered over unsecured HTTP got rewritten with injections of the ISP's ads. This is not funny and we should proactively prevent that by making the HTTPS URL be the default one that we share. (I'm not against her providing an HTTP version.)
As for everything else, I am so glad that her web pages don't have fixed top bars, the bloody simulated progress bar (I like my browser's scrollbar very much thank you), ample visual space wasted for ads (most mainstream news sites are guilty), space wasted mid-page to "sign up to my email newsletter", modal dialog boxes (usually also to sign up to newsletter), etc.
It's probably reasonable to use HSTS to force https-aware browsers to upgrade and avoid injection of all the things she hates. Dumb browsers like `netcat` are not harmed by this at all. But even then ... why aren't you using `curl` or something?
There's a broad spectrum between a browser that is "aware" of https and a browser that has all the cipher suites, certificates, etc to load a given page.
Unless I'm at work, where there are compliance checkboxes to disallow old SSL versions, I'll take whatever you have.
At least if you use HTTP it is blatantly insecure.
Thanks for mentioning this, because I was having the same issue and I was surprised no one was mentioning that the site was (appeared to be) down. Switching to using a VPN made the post available to me.
I use an extension called "Bar Breaker" that hides these when you scroll away from the top/bottom of the page.[0] More people should know about it.
[0] https://addons.mozilla.org/en-US/firefox/addon/bar-breaker/
My problem is that when reading on my laptop screen, it takes up valuable vertical space on a small display that is in landscape mode. I want to use my screen's real estate to read the freaking content, not look at your stupid branding bar.
And I don't need any on-page assistance to jump back to the top of the page and/or find the navigation. I have a "Home" key on my keyboard and use it frequently.
On MacOS: Click the top part of the scroll bar
Just kill the fucking dickbar.
It would be better to have a single extension like uBlock Origin to handle the browser compatibility, and then release the countermeasures through that. In fact, uBlock already has "Annoyances" lists for things like cookie banners, but I don't think it includes the dick bar, unfortunately.
Incidentally, these bars are always on sites where the navbar takes 10% vertical space, cookie banner (full width of course) takes another 30% at the bottom, their text is overspaced and oversized, the left/right margins are huge so the text is like 50% of the width... Don't these people ever look at their own site? With many of these, I'm so confused how anyone could look at it and say it's good to go.
1. JS disabled by default, only enabled on sites I choose
2. Filter to fix sites that mess with scrolling:
##html:style(scroll-behavior: auto !important;)
3. Filters for dick bars and other floating elements:
##*:matches-css(position:fixed)
##*:matches-css(position:sticky)
##[class*="part of the name of the annoying class, generally sticky something"]
That last rule is amazing for dealing with randomly generated class names.
— Joel Spolsky, What is the Work of Dogs in this Country? (2001): <https://www.joelonsoftware.com/2001/05/05/what-is-the-work-o...>
In my practice, I'll try a Jedi mind trick, e.g. "Trying to [state larger goal] makes a lot of sense. An even more effective way to do that is to [state alternate, non-toxic technique]."
Like the User-Scripts of Greasemonkey?
https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/
I'm happy to learn something new about other people's preferences, though. If people prefer scrolling to the top, so be it!
EDIT: It occurs to me that this could be a preference setting. A few of the websites that have let me have my way, I've started generating CSS from a Django template and adding configuration options to let users set variables like colors--with really positive feedback from disabled users. At a fundamental level, I think the solution to accessibility is often configurability, because people with different disabilities often need different, mutually incompatible accommodations.
There can be a logic to keeping the header at the top like a menu bar, and I applaud you if you take an approach that focuses on value to the user. Though I'd still say most sites that use this approach, don't have a strong need for it, nor do they consider smaller viewports except for portrait mobile.
Configuration is great, though it quickly runs into discoverability issues. However it is the only way to solve some things - like you pointed out with colors. I know people who rely on high contrast colors and others that reduce contrast as much as they effectively can.
Unfortunately, the web today has strayed far from its original vision. Yet, we continue to rely on the foundational technologies that were created for that very vision.
If browsers catered to their users' desires more than they cater to developers, the web wouldn't be so shitty.
The primary benefit of web applications is they don't lose your data. Not a single web application UI that exists provides as good a user experience as the native desktop applications that came before. A web where browsers provided their own UIs for various document types, and those document types could not modify their UIs in any way, period, would be a better web. You serve up the document, I get to control how it looks and behaves.
Browsing without reader view enabled by default is like driving your car around with the hand brake engaged.
The biggest problem for me is the randomness between different sites. It's not a problem for Firefox to display a header when I scroll up, since I can predict its behaviour. My muscle memory adapts by scrolling up and then down again without conscious thought. It's a much bigger problem if every site shows its header slightly differently.
I think the key thing is that when I scroll up, 95% of the time I want to see the text up the page, and at most maaaaaaaybe 5% of the time I want to open the menu. This is especially true if I got to your website via a search engine. I don't give a damn what's hidden in your menu bar unless it's the checkout button for my shopping cart, and even then I'd prefer you use a footer for that.
But for UX: (1) Keep it small and simple! It shouldn't be more than ~2 lines of text. (2) Make it CSS-only; if you have to use custom JS to achieve a certain effect, be ready to spend a LOT of time to get the details right, or it'll feel janky. (3) Use `scroll-padding` in the CSS to make sure links to sections/etc work correctly.
I have built a handful of personal sites at this point with no JS, and it's really amazing what modern CSS can do. My favorite trick is using tabindex=0 and :focus-within to make dropdowns (using :focus doesn't handle submenus).
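For anyone curious, a minimal sketch of that dropdown trick, with a scroll-padding rule thrown in (the markup and class names are purely illustrative):
<nav class="menu">
  <div class="item" tabindex="0">
    Products
    <ul class="dropdown">
      <li><a href="/widgets">Widgets</a></li>
    </ul>
  </div>
</nav>
<style>
  /* Hidden until the item or anything inside it has focus; unlike :focus,
     :focus-within stays true while a nested submenu link is focused. */
  .dropdown { display: none; }
  .item:focus-within .dropdown { display: block; }
  /* Keep in-page #anchor links from landing under a ~3em sticky header. */
  html { scroll-padding-top: 3em; }
</style>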
But why this one?
>I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
What is wrong with redirecting 80 to 443 in today's world?
Security wise, I know that something innocuous like a personal blog is not very sensitive, so encrypting that traffic is not that important. But as a matter of security policy, why not just encrypt everything? Once upon a time you might have cared about the extra CPU load from TLS, but nowadays it seems trivial. Encrypting everything arguably helps protect the secure stuff too, as it widens the attacker's search space.
These days, browsers are moving towards treating HTTP as a bug and throwing up annoying propaganda warnings about it. Just redirecting seems like the less annoying option.
She accepts http AND https requests. So it's your choice, you want to know who you're talking to, or you want speed :)
It means that HTTP/2 will likely degrade performance because of the TLS handshake, and you won't benefit from multiplexing because there is not much to load in parallel. The small improvement in header size won't make up for what TLS adds. And this is just about network latency and bandwidth. HTTP/2 takes a lot more CPU and RAM than plain HTTP/1.1. Same thing for HTTP/3.
Anyways, it matters even less here because this website isn't lacking SSL/TLS, it just doesn't force you to use it.
In terms of round trips, HTTP/1.1 without TLS will do one less than HTTP/2 with TLS, and as much as HTTP/3 with TLS.
For both sides, you need to continually agree on root certificates (think of how the ISRG had to gradually introduce itself to the world - first through cross-signing, then as a root), protocol versions (e.g. TLSv1.3), and cipher suites.
For the server operator specifically, you need to find a certificate authority that works for you and then continually issue new certificates before the old one expires. You might need to deal with ordering a revocation in rare cases.
I can think of a few reasons for supporting unsecured HTTP: People using old browsers on old computers/phones (say Android 4 from 10 years ago), extremely outdated computers that might be controlling industrial equipment with long upgrade cycles, simple HTTP implementations for hobbyists and people looking to reimplement systems from scratch.
I haven't formed a strong opinion on whether HTTPS-only is the way to go or dual HTTP/HTTPS is an acceptable practice, so I don't really make recommendations on what other people should do.
For my own work, I use HTTPS only because exposing my services to needless vulnerabilities is dumb. But I understand if other people have other considerations and weightings.
That's a fair point. HTTP changes more slowly. Makes sense for sites where you're aiming for longevity.
I'm not saying SSL isn't complicated, it absolutely is. And building on top of it for newer HTTP standards has its pros and cons. Arguably though, a "simple" checkbox is all you would need to support multiple types of SSL with a CDN. Picking how much security you need is then left as an exercise to the reader.
... that said, is weak SSL better than "no SSL"? The lock icon appearing on older clients that aren't up to date is misleading, but then many older clients didn't mark non-SSL pages as insecure either, so there are tradeoffs either way. But enabling SSL by default doesn't have to exclude clients necessarily. As long as they can set the time correctly on the client, of course.
I've intentionally not mentioned expiring root CAs, as that's definitely an inherent problem to the design of SSL and requires system or browser patching to fix. Likewise https://github.com/cabforum/servercert/pull/553 highlights that some browsers are very much encouraging frequent expiry and renewal of SSL certificates, but that's a system administration problem, not technically a client or server version problem.
As an end user who tries to stay up to date, I've just downloaded recent copies of Firefox on older devices to get an updated list of SSL certificates.
My problem with older devices tends to be poor compatibility with IPv6 (an addon in XP SP2/SP3, not enabled by default), and that web developers tend to use very modern CSS and web graphics that aren't supported on legacy clients. On top of that, you've got HTML5 form elements, what displays when responsive layouts aren't available (how big is the font?), etc.
Don't get me wrong, I love the idea of backwards compatibility, but it's a lot more work for website authors to test pages in older or obscure browsers and fix the issues they see. Likewise, with SSL you can test on a legacy system to see how it works or run the Qualys SSL checker, for example. Browsers maintain backwards compatibility but only to a point (see ActiveX, Flash in some contexts, Java in many places, the <blink> tag, framesets, etc.)
So ultimately compatibility is a choice authors make based on how much time they put into testing for it. It is not a given, even if you use a subset of features. Try using Unicode on an early browser, for example. I still remember the rails snowman trick to get IE to behave correctly.
Oh, if only TLS was that simple!
People fork TLS libraries, make transparent changes (well, they should be), and suddenly they don't have compatibility anymore. Any table with the actually relevant data would be huge.
We would be constantly trying to finish a home we could actually use, and forget about fruits or wood agriculture.
There's something deeply broken about computers. And that's from someone deeply on the camp that "yes, everybody must use TLS on the web".
It's just not that mysterious, if we want our communications to be secure (we do) then we can't reasonably use ciphers that have been broken, since any adversary can insert themselves in the middle and negotiate both sides down to their most insecure denominator, if they allow it.
Maybe intranet sites. Everything else absolutely should.
Sites that need HTTPS: - all of them
If you like it, you better put a lock on it.
And, BTW, the website is as delightfully simple and unobtrusive as the one in the article.
It’s not like ISPs are unknown entities.
Unencrypted connections can be weaponized by things like China’s Great Canon.
Also, something I often see non-technical people fall victim to is that if your clock is off, the entirety of the secure web is inaccessible to you. Why should a blog (as opposed to say online banking) break for this reason?
IE 10 in Windows Server 2008 doesn't support TLS 1.1+ by default.
But the old phone is significantly better at making actual phone calls than the new one.
So? If they still power on and are capable of talking HTTP over a network, and you don't require the transfer of data that needs to be secured, why shouldn't you "let" them online?
Beats me.
I actually have an example myself - an iPad 3. Apple didn't allow anyone other than themselves to provide a web browser engine, and at some point they deliberately stopped updates. This site used to work, until some months ago. I currently use it for e-books; if that weren't the case, I think by now it would essentially be software-bricked.
I acknowledge that owning older Apple hardware is dumb. I didn't pay for it, though.
You're basically saying "oh, _YOUR_ usecase is wrong, so let's take this away from everybody because it's dangerous sometimes"
But yeah, I have many machines which would work just fine online except they can't talk to the servers anymore due to the newer algorithms being unavailable for the latest versions of their browsers (which DO support img tags, gifs and even pngs)
Both Chrome and Firefox will get you to the HTTPS website even though the link starts with "http://", and it works, what more do you want?
You have to type "http://" explicitly, or use something that is not a typical browser, to get the unencrypted HTTP version. And if that's what you are doing, that's probably what you want. There are plenty of reasons why, some you may not agree with, but the important part is that the website doesn't try to force you.
That's the entire point of this article: users and their browsers know what they are doing, just give them what they ask for, no more, no less.
I also have a personal opinion that SSL/TLS played a significant part in "what's wrong with the internet today". Essentially, it is the cornerstone of the commercial web, and the commercial web, as much as we love to criticize it, brought a lot of great things. But also a few not so great ones, and for a non-commercial website like this one, I think having the option of accessing it the old (unencrypted) way is a nice thing.
I understand the thinking, backwards compatibility of course, and why encrypt something that is already freely available? But this means I can setup a public wifi that hijacks the website and displays whatever I want instead.
TLS is about securing your identity online.
I think with AI forgeries we will move more into each person online having a secure identity, starting with well-known personas and content creators.
Let me explain it to you like this:
The NSA has recorded your receipt of this message.
Trust me, the NSA tracking what you read is MUCH WORSE than Google tracking what you read. Encryption helps defeat that.
I’ve had to be pretty firm in the past with marketing teams that want to embark on a rebrand, and say however the design looks, it can’t include modal windows or animated carousels. And I think people think you’re weird when you say that.
Some small businesses create websites for branding only, and get their business exclusively offline. They just want to have a simple, static site to say "we exist, and we are professionals", so they're fine without the latest in web design.
I didn't realise that hiding dates for the illusion of evergreen-ness was a desirable thing!
On my personal site I added dates to existing pages long after they were uploaded for the very reason I wanted it to be plenty clear that they were thoughts from a specific time.
For example, a bloggish post I wrote where, while I still think it's right, it now sounds like linkedin wank. I'm very happy for that one to be obviously several years old.
I have no idea how true that is but I remember hearing SEO folks talk about it a few years back.
Every time I get hit with a popup by a site I usually just leave. Sometimes with a cart full of items not yet paid for. It's astounding that they haven't yet learned that this is actually costing them business. Never interrupt your customers.
Same goes for stores. If I walk into your store to browse and you accost me with "Can I help you" I'll be looking for an exit ASAP.
And then a week later you'll get an email "Did you forget to buy all those products we're sure you want?..."
(Under the RSS icon.)
<link rel="alternate" type="application/atom+xml" href="/w/atom.xml">
?
It does seem like something's off about the feed. Vienna can read the file, but it comes up empty. But it doesn't seem like the problem is standards non-compliance.
Text littered with hyperlinks on every sentence. Hyperlinks that do on-hover gimmicks like load previews or charts. Emojis or other distracting graphics (like stock ticker symbols and price indicators GOOG +7%) littered among the text.
Backgrounds and images that change with scrolling.
Popups asking to allow the website to send you notifications.
Page footers that are two pages high with 200 links.
Fine print and copyright legalese.
Cookie policy banners that have multiple confusing options and list of 1000 affiliate third parties.
Traditional banner and text ads.
Many other dark patterns.
I haven't seen one that shows charts, but I gotta admit, I miss the hover preview when not reading wikipedia.
In the modern day we've come full circle. Jira uses AI to scan your tickets for non-English strings of letters and hallucinates a definition for the acronym it thinks it means, complete with a bogus "reference" to one of your documents that doesn't mention the subject. They also have RAINBOW underlines so it's impossible to ignore.
Fucking NPR now has ~2--6 "Related" links between paragraphs of a story. I frequently read the site via w3m, and yes, will load the rendered buffer in vim (<esc>-e) to delete those when reading an article.
I don't know if it's oversensitisation or progressive cognitive decline, but even quite modest distracting cruft is increasingly intolerable.
If you truly have related stories, pile them at the end of the article, and put in some goddamned microcontent (title, description, publication date) for the article.
As I've mentioned previously, I have a "cnn-sanify" script which strips story links and headlines from CNN's own "lite" page and restructures them into a section-organised, time-sorted presentation. Mostly for reading from the shell, though I can dump the rendered file locally and read it in a GUI browser as well.
See: <https://news.ycombinator.com/item?id=42535359>
My biggest disappointment: CNN's article selection is pretty poor. I'd recently checked against 719 stories collected since ~18 December 2024, and of the 111 "US" stories, 54% are relatively mundane crime. Substantive stories are the exception.
(The sense that few of the headlines really were significant was a large part of why I'd written the organisation script in the first place.)
Do you mean metadata?
"Well-written, short text fragments presented out of supporting context can provide valuable information and nudge web users toward a desired action."
<https://www.nngroup.com/articles/microcontent-how-to-write-h...>
Microformats are more a semantic-web type thing. I'm talking of information presented to a non-technical reader through the browser.
Go ahead and load that up, then start reading articles.
From the current headline set, there's "FBI says suspect in New Orleans attack twice visited the city to conduct surveillance"
<https://text.npr.org/nx-s1-5249046>
That has three occurrences of:
Related Story: NPR
Which is specifically what I was criticising.
(I gave up on any sort of "text mode" of a site a long time ago.)
Some sites even have media, like videos or photo carousels in or before an article, the content of which isn't related to the article at all. So you get this weird page where you're reading an article, but other content is mixed in around each paragraph, so you have no idea what belongs where.
Then add to that all the ads and references to other sections of "top stories", and the page becomes effectively unreadable without reader mode. You're then left with so little content that you start questioning if you're missing important content or media.... You're normally not.
I don't believe that these pages are meant for human consumption.
This is the biggest hassle associated with reading articles online. I'm never going to click on those links because:
- the linked anchor text says nothing about the website it's linking to
- the link shows a 404 (common with articles 2+ years old)
- the link is probably paywalled
Very annoying that article writing guidelines are unchanged from the 2000s, when linkrot and paywalls were almost unheard of.
> AMD says this delivers groundbreaking capabilities for thin-and-light laptops and mini workstations, particularly in AI workloads. The company also shared plenty of gaming and content creation _benchmarks_. (emphasis mine)
I clicked on "benchmarks", expecting to see some, well, benchmarks for the new CPU, hoping to see some games like Cyberpunk that I might want to play. But no, it links to /tag/benchmark.
1: https://www.tomshardware.com/pc-components/cpus/amds-beastly...
I just looked into this feature and it looks awesome! Is there a way to do this in chrome? If not, are there any available chrome extensions that do this?
A variation of this is my worst offender, the flapping bar. Not only does it take up space, it flaps every time I adjust my overscroll by pulling back, and it covers the text I was trying to adjust. The hysteresis to hide it again is usually too big, which makes you potentially overscroll again.
Special place in hell for those who hide the flap on scroll-up but show it again when the scroll inertia ends, without even pulling back.
Can’t say here what I think about people who do the above, but you can imagine.
I remember arguing about it on HN back when I was in uni.
I still hate such things, especially when using a desktop browser.
The number of times this has happened whilst I've been editing a post on some rando site, losing my content ...
<https://github.com/plateaukao/einkbro>
I do have Firefox (Fennec F-Droid) installed on that tablet. The reading experience is so vastly inferior, despite the numerous capabilities of Firefox (most especially browser extensions), that it's not even funny. Mostly because scrolling on e-ink is a disaster.[1]
Chrome/Chromium of course is an absolute disaster.
EinkBro has incorporated ad-blocking, JS toggle, and cookie rejection, which meet most of my basic extension needs. The fact that it offers a paginated navigation (touch regions to scroll by a full screen) works far better with e-ink display characteristics.
I'll note that on desktop I also usually scroll by screen, though that's usually by tapping the spacebar.
--------------------------------
Notes:
1. The thought does occur that Firefox/Android might benefit by an extension (or set of same) which address e-ink display characteristics. Off the top of my head those would be:
- Paginated navigation. The ability to readily scroll by a full page, rather than touch-and-drag scrolling.
- High-contrast / greyscale optimisation. Tweaking page colours such that reading on e-ink is optimised. Generally that would be pure black/white for foreground/background, and a limited greyscale palette for other elements. Halftone dithering of photographic images would also be generally preferable.
- An ability to absolutely freeze any animations and/or video unless specifically selected.
- Perhaps: an ability to automatically render pages in reader mode, with the above settings enabled.
- Other odds'n'sods, such as rejecting any autoplay (video, audio), though existing Firefox extensions probably address that.
I suspect that much of that is reasonably doable.
There is an "E-ink Viewable" extension which seems to detect and correct for dark-mode themes (exceedingly unreadable on tablets, somewhat ironically), though it omits other capabilities: <https://addons.mozilla.org/en-US/firefox/addon/e-ink-viewabl...>.
"Edge Touch Pager" addresses navigation: <https://addons.mozilla.org/en-US/firefox/addon/edge-touch-pa...>.
And there's a Reddit submission for improving e-ink experiences w/ Firefox generally, which touches on most of the items I'd mentioned above: <https://old.reddit.com/r/eink/comments/lkc0ea/tip_to_make_we...>.
Future me may find this useful....
Looks as if its current rev is the Note Max, Android 13, and a resolution of 300 dpi (the Max Lumi is 220 dpi, which is already damned good). That's pretty much laser-printer resolution (most are effectively ~300 -- 600 dpi). I wish they'd up the onboard storage (Note Max remains at 128 GB, same as the previous device, mine is 64 GB which is uncomfortably tight).
The Android rev is still a couple of versions old (current is 16, released December 2024), though I find that relatively unimportant. I've mostly de-googled my device, install few apps, and most of those through F-Droid, Aurora Store where that doesn't suffice.
If the Max is too spendy / large for you, the smaller devices are more reasonably priced. I went big display as I read quite a few scanned articles and the size/resolution matter. A 10" or 8" display is good for general reading / fiction, especially for e-book native formats (e.g., ePub). If you read scans, larger is IMO better.
I'm aware and not happy with the GPL situation, but alternatives really don't move me.
Onyx's own bookreader software is actually pretty good and sufficient for my purposes, though you can install any third-party reader through your Android app repo you prefer.
My main uses are e-book reading (duh!), podcasts (it's quite good at this, AntennaPod is my preferred app), and Termux (a Linux userland on Android). For Web browsing, EinkBro and Fennec (as mentioned up-thread). The note-taking (handwritten) native app is also quite good and I use it far more than I'd anticipated.
If you're looking for games, heavy web apps, video, etc., you won't be happy. If you're looking for a device that gets you away from that, I strongly recommend their line.
I've commented on the Max Lumi and experience (positives and negatives) quite a few times here on HN:
<https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...>
Those are based on Alpine Linux rather than Android, AFAIU, and if you're into Linux are apparently more readily customised and hacked.
(The fact that BOOX is Android is a misfeature for me, though it does make many more apps available. As noted, I use few of those and could replace much of their functionality with shell or Linux-native GUI tools. I suspect battery management would suffer however.)
- When a user scrolls content-up in any way, the header collapses immediately (or you may just hide it).
- When a user scrolls content-down by pulling, without "a kick", then it stays collapsed.
- When a user "kick"-scrolls content-down, i.e. scrolls carelessly, in a way that a when finger lifts, scroll still has inertia -- then it gets shown again. Maybe with a short activation distance or inertia level to prevent ghost kicks.
As a result, adjusting text by pulling (including repeatedly) won't flap anything, and if a user kick-scrolls, then they can access the header, if it has any function to it. It sort of separates content-down scroll into two different gestures, which you just learn and use appropriately.
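For what it's worth, a rough sketch of that behaviour in plain JavaScript (the element id, class name, and threshold are made-up illustrations, and momentum is approximated by checking whether scroll events keep arriving after the finger lifts):
// Sketch only, not production code. Assumes a fixed header element with
// id="site-header" and a CSS class "collapsed" that slides it out of view.
const header = document.getElementById('site-header');
let lastY = window.scrollY;
let touching = false;

addEventListener('touchstart', () => { touching = true; }, { passive: true });
addEventListener('touchend', () => { touching = false; }, { passive: true });

addEventListener('scroll', () => {
  const delta = window.scrollY - lastY;
  lastY = window.scrollY;
  if (delta > 0) {
    // Content moving up (reading onward): collapse immediately.
    header.classList.add('collapsed');
  } else if (delta < -2 && !touching) {
    // Content moving down with no finger on the screen, i.e. the "kick"
    // with leftover inertia: bring the header back. The -2 threshold is a
    // crude guard against ghost kicks.
    header.classList.remove('collapsed');
  }
  // Content moving down while the finger is still down (pulling back a few
  // lines to re-read) changes nothing, so the header never flaps.
}, { passive: true });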
But instead most sites implement the most clinical behavior, as described in the comment above. If a site does that, it should immediately have its DNS record revoked and its owner put on probation, at the legislative level.
The bar collapses and then pops back up on iOS if you scroll content-up in a non-inertial way.
This is also related to why professional newspapers and magazines lay out text in relatively narrow columns, because they are easy to scan just top-down while hardly moving your eyes left-right.
I do think that vertical phones are too narrow for conveying decent text, but you also can't have completely unbounded page widths because people do run browsers maximized on desktop 4K screens.
I also strongly prefer at least some padding around the edges of pages / text regions, with 5--10% usually much easier to read.
I'd played with making those changes on Rachel's page through Firefox's inspector:
html { font-family: garamond, times, serif; }
body { max-width: 50em; }
.post { padding: 2em 4em; }
To my eye that improves things greatly. (I generally prefer serif to sans fonts, FWIW.)
Unless you're banging directly on the framebuffer, logical pixels haven't been tied to device pixels for literally decades. CSS specifies pixels at 1/96 of an inch, a decision that goes all the way back to X11. 1rem == 16px, though this can be changed in CSS (just set font-size on the :root element) whereas you can typically only change pixel scaling in your display settings.
So yes, using rems is better, but pixels are not going to get dramatically smaller on denser displays unless the device is deliberately scaling them down (which phones often do simply because they're designed to be read up-close anyway)
It's also possible to scale text itself to the reader's own preference, if any, by setting the body font size to "medium" (or 100%). Assuming the reader has set that value in their browser, they get what they expect, and for the 99.99966% of people who go with their browser's shitty default, well, they can zoom the page as needed.
(Most people don't change defaults, which is one key reason to use sane ones in products and projects.)
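A minimal sketch of that approach (the specific sizes are just examples):
/* Respect whatever default size the reader configured in their browser,
   then size everything else relative to it. */
:root { font-size: 100%; } /* 1rem == the reader's chosen default */
body { font-size: 1rem; line-height: 1.5; max-width: 40rem; }
h1 { font-size: 1.6rem; }
small { font-size: 0.875rem; }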
Sites which use px or pt (o hai HN) for scaling of text or fonts absolutely uniformly fail to please for me.
(See my HN madhackery CSS mods links in my profile here, that's what I'm looking at as I type this here. On my principle e-ink browser, those aren't available, and I'm constantly fiddling with both zoom and contrast settings to make HN usable.)
Making pixel-based styling even more janky by not being actual pixels any more seems ... misguided.
The style of the page can use CSS column properties to make use of the width of laptop/tablet displays, instead of defaulting to ugly "mobile size fits all" templates.
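Something like this, for example (the breakpoint and class name are arbitrary):
/* Flow article text into two columns on wide screens only; narrow and
   mobile viewports keep a single column. */
@media (min-width: 70em) {
  .post { columns: 2; column-gap: 3em; }
}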
I really like writing to readers and not obligating them to anything else. No sales push, no ads, no sign ups. It’s nice that it’s just what I wanted to share.
Just extend the background to the very corners like hacker news does!
Rachel, I'm curious as to your mentions of 'old posts' that may not be compliant, e.g. missing an alt attribute - is this something you've considered scanning your html files for and fixing?
LOL'ed at "dick bar" - seriously that thing is so annoying.
Let's have a look at the websites she's helped build at her job and see how many of those old web principles were applied.
But not everything on the web should be for profit.
javascript:(function(){var styles=document.querySelectorAll('style,link[rel="stylesheet"]');styles.forEach(function(style){style.disabled=true;});document.body.querySelectorAll('*').forEach(function(el){el.style.cssText='';});})();
Additionally, the way that the background degrades to a border around the text when using Dark Reader also causes problems in a similar way (due to the interaction between jagged text and a strong vertical line).
These are subtle points though, and I appreciate the many annoyances that are not there when reading Rachel's stuff.
Sadly, I would argue that this is inaccurate. Especially on mobile browsers, the prevalence of visible scroll bars seems to have dropped off a cliff. I'll happily excuse the progress bar, especially because this one can be done without JavaScript.
Better would be to ditch the absurd footer, but still.
There's nothing wrong with progress. Expecting a user to have a JavaScript enabled browser is reasonable
You don't expect an online retailer to accept mailed-in cash, do you?
I wish there was one more paragraph though:
"I don't use trailing slashes in article URLs. Blog post is a file, not an index of a directory, so why pretend otherwise?"
But then it's http://rachelbythebay.com/w/2025/01/04/cruft/ , so I guess they don't agree.
Later on Web servers made it easier to load foo.html for example.com/foo, but that wasn't always the case.
It's been a couple of decades since I had to do it, but at least that's my memory on why I've been doing this since forever (including on newer websites, which is admittedly a mistake).
But is an article an index to the attached media? Not even "just", but "at all"? Is this the right abstraction? Or do we have a better one, achievable by simply removing the trailing slash?
We discuss this in the context of cruft, user friendliness, and selecting proper forms of expression, which the original article seems to be all about, by the way.
Unlike in file systems, we don’t have the directory–file distinction on the web, from the user perspective. Everything shown in the browser window is a page. We might as well end them with a slash. If anything, there is a page–file distinction (page display in the browser vs. file download). I agree that URLs for single-file downloads (PDFs, images, whatever) should not have a trailing slash.
Note that both trailing slash variant examples you have provided (HN and Google) do redirect to non slash ones.
In fact, personally, I don't expect leading slashes for main pages.
> both trailing slash variant examples you have provided (HN and Google) do redirect to non slash ones
This is incorrect. Chrome (and Firefox by default?) have the broken behavior of showing bare URLs like "google.com" or even "https://google.com". But this is absolutely wrong according to the URL spec and HTTP spec. After all, even if you want to visit "https://google.com", the first line that your browser sends is "GET / HTTP/1.1". Notice the slash there - it is mandatory, as you cannot request a blank path.
Things were better in the old days when browsers didn't mess around with reformatting URLs for display to make them "look" human-friendly. I don't want "www." implicitly stripped away. I don't want the protocol stripped away. I don't want the domain name to be in black but the rest of the text to be in light gray. I just want honest, literal URLs.
In Firefox, this can be accomplished by setting: browser.urlbar.formatting.enabled = false; browser.urlbar.trimURLs = false.
https://theoatmeal.com/comics/design_hell
I'm not affiliated with the Toast. But invoking this cartoon, I occasionally describe a web design as "Toasty".
... unless you use GTK, and then it hides the scroll bar because it's sooo clever and wants to bestow a "clean" interface upon you. Yes, I'm looking at you Firefox.
- She doesn't change the color of the scroll handle to make it invisible.
- She doesn't override my browser's font size, making the text too small to read.
- She doesn't configure the page to <expletives deleted> disallow pinch-zooming on mobile.
- I don't store the date in the URL
- I redirect you to https automatically, but perhaps I should rethink that
- My Photos page lazy-loads pictures, because it shows over 1000 thumbnails and it took a very long time to open on a mobile phone
- Some of my posts link to YouTube videos and embed that video, so this is what comes from a different origin
Yeah, still pretty OK I think.
Works wonders for sites that I visit regularly. StackOverflow: Do I need related posts? The left sidebar (whatever it is they have there, I have forgotten already)? Their footer?
Of course, there are exceptions. If you genuinely need to use a WAF or add client-side challenges, please test your settings properly. There are websites out there that completely break on Linux simply because they are using Akamai with settings that just don't match the real world and were only tested on Mac or Windows. A little more care in testing could go a long way toward making your site accessible to everyone.
My favorite experience was trying to file taxes on Linux in Germany.
Turns out the ELSTER backend had code that said: if Chrome and Linux, then store to a test account. It wasn't possible to file taxes on Linux for over 6 months after it went online as a mandatory state-funded web service, until they fixed it. I can't even comprehend who writes code like that.
It also took me a very long while to explain to the BKA that I did not try to hack them, and that it was just very incompetent people working at DATEV.
The government. Case in point...
- Changing line-height.
- Changing fonts (or trying to, if it is allowed in a web browser).
- Changing colors (likewise).
- Changing body's max-width, margins, paddings.
- Adding a mostly useless header.
I find these less annoying than the ones listed in the article, and they are easily mitigated by the reader view, disabled CSS, or custom global CSS, but there they are.
There is no reason that websites shouldn't have room for some creative expression. For as long as writing has existed, images, fonts, spacing, embellishment, borders, and generally every imaginable axis has been used as additional expression, beyond the literal meaning of the text.
The body width limit is necessary because web browsers have long since abandoned any pretense of developing HTML for the average joe. It is normal to use web browsers maximized, so without limiting the body width the lines of text are ridiculously long and uncomfortable to read.
Letting marketing folks on the internet was a mistake.
Everything follows from that, but not just in a bad, dark-pattern profit-optimizing way.
If you provide a paid service, you need auth, and then you damn well better use HTTPS.
If you have anything more complex or interactive than text-publishing, you'll quickly run into absurd limitations without cookies and JavaScript. And animations and things like sticky headers can genuinely improve the usability.
And I feel a lot of those measures have been unnecessary - thinking back to my time at enterprise software product vendors, they had myriads of those kinds of annoyances to track "engagement" on their page.
The actual customers? Basically the big banks, in one case. Just how much were all those marketing/tracking cookies and scripts doing to secure those sales leads? Each bank had, essentially, its own dedicated salesperson/account manager - I don't think any bank is picking a vendor because one website had more marketing tracking scripts on it than another.
Obviously the webpage and its full text content is shown to me for 3 seconds before the error message appears and blocks access.
> I don't load the page in parts as you scroll it. It loads once and then you have it.
Lazy-loaded images are helpful for page performance reasons, as done by <img loading="lazy">. I have a script that flips one to eager (so it loads immediately) every few seconds depending on page load speed, so that if you leave the page alone for a while, it fully loads.
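Presumably something along these lines (a sketch of the idea, not the actual script; the fixed 3-second interval stands in for the load-speed heuristic):
// Every few seconds, promote one lazily-loaded image to eager so that an
// idle page eventually finishes loading everything.
const pending = Array.from(document.querySelectorAll('img[loading="lazy"]'));
const timer = setInterval(() => {
  const img = pending.shift();
  if (!img) { clearInterval(timer); return; }
  img.loading = 'eager'; // the browser then fetches it even if it's off-screen
}, 3000);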
> I don't put godawful vacuous and misleading clickbait "you may be interested in..." boxes of the worst kind of crap on the Internet at the bottom of my posts, or anywhere else for that matter.
Most of the posts on my blog[0] are about whatever videogame I was just playing. Often, I'll play through installments in a series, or mention one game while talking about another. While I litter the text with links back to previous entries, I feel that it would be helpful to have a collection of these near the bottom of the page. How else would you know that I've written about the sequel? (I don't like to go back to old posts and add links to newer stuff like that.)
I have a "you might be interested in" section. My algorithm: do a search on the post title (up to the first number or colon, but can be customized), then add recent posts from the category you're looking at. Limit 6. I feel that genuinely shows everything relevant that I got and not be 'misleading' or 'clickbait'.
> I don't force people to have Javascript to read my stuff.
Agreed! JS should be used to enhance the experience, not be the experience. This mindset is so baked into how I write it, that most of my blog's JS functions have "enhance" in them.
> I don't force you to use SSL/TLS to connect here. Use it if you want, but if you can't, hey, that's fine, too.
Didn't we learn anything from Snowden? The NSA has recorded your receipt of this message.
1: And Hacker News!
That's one annoyance inflicted on me.