Paul Vixie had a great talk and research about this ~2018: https://www.youtube.com/watch?v=nkoNjntc5Lw
The term "Magic Links" once meant a [futuristic PDA](https://en.wikipedia.org/wiki/Magic_Link). Nowdays, companies like [Auth0](https://auth0.com/docs/authenticate/passwordless/authenticat...) use it to refer to the slightly-magical feat of including a login link in an email.
Last week, the great website you should subscribe to if you haven't already (it's great, when you're not logged out), [404 Media](https://www.404media.co/), posted ["We Don't Want Your Password"](https://www.404media.co/we-dont-want-your-password-3/) in defense of so-called magic links.
Of course, as stated in the article, such email links are harder to phish than passwords, can't lead to a breach of passwords, and protect the site itself from users who might reuse previously compromised passwords.
The article even covers some of my annoyances with this system, but throws out this sentence:
> [We find this to be a much easier login process and wish it was more common across the web where appropriate.](https://www.404media.co/we-dont-want-your-password-3/)
Easier than what? Easier than a long password, without a password manager? Easier than a passkey? Easier than an OTP sent to the same email address?
This sentence reads to me as one written by someone mostly working and _living_ from a single laptop and mobile device. The second part of the sentence, calling for more sites to do this, is why I am writing this.
Add any scenario with a minimal amount of complexity, like a user with multiple computers, and you're looking at a situation where the site's unwillingness to deal with other login methods shoves friction onto the end user.
### What makes them tragic:
1. Multiple devices. Who doesn't use at least a few computers weekly? I don't have my email on my gaming PC, nor do I have it on my work laptops.
2. Slower. From 2 seconds slower to minutes slower, depending on SMTP delays as well as how awkward it is to get the link to the right browser.
3. Anti-mobile. As mentioned by 404 in their own article, this breaks the ability to use in-app browsers, which is quite annoying especially for RSS reader type apps. It makes interacting with any local link in the RSS feed extremely annoying.
4. Indirect security downsides. Pushing people to access personal email on work devices (or vice-versa) isn't exactly a win for security.
Another annoying _passwordless_ system is to email or SMS an OTP the end user can type in.
While this sucks, it at least allows you to easily log in in situations where you don't have a clear and easy copy/paste path from the email client to the browser you want to log in to.
[Stratechery](https://stratechery.com/), powered by [Passport](https://passport.online), uses this type of scheme (click link OR type in OTP), which is still shifting annoyances onto end-users to free developers from implementing passkeys, but at least has a bit more of an appreciation for end-users.
If you insist on using magic/tragic links by default, at least consider offering a robust alternative, such as [passkeys](https://fidoalliance.org/passkeys/), especially if your audience is technical and privacy-focused.
>Example.com/Verify/5W9GF
If it fails, prompt for the OTP on the fallback /Verify/ or /code/ page. Use a local convenience cookie for the authenticating device and a perma-cookie for the requesting device.
Permanent cookies should be accompanied by a 4-digit numeric PIN for any critical function unless the session is new.
edit: saw that nicce basically said that a second before I hit post.
Remember that the flow the magic link is part of is one you initiate, that causes you to get an email you are expecting.
That email, and the landing and confirmation page it links you to, can explain very clearly that you are only supposed to authorize this if you are trying to log in on a known device, in a known location, that is displaying a recognizable number on the screen right now.
That makes it impossible to text or speak it to a phisher.
Bonus points if you show the symbol as a noisy animated glyph, something like [2], or a link to a DRM'd video showing a symbol. That would make it very difficult to view even with screen recording or remote desktop software.
[1] https://www.bombmanual.com/web/index.html#:~:text=On%20the%2...
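Roughly how that number-matching idea could look, as a sketch with hypothetical Express routes and an in-memory store (not anyone's actual implementation): the screen that requested the login shows a short code, and the emailed page repeats it so the user can refuse anything that doesn't match what's in front of them.

```javascript
// Sketch: tie a short human-checkable code to each pending login attempt.
const express = require("express");
const crypto = require("crypto");
const app = express();

const pending = new Map(); // token -> { email, displayCode } (hypothetical store)

app.post("/login", express.urlencoded({ extended: false }), (req, res) => {
  const token = crypto.randomBytes(16).toString("hex");
  const displayCode = String(crypto.randomInt(10, 100)); // e.g. "47"
  pending.set(token, { email: req.body.email, displayCode });
  // sendMagicLinkEmail(req.body.email, token, displayCode)  <- hypothetical mailer
  res.send(`Check your email. Only approve the login if it shows the number ${displayCode}.`);
});

// The emailed page repeats the code; the approval itself should still happen via a POST.
app.get("/confirm/:token", (req, res) => {
  const entry = pending.get(req.params.token);
  if (!entry) return res.status(404).send("Unknown or expired link.");
  res.send(`Approve this login only if your other screen is showing the number ${entry.displayCode}.`);
});

app.listen(3000);
```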
There's nothing stopping anyone else from initiating the flow assuming the common implementation where only an email is required to initiate sending the link.
Here is the link you requested from ‘Android Device’ in ‘Belarus’ - click here to sign in and allow that device to access your account - only click this if you requested this email
You don’t click the link if you didn’t request it.
This is a simple, quick login process; you wouldn’t use it in a place where 2FA is required. It is a mistake to think of this as a substitute for 2FA just because it has some of the same elements as a secondary device authentication. It’s not intended to be a 2FA flow though! It’s a single factor - ‘does the user have access to a device that can read emails sent to this associated email address’. We aren’t combining it with a password or anything else.
That is the same level of auth used for things on many services like ‘registering for a free account’, and frequently for ‘resetting the password on an account’.
It’s not a complete security solution and you wouldn’t use it everywhere. It would be a bad fit for a banking app or access to a publishing interface. It’s not a bad interface for things like ‘logging in to my subscription on the TV’ or ‘returning as a customer to a website I shopped with once before’.
The victim got a phone call in which she was manipulated into authorising something in the BankID smartphone app. But what she was actually doing was authorising the attacker to log into her online bank account.
Only after several years (of blaming the thousands of victims for their millions lost) did the system start using QR codes on the screen, scanned by the smartphone.
That would let you authenticate your desktop browser from an email you opened on your phone if you're on your home network, but without becoming widely exploitable by phishers.
Most sites will have a confirmation once you click the link that includes the browser version and IP address. I have seen that info only in the email itself too, with no confirmation afterwards, but not for some time. I have never seen one that is just a link with nothing else that, once clicked, allows the other device in, but I suppose it could be implemented that way.
The article itself is about not making them the only option (which is fair), and the OP says if they do, it should log in the device which originally made the request (which I agree with). If the implementation is just an email with only a link, no other information, and no confirmation (yes, it's fine to let this device in), then I would have to agree with you it's very risky and could allow anyone to log in as you (hopefully no sites are doing this, but...)
Sites that send an OTP (crazy-pink-horse-3837) that you can copy and paste are a good middle ground if implementing a link that just auths the original request is too difficult.
It's super vague and unclear why things should work this way, and I don't know if this is forced on them by iOS or what. I'm trying to think of why choosing "Safari" in the gmail settings would use the webview instead of the app, and the most-charitable reason I can think of is that they don't want to contribute to the person having hundreds of Safari tabs open...?
Less-charitable reasons might include wanting to keep users in the gmail app for driving "engagement". I read somewhere that when apps use the in-app webview, the app dev can inject arbitrary javascript and thus has full control and can see keystrokes, what the webview's viewport is looking at, etc. I really don't think that's what google is trying to do here, though.
Wrt the reason: I think that the webview has cookie isolation from the actual app, so using the webview is a bit more privacy-protective. Google being Google, that seems unlikely to be the motivating reason, but who knows what good may lurk in the heart of men...
In other words:
1. A malicious individual sends them a fake login link
2. The link can't ask them for a username and password because the site doesn't have passwords, just magic links
3. The site could ask them for their OTP code if they have one, but the bad actor doesn't have their magic link and the OTP code expires in a few seconds anyway
4. Without the bad actor actually getting access to a legitimate magic link nothing happens
It does solve the issue of:
1. You visit the site on your device at the same time as they visit on their device
2. They get two e-mails and maybe click on the one that approves your session instead
3. Your session on your device logs in; theirs doesn't so they figure it's a bug and go click the other one. Now you're both logged in.
If you require the session to be logged in by the link directly, it ensures that only the device you're viewing the e-mail on gets signed in; in the above scenario, your malicious session is never logged in, but their legitimate one is.
The phishing risks for a bank account login are very different than those for a ‘returning player’ login to a casual gaming site for example.
OTP is far better than an actual magic link - you can still include a link that pre-fills the code.
You click the button on the page, which knows the session you're logging in from and the link code, and it does a POST which completes the login. This is how all the "login by scanning QR code" flows work.
You still need another method for the first login.
Discord implements this feature, and this phishing scheme is extremely common: bots/scammers will message you saying "to access <some desirable content>, please scan this QR code" -- and if you scan the code, the scammers have just taken over your account. It's not much harder than rickrolling someone unless they're savvy enough to be aware of the scam.
Of course this can be mitigated somewhat by putting a big scary confirmation screen that says "don't click continue unless you're trying to log into your account from another device", but 1) users don't read, they just click "continue"; and 2) the attacker controls the narrative before the user clicks the QR code; they can craft the language to make the scary warning screen make sense to the user ("yes, I am trying to log into this discord server that this person sent me a QR code to").
> 1. Multiple devices. Who doesn’t use at least a few computers weekly? I don’t have my email on my gaming PC, nor do I have it on my work laptops.
"Who doesn’t use at least a few computers weekly?"
I don't. And many, many other people.
See what I did there? I assumed that everyone's like me, just like you did in your blog post. Without data, both of us are wrong.
----
I'd add that magic links also act as a distraction: you open your email client, it opens your inbox by default, and you start going through all of those unread emails you just found...
Shopify is a big proponent of magic links because they went all-in on their new "Shop" customer accounts. What a disaster. Branding something with such a generic word as "shop" is terrible, and the average customer doesn't understand that it's supposed to be a brand name.
When you consider that a smartphone is "another" computer (or for many users, the computer that is not the smartphone is the "other" computer), I imagine that number goes way up. Someone using a computer at work and a personal phone, for example.
1. Include a fallback sign-in code in your magic link, in case the user needs to log in on a device where accessing their email isn’t practical.
2. Make sure the sign-in link can handle email clients that open links automatically to generate preview screenshots.
3. Ensure the sign-in link works with email clients that use an in-app browser instead of the user’s preferred browser. For example, an iOS user might prefer Firefox mobile, but their email client may force the link to open in an in-app browser based on Safari.
Any suggestions on what needs to be handled here? My first thought is UA checking to see if it looks like a real browser.
https://www.rfc-editor.org/rfc/rfc7231#section-4.2.1
>The purpose of distinguishing between safe and unsafe methods is to allow automated retrieval processes (spiders) and cache performance optimization (pre-fetching) to work without fear of causing harm. In addition, it allows a user agent to apply appropriate constraints on the automated use of unsafe methods when processing potentially untrusted content.
Exactly the same for email unsubscribe links, or a one click "buy now" link.
I've had to implement a system where if the link was minted x minutes ago, the JavaScript on the landing page is disabled.
It's just another arms race. It shouldn't be this hard, but in email it seems everything is additionally harder to do.
Source?
Another is just counting if a link from an email was clicked. I want friction to be as little as possible. That's done by having some sort of redirect, but you have to use a JavaScript-initiated POST to weed out false positives. That's already ridiculous, but because of automated link prefetchers, you still need to disable that and show a f'n button.
And then I have to answer to clients that want to know why their clickthrough stats are down precipitously, and I don't honestly have the wherewithal to explain the inner workings of every filter that snoops their email before they read it.
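For what it's worth, the button-plus-JavaScript arrangement described above tends to end up looking something like this sketch (endpoint names and IDs are made up): the tracked link lands on an interstitial page, and only a real click fires the POST that records the click before forwarding the reader on.

```javascript
// Sketch: interstitial "continue" page. Prefetchers that merely GET the link
// never hit the tracking endpoint, because the count only happens on a click.
// Assumes a button like <button id="continue">Continue to article</button>.
document.getElementById("continue").addEventListener("click", async () => {
  try {
    // Hypothetical endpoint that records the click for this message/link id.
    await fetch("/track-click?msg=abc123", { method: "POST" });
  } catch (_) {
    // Tracking is best-effort; never block the reader on it.
  }
  window.location.href = "https://example.com/the-real-destination";
});
```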
I think that a link to a page where you enter a one time code gets around a lot of these issues.
Sending a code goes around a lot of issues.
https://www.webnots.com/how-to-autofill-verification-codes-i...
2) hackers can exploit your system, which hurts you (you are a VPS provider and someone mines crypto and you have to waive it for PR), or you run an email service and someone uses your app to spam (which hurts your email rep), etc.
1. Sixty percent seems astronomically high, do you have a source?
and
2. Most "normal" non-tech-savvy people I know who do use a password manager (which I've typically installed for them), are revealed a while later to still use a variation of password reuse : either storing the same password per category of websites, or having a password template they use on all sites, e.g. "IdenticalSecretWord_SiteName"
I definitely don't believe it for the wiser population (my gut, again based on people I know, says the number is more like 10%, maybe 15). Even the 36% figure on the report on security.org posted above seems dubious, I suspect they have some bias in their survey. Unless that is some people who use the iCloud password manager for some things and no password manager for everything else, so it isn't claiming 36% routinely use a password manager away from a few key accounts.
Higher level of security than just user+pass (w/ forgot password)
Email verification
Lifecycle management - in a SaaS, when a user no longer has a corporate email, they de facto can't log in, whereas with a user+pass you need to remember to remove their account manually on each SaaS or have integration with your AD (for example)
One-time email verification is not the same security model as magic links. Magic links require instant access. Many security-sensitive sites require a time delay and secondary notification for password reset links, which you can’t reasonably do for login links.
Lifecycle management is an interesting point. There are some underlying assumptions that might not hold though—losing an email doesn’t necessarily mean downstream accounts should be auto disabled too. Think Facebook and college emails, for example.
It could be, depending on how the user has secured their email inbox access. I know I pay a lot more attention to my inbox than some random account. I don't have data, but I think this is true of most people.
I'm also more likely to enable MFA on my email account than I will on every random account I sign up for. And as far as the account providers, I trust the big email providers to be more secure than some random website with an unknown level of security.
You raise some valid points about tying access to a third party and what makes sense. It's not a simple issue.
Personally I'm no fan of magic links.
But the people who do like magic links would say the typical 'forgot password' flow is to send a password reset magic link by e-mail. That means you've got all the security weaknesses of a magic link, and the added weaknesses of password reuse and weak passwords.
Of course you can certainly design a system where this isn't the case. Banks that send your password reset code by physical mail. Shopping websites where resetting your password deletes your stored credit card details. Things like that.
> That means you've got all the security weaknesses of a magic link, and the added weaknesses of password reuse and weak passwords.
Is objectively true. I don't really 'like' magic links, but I think they're very easy to implement and simple to use for infrequently accessed systems. Arguably easier than user/pass and certainly more secure.
Re Lifecycle management; Unless you're also linking a phone number or some other "factor" I think in a traditional user+pass scenario you're also SOL if you lose access to your $Email1 before you update your account to use $Email2, as changing your email to $Email2 would usually send a email to $Email1 to confirm the action. In that case you're in the same position as magic link login + email change functionality. Similarly Lifecycle management only comes for free if you don't implement email change functionality.
Just want to make sure magic links work as well as they can.
Different folks have different requirements, and since we're a devtool, we try to meet folks where they are at.
We actually recently added a feature which lets you examine the results of a login, including how the user authenticated, and deny access if they didn't use an approved method.
I've done both in my SaaS product - the link is a GET with the OTP in it, and the target page checks if the code is in the URL; if not, the user can type it in.
Only for signup, though. For sign-in, the default is to always have the user type it in.
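A rough sketch of that landing page, assuming nothing about the actual product (the form markup and parameter name are made up): read the code from the query string if the user arrived via the link, otherwise leave the field blank for them to type.

```javascript
// Sketch: pre-fill the one-time code if it arrived via the link,
// otherwise the user types the code from the email by hand.
// Assumes a form like: <form method="POST" action="/verify"><input name="code"></form>
const params = new URLSearchParams(window.location.search);
const codeFromLink = params.get("code");
if (codeFromLink) {
  document.querySelector('input[name="code"]').value = codeFromLink;
}
// Either way, verification only happens when the user submits the POST form.
```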
That said, here’s how I would mitigate it:
- Like usual, time based limits on the code
- Code is valid only for the initiating session, requiring the attacker to create a paper trail to phish
If you do have a magic link & want to use code as backup for authenticating a different device/browser, you could:
- Compare IP and/or session cookie between the initiating and confirming window. On match, offer login button. On mismatch, show the code and a warning stating how it’s different, eg ”You are signing in a different device or browser, initiated from $os $browser in $city, $country, $ip - $t minutes ago.”
It’s not perfect though and may still be prone to phishing.
I assume it generates a session on the post-login screen and authorizes that session upon accessing the link
People blindly click links all the time. It would have a low success rate, but would be more than 0%.
This works on 99% of magic links I've tried, except for cases when they are trying to prevent account sharing. I remember the Bird bike app did this, where they required the magic link to be clicked on the same device login was initiated on. I was using my friend's account and he would just forward me the link, until one day this stopped working.
I feel the way Netflix did this broke the social contract of profile sharing on purpose - before, if you were a good tenant, you could freeload off another paid account without inconveniencing them at all. Memes and jokes formed around still being on an ex-partner's account or how people would rename themselves "Settings".
Getting an email and being harassed for the code by all those account sharers? Much more visible, and much more open to annoyance.
Have your logging-in session wait for / poll "has visited magic link", and authenticate that session when it's done.
Tons of systems do this. It works great, and it can quite easily work without any web browser at all on the logging-in side because it just needs to curl something -> poll for completion and save the cookies -> curl against the API. A number of oauth flows for e.g. TVs work this way, for instance, because it's a heck of a lot easier than integrating a full browser in the [embedded thing]. Many app-based 2FA (e.g. Duo) works this way too.
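A bare-bones sketch of the polling side, with hypothetical endpoints (`/login-status/:id` and the attempt ID are made up): the device that started the login keeps asking whether its attempt has been approved, and only gets a session once the magic link has actually been acted on.

```javascript
// Sketch: the logging-in side just polls; no browser is needed on this side.
async function waitForApproval(attemptId) {
  for (let i = 0; i < 120; i++) {                   // give up after ~10 minutes
    const res = await fetch(`/login-status/${attemptId}`);
    const { approved, sessionToken } = await res.json();
    if (approved) return sessionToken;              // use this for API calls
    await new Promise((r) => setTimeout(r, 5000));  // poll every 5 seconds
  }
  throw new Error("Login attempt expired before the link was visited.");
}
```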
Small code copying is also a very good answer though, yes. Roughly as easily manipulated, but nothing's perfect, and it's less "I didn't mean to click that button"-prone.
I don't mean to imply that just visiting the link should be enough to complete a login. That's a GET and there's a LOT of issues with doing anything important on GET. Just "do something on a different machine, then automatically complete login on the one logging in", and magic links to trigger that flow are a rather straightforward option.
There's no reason at all that it has to all occur on the same machine, and many reasons why attempting to require that doesn't work out in practice even when it does happen on the same machine.
TVs etc. are special cases because obviously there is no way to redirect to them, and even there developers will always have some kind of secondary checks like having you enter a code displayed on the screen.
But somehow, the desktop browser and my mobile are tied together for this app. But no other sites have this magic.
Google does it. Paypal does it. Duo does it. Lots of single-sign-on systems do it. All of those including not-TV scenarios, just normal computer-and-phone stuff, as well as sometimes other weird flows. Many of these are far beyond what most would label as "security competent", into "login security is a large part of their business and they have significant numbers of specialists hired".
(it is probably safe to say none are "truly secure" or "actually security obsessed", but I doubt that's actually possible in large quantities. the requirements are too steep, for both implementers and users.)
It's not the most common, certainly, nor anywhere close. But it's very far from nonexistent.
1. Attacker starts a log in and triggers a magic link email
2. Email received and my browser client previews the link without my desire
3. Attacker is now logged in
So I got the magic link on their computer and then I made a QR code.
But wait, the email quarantine system had altered the whole link, so I had to extract that.
But wait, the redirect URL back to Slack was malformed because of the URL encoding, and I had to fix that and then make the QR code.
Like, wow, just give me a QR code or a code in the original magic link email instead!
Maybe this one email would have been fine, but if it gets tripped, it’s not worth the headache.
I'm not suggesting this is actually a problem, but that's how an argument could go.
If they specifically ask for documents that are not relevant, or if their request is too broad and will produce a lot of irrelevant documents, your company's lawyers will tell them no.
By the time someone is actually giving you a specific list of things to turn over that includes your private email, they will only be asking for things that are relevant. Most of your personal email will be excluded.
When McDonald's switched from email/password to magic links I had a hard time getting the magic link to work with the McD app. It usually would just open in the McD website.
This was quite annoying because about 98% of the time I eat McD's, I would not do so if I could not order via the app [1].
I finally gave up and switched to using "Sign in With Apple" (SIWA). There was no way that I could find to add SIWA to an existing McD account, so I had to use the SIWA option that hides the real email from McD. That created a new McD account, so I lost the reward points that were on the old account, but at least I could again use the McD app.
[1] They have a weekly "Free Medium Fries on Friday" deal in the app available for use on orders of at least $1. Almost every Friday for lunch I make a sandwich at home and then get cookies and the free fries to go with it from McD.
Three large fries ordered at the counter costs over ten dollars.
But the more people use the app, the fewer cashiers they need and the fewer ordering kiosks they have to install. Plus customer satisfaction goes up because you can order ahead and your food is ready when you arrive. And getting used to the discounts means you probably won't switch to Burger King or Wendy's.
I think additional user data is a relatively minor part of it.
That just sounds like a great way to get cold McDonald's...
> I think additional user data is a relatively minor part of it.
You're probably right about that, but I've always undervalued user data because I don't think it's ethical to exploit people like that.
I'm sure that a well-timed push notification suggesting a personalized meal deal right around hungry-o'clock is the real goal of pushing this stupid app on their customers.
> That just sounds like a great way to get cold McDonald's...
The idea is to order 3 or 4 minutes in advance, not half an hour before...
Don't they have only the last 4 digits and the issuer of the card? It is likely enough but there will be some noise.
Not to mention any potential legal trouble if they used the card details without explicit consent. App contracts will get around that.
The food does NOT start cooking when you order it if you’re picking up at drive thru. It starts cooking when you pull up to drive thru and give the magic code.
In fact if the food is not easy to prepare you get put in a special parking space, where you wait for your order to be prepared. If it includes soft drinks they might serve those before they make you go park.
At this point, being a fast food chain that doesn't have an app with deals is probably not viable - but I am very skeptical it generates any loyalty.
When they began "value meals" last summer (which don't include their flagship items) they also removed the best deals from the app, the ones that did include Big Mac, QPC, 10-nuggets. I've placed one non-breakfast order in 6-8 months, whenever they started this.
I'm just one person, but if a customer declines from an expected 15-20 visits over a half-year period to 1, and you don't adjust your offer algorithm (and you're the biggest restaurant company in the world so no lack of resources), something is seriously wrong.
Whatever they're doing also isn't working for me.
They've captured the user base with the money that corporate was pumping into the app deals, and are in the process of enshittifying it by transferring the value to themselves instead of the users.
If McDonald's enshittifies its deals while continuing to raise prices, it's way too easy for loyal customers to go elsewhere. I'm saying this as a huge fan and extremely loyal customer of McDonald's for decades... they are at serious risk of losing people like me. As I stated, I've gone from 15-20 visits to 1 since last June/July, whenever they made the big change.
I agree with you that I'd be surprised if Enshittification works as well here as it does in tech, but maybe since there's an app involved, they just think they can get away with it. Who knows.
That's the whole point of data analytics and personalized marketing - even if the value meal works for most people they can still go back to sending me the offers and promotions I responded to previously, in an attempt to reverse my recent decline in spend/visitation. The app makes it possible to send individualized offers. There shouldn't be an entire "B" group where they just say, oh well.
Ask for a “bundle box” next time you’re there. They’re usually named after a local sports team.
Two Big Macs, two cheeseburgers, two fries, and a 10-piece nuggets for $12-15 depending on the market.
I think retail for just the Big Macs is that much these days.
No app required.
This is kind of hilarious and depressing but I live in a high enough cost of living city in the states and I order mcd’s rarely enough that I cannot tell contextually whether your statement indicates this is overpriced or underpriced.
However since the rollout of "value meals" last summer, they took away some of the better deals and now McDonald's is simply expensive (for McDonald's) even with the app.
1) rooted or bootloader-unlocked Android devices are not allowed (granted it's easy enough to get past it for now but the checks are still there).
2) 2FA requirements as if anyone would bother to steal coupons from others
It appears that they want ordering burgers to have the same level of enhanced security as banking apps. Not even crypto or trading apps bother to block unlocked devices in such a way. Blocking rooted devices doesn't even make banking apps more secure but for them I can at least understand the reasoning.
Recently ran into this issue: new mail accounts got confirmed automatically, and magic links were invalid by the time the user clicked them, because Microsoft had already logged in with them during scanning.
The alternative is to send an OTP in the mail and tell the user to enter that.
In that way there is no link to auto confirm.
However, if you do that, ensure that you have a way to jump straight to the page to enter the OTP, because (looking at you, Samsung) the account registration process can expire or the app is closed (not active long enough) and your user is stuck.
The issue that MS tools introduced is broader, because it affects also email confirmation flows during signups. This is less visible, because usually the scanners will confirm emails that the user would like to confirm anyway. But without additional protection steps, the users can be signed up for services that they didn't request and MS tools will automatically confirm such signups.
Thanks for checking if it's the same browser. Some companies don't care about that (cough Booking cough), so harmful actors just spam users with login attempts in the hope a user will click by accident. And poof, a random guy gets full access to your account. I get those every day; if I ever needed to log in this way I would not be able to figure out which request is mine.
I think it should check if the browser requesting is the same as the one confirming, or just drop that whole dumb mechanism entirely.
Will Microsoft automatically authenticate malicious actors, or block you from services built with the assumption that the email client won't auto-click everything?
See also this issue which suggests that all links are opened: https://techcommunity.microsoft.com/discussions/microsoftdef...
Note that this doesn't affect all Outlook users, this Microsoft Defender for Office 365 is a separate product that only some companies use.
Indeed it's a bad thing but how bad?
The admins of some web service get a database of emails, send them those registration links, let the recipients' mail software create the accounts, and then what? They end up with a service full of accounts that they could have created without sending those emails, before they send some emails to solicit users to perform some action on their (long forgotten?) account. There is no additional threat unless I'm missing something.
The admins have only an extra thin layer of protection because of the confirmation step, but I think that any court can see through it.
Another example would be if a company hosted a web app for employees that allowed signups only from @company.com addresses. In such a case an attacker could be able to sign up with such an address.
What an insane policy, why am I surprised Microsoft came up with it…
This problem is ~20 years old from when CMS platforms had GET links in the UI to delete records and "browsing accelerator" browser extensions came along that pre-fetched links on pages, and therefore deleted resources in the background.
At the time the easiest workaround was to use Javascript to handle the link click and dynamically build a form to make a POST request instead (and update your endpoint to only act on POST requests), before the fetch API came along.
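That workaround looked roughly like this (a sketch, not the original CMS code): intercept the click, build a form on the fly, and submit it as a POST so the endpoint can refuse to act on bare GETs. The `data-method` attribute here is just an illustrative convention.

```javascript
// Sketch: turn "dangerous" links into POSTs so prefetchers following GETs do nothing.
document.querySelectorAll("a[data-method='post']").forEach((link) => {
  link.addEventListener("click", (event) => {
    event.preventDefault();                 // don't follow the GET href
    const form = document.createElement("form");
    form.method = "POST";
    form.action = link.href;                // server only mutates on POST
    document.body.appendChild(form);
    form.submit();
  });
});
```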
Yes. Linking to a form requiring user to press a button to submit an actual POST request is one proper way of doing it, and won't confuse prefetchers, previewers and security scanners - but it lacks the specific "magic" in question, which is that clicking on a link alone is enough to log you in.
Can't really have both - the "magic" is really just violating the "GET doesn't mutate" rule, rebranding the mistake we already corrected 20+ years ago.
(EDIT: Also the whole framing of "magic links" vs. passkeys reads to me like telling people that committing sins is the wrong way of getting to hell, because you can just ask the devil directly instead.)
Your theological analogy is hilarious!
Having a code completely negates that advantage, as attackers can just set up a fake website that asks for the code.
Magic links should log you in on the device you click them, not on the device that requested the login session. Anything else, while being a little bit less annoying, is a security issue and should be treated as such.
I don't like that for a number of reasons.
It's all trade offs, else it would be easy.
If the user opens the magic link in the same browser that initiated the email, then just log them in. Otherwise, present them with the Apple-style "Do you want to authorize a login from 1.2.3.4 using Firefox on iOS possibly located in Portland, Maine? [Authorize] or [Reject]".
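One way that branch could be wired up, as a sketch with a hypothetical Express handler and in-memory attempt store (real implementations will differ): the login request records the initiating browser, and the verify endpoint either completes the login silently or falls back to an explicit authorize/reject screen showing the requester's details.

```javascript
// Sketch: same-browser clicks log in quietly; anything else gets the explicit prompt.
const express = require("express");
const cookieParser = require("cookie-parser");
const app = express();
app.use(cookieParser());

// token -> { cookieId, ip, ua, city }; populated when the magic link is requested (not shown).
const attempts = new Map();

app.get("/verify/:token", (req, res) => {
  const attempt = attempts.get(req.params.token);
  if (!attempt) return res.status(410).send("Link expired.");

  if (req.cookies.login_attempt === attempt.cookieId) {
    // Same browser that asked for the email: low-friction path.
    // createSession(res, attempt)  <- hypothetical session helper
    return res.send("You're signed in.");
  }

  // Different browser/device: show who is asking before approving anything.
  res.send(`Authorize a login from ${attempt.ip} using ${attempt.ua} possibly located in ${attempt.city}?
            <form method="POST" action="/verify/${req.params.token}/authorize"><button>Authorize</button></form>
            <form method="POST" action="/verify/${req.params.token}/reject"><button>Reject</button></form>`);
});

app.listen(3000);
```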
Hey, wasn’t Firefox on iOS based on Safari related tech anyways?
https://en.m.wikipedia.org/wiki/Firefox
> However, as with all other iOS web browsers, the iOS version uses the WebKit layout engine instead of Gecko due to platform requirements.
I do agree with what you’re saying though! Just those two in particular will probably have pretty good compatibility, which I was amused to find out when I looked into it.
Obviously, your mileage may vary but it was a good reminder to always validate your assumptions, especially in your critical user flows.
How are you tracking login success rates?
You can use Mixpanel or Heap, which have mechanisms for mapping the non-logged-in user to your verified user on login, though you might need a bit of custom code to do it.
I've not tried June, so I can't say for sure, but it's a pretty common feature for product analytics. I'll be surprised if it's not possible.
With Stratechery, once you get to the website with the magic link, I can then copy the authenticated podcast RSS feed to Overcast and the authenticated RSS feed for the articles to NetNewsWire.
Those subscriptions are then synced to Overcast and NNW on my iPad and Mac via iCloud.
Each podcast RSS link is personalized and you go to the show notes page and click on the link to Manage your account. It will take you to the website using the embedded browser where you can manage your subscription and get access to the various feeds.
Speaking of Overcast, even though it doesn’t create a username and password by default, you can create one. But it’s only to access the web version of Overcast.
It will give you all of the links to all of your podcasts. I did this from the “Dithering” podcast notes.
This gets the best of both worlds: the security of passkeys on existing devices, and the passwordless setup and account recovery for new devices.
Bonus: it even avoids vendor lock-in where cloud providers have all your passkeys.
Also, when logging in from a new device, many accounts which use password-based auth today send a confirmation email and ask users to either enter the emailed code or click on the link. This is part of their existing security protocol. So we are not introducing a new unique thing here.
As long as the user keeps a relatively stable set of devices and knows to be suspicious if they get asked for an OTP on a device that they know has a passkey. If they don't know to be suspicious (which let's be real, most people won't), they'll happily follow the instructions and fork over the OTP to a phisher who can use it to complete the authentication somewhere on their end.
Magic links without an OTP fallback are more secure as the initial setup process because they can't be phished unless someone's actually MITM'ing their HTTPS traffic (at which point nothing can save you anyway). A phisher can get someone to send themselves a magic link, but it's much harder to get them to provide the link to them.
It's not that much harder. 'Due to security reasons, please copy and paste the entire link that we just sent you into the following input box. If you don't, your account will be compromised!'
Since the application only sends a weekly email (a markdown template for goal/task tracking) it seemed easier to just use a magic link, only.
I am happy at how much easier the auth code ended up, and fail to see much downside for such an application.
I'm not sure it would be a good system for more complex apps and services.
If you want strong security, offer passkey login. It's safer than email and much more user friendly especially with FaceID/TouchID on Apple devices.
Would it be possible to bookmark the login link so that in the future I don't first have to go to my email in order to log into the service?
Shopify works this way where buyers don't have passwords and only log in with codes sent via SMS/Email.
Every implementation of passkeys I've seen has presented me with the option to create a passkey after I've already logged in with some other method. I'll admit that I haven't dug into it deeply, but the UX I've been presented with consistently makes passkeys appear to be an alternative to the "Remember this computer" button, not to passwords in general. Somehow the service has to know that this new device is authorized. I know depending on the provider there's such a thing as passkey syncing, but that doesn't solve the problem of getting the initial authentication done.
The key insight with magic links is that your security system is no stronger than its recovery mechanism. We are never going to get to a world where passkeys are treated as the only authentication mechanism—there will always be a recovery mechanism, and in most cases an automated one via email. Given that that is the case, magic links simplify things by just not pretending that we have a more secure layer on top. By making the recovery mechanism the primary means by which you interact with the authentication flow you're being more honest about the actual security of your auth system.
Edit: filmgirlcw has a link to an article that is much better than this one that explains how the two actually complement each other: https://news.ycombinator.com/item?id=42628226
Passkeys support authentication via a secondary device over Bluetooth (and this is supported in every major browser on every major platform). So you can login to a site on a machine that’s completely disconnected from your personal passkey store by scanning a QR code with your personal phone.
The login flow basically goes “request login with passkey” -> “browser recognises it doesn’t have the needed passkey, and offers a QR code to scan” -> “scan QR code with phone” -> “phone and browser handshake via Bluetooth” -> “passkey handshake happens between website and phone” -> “login completes”.
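For reference, the site doesn't need any special code for the cross-device case; a standard WebAuthn call like the sketch below is enough (the challenge and verification endpoint are hypothetical server pieces), and the browser itself offers the "use a phone or tablet" QR option when no matching local passkey exists.

```javascript
// Sketch: ordinary WebAuthn assertion request; the QR/Bluetooth hand-off is
// handled by the browser and OS, not by site code.
async function signInWithPasskey(challengeFromServer /* Uint8Array from your server */) {
  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge: challengeFromServer,
      rpId: "example.com",          // your site's relying-party ID
      allowCredentials: [],         // empty = any discoverable passkey, including one on a phone
      userVerification: "preferred",
    },
  });
  // Hypothetical endpoint; a real implementation sends the full, base64-encoded
  // assertion response (clientDataJSON, authenticatorData, signature) for verification.
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ id: assertion.id }),
  });
}
```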
I’ve personally used this flow with my work laptop and my personal iPhone many times. iOS has built in support for the Passkey QR codes, so you can scan the code with the standard camera app. Additionally iOS supports allowing 3rd party passwords managers to take over the Passkey flow once you’ve scanned the QR code. So in my case I complete the flow with 1Password.
End-to-end the flow is pretty damn seamless, I’ve never personally had it fail, and it takes 30 seconds to complete. The most annoying part is trying to remember where my phone is.
> take 30 seconds to complete
also, ouch.
There are definite UX problems around passkeys that could be improved and I think exporting will make syncing across systems a lot better (one of the reasons I use 1Password as my primary password and passkey system is so I can use my passkeys across devices; of course it helps that my employer uses 1Password as our system so I am logged into my personal and enterprise accounts and can auth then from personal or work devices, provided additional auth or enrollment isn't needed) -- but if the problem as 404 defines it is that they don't want to be responsible or even have to worry about storing your passwords/auth controls, I think passkeys is at least better for a subset of users than Magic Links.
But again, like Ricky, I don't think it should be viewed as either or. It should be both.
[1]: https://rmondello.com/2025/01/02/magic-links-and-passkeys/
> though I don't know if making your email an even stronger attack vector is necessarily one of them
I'm unconvinced that magic links do make your email an even stronger attack vector. Essentially every service that would be inclined to use magic links would already have a way to reset your password entirely once the email is compromised. All magic links do is make this the primary way to interact with the auth flow.
The bad guys already know that your email is the best target. Magic links just make that very explicit.
That's a good point. I guess my rationale is that it being explicit makes me feel less comfortable for my parents/non tech-savvy friends, who already may not follow best-practices for email hygiene (and may not use email providers that enforce stricter hygiene like 2FA or other methods of protection) and thus, systems like this, make their email even more explicitly the ultimate place to go for access to stuff.
making people feel less comfortable is probably a good thing.
i've managed to convince my dad to start taking his email security more seriously by reminding him a few times that if somebody gets access to his email, they can reset his password on every site where he uses that email address. it's good to remind people of why email security matters, and that it's not just about the personal messages from friends.
Well, don't do that.
Social key recovery is an underutilized solution as well.
Of course, any website's auth system is as weak (or strong) as their recovery process. Different sites will implement this differently.
Like what? I'm failing to come up with a single benefit (for the user).
Give it a few more years and I suspect we will start to see services start with creating a passkey and never collecting a password. The passkey portability specs will be implemented, and hopefully Gnome/KDE implement passkey support.
There are a few things unique to passkeys though. You can register multiple passkeys for the same account, so you could in theory have a physical USB key and cloud-synced passkeys. Not many people would do this, I would think, though it would be easier than memorizing every password. There are also data portability specs in progress right now that let you export/import passkeys between services.
But at the end of the day I would suggest that it should be straight up illegal for a company to freeze your account without letting you export your data. It probably actually is under the GDPR. This problem also already exists for email. If Google bans you, you'll find a lot of your accounts become unusable. Anything with email OTPs won't work, and some services like Discord won't allow updating your email without access to the existing one.
> But at the end of the day I would suggest that it should be straight up illegal for a company to freeze your account without letting you export your data.
This would be great but it only addresses the least likely failure mode out of the ones that I brought up.
And note that in many cases we're currently better off under the existing system if Gmail does ban you than we would be in your proposed world: only services that send OTPs on every login would be immediately inaccessible, so you'll have time for most services to log in and switch to a new email address.
Which means that any service that claims to be passkey-only but supports email resets should just acknowledge that they support both magic links and passkeys as options—they're kidding themselves and their users if they pretend otherwise.
For more sensitive accounts like bank accounts and government services. You'd probably have to go through some other reset process involving real ID and possibly an in person visit to a support location.
sigh TBH, I hope not. Maybe optionally, but for now the friction might keep companies from going passkey only, which (I think) would be a total nightmare from a security and usability perspective.
Ricky Mondello wrote a really great blog last week[1] about how passkeys, as OP alludes to at the end, can be used alongside Magic Links, that I think is worth a read.
[1]: https://rmondello.com/2025/01/02/magic-links-and-passkeys/
I'm still used to Apple people being almost completely invisible publicly.
The fact is that even in the best of times, e-mail isn't reliable. Things go to your junk folder. Links get blocked by work spam filters. Mailboxes get full (I assume? it's been a while).
Personally, I have my e-mail on my iPhone; anywhere else (work laptop or gaming PC) I have to log into icloud.com to check my e-mail, and it's cumbersome. Let me put in a password. Let me scan a QR code like embedded devices do. Give me at least one other option.
Agreed with some other folks that Passkeys is not a replacement for email verification.
I seriously HATE magic links. My email inbox is barely better than a social network's time suck. Lots of urgent, little important, wrecks any flow I had.
Forcing me into my inbox is highly likely to cause me to forget about the reason I was there (to get into your app). Or, at best, it slows me way down and nearly always breaks my flow.
Perhaps this is acceptable for the security boost (?) for the average user, but man, when I get forced into magic links I sometimes just abandon the app altogether.
Disclaimer: 1. I have/pay for a password manager, which helps with the forgotten password problem a lot. It also allows me to have extremely hard-to-crack passwords.
I'd even say magic link emails border on misuse of email; they're a fundamentally different form of communication from all other uses of email. It's not easy on neurodivergent brains to deal with that combination of pollution (magic links in my inbox) and distraction (actual emails in my face when I'm trying to log in and was not trying to check my email). Protonmail's client could really make my day if they found a way to reliably separate those 2 channels so I didn't have to even open my inbox to get login codes/links.
What I don't understand is why I've never been prompted to use a password manager by any site with a signup flow. It seems easier to normalize their use through messaging than keep acting like passwords are supposed to be something you consciously remember. Nobody should remember their passwords, except for maybe 2-3. But now we're moving toward a world where login just means more friction and less control instead...
But something simple could work. Already you usually have a note under a password field, "Must contain at least 8 characters and at least one special character" or something to that effect. It could also have some note about "We suggest a randomly generated password from your password manager."
I'm not building this out so I don't need every hole poked in the idea, just seems like it could work.
They required the password to be changed monthly, have at least 10 characters, at least one number and at least one special character. On top of that – they locked out password managers and pasting. "We need to make sure you are the one logging in and not a hacker that hacked your password manager" they explained when I asked.
Out of spite I went for "Password12!" the first month and "Password123!" the month after, at which point I received an email from the IT department explaining to me that my choice of password was endangering the corporation's security.
Sounds like they were logging/storing passwords in plaintext.
And password managers (KeePassXC, anyway) have a pretty nifty auto-type feature that gets around that.
Many home users are pretty good about protecting important scraps of paper. The government gives us plenty to hold onto. Even if they’re a grandma that doesn’t understand all this password manager mumbo jumbo, they can deal with a notebook and be better off than using the same password on every site.
I wish magic links would go away, but if they need to stay, that approach was the least terrible.
Almost everyone outside of some HN users uses email regularly. They have it open on a second monitor and it is an important part of their workflow.
If their companies are not super tech savvy and not using SSO, the users probably at least have a company email address they’re logged into.
I don’t think it’s worth over optimizing for a small percentage of users. Worst case scenario you need to contact support.
99% of enterprise users will be fine with magic links, compared to dealing with people who use horribly weak passwords. Most of them seem to prefer them to passwords.
SSO is always best option if available but magic links are definitely second.
I'm building something for a very tech illiterate audience, and everybody loves the simplicity of it.
I could understand requiring a third factor to authenticate if signing in from a different location or a different ISP than I've been using for the past 5 years, but it's ridiculous to do so if nothing has changed (except the final octet of my DHCP-assigned address) since I last signed in yesterday. I use a different computer (via SSH) to read my email than I do for web browsing, and cutting-and-pasting a signin link that's hundreds of characters long (spanning multiple lines in Emacs, so I have to manually remove \ where it crosses line boundaries) is a PITA.
Adding friction on every sign-in colors all subsequent interactions I have with an app, and makes me hate using it.
You shouldn’t get the device verification requirement if you’ve used the device before (we store a permanent cookie to check this) or for the same IP. Any chance your cookies are being cleared regularly?
We added this after attackers created clones of http://mercury.com and took out Google ads for it. When customers entered their password and TOTP on the phishing site, the phisher would use their credentials to login and create virtual cards and buy crypto/gold/etc. The phisher would also redirect the user to the real Mercury and hope they figured it was a blip.
This device verification link we send authorizes the IP/device you open it on, which has almost entirely defeated the phishers.
Since WebAuthn is immune to this style of phishing attack, we don’t require device verification if you use it. I highly recommend using TouchID/FaceID or your device’s flavor of WebAuthn if you can—it’s more convenient and more secure. You can add it here: https://app.mercury.com/settings/security
That said, we are talking internally about your post and we do recognize that as IPv6 gets more traction IPs will rotate much more regularly, so we’ll think if we should loosen restrictions on being a same-IP match.
I wasn't aware that WebAuthn didn't have this requirement. I prefer TOTP because I actually like having a second factor in addition to a credential stored on my computer's hard drive (whether a password or a private key in my password manager), but I might be willing to reduce my security posture to get rid of this annoyance.
One suggestion: the link would be half as annoying if it was easily cut-and-pasteable rather than a long email-open-tracking link spanning multiple lines. This is what it looks like when I copy it out of my email:
https://email.mg.mercury.com/c/eJxMzs1u4jAUBeCncXZB9vVfvPACZshoWIwYoiasdgkra2KV_JCGqPTpK-imq7xxx40vlO9IKia6ggL6zUlQHObdF6\
JI0alRHBWQvWKRuD4loLZxsJSRXZAwfNBQeQWozasdgeWsMyFZozE4RKZ4d151NOFtuq9w6IqLb-d5fGdyzaBmUIdx_NkzqBeacrqXkZaMxGSNQyQmf7_9GW7\
Hf1cJ8zW9TshAwwba3ccLuN3u_r_PR9j_GkxxxmadDu32c59jMfkYFmKKP0baIT0vzP4ynHN_-yyhZOTy9jmPPQn6gL-VLMfvvIA_XxbywRYhUbZUp0RpVCUC\
qDsbasJHeObFMZ4YrFw1cAAAD__4XPZXw
I have to manually remove the backslashes and re-combine the lines before pasting into my web browser.
Edit to add: looks like email.mg.mercury.com is hosted by Mailgun. Are you intentionally sharing these authentication tokens with a third party by serving them through this redirect? Do your security auditors know about this?
Authentication tokens (even tertiary ones) usually are supposed to have pretty strong secrecy guarantees. I've done multiple security audits for SOC, PCI, HIPAA, etc., and in every case the auditors would have balked if I told them signin tokens were being unnecessarily logged by a third-party service.
(Also: I strongly disagree that the only way to get reliable delivery is via a third-party email service, especially at Mercury's scale, but that's a digression from the topic at hand.)
That said, our security team and I agree there is no security issue here. Mailgun already can see the text of the emails we send.
And again, I wasn't saying that you can't do all of this nonsense, but users who see it as nonsense should be able to turn it off.
The attack wasn't that the attacker has my second factor, the attack was that the attacker tricked me into verifying a single login/transaction using my two factors, on their behalf.
They probably judged that the inconvenience of the verification email affects few enough users that it is worth it. Most users don't switch IP addresses very often. And those that do, probably don't all clear their cookies after every session.
Adding SMS in addition to email would be obviously useless, as you point out.
Even if they were, almost all email goes through third parties which are trusted implicitly. That's not great, but email is the only federated system in existence capable of implementing this type of decentralized login at scale.
Maybe someday we'll be able to use something like Matrix, Fediverse OAuth, or ATProto OAuth instead, but those are all a ways off.
The vendor might not be the only party to use an HTTP redirect service too! My email goes through a security screen by $EMPLOYER, which also rewrites links to get processed through their redirect service. Sure, it's for company-approved reasons, but it's still another party that has access to the login token.
To be clear this is what we're trying to avoid. An easily typeable code like that can be typed into a phisher's website.
I appreciate you guys are trying to protect people, but no other financial institution I deal with requires this level of annoyance, and at some point I'd rather switch to a less "secure," but more usable service.
(I put secure in scare quotes, because some suggestions, like trading true 2FA, where I have two separate secrets on two separate devices, for a single WebAuthn factor, are actually accomplishing the opposite, at least for those of us who don't click links in emails and don't use ads on Google for navigation.)
Edit to add: or maybe save the third factor for suspicious activity, such as "new device adding a new payee," rather than every signin. It's been months since I onboarded a new vendor, and I'd be OK with only having to do the cut-and-paste-the-link dance a couple of times a year, rather than every single time I want to check my balance.
They could use a custom subdomain for this click tracking and "hide" the Mailgun URL from you, but we're finding that for some reason Mailgun doesn't just use a Let's Encrypt certificate, so some users will complain that the tracking links are "http" (and trigger a browser warning when clicked).
Anyway, even with click tracking disabled and links going straight to mercury.com, the security issue would remain the exact same (since Mailgun logs all outgoing email anyway).
But my understanding is that the contents of that email and its link do not provide "login" capability but "verification" capability. As such, a Mailgun employee accessing your data, or an attacker accessing your Mailgun logs, would only be able to "verify" a login that they had already initiated with your password AND your OTP —which means that's effectively a third hurdle for an attacker to breach, not a one-step jump into your account.
javascript:void(window.location.href = window.prompt().replace(/\\\n\s*/g, ''));
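(That bookmarklet prompts for the pasted text, strips the backslash-plus-newline line continuations, and navigates to the reassembled URL.)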
I've seen passkeys support something like what you're after. The browser will produce a QR code you scan with your phone, and then you authenticate with the passkey via the phone, which then authorizes the original browser.
I'm not absolutely certain that this is part of the spec or how it actually works. I'd like to know. It solves a couple different usability issues.
You could always use something like a Yubikey.
This is the option I prefer, but only on sites that allow me to enroll more than one device (primary, and backup for if the primary gets lost or damaged). AFAICT, Mercury only allows a single security key.
I have an encrypted offline backup of my TOTP codes, so if I drop my phone on the ground, I don't get locked out of all my accounts. I keep this separate from the encrypted offline backup of the password manager on my computer, and as far as I know, neither has ever been uploaded to anyone else's "cloud." Malware would have to compromise two completely separate platforms to get into my accounts, rather than just iCloud or whatever credentials.
I understand the desire for phish-proof credentials, but given that I don't click links in emails, my personal threat model ranks a compromised device (via an attack against a cloud service provider, a software supply chain attack against a vendor with permission to "auto-update", or whatever) as much more likely than me personally falling victim to phishing. I readily admit that's not true for everyone.
We allow multiple security keys. You can add more here: https://app.mercury.com/settings/security
I completely understand that. I'd actually be interested in reading anything practical you might have on that topic if you don't mind. I asked some experts who gave a talk on supply chain security last year ... they didn't have a lot of positive things to say. Developing software feels like playing with fire.
The development environment where I'm downloading random libraries is on a completely separate physical machine from my primary computer. I generally spin up a short-lived container for each new coding project; it gets deleted after the resulting code I produce is uploaded somewhere. This is completely separate from the work-supplied machine where I hack on my employer's code.
On my primary computer, my web browser runs in an ephemeral container that resets itself each time I shut it down. My password manager runs in a different, isolated, container. Zoom runs in a different, also isolated, container. And so on.
Wherever possible, I avoid letting my computer automatically sync with cloud services or my phone. If one is compromised, this avoids spreading the contagion. It also limits the amount of data that can be exfiltrated from any single device. Almost all of the persistent data I care about is in Git (I use git-annex for file sync), so there's an audit trail of changes.
My SSH and GPG keys are stored on a hardware key so they can't be easily copied. I set my Yubikey to require a touch each time I authenticate, so my ssh-agent isn't forwarding authentication without a physical action on my part. I cover my webcam when not in use and use an external microphone that requires turning on a preamp.
I try to host my own services using open source tools, rather than trust random SaaS vendors. Each internet-facing service runs in a dedicated container, isolated from the others. IoT devices each get their own VLAN. Most containers and VLANs have firewall rules that only allow outbound connections to whitelisted hosts. Where that's not possible due to the nature of the service (such as with email), I have alerting rules that notify me when they connect somewhere new. That's a "page" level notification if the new connection geolocates to China or Russia.
I take an old laptop with me when traveling; it gets wiped after the trip if I had to cross a border or leave it in a hotel safe.
I have good, frequent backups, on multiple media in multiple offline locations, that are tested regularly, so it's not the end of the world if I have to re-install a compromised device.
Something like VS Code remote dev with a container per project? Just plain docker/podman for containers?
> On my primary computer, my web browser runs in an ephemeral container that resets itself each time I shut it down. My password manager runs in a different, isolated, container. Zoom runs in a different, also isolated, container. And so on.
Qubes, or something else? I've been looking at switching to Linux for a while, but Apple Silicon being as good as it is has made making that leap extremely difficult.
I live inside Emacs for most things except browsing the web, either running separate instances over SSH or using TRAMP mode.
If you switch to Linux, I highly recommend configuring your browser with a fake Windows or MacOS user agent string. Our Cloudflare overlords really, really hate Linux users and it sucks to continually get stuck in endless CAPTCHAs. (And doing so probably doesn't hurt against platform-specific attacks, either.)
Not sure why we suddenly went from 2 factors (password + TOTP) to 1 factor (passkey), even if passkeys themselves are better.
TOTP should at least be an option for the users.
security = 1/convenience
but also vice versa
Unfortunately, only a few ISPs do IPv6 correctly by assigning a fixed prefix to customers. Most ISPs apply IPv4 logic to their IPv6 planning, hence this situation.
Hopefully this will improve in the future and more stable prefixes will be given to users.
What? You have your email on literally every device -- be honest.
I think it's interesting that the author has chosen to not have email on PCs, but I can see why. I also completely get why they'd opt to not have private email on a work laptop.
Even something small like email -> hit enter -> then we show the password input will cause me to stop using your service.
1. enter username
2. choose password or magic link (select password)
3. enter password properly
4. Thank you for logging in. Please click your magic link to log in.
Why did you waste my time putting in a password when the magic link was the only option?
There are even cooler approaches that are already working, including nsec bunkers.
This is the way of the future IMHO, most people just don't know it yet.
They can present it as a "more secure" login method, obscuring the reason they actually like it.
Auth is the worst part of building a service and sucks all the fun out of it. API auth is a mess because people can’t keep a token string secret. Now we need JWTs, OAuth, token refreshing, and a whole bunch of BS that no one enjoys.
One reason why OpenAI and Anthropic APIs are so much more fun to use than Google and AWS offerings is that you get a token and are responsible for keeping it safe. It makes the entire workflow dead simple. I’m not creating a new project or fiddling with IAM just to try out an endpoint.
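As a sketch of why the plain-token model feels so simple (the endpoint and header shape here follow the common Bearer-token convention, not any specific vendor's docs):

```typescript
// The entire "auth" story for a plain API token: load it from the environment,
// send it in one header. No IAM roles, no OAuth dance, no refresh tokens.
const apiKey = process.env.API_KEY; // never hard-coded or committed

async function callApi(prompt: string): Promise<unknown> {
  const res = await fetch("https://api.example.com/v1/complete", { // placeholder endpoint
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return res.json();
}
```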
When links in email come to mind, so does phishing.
I hate these magic links a lot.
The point is not to click suspicious links. If you know a magic link was sent, it's not suspicious.
That being said, I hate them just for the delay.
Click that stupid magic link for a service we use, and they’re asked for their Office 365 credentials… all the while I’m telling them not to click links in emails.
™Kelly Shortridge 2021 (https://x.com/swagitda_/status/1503751776134180873)
Otherwise, it's been a while since I've seen a reset link instead of a reset code. Copy/pasting is not much of a hassle, and it works even if the mail is checked on a different device.
The only real links I've had to deal with were app callbacks that were explicitly labeled as such (with instructions from the app explaining what to expect).
Apple's email privacy scheme seems interesting (Apple always loads all images), but I don't know if there are drawbacks.
In this case, what alternative is there to having an "unsubscribe" magic link in the footer of that email, one that includes a token unique to that email address and acts as proof of owning the address when you click it and ask to unsubscribe?
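One common shape for such a token, as a sketch: an HMAC over the address with a server-side secret, rather than any particular provider's scheme.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.UNSUBSCRIBE_SECRET ?? "change-me"; // server-side secret

// The token in the footer link is derived from the address itself, so clicking
// .../unsubscribe?email=...&token=... proves the click originated from an email we sent.
function unsubscribeToken(email: string): string {
  return createHmac("sha256", SECRET).update(email.toLowerCase()).digest("hex");
}

function verifyUnsubscribe(email: string, token: string): boolean {
  const expected = Buffer.from(unsubscribeToken(email), "hex");
  const given = Buffer.from(token, "hex");
  return given.length === expected.length && timingSafeEqual(expected, given);
}
```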
But using them as the only option to login is really, really annoying. Mails can get trapped in spam filters, delayed by intermediate server overload or spam filters that sometimes take 10 minutes, servers doing graylisting... Plus all the other annoyances listed in the article (e.g. multi-device users, in-app browsers). At the very least, support passkeys if you really don't want to store (hashed) passwords. And no, SMS is not an alternative: I was several times barred from logging in to a service because SMS wasn't properly working (can happen easily while roaming abroad).
Unfortunately blocked on my (work) network -- classified as miscellaneous / unknown category.
If you check the early comments on the thread I posted the full content for someone else who could not reach .zip domains.
We dumped them for a host of reasons, but included in there was their use of tragic link logins.
Absolute clowns. Glad to see this practice getting the negative attention it deserves.
Anthropic has been the one exception to this personal policy, simply because Claude is the best LLM out there. But it's a mountain of pain every time I have to re-login, and I've complained to them multiple times about this.
Is it though? Majority (if not all) services I frequently use have email as recovery option for forgotten passwords.
In any case the correct approach here is to fix password reset/account recovery (e.g. with social key recovery) rather than reduce everything to the lowest common denominator.
It also can be said to lower security because it instills the behavior of clicking on links in incoming emails as a standard practice.
Don't send me a link, tell me where to find it, after I log in.
On the other hand, training users to expect and use hard-to-read login links in emails is not really good either. It promotes a broad range of scams, phishing, and potential malicious code exploits, even if a particular sender's site has been hardened somehow (e.g. with a TOTP app on a phone).
1. Some users (0.1%) just don't ever get the email. We tried sending from our IP, sending from MailGun, sending from PostMark, having a multi-tier retry from different transactional tools. Still, some people just will not ever be able to log in.
2. People click old Magic Links and get frustrated when a 6-month old link "doesn't work". We've decided to remedy that by showing them a page that re-sends the link and explains the situation (like Docusign does) instead of an error message (a rough sketch of this is below).
3. People will routinely mis-spell their email and then blame the system when they don't get the code.
All of this still results, I feel, in way fewer support tickets than the email+password paradigm, so I'm still in favor of Magic links.
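A sketch of the re-send behaviour from point 2 above, framework-agnostic and with made-up names:

```typescript
// An expired or unknown token gets a page that offers to send a fresh link,
// instead of a dead-end error.
type VerifyResult =
  | { kind: "ok"; email: string }       // valid: create the session
  | { kind: "resend"; email: string }   // stale: explain and offer to re-send
  | { kind: "unknown" };                // never issued: show a generic re-send form

interface StoredToken { email: string; expiresAt: number }

function handleMagicLink(token: string, store: Map<string, StoredToken>): VerifyResult {
  const entry = store.get(token);
  if (!entry) return { kind: "unknown" };
  if (Date.now() > entry.expiresAt) return { kind: "resend", email: entry.email };
  return { kind: "ok", email: entry.email };
}
```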
I never tried to add magic links, but I added Google Sign-In to my SaaS several months ago, and since then it accounts for more than 90% of new sign-ups (users are devs, so rather tech-savvy and privacy-aware). I'm now convinced that no other method is a priority (I still have email/password of course).
Player 1 gets the same support request over and over, does nothing about it ("hey, that's what the user entered, they should be more careful!"), complains about it online, and who knows how many hours are wasted in the back and forth with the customers.
Player 2 simply makes the necessary change on the backend, the users don't even realize they made a typo, totally seamless flow.
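A sketch of what that backend change might look like, assuming the fix is normalizing a handful of common domain typos (the mapping here is purely illustrative):

```typescript
// Illustrative map of frequent domain typos; a real list would be driven by
// whatever actually shows up in bounced sign-up emails.
const DOMAIN_FIXES: Record<string, string> = {
  "gmial.com": "gmail.com",
  "gamil.com": "gmail.com",
  "gmail.con": "gmail.com",
  "hotmial.com": "hotmail.com",
  "outlok.com": "outlook.com",
};

function normalizeEmail(raw: string): string {
  const email = raw.trim().toLowerCase();
  const at = email.lastIndexOf("@");
  if (at === -1) return email; // not an address; leave it for validation to reject
  const local = email.slice(0, at);
  const domain = email.slice(at + 1);
  return `${local}@${DOMAIN_FIXES[domain] ?? domain}`;
}
```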
Hat tip to you. Hope you screenshot these two comments and bring this up in every interview to exemplify the contrast between "technically correct" and high-efficiency problem solving.
This is the way. The user can benefit from feedback that they got something wrong, in addition to a helping hand.
Which I'm not entirely enthusiastic about as it leaks all user emails to some random service.
In general, I do understand that use of SSO is due to convenience. Especially since in many cases websites provide less friction when signing up via SSO instead of using username+password.
I do it for services I don't care about. In my mind it is more privacy for me. Keeps you out of my real inbox and my password out of your system, and I believe that I can - to some extent - remove myself without having to go through whatever crap account deletion process that the service has tried to cobble together.
The worst offenders let me log in with Google and then immediately ask for a name and phone number or email, and ask me to verify it.
This shouldn't be a factor because your password should be a random series of characters that are unique to that site.
> I believe that I can - to some extent - remove myself without having to go through whatever crap account deletion process that the service has tried to cobble together.
To an absolute minimal extent: you can make it so Google won't tell them whatever it was they already told them again. But you can't make them delete the data that they already lifted from your Google account.
For keeping services out of your inbox, that's what email aliases are really good for. Register with an alias and then block that alias if they abuse it.
I am shocked, shocked, by the number of different K. Strauser people who have typed that email address into some random website or another. I've gotten bank notifications, loan documents, Facebook signup info, meeting minutes from some random volunteer work, and all kinds of other things. When I can figure out from context who the intended recipient is, I try to let them know so they can fix it. On one occasion, the person sent me back a swear-laden diatribe for "hacking their email". Sigh.
I think this has made me a better engineer, though. When someone says something in a meeting like "...as long as they type their email correctly", I can jump in and address that myth head-on. No, people will not type it correctly. If it's a minor pain in the neck for me, with an uncommon name, I can only imagine the traffic that the world's John Smiths get.
I'm listed as the email address for _many_ utility bills, doctors offices, and more political campaigns than I can count.
Comical how many people mess up their own email address.
Username+password (or passkeys) with a password manager (which ensures that credentials are used on the correct domain) via HTTPS is probably the only end-to-end encrypted way of exchanging credentials with good UX for the general public.
90% of web apps don't handle that kind of information, and for them a magic link is at least as good as passwords (as this article explains). Those that do handle things like personally identifiable information (beyond an email address) really should be enforcing 2FA or proper electronic IDs.
There is a whole profession writing recommendations about information security, and every web developer needs to be able to do this kind of analysis at a rudimentary level. We don't need to wing it, we can analyze security requirements in a systematic way.
It's not like the rest of the customer's data is not valuable? If you don't feel comfortable storing passwords, the amount of data I'd trust you with is strictly zero.
Planning for a breach doesn't make you more likely to have one—if anything it makes you less likely!
I had this case recently: sending a job application, so wanting to check what my LinkedIn actually says (I don't use or even update LinkedIn regularly). Now LinkedIn thinks my login looks suspicious and sends a confirmation email. (It does that nearly every time on the rare occasions I log in, probably because I delete cookies.) But the mail doesn't arrive. My email provider is usually very reliable, but I later learned that just at that moment they were experiencing multi-hour delays. While this was not a magic link, it shows that any login requiring quick email delivery can fail at the worst moment.
The "email is authentication" pattern
https://news.ycombinator.com/item?id=41475218
Some users use email flows, such as "magic links", instead of bothering with passwords at all.

The most-devices people I know are those who have a laptop, phone and tablet. That's it; I literally cannot think of anyone I know with more than this, and most of those with tablets are using them for games or reading or for the kids.
Magic links are indeed the best solution for the average user. Type in your email with autocomplete, get a notification from the mailbox, click, click, and you're in.
My autocomplete can fill a password or passkey in too. Don’t waste my time.
I created a bar management/sales platform for our group of friends. It's self-service, so people purchase their products on their phone and pay later.
People get... intoxicated... after which passwords appear to become quite the problem. Magic links solved that.
To solve the multi-device and in-app browser problem, people can also open the links on another device. That'll show a short code they can enter on the original device to actually log in. It's not perfect, but it works (rough sketch after this comment).
I do fully agree that passwords should always be an option as well.
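A sketch of that hand-off, with invented names and an in-memory map standing in for the real backend: device A starts a login and waits, device B opens the emailed link and is shown a short code, and typing that code on device A finishes the login there.

```typescript
// Invented names throughout; an in-memory map stands in for the real backend.
interface PendingLogin { email: string; shortCode?: string; expiresAt: number }

const pending = new Map<string, PendingLogin>(); // keyed by the emailed link token

// Called when the emailed link is opened (possibly on another device or in an in-app browser).
function openMagicLink(linkToken: string): string | null {
  const login = pending.get(linkToken);
  if (!login || Date.now() > login.expiresAt) return null;
  login.shortCode ??= String(Math.floor(100000 + Math.random() * 900000)); // 6 digits
  return login.shortCode; // shown to the user, to be typed on the original device
}

// Called on the original device, which kept its own linkToken when it requested the email.
function completeWithShortCode(linkToken: string, typedCode: string): boolean {
  const login = pending.get(linkToken);
  return !!login && login.shortCode !== undefined && login.shortCode === typedCode;
}
```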
To be fair, in-app browsers should die, especially those without an "open in regular browser" opt-out – which RSS readers should readily offer anyway.
[1] https://en.wikipedia.org/wiki/RSS
[2] https://en.wikipedia.org/wiki/Comparison_of_feed_aggregators
Throw every product manager responsible for forcing in-app browsers upon their users in jail.
I've received magic links to my Gmail account that belong to other people, for accounts that have ordered flight tickets, or clothing, or digital services.
Those people, I guess they now have no way to access their online account, as they cannot password reset (if that was the fallback), or change their email (usually requiring confirmation), or receive their magic link.
There's nothing I can do here except delete the email. I don't have any indication of what the correct email should be, and the person's name is the same as my legal name, and there are a lot of people with that name in the world.
Few services verify an email during sign-up, because I'm sure data shows that added friction during sign-up results in fewer people signing up.
Frankly, if somebody else uses my address for a service and I'm receiving anything other than email verification from that service, I'm reporting it as spam on both Gmail and Fastmail because that's what it is.
I have my own domains for email, so I haven't had the issue of someone else entering my email, but I keep hearing about it from friends.
I'm quite fast at passwords and 2FA. The whole thing is second nature: I have a password scheme to deduce the password for any site while keeping them long and high-entropy, and I can do 2FA calculations from any trusted device without taking my hands off the keyboard (thanks to oathtool). In any case my passwords are synced securely and I can look them up with hands on keyboard.
This is strictly better than "single point of email failure". Why force me to be less secure and less usable.
Please, just allow me to use passwords and regular old TOTP.
Our response to above: https://wideangle.co/blog/passwordless-authentication-magic-...
Conclusions:
Magic Links good? Yes.
Magic Links the best? No.
In the US, because of the Fifth Amendment's Self-Incrimination Clause, passwords cannot be demanded. Passwords are testimonial evidence. [United States v. Hubbell (2000); In re Grand Jury Subpoena Duces Tecum (11th Cir. 2012)]
Biometrics on the other hand are not. The court ruled that a defendant could be compelled to unlock a phone with biometrics because it is not testimonial. [Commonwealth v. Baust (Virginia, 2014); State v. Diamond (Minnesota, 2017)]
Basically, passwords cannot be compelled to be disclosed, while biometrics can.
There is similar legal stance in Canada, UK, Australia, India, Germany, and Brazil to name a few.
Finally, under duress, passwords can be withheld, while biometrics cannot, short of self-harm.
I assume that would work for the situations you have in mind.
Imagine you are not in a relatively "democratic" nation.
(0) You are asleep. Your phone is on the nightstand. At 4:00 in the morning, you wake up with a rifle stuck in your face.
(1) You are walking down the street, middle of the day. Your phone is in your jacket's inside pocket. Two burly individuals grab each of your hands, tie them, and then toss you into a van that just pulled up.
(2) You are walking around, letting the wind hit your face and feeling it in your hair. Your cell phone is in the jilbab or burqa you changed out of. A rock hits your head and you black out.
(3) You walk into the public WC/bathroom in the bar, but you do not take your phone in with you because it is just ... ick. You come back out and the phone is in the hands of a local law enforcement agent.
Each one of these has happened in real life. There are just a myriad of real scenarios where someone is not in reach of their cell phone.
Nothing happens out of the blue. People don't get searched randomly, except in some rare places where an iPhone is itself a source of danger, being a valuable possession.
If someone feels that such events could happen, it is mandatory to do OPSEC. If not, too bad for them. Anyway, proper torture will reveal the password in a "not so democratic country", which also happens in real life.
If it dialed immediately, I'd be in jail already, going by the amount of times I managed to trigger the "call 911?" screen by accident in the last year or so.
I extrapolated this as: anything that is in the mind (a PIN, a password, some secret) cannot be demanded, while anything outside of the mind (biometrics, geolocation, a physical object like a key) can.
Again, I am just a hairless monkey smashing rocks together. Consult experts.
There is not a similar stance in the UK. You can be compelled to provide a password. Section 49 of the Regulation of Investigatory Powers Act 2000 (RIPA, and let that doublespeak sink in for a second) allows the police to compel it, subject to a warrant from a judge.
The sentence (subject to sentencing guidelines) is up to two years in prison or 5 years for national security / child indecency cases.
You can claim you don't remember/know it as a defence, but in most cases that's not going to be believed by a jury.
In theory once you got out you could be re-served with the notice and face another 2-5 years. Rinse and repeat.
Is there no concept of double-jeopardy in UK jurisprudence?
As per RIPA 2000, Section 50(2)(a).
To do this, they'd likely need some evidence to persuade the jury, beyond reasonable doubt, that the encryption system had such a feature.
We want regulation to be for the benefit of all so we attach an emotional meaning to it but nothing about the word says it has to be beneficial.
Is there at least some argument of reasonableness? I have an old Runescape account I would love to be able to get back into, but I don't even remember the email it was tied to, much less the password. I was a kid back then, so even the card that paid for membership was my parents'. Is there some expectation that the prosecutor has to show the account was accessed in the last X years, or is this effectively a backdoor to keep someone in prison indefinitely?
Seeing passkeys as a dedicated login on their own is...strange. For all of the reasons that you indicate.
In the UK the Regulation of Investigatory Powers Act (RIPA) makes it a criminal offence to not divulge a password if compelled via a RIPA notice.
Can the judge really throw you in jail, and re-jail you multiple times, because the password you keep providing did not work?
But none of this has much to do with the biometric auth you do with passkeys, because passkeys are used in the places passwords would be used: logging into apps and websites. Which you're only doing when your device is already unlocked and you are actively using it.
Also pressing the lock button five times in a row.
Magic links and OTPs have become common for many other sites I use -- Udemy, Teachable etc. come to mind.
Recently I bought a cheap "smart watch" for my kid, mostly for the digital display with configurable clock faces and simple step counting. The app would refuse to activate the watch unless we provided a valid mobile number and OTP. Why the hell do I need to give them a working mobile number just to use a smartwatch? Even if I wanted (which I did not) to get notifications / calls / texts / caller ID / contacts from my paired smartphone... the smartwatch app does not need to know my phone number for that functionality to work. I feel so powerless.