I've always wondered if it is 10 different orgs doing the campaigns, or the same one. If the same one, why send 10?
My bet is that one criminal group is selling software to enable this, with very similar default settings. Then ten groups buy the software, and each one ends up sending you a very similar email.
Social engineering (and I include spearphishing) has always been powerful and hard to mitigate. Now it can be done automatically at low cost.
If it was done without target consent, it would certainly be unethical.
It could be kinda-secure if the header had to carry a payload matching a value pre-approved for a time period. However, an insider threat could see the test going on and then launch their own campaign during the validity window.
The situation involves institutions happy to attach opaque links to email as part of their workflow. What could change this? All I can imagine is state regulation, but that also seems implausible.
We have sandboxing on mobile apps. Why can't we have the same for desktop?
After the limited success of the Windows Store, you can now get the same sandboxing in standalone installers. It has been adopted by approximately nobody.
However, it is reasonable to expect a single hole to be fixed. The "email hole" has been discussed for decades, but here we are.
At that scale, expecting a core issue to be quickly (or ever) fixed is just unrealistic. I honestly wonder if fundamentally it will ever be fixed, or if instead we get a different communication path to cover the specific use cases we do care about security.
PS: the phone is now almost a century and a half old, and we sure couldn't solve its scamming issues...
Morally? No reason why, and people are working on it (slowly).
Practically? Because sandboxing breaks lots of things that users and developers like, such as file picking (I hate snaps), and it takes time to reimplement them in a sandbox in the way that people expect them to work. If it requires the developers' cooperation, then it's even slower, because developers have enough APIs to learn as it is.
I think (but am not sure) that something using trust networks from the ground up would be better in the long term. Consider anything dodgy until it has built trust relationships.
Eg email servers can’t just go for it. You need time to warm up your IP address, use DKIM etc. People can’t just friend you on FB without your acceptance so it’s a lot safer than email, if still not perfect. A few layers of trust would slow bad actors down significantly.
A trust network wouldn’t be binary. Having eg a bunch of spam accounts all trust each other wouldn’t help getting into your social or business network.
Thoughts from experts?
But this is fundamental to an open Internet. Yes going whitelist-only would stop bad actors but it would also hand over the entire internet to the megacorps with no avenue for individual success.
Email as it presently exists is a constant opening for phishing and spear phishing. Browser exploits are common too, but it's harder (not impossible) to make them personal. And phishing doesn't have to rely on a browser exploit - a fake login page is enough.
It's logical to have a whitelist (or disallow) email links but still allow browsers to follow links.
Eg certs. Let’s Encrypt equivalent for credibility, where I can trust you as we interact more, and borrow from your trust networks. Send spam and you reduce your cred. (Letscred.com is available right now if anyone is very bored :)
Gotta be tested very carefully so you don’t end up with a black mirror episode, of course.
IT wasn’t amused when I reported it as phishing attempt.
At a previous client, the CIO complained about the low click rate on their security-training emails; everyone thought they were spam.
It also does little against compromised mailboxes - heck, a sufficiently advanced spear phish might even have better chances if the user misunderstands the security improvements this would provide.
But I think other than this, there's not much else to fix. Some people are malicious, others get compromised. No fixing that.
I’ll die on this hill.
or by your “friend” mentioning a highly personal issue that only you two were supposed to know, asking you to phone someone on their behalf
or by your “relative”, etc.
Same. I found a setting in legacy Outlook to force all e-mails to plain text. So every corporate email I reply to converts the product owner's HTML-formatted email into junk.
Gives me a little joy that the e-mail they worked so hard on gets mangled by my outlook replies :)
How would that help? You can put links in plain text.
The only people who want to send HTML emails are marketers, advertisers, trackers, scammers, hackers, and that clueless manager who wants the cornflower blue background. (most of these actors are the same people, except for that last one).
Chrome 0-days are expensive and aren't going to be wasted on the masses. They'll be sold to dodgy middle eastern countries and used to target journalists or whatever.
If you aren't a high value target you can click links. It's fine.
I have to wonder if, in the near future, we're going to have a much higher perceived cost for online social media usage. Problems we're already seeing:
- AI turning clothed photos into the opposite [0]
- AI mimicking a person's voice, given enough reference material [1]
- Scammers impersonating software engineers in job interviews, after viewing their LinkedIn or GitHub profiles [2]
- Fraudsters using hacked GitHub accounts to trick other developers into downloading/cloning malicious arbitrary code [3]
- AI training on publicly-available text, photo, and video, to the surprise of content creators (but arguably fair use) [4]
- AI spamming github issues to try to claim bug bounties [5]
All of this probably sounds like a "well, duh" to some of the more privacy and security savvy here, but I still think it has created a notable shift from the tech-optimism that ran from 2012-2018 or so. These problems all existed then, too, but with less frequency. Now, it's a full-pressure firehose.
[0]: https://www.wsj.com/politics/policy/teen-deepfake-ai-nudes-b...
[1]: https://www.fcc.gov/consumers/guides/deep-fake-audio-and-vid...
[2]: https://connortumbleson.com/2022/09/19/someone-is-pretending...
[3]: https://it.ucsf.edu/aug-2023-impersonation-attacks-target-gi...
[4]: https://creativecommons.org/2023/02/17/fair-use-training-gen...
[5]: https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...
AI now means much less skilled people can be as good as she was. Karla as a Service. We are doomed.
Sounds right. I assume we will all have AI agents triaging our emails trying to protect us.
Maybe we will need AI to help us discern what is really true when we search for or consume information as well. The amount and quality of plausible but fake information is only going to increase.
"However, the possibilities of jailbreaks and prompt injections pose a significant challenge to using language models to prevent phishing."
Gives a hint at the arms race between attack and defense.
For instance, there is a very good classical algorithm for preventing password brute-forcing: exponential backoff on failure per IP address, maybe with some additional per-account backoff as well. Combined with sane password rules (e.g. "correct horse battery staple", not "you must have one character from every language in Madagascar"), it makes password brute-forcing infeasible and forces attackers to try other approaches - which in the security world counts as success. No AI needed.
My process when I see a sketchy email is to hover over the links to see the domain. Phishing links are obvious to anyone who understands how URLs and DNS work.
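The hover check can also be scripted with the stdlib; a rough sketch (the suffix comparison below is a crude heuristic, not a proper public-suffix-list check):

```python
from urllib.parse import urlparse

def link_host(url: str) -> str:
    """Return the hostname a link actually points at, lowercased."""
    return (urlparse(url).hostname or "").lower()

def looks_spoofed(url: str, expected_domain: str) -> bool:
    """Flag links whose host is neither the expected domain nor a subdomain of it."""
    host = link_host(url)
    return host != expected_domain and not host.endswith("." + expected_domain)
```

This catches the classic `paypal.com.evil.example` trick, where the familiar name is merely a subdomain label on an attacker-controlled domain.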
But working for a typical enterprise, all links are “helpfully” rewritten to some dumbass phishing detection service, so I can no longer do this.
At my current company I got what I assumed was a phishing email, I hovered over the links, saw they were pointing to some dipshit outlook phishing detection domain, and decided “what the hell, may as well click… may as well see if this phishing detection flags it” [0]…
… and it turns out it was not only not legit, but it was an internal phishing test email to see whether I’d “fall for” a phishing link.
Note that the test didn’t check if I’d, say, enter my credentials into a fraudulent website. It considered me to have failed if I merely clicked a link. A link to our internal phishing detection service because of course I’m not trusted to see the actual link itself (because I’d use that to check the DNS name.)
I guess the threat model is that these phishers have a zero-day browser vulnerability (worth millions on auction sites) and that I’d be instantly owned the moment I clicked an outlook phishing service link, so I failed that.
Also note that this was a “spear phishing” email, so it looked like any normal internal company email (in this case to a confluence page) and had my name on it. So given that it looks nearly identical to other corporate emails, and that you can’t actually see the links (they’re all rewritten), the takeaway is that you simply cannot use email to click links, ever, in a modern company with typical infosec standards. Ever ever. Zero exceptions.
- [0] My threat model doesn’t include “malware installed the moment I click a link, on an up to date browser”, because I don’t believe spear phishers have those sort of vulnerabilities available to burn, given the millions of dollars that costs.
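For what it's worth, those rewritten links can often be unwrapped locally to see the real destination, since Outlook's Safe Links rewriting puts the original target in a `url=` query parameter. A sketch (assuming that standard wrapper shape; other rewriting services use different formats):

```python
from urllib.parse import urlparse, parse_qs

def unwrap_safelink(url: str) -> str:
    """If this is a Safe Links wrapper, recover the original target from ?url=."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if host.endswith(".safelinks.protection.outlook.com"):
        inner = parse_qs(parsed.query).get("url", [])
        if inner:
            return inner[0]
    return url  # not a wrapper we recognize; leave it alone
```

That at least restores the ability to eyeball the real domain before deciding anything.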
Now I send all of these types of email to spam and don't give a fuck. Anything "internal" with a link to click goes to spam unless it's directly from my boss. Turns out 99% of it is not that important.
It came from [email protected], with [email protected] cc. All red flags for phishing.
I googled it because it had all the purchase information, so unless a malicious actor infiltrated Meta's servers, it had to be right. And it was, after googling a bit. But why do they do such things? I would expect better from Meta.
Is there anything which validates that the information from whois is actually accurate?
Yes, like you say, there's always the chance that someone hijacked an official domain - that's where other things like a formal communication protocol ("we will never ask for your password", "never share 2FA codes", "2FA codes are separate from challenge-response codes used for tech support") and rules of thumb like "don't click on shortened links" come in. Defense in depth is a must, but the list of official addresses should be the starting point and it isn't.
My bank doesn't tell me that. It's this kind of incompetence and lack of responsibility on their part that's leading to scams and phishing being so unnecessarily successful.
I have confirmed the fraud department one was legitimate, but haven't bothered with the others.
I'm surprised you would expect better.
Everything I hear about their processes, everything I experience as a user, says their software development is all over the place.
Uploading a video on mobile web? I get the "please wait on this site" banner and no sign of progress, never completes. An image? Sometimes it's fine, sometimes it forgets rotation metadata. Default feed? Recommendations for sports teams I don't follow in countries I don't live in. Adverts? So badly targeted that I end up reporting some of them (horror films) for violent content, while even the normal ones are often for things I couldn't get if I wanted to such as a lawyer specialising in giving up a citizenship I never had. Write a comment? Sometimes the whole message is deleted *while I'm typing* for no apparent reason.
Only reason I've even got an account is the network effect. If the company is forced to make the feed available to others, I won't even need this much.
If they stopped caring about quality of their core product, what hope a billing system's verification emails?
Most companies are already splitting domains for customer and corporate communication, that's a step in the same direction.
While you're right that it sounds fishy as hell, it's also mildly common IMO and understandable, especially when e-commerce is not the main business, and could be a reflection of how anti-phishing provisions are pushing companies to be a lot more protective of the email that comes from their main domain.
If I talk to Peter, Paul has no business getting any information about that or discussing it with me, until Peter introduces me to Paul.
We teach our kindergarteners this rule!
If I ask my bank for a debit/credit card, they'll pass my request to another partner which will do my background check and potentially contact me for additional info.
If I order a delivery from IKEA it will be probably handled by some local company I'll have no idea how precisely they're bound to IKEA. Some complete stranger will be at my doorstep with a truck waiting behind.
There might be some mention of involved third parties in the contracts, but we usually don't read them.
So we end up getting random phone calls from unknown numbers claiming to be associated with a reputable entity, which turn out to be actually legit even though the whole thing sounds completely fishy.
For what it's worth, I get scam messages claiming to be about usps, dhl, et cetera even when I'm not expecting a package. Recently, I have a couple claiming to be about a package failing to clear customs (but if I just pay a quick fee...).
I almost just hung up, because you have the rep urging you to do it while I'm trying to vet that every link is what it says it is before I enter any Amazon info. The cookies from my already signed-in session did not apply to this SSO either. Ended up working out in the end.
I could not believe they have the same flow as the scammers do. This is the same company that regularly sends out warning emails about phishing to me. Go figure.
There's a name for this. Scamicry [0].
In my experience it's because getting a subdomain set up inside large companies is a MAJOR bureaucratic nightmare, whereas registering a new domain is very easy.
"We take safety very seriously. Look how much safer our SOTA model is based on our completely made up metrics. We will also delay releasing these models to the public until we ensure they're safe for everyone, or just until we need to bump up our valuation, whichever comes first."
As those two are run by companies actively trying to prevent their tools being used nefariously, this is also what it looks like to announce they found an unpatched bug in an LLM's alignment. (Something LessWrong, where this was published, would care about much more than Hacker News).
Also, in section 3.6 of the paper, they note that just switching "phishing email" to "email" in the prompt helps.
Or said differently, tell it that it's for a marketing email, and it will gladly write personalized outreach
If I receive a unique / targeted phishing email, I sure will check it out to understand what's going on and what they're after. That doesn't necessarily mean I'm falling for the actual scam.
They all pass DKIM, SPF, etc. Some of them are very convincing. I got dinged for clicking on a convincing one that I was curious about and was 50/50 on it being legit (login from a different IP).
After that, I added an auto delete rule for all the emails that have headers for our phish testing as a service provider.
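That kind of auto-delete rule is trivial once you know the header to match. A sketch with the stdlib email parser - the header names below are made up, since each phish-test provider injects its own (GoPhish-style tools, for example, add an identifying signature header):

```python
import email

# Hypothetical example header names; substitute whatever your provider injects.
TEST_HEADERS = {"x-phishtest-campaign", "x-gophish-signature"}

def is_phish_test(raw_message: bytes) -> bool:
    """True if the message carries any header our phish-test provider injects."""
    msg = email.message_from_bytes(raw_message)
    return any(name.lower() in TEST_HEADERS for name in msg.keys())
```

Wire that into a sieve/filter step and the "tests" disappear before you ever see them.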
There was yet another study titled "Why Employees (Still) Click on Phishing Links" (2020), available via NIH's PMC: https://pmc.ncbi.nlm.nih.gov/articles/PMC7005690/
Given the pathology, clicking is the visible and obvious symptom.
- 3 new email chains from different sources in a couple weeks, all similar inquiries to see if I was interested in work (I wasn't at the time, and I receive these very rarely)
- escalating specificity, all referencing my online presence, the third of which I was thinking about a month later because it hit my interests squarely
- only the third acknowledged my polite declining
- for the third, a month after, the email and website were offline
- the inquiries were quite restrained, having no links, and only asking if I was interested, and followed up tersely with an open door to my declining
I have no idea what's authentic online anymore, and I think it's dangerous to operate your online life with the belief that you can discern malicious written communications with any certainty, without very strong signals like known domains. Even realtime video content is going to be a problem eventually.
I suppose we'll continue to see VPN sponsorships prop up a disproportionate share of the creator economy.
In other news Google routed my mom to a misleading passport renewal service. She didn't know to look for .gov. Oh well.
That's where we're headed. Bad actors paying for DDoS attacks is more or less mainstream these days. Meanwhile the success rate for phishing attacks is incredibly high and the damage is often immense.
Wonder what the price for AI targeted phishing attacks would be? Automated voice impersonation attempts at social engineering, smishing, e-mails pretending to be customers, partners, etc. I bet it could be very lucrative. I could imagine a motivated high-schooler pulling off each of those sorts of "services" in a country with lax enough laws. Couple those with traditional and modern attack vectors and wow it could be really interesting.
I don't think this is a bad thing, because even with peer-reviewed papers, there are cases where they can be completely fabricated but still get published. You shouldn't rely on a single paper alone and should do further research if you have doubts about its content.
employer.git.pension-details.vercell.app
Why do these companies make this stuff so hard!?