That's going quite far. Even with all the details of it documented and open, there's a relatively small number of people who can actually verify that both the implementation is correct and the design is safe. Even though I can understand how it works, I wouldn't claim I can verify it in any meaningful way.
Alternatively: it's trivial for people sufficiently experienced with cryptography. And that's a tiny pool of people overall.
Or go back to Dual_EC_DRBG.
Unless DJB has blessed it, I'll pass.
Avoiding this is obviously a huge effort.
How much effort would it be for the US government to force Google to ship a different APK from everyone else to a single individual?
VS
"You must backdoor the operating system used on billions of devices. Nobody can know about it but we somehow made it a law that you must obey."
Come on, that's not the same amount of effort at all.
Anything you can buy retail will for sure fuck you, the user, over.
Hence the security afforded by Signal is very weak in practice and questionable at best.
Perfect security isn't possible. See "reflections on trusting trust".
> ANOM was a trap
Yes, ANOM was intended to be a trap.
> and most closed encryption schemes are hideously buggy
Yes they are. Hence some of us use open encryption schemes on our closed-market devices.
> You're actually better off with Android and signal.
I am better off with closed-market devices than I am with any retail device.
> If we had open baseband it would be better
And the ability to audit what is loaded on the handset, and the ability to reflash, etc. In the real-world all we have so far is punting this problem over to another compute board.
> Perfect security isn't possible.
Perhaps, but I was not after "perfect security", I was just after "security" and no retail device will ever give me that, but a closed-market device already has.
> See "reflections on trusting trust".
Already saw it. You're welcome to see:
- https://guix.gnu.org/blog/2020/reproducible-computations-with-guix/
- https://reproducible-builds.org
- https://guix.gnu.org/en/blog/2023/the-full-source-bootstrap-building-from-source-all-the-way-down/
Discuss an exceedingly clear assassination plot against the President exclusively over Signal with yourself, between a phone that's traceable back to you and a burner that isn't. If the Secret Service pays you a visit, and that's the only way they could have come by it, then you have your answer.
You want to use this, by all means.
Lessons Learned
We believe that all of the vulnerabilities we discovered have been mitigated by Threema's recent patches. This means that, at this time, the security issues we found no longer pose any threat to Threema customers, including OnPrem instances that have been kept up-to-date. On the other hand, some of the vulnerabilities we discovered may have been present in Threema for a long time.
I believe the Session referred to is here ... https://getsession.org/
Tox is here? https://tox.chat/
The Matrix I found seems to have been closed down earlier this month ... https://en.m.wikipedia.org/wiki/Matrix_(app) ... that's assuming I found the correct "Matrix".
If it matters to you don't take my word for those being the correct points of contact, that's just me searching for two minutes.
As a side rant, I wish people would choose less generic names for their projects. Calling something "Session"? You might as well call it "Thing".
This is obviously technically impossible, but the desire for that end state makes a ton of sense from the IC’s perspective.
Secrets fail unsafe. Maybe an alternative doesn't.
Government keeps trying to mandate it in various ways. With predictably bad results.
Salt Typhoon - which this discussion is about - is an example. Tools for tracking people that were supposed to be for our side, turn out to also be used by the Chinese. Plus the act of creating partial security often creates new security holes that can be exploited in unexpected ways.
Either you build things to be secure, or you have to assume that it will someday be broken. There is no in between.
Yup. The attack hit the CALEA backdoor via a wiretapping outsourcing company. Which one?
* NEX-TECH: https://www.nex-tech.com/carrier/calea/
* Subsentio: https://www.subsentio.com/solutions/platforms-technologies/
* Sy-Tech: https://www.sytechcorp.com/calea-lawful-intercept
Who else is in that business? There aren't that many wiretapping outsourcing companies.
Verisign used to be in this business but apparently no longer is.
[1] https://www.google.com/search?client=firefox-b-d&q=calea+sol...
That seems pretty clear.
Wiretap systems sit on the telecom provider side, and they consist of a bunch of different, in many cases ordinary, networking equipment that can be easily misconfigured.
TTPs (a.k.a. the companies listed above) are optional, and are usually used by carriers that don't have their own legal department to process warrants, or that don't want to deal with the fine details of intercepts.
Is it a great idea to give all that info to India as well?
Nice, we do not want the CEOs of these telcos to have to give up their bonuses. So we force them to do just the bare minimum. Isn't capitalism great.
This has nothing to do with capitalism. The Soviet Union wasn’t a paragon of information security.
The goal is to make the number at the bottom of the piece of paper bigger by a large enough margin in the next ninety days. If you can prove that there's the imminent risk of a specific cyberattack in the next 90 days and that it will have an adverse impact on getting that number bigger, fine, company leadership will pay attention, but that's rarely the case. Most cyberattacks are obviously clandestine in nature, and by the time they're found, the move isn't to harden infrastructure against known unknowns, but to reduce legal exposure and financial liability for leaving infrastructure unsecured. It's cheaper, and makes the number at the bottom of the piece of paper bigger.
1. Capitalists seem pretty content with money losing ventures for far more than "the next ninety days", as long as they think it'll bring them future profits. Amazon and Uber are famous examples.
2. You think the government (or whatever the capitalism alternative is) isn't under the same pressure? Unless we live in a post-scarcity economy, there's always going to be a beancounter looking at the balance sheet and scrutinizing expenses.
Sometimes thought-terminating quips are not enough.
Funny that Venmo won't let me use a VoIP number, but I signed up for Tello, activated an eSIM while abroad, and was immediately able to receive an SMS and sign up. For the high barrier cost of $5. Wow, such security. Bravo folks.
Every single one works with GVoice, except Venmo. Chase, Cap1, Fidelity, etc. Not small players.
So while I think you make a fair enough argument, it doesn't seem to hold when nobody else does it, and it makes Venmo seem like a pain in the arse.
That is a closing window and the case in fewer and fewer places. It won't be long until most people would need to fly across the globe or get involved with organised crime to pull that off...
The idea that scammers don't have digital money lying around just waiting to be spent on something is so absurdly out of touch with how everything in cyber works.
Corporations "eat" money.
Entities that can feed a corporation, are treated as peers, i.e. "people".
Thus, on shitter, if you can pay, you are a person (and get a blue checkmark).
$5 is at least 5x the cost of a voip number. I'm not a bank, but if I'm spending money to verify you control a number, I feel better when you (or someone else) has spent $5 on the number than if it was $1 or less.
VoIP is so well known (and so automated) that even at $0.10 a number it would be an order of magnitude easier to abuse.
Banks are always slow and behind the times because they are risk averse. That has pros and cons.
There are the ones that closely follow software updates, and you get to complain that things are breaking all the time.
And there are the stable distros, where you get to complain about how old and out of date everything is.
This is exactly it.
All of these auth mechanisms that tie back to "real" phone numbers and other aspects of "real identity" are not for you - they are not for your security.
These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
So, when twilio (for instance) refuses to let you 2FA with anything other than tracing back to a real mobile SIM[1] (how ironic ...) it is not to help you - it is designed to slow down abusers.
[1] The "authy" workflow is still backstopped by a mobile SIM.
Relevant reading.
Basically comes down to: the costs of acceptable levels of fraud < the cost of eliminating all fraud.
There are processes that would more or less eliminate all fraud, but they are such a pain in the ass that we just deal with the fraud instead.
>These companies have a brutal, unrelenting scam/spam problem that they have no idea how to solve and so the best they can do is just throw sand in the gears.
Sure does a great job for all the various online social media places that ostensibly have nothing to do with transacting money, still want my phone number, and still get overrun with spam and (promotion of) scams....
Requiring a deposit would be more direct, but administration of deposits would be a lot of work, and you have an uphill battle to convince users to pay anything, and even if they want to pay, accepting money is hard. And then after all that, some abusers will use your service to check their stolen credit cards.
I don't care. I know it's a numbers game. I know they don't care about me. But companies absolutely lose my business because of this bullshit.
A PROCESS for verifying that the number isn't used for fraud, and then allowing its use. I don't know, maybe the fact that I've been a customer for YEARS, use that number, and have successfully done thousands of dollars in transactions over the platform without any abnormal issue?
One time a company retroactively blocked VOIP numbers, which was really stupid.
I'd say that with Google, chances are that they just stop offering the service.
But, I worry about what happens if I somehow get locked out of the account…
So which would you prefer:
(A) A low-level customer service representative can restore your access, but said representative is arguably susceptible to social engineering and other human weaknesses.
(B) Your account can be protected by a physical 2FA key (YubiKey), but in the case of a loss or compromised account, the recovery processes are hard to navigate and may not yield successful recovery?
In the case of (A) you have little security. In the case of (B) you can do a LOT to prevent account loss, but if bad things happen (whether your fault or not) you are locked out by default.
From a privacy point of view, I'm not sure that (B) is such a bad option.
But you could make the argument that you should back up cloud services, the same way you back up hard drives.
For my Workspace account, I back up with Google Takeout every 2 months to Backblaze B2. I also sync My Drive (with rclone) to a local directory, which is uploaded weekly to B2.
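For illustration, a minimal sketch of how that two-step pipeline could be scripted, assuming rclone remotes named `gdrive:` and `b2:` have already been set up with `rclone config` (the remote names, bucket, and paths are placeholders, not the commenter's actual setup):

```python
#!/usr/bin/env python3
"""Sketch of a Drive -> local -> Backblaze B2 backup step using rclone.
Assumes two rclone remotes configured via `rclone config`:
  gdrive: -> the Google Drive account
  b2:    -> a Backblaze B2 bucket
All names and paths are illustrative.
"""
import subprocess
import sys

LOCAL_MIRROR = "/backups/gdrive"          # local copy of My Drive
B2_TARGET = "b2:my-backup-bucket/gdrive"  # hypothetical bucket path

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main() -> None:
    # 1. Mirror My Drive to a local directory (e.g. weekly, via cron).
    run(["rclone", "sync", "gdrive:", LOCAL_MIRROR, "--fast-list"])
    # 2. Upload the local mirror to B2. `copy` never deletes on the
    #    destination, so a bad sync can't clobber old snapshots there.
    run(["rclone", "copy", LOCAL_MIRROR, B2_TARGET, "--fast-list"])

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as e:
        sys.exit(e.returncode)
```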
All of my 2FA Mules[1] are USMobile SIMs attached to pseudonyms which were created out of thin air.
It helps a lot to run your own mail servers and have a few pseudonym domains that are used for only these purposes.
Some companies have much lower thresholds for their KYC, but end up being facilitators of crime and draw scrutiny over time by both their more regulated partners and their governments.
I'd note that the US is relatively lax in these requirements compared to Singapore, Canada, Japan, and increasingly the EU. In many jurisdictions you need to prove liveness, do photo verification, and sometimes do video interviews with an agent while showing your documents.
When vtuber-esque deepfakes become trivial for the average person, I wonder what the next stage in this cat-and-mouse becomes. DNA-verification USB dongles?
I actually had an issue with this and ended up sending a notarized letter by snail mail, since I didn't feel like making a special 1hr each way trip during business hours to the closest branch.
Seriously, you see this in any country of any size. Remote may just mean 300 km (186 mi) off the coast. Politicians go where the votes are, of course, but this just means disregarding rural areas is a self-fulfilling prophecy. The more you do it, the more remote they become.
Then you have to be ready to accept that there are advantages and disadvantages to your choice of where you live, and that is one of the latter.
There's a reason rural property is so cheap. It comes with a lot of disadvantages and inconveniences and costs that city-dwellers don't need to pay.
Except that person you’re responding to explains succinctly how this is security theater that accomplishes little and ultimately is just a thinly veiled tactic for harassing users / coercive data collection. And the person above that is commenting that unnecessary data collection is just an incentive for hackers.
Comments like this just feel like apologism for bad policies, at best. Does anyone really think that people need to be scrutinized because most money laundering is small transactions from individuals, or, is it still huge transactions from huge customers that banks want to protect?
The issue, though, boils down to this: governments don't want the financial infrastructure in their jurisdiction to allow unfettered crime. I've never seen a single government (granted, I've never seen what happens in extremely oppressive regimes, as we don't generally do business there due to sanctions controls) that actively collects KYC outside of large transactions; the regulations exist to ensure a minimum baseline of KYC so the companies themselves can comply and reduce their own losses and instability, since some party is often held liable in fraud, and in money laundering or sanctions evasion some institution is subject to fines for facilitation.
But to be frank, I think very little of what's done is materially successful against most competent criminals, and the consequence of being caught is usually just being blocked until they find a way around. To that end it's not so much security theatre as compliance theatre. On the other hand, it does act as a high-pass filter, as most fraud and financial crime is NOT competent. By and large, retail finserv is a minimization effort, not a prevention effort.
The regulations that are effective at prevention are usually so restrictive and so difficult to implement that they’re absurd for both the finserv to implement and for the participants to get through the hurdles.
I don't know that there are any perfect solutions, and what exists is generally dumb, but the intentions at the core are good. It's foolish, though, to look at something as complex as financial infrastructure and wave it away as harassment and coercion rather than well-intentioned incompetence.
Like, the only reason I don't answer the phone and say "this is <Dad's name>" is because I'm honest. You'll never keep out a bad guy who already knows all the information that you ask for; he'll just lie and claim to be the business/account owner.
> he'll just lie and claim to be the business/account owner.
He can lie, but he doesn't have another person's passport to prove his lies.
And you don't need a passport. I've never met a company that will require full KYC-level video-identification with you on every call. You say that you're you (it doesn't matter whether you actually are you), you give them the secret code and they're happy.
Actually, just the other day I encountered this, and Dad just came on the line and authorized me. If I'd have lied, it would have gone more smoothly.
In all the cases this has happened to me, the most verification they've ever needed is the last 4 of his SSN, which he has told to me.
We must implement as LAW that a SIM card can provide and only provide a Zero Knowledge Proof of "this SIM is valid for this cellular/data plan up to a specific date".
If they want to track us all the time, whatever, if they can't keep that data safe from the Chinese Communist Party, then they aren't competent enough to have it.
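As a sketch of what "provide and only provide" could mean at the interface level, here is a toy attestation where the carrier signs a statement containing nothing but an expiry date. This is not a true zero-knowledge proof; a real design would need blind signatures or ZK proofs so presentations can't be linked back to a subscriber. All names are illustrative.

```python
"""Toy sketch of a SIM validity attestation that reveals only an
expiry date, not a subscriber identity. NOT a real zero-knowledge
proof: a production design would need blind signatures or ZK proofs
so the carrier cannot link presentations back to a subscriber.
Requires: pip install cryptography
"""
import json
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Carrier side -------------------------------------------------
carrier_key = Ed25519PrivateKey.generate()
carrier_pub = carrier_key.public_key()

def issue_attestation(plan_valid_until: int) -> bytes:
    """Sign a statement containing ONLY the validity deadline."""
    statement = json.dumps({"valid_until": plan_valid_until}).encode()
    return statement + carrier_key.sign(statement)  # Ed25519 sig = 64 bytes

# --- Verifier side (e.g. a visited network) -----------------------
def check_attestation(blob: bytes) -> bool:
    statement, sig = blob[:-64], blob[-64:]  # fixed-length signature
    try:
        carrier_pub.verify(sig, statement)
    except InvalidSignature:
        return False
    return json.loads(statement)["valid_until"] > time.time()

token = issue_attestation(int(time.time()) + 30 * 86400)  # 30-day plan
assert check_attestation(token)
```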
Now is a good time to remind everyone that a SIM card is a full blown computer with CPU, RAM and NV storage.
Further, your carrier can upload and execute code on your SIM card without your knowledge or the knowledge of the higher level application processor functions of your telephone.
And hopefully your USB stack, or your phone's equivalent interface to the SIM, doesn't have vulnerabilities that the small computer that is the SIM card could exploit.
Operating systems that center their efforts on protecting high-risk users, like Qubes, dedicate a whole copy of Linux running in a Xen VM to interfacing with USB devices.
It'd be great if more information were available on how devices like Google's Pixel devices harden the interface for SIM cards.
How do you implement bandwidth quotas with this?
It risks a lot of "noise" to do it this way. Why not just bribe employees to listen in on high profile targets? Why try to hit them all and create a top level response at the Presidential level?
This feels optics-driven and political. I'm not sure what it means, but it's interesting to ponder on. Attacking infrastructure is definitely the modern "cold war" of our era.
Sadly even most people in security are woefully unaware of the scope and scale of these operations, even within the networks they are responsible for.
The "noise" here was not from the attacker. They don't want to get caught. But sometimes mistakes happen.
> Why not just bribe employees to listen in on high profile targets?
Developing assets is complicated and difficult, attacking SS7 remotely is trivial, especially if you have multiple targets to surveil
There's a huge selection bias factored into what attacks make the news.
You could be an incredibly competent and highly motivated crook and bad luck in the form of an intern looking at logs or a cleaning lady spotting you entering a building could take you down.
PRC Targeting of Commercial Telecommunications Infrastructure
https://news.ycombinator.com/item?id=42132014
AT&T, Verizon reportedly hacked to target US govt wiretapping platform
Stupidity and banality are a far greater threat than conspiracy.
Clearly, the counter-intel part of the US government effort has been less successful than the surveillance and intelligence gathering effort. But that doesn't mean that the US government wants all those other nations to be able to gather data from these systems. Our government wants nothing more than to be the only national government capable of gathering data from these systems.
Getting them to actually use them is hard, especially when the whole point of the app is to communicate with other people, and literally none of the people they regularly communicate with other than yourself use (or even know about) Signal.
End to end encryption has proven to be unworkable in every context it's been tried. There are no end-to-end encrypted systems in the world today that have any use, and in fact the term has been repurposed by the tech industry to mean pseudo encrypted, where the encryption is done using software that is also controlled by the adversary, making it meaningless. But as nobody was doing real end-to-end encryption anyway, the engineers behind that decision can perhaps be forgiven for it.
I'd say there's a very real use for this, though, which is that with mobile applications it's more complicated to compromise a software deployment chain than it is to compromise a server-side system. If you're a state-level attacker and you want to coordinate a deployment of listening capabilities on Signal, say, you need to persistently compromise Signal's software supply chain and/or build systems, and do so in advance of other attacks you might want to coordinate with, because you need to wait for an entire App Store review cycle for your code to propagate to devices. The moment someone notices (say, a security researcher MITM'ing themselves) that traffic doesn't match the Signal protocol, your existence has been revealed. Whereas for the telcos in question, it seems it was possible to just compromise a server-side system to gain persistent listening capabilities, which could happen silently.
Now, this can and should be a lot better, if, say, the Signal app was built not by Signal but by Apple and Google themselves, on build servers that provably create and release reproducible builds straight from a GitHub commit. It would remove the ability for Signal to be compromised in a non-community-auditable way. But even without this, it's a nontrivial amount of defense-in-depth.
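For illustration, the community-audit step implied here can be as simple as comparing digests, assuming you can reproduce the build from the same commit. Note that real APK comparisons also have to account for the signing block, which this sketch ignores; the paths and expected digest are placeholders.

```python
"""Sketch: compare the hash of the APK your device actually received
against a digest reproduced from source. Ignores APK signing blocks,
which a real comparison must strip or account for.
"""
import hashlib
import sys

def sha256_file(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    received_apk, reproduced_digest = sys.argv[1], sys.argv[2]
    if sha256_file(received_apk) == reproduced_digest:
        print("APK matches the reproducible build")
    else:
        print("MISMATCH: you may have been served a targeted build")
        sys.exit(1)
```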
As the article points out, there are many other adversaries to be concerned about. Protecting against them would be good. Don’t give up so quickly.
Aside (not the main point):
I actually do not know if we are at the level of "forced speech" in the US. Publishing hacked apps would fall under that category. Forced silence is a different thing, and less powerful. Still bad, obviously.
https://berthub.eu/articles/posts/5g-elephant-in-the-room/
So is that not the case for US telecoms?
Yeah sure, except giving the NSA access and complying with the CLOUD Act.
That's amusing. I'll grant that US companies haven't outright surrendered, and are still at least permitted to engage in lip service on the issue. But actual "fighting"? That would mean a tech world that looks very different than what we have today, and would fatally conflict with no end of "interests" in the US.
But if you think it is, I encourage you to run Yggdrasil.
It's not a totally silly suggestion and it's not totally sensible either. Light-hearted. I doubt any exec in any telco outside of Jio or maybe Comcast would go there. Amongst other things, they'd destroy a lot of capital value doing the Ripley. Well... the liberated v4 sale replaces some of that, until the price crashes...
I guess Starlink could easily geolocate every 4G/5G phone IMEI with huge direct-to-cell antennas.
None? As I said, I have not seen SS7 for a decade+ in the USA/Canada. Catching IMEIs has nothing to do with SS7.
https://www.youtube.com/watch?v=wVyu7NB7W6Y
Are you saying the SS7 messages they're looking at of a Canadian telephone subscriber just aren't there?
And this is the EFF saying in July 2024 that the FCC should really make telcos address vulnerabilities in SS7:
https://www.eff.org/deeplinks/2024/07/eff-fcc-ss7-vulnerable...
Are you saying they're just wrong, those SS7 networks don't exist in the USA?
I mean, the article links the FCC request-for-comment on SS7 networks. Just as a quote: https://docs.fcc.gov/public/attachments/DA-24-308A1.pdf
The Signaling System 7 (SS7) and Diameter protocols play a critical role in U.S. telecommunications infrastructure supporting fixed and mobile service providers in processing and routing calls and text messages between networks, enabling interconnection between fixed and mobile networks, and providing call session information such as Caller ID and billing data for circuit switched infrastructure. Over the last several years, numerous reports have called attention to security vulnerabilities present within SS7 networks and suggest that attackers target SS7 to obtain subscribers’ location information.
This is dated March 2024. It's talking about the very thing you say you haven't seen for more than a decade. To me, it sounds like that thing (the SS7 network) is alive and well in the USA, and the federal government is concerned about its lax security allowing spies to discover phone users' location information, the very topic we're discussing. It sounds like you're talking mince.
If your claim is that there is literally no SS7 in US and Canadian telephone networks, then that is straight-up wrong. It exists in every network that still supports 2G/3G wireless protocols and classic PSTN standards. It was replaced in 4G/5G and SIP, but that requires your operator only supports those protocols and doesn't continue to support the old protocols. If it does, it will still have SS7 signalling and will still be susceptible to attacks (though it is free to run its own security to block them).
If your claim is that you haven't seen SS7 in a decade, then sure, maybe you haven't. But given there is actual, ongoing spying, impersonation, etc., that can be demonstrated in North America in 2024, and everyone involved says "it's due to SS7", and you're out here saying it's-so-rare-you-haven't-seen-in-a-decade, then what exactly is happening? What are the hackers using then, when the experts say they're exploiting SS7, if you insist it's not there?
Why did the GSMA publish this security paper in 2019? https://www.gsma.com/solutions-and-impact/technologies/secur...
Why are they promoting a Code of Conduct for GT lessees? https://www.gsma.com/solutions-and-impact/technologies/secur...
If you claim there are no SS7 networks in the USA or Canada, please explain:
1) why the FCC believes they exist and need to be secured, as per their March 2024 note
2) what the UMTS networks, still operational in Canada, are using for messaging (note the 2025 dates in https://en.wikipedia.org/wiki/3G#Phase-out for Canada; 2G/3G is still alive and well there. And I note that most of the 3G phase out in the USA was in 2022, not in 2014 which is what they'd have to be for you to not have seen SS7 for a decade)
3) what the POTS networks, still operational in the USA and Canada, are using for messaging (noting that FCC 19-72 only removes the requirement on ILECs to provide UME Analog Loops to CLECs, and does not require them to shut down POTS networks entirely by August 2022. For example, AT&T only plans to have transitioned 50% of its POTS network by 2025)
SS7 only gets into the picture after the handset has connected to the home network, from what I understand (n.b. not a telco engineer). The IMEI is exposed to the network, but only to your network and only after the handset sets up an encrypted and authenticated connection with it.
5G uses a thing called a GUTI to identify handsets, not an IMEI. Think of it like a GUTI being a temporary IPv6 address allocated for a few hours by DHCP, and the IMEI being like a browser cookie. IMEI is exposed to your home network and networks you roam onto, but merely being in range of a tower doesn't expose it, and it's never transmitted in the clear over the air.
Also, within a network most of the components don't get access to the IMEI either.
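A toy model of that analogy (purely illustrative, not 3GPP-accurate): the random temporary identifier is all that goes over the air, and only the home network can map it back to the permanent identity.

```python
"""Toy model of why a GUTI leaks less than an IMEI: the over-the-air
identifier is random and short-lived; only the home network holds the
mapping back to the permanent identity. Illustrative only.
"""
import secrets

class HomeNetwork:
    def __init__(self):
        self._guti_to_imei = {}  # held only inside the core network

    def attach(self, imei: str) -> str:
        """Handset attaches; network hands back a temporary identifier."""
        guti = secrets.token_hex(8)
        self._guti_to_imei[guti] = imei
        return guti

    def reallocate(self, old_guti: str) -> str:
        """Periodic reallocation: an eavesdropper tracking a GUTI loses
        the trail every time this runs."""
        imei = self._guti_to_imei.pop(old_guti)
        return self.attach(imei)

net = HomeNetwork()
t1 = net.attach("356938035643809")   # example-format IMEI
t2 = net.reallocate(t1)
assert t1 != t2  # over-the-air identity changed; IMEI never went on air
```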
The federal government wouldn't pay hundreds of millions of dollars[0] to catch one or two fishing boats.
[0] https://www.usaspending.gov/award/CONT_AWD_N6600122C0065_970...
The NSA/CIA need to start making systems more secure by default and stop thinking spying on their own populations is a top priority.
The digital war has been running for quite a while, and there won't be a real one. China has nothing to gain from starting one. I mean, seriously... why would you shoot your customer?
It depends on your goal. If it is strictly a commercial relationship, "shooting your customer" could be advantageous for preserving a revenue stream. Customer lock-in could be seen as a form of "shooting your customer".
If your goal is political, "shooting your customer" may enable a regime change that is friendlier to you. We have done this multiple times in the Middle East, Central America, and South America.
The US has done what it has done in the regions you list because they're already unstable (particularly the Middle East) and have no way of striking decisive blows against US territory.
The NSA and CIA are neither able nor authorized to defend all privately-owned critical infrastructure. While concerns about agency oversight are warranted, I can assure you that spying on the population is not their top priority. It's abundantly clear that foreign threats aren't confined to their own geographies and networks. That can't be addressed without having the capability to look inward.
Secure by Design is an initiative led by CISA, which frequently shares guidance and threat reporting from the NSA and their partners. Unfortunately, they also can't unilaterally secure the private sector overnight.
These are difficult problems. Critical infrastructure owners and operators need to rise to the challenge we face.
Yet when one reads these articles it's just, "China, China, China!!!"
Anyone have a link to actual evidence?
Plainly I have no real evidence for this, other than the constant lack of evidence for their claims, and the doubts that are cast within the infosec community when data is available.
As such, even if Xi Jinping himself had stood up at the UN and claimed responsibility for a particular Windows kernel-mode rootkit, that still wouldn't be incontrovertible evidence.
That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Until the data breaches lead to serious $$$ impact for the company, the impact of these breaches will simply be waved off and pushed down to users. ("Sorry, we didn't protect your stuff at all. But, here's some credit monitoring!") Even in the profession of software development and engineering, very few people actually take data security seriously. There's lots of talk in the industry, but also lots of pisspoor practices when it comes to actually implementing the tech in a business.
Even during the best of times people simply do not give a fuck about privacy.
Honestly, if there is a problem at all I would say it's the uselessness of the Intelligence Community when actually posed with an espionage attack on our national security. FBI and CISA's response has been "Can't do; don't use." and I haven't heard a peep from the CIA or NSA.
I've seen the same thing at previous jobs; I had a lot to do and knew a lot of security issues that could potentially cause us problems, but management wasn't willing to give me any more resources (like hiring someone else) despite increasing my workload and responsibilities for no extra pay. Surprise, one of our game's beta testers discovered a misconfigured firewall and default password and got access to one of our backend MySQL servers. Thankfully they reported it to us right away, but... geez.
Well, I care. I'd pay a premium to a telco that prioritized security and privacy. But they are all terrible: hoovering up data, selling it indiscriminately, and not protecting it. If they all suck, then the default is to use the cheapest.
It’s definitely why I use Apple devices because I can buy directly from Apple and they don’t allow carriers to install their “junkware”.
So we could make the PII less valuable by not using it for things that attract fraudsters.
> That's part of the problem. But companies also are unwilling to pay to do any of the things that you've described. There is no punishment or fine that is actually punitive. Protecting (short term) profit is more important than protecting users' data --- it's even more important than protecting the (long term) profit potential of a company with a good reputation.
Frankly, any company that says they're a technology or software business should be building these kinds of systems. They can grab FOSS implementations and build on top or hire people who build these kinds of systems from the ground up. There's plenty of people in platform engineering in the US who could use those jobs. There's zero excuse other than that they don't want to spend the money to protect their customers data.
Telecoms will not get fined for this breach, or at least not fined an amount that is meaningful, so they are not going to care.
Politics has historically incentivized job creation.
As an SRE, I'm just over everyone running around acting like another tool is going to solve the problem. It's not; incentives need to exist for people not to be completely terrible at their jobs.
Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
> Also, I guess I should admit, I have strong aversion to IDPs. They always become some grue that eats me.
I am an SRE. I stopped using that title professionally some time ago and started focusing on what makes companies reach for SRE when the skillset is the same as a platform engineer's.
A post I wrote on the subject: https://ooo-yay.com/blog/posts/2024/you-probably-dont-need-s...
People were shitting a brick over a pretty minor change in photo and location processing at Apple. That’s because they don’t screw up like this.
(Google, on the other hand, is the opposite.)
But, as far as I can tell, the only reason why Apple does this is because privacy these days can be sold as a premium, luxury feature.
In principle the insurance company then dictates security requirements back to the company in order to keep the premiums manageable.
However, in practice the insurance company has no deep understanding of the company and so the security requirements are blunt and ineffective at preventing breaches. They are very effective at covering the asses of the decision makers though... "we tried: look we implemented all these policies and bought this security software and installed it on our machines! Nobody could possibly have prevented such an advanced attack that bypassed all these precautions!"
Another problem is that often the IT at large enterprises is functionally incompetent. Even when the individual people are smart and incentivised (which is no guarantee) the entire department is steeped in legacy ways of doing things and caught between petty power struggles of executives. You can't fix that with financial incentives because most of these companies would go bankrupt before figuring out how to change.
I don't see things improving unless someone spoon-feeds these companies solutions to these problems in a low risk (ie. nobody's going to get fired over implementing them) way.
Often the end result is having just enough red tape to turn a 2-week project into an 8-month project, yet not enough to make sure it's impossible for someone to, say, build a data lake in a new cloud for some reports that just happen to contain names, addresses, and emails. Too big to manage.
It is often much easier to use an email address or an SSN when a randomly generated id, or even a hash of the original data, would work fine.
I'm not saying that we shouldn't put more effort into reducing the amount of data kept, but it isn't as simple as just saying "collect less data".
And sometimes you can't avoid keeping PII.
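As a sketch of the surrogate-id approach named above (all names illustrative): store a random token wherever the email or SSN used to flow, and keep the token-to-value mapping in one locked-down vault. Note that a plain unsalted hash of an SSN is brute-forceable over only about a billion candidates, so hashing alone is weaker than it looks.

```python
"""Minimal sketch of tokenization: downstream systems carry an opaque
surrogate id; the real value lives in a single, access-controlled
vault table. Illustrative only.
"""
import secrets

_vault: dict[str, str] = {}  # surrogate -> real value; one choke point

def tokenize(pii: str) -> str:
    """Return an opaque surrogate usable as a join key downstream."""
    token = secrets.token_hex(16)
    _vault[token] = pii
    return token

def detokenize(token: str) -> str:
    """Only the vault service should be able to do this."""
    return _vault[token]

customer_ref = tokenize("jane@example.com")
# Reports, data lakes, and logs carry `customer_ref`, never the email.
assert detokenize(customer_ref) == "jane@example.com"
```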
How would they then enforce this in a large company with 50k programmers? This was what the previous post was discussing.
Not to mention, a lot of this data is necessary. If you're invoicing, you need to store the names and many other kinds of sensitive data of your customers, you are legally required to do so.
It’s not easy, but it can move the needle over time.
Audit trails (of who did/saw what in a system) and PII-reduction (so you don't know who did what) are fundamentally at odds.
Assuming you are already handling "sensitive PII" (SSNs, payroll, HIPAA, credit card numbers) appropriately, which constitutes security best practice: PII reduction or audit reduction?
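One way to soften that tension, sketched under stated assumptions (standard library only; the key and names are illustrative): audit entries carry a keyed pseudonym of the actor, so per-actor activity stays linkable for investigations while the log itself holds no direct identity, and unmasking requires an escrowed key under a formal process.

```python
"""Sketch of a pseudonymous audit trail: linkable, not identifying.
Illustrative only; the key must live in a KMS, not in code.
"""
import hashlib
import hmac
import json
import time

_AUDIT_KEY = b"escrow-this-in-a-KMS-not-in-code"  # placeholder key

def actor_pseudonym(user_id: str) -> str:
    return hmac.new(_AUDIT_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def audit(user_id: str, action: str, resource: str) -> str:
    entry = {
        "ts": time.time(),
        "actor": actor_pseudonym(user_id),  # linkable, not identifying
        "action": action,
        "resource": resource,
    }
    return json.dumps(entry)

print(audit("alice@corp.example", "read", "payroll/2024-Q3"))
print(audit("alice@corp.example", "export", "payroll/2024-Q3"))
# Same "actor" value in both lines: an investigator can correlate the
# activity, then unmask via the escrowed key if formally authorized.
```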
A BS in CS has maybe one class on security, and then maybe employees have a yearly hour-long seminar on security to remind them to think about security. That isn't enough. And the security team and engineers that put the effort into learning more about security and privacy often aren't enough to guard against every possible problem.
Leak or lose a customer's location tracking data? That'll be $10,000 per data point per customer please.
It would convert this stuff from an asset into a liability.
The current state is clearly broken and unsustainable, but good luck getting any significant penalties through legislation with a far-right government.
Same principle as fines for hard-to-localize pollution.
But don't worry, as soon as this catastrophe is over we'll be back to encryption is bad, security is bad, give us an easy way to get all your data or the bad guys win.
I have to admire those pioneers for seeing this and being right about it. I also admire them for influencing companies like Apple (in some cases by working there and designing things like iMessage, which is basically PGP for texts.) It doesn’t fix a damn thing when it comes to the traditional telecom providers, but it does mean we now have backup systems that aren’t immediately owned.
She was not amused or empathetic to their plight in the slightest. Population of at least 2 I guess.
Who put the backdoor there? The US government did.
A telecommunications carrier may comply with CALEA in different ways:
* The carrier may develop its own compliance solution for its unique network.
* The carrier may purchase a compliance solution from vendors, including the manufacturers of the equipment it is using to provide service.
* The carrier may purchase a compliance solution from a trusted third party (TTP).
https://www.fcc.gov/calea
The SS7 protocol provides the ability to determine which RNC/MMC a phone is paired with at any given time: it's fundamental to the nature of the functioning of the network. A sufficiently sophisticated adversary, with sufficient access to telephony hardware, could simply issue those protocol instructions to determine the location.
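A toy model of that primitive (conceptual only, not real MAP encoding, though AnyTimeInterrogation is a real MAP operation): the network must always know which node serves a subscriber, and classic SS7 largely trusts whoever can speak on the signaling network.

```python
"""Conceptual sketch of the SS7 location primitive described above.
Not real MAP encoding; identifiers are made up.
"""

class HLR:
    """Home Location Register: the subscriber -> serving-node map."""
    def __init__(self):
        self._serving = {}  # IMSI -> current serving node id

    def location_update(self, imsi: str, serving_node: str) -> None:
        """Called as the phone moves; keeping this map current is what
        makes the network function at all."""
        self._serving[imsi] = serving_node

    def any_time_interrogation(self, imsi: str) -> str:
        """The rub: classic SS7 largely trusts signaling peers, so any
        party with signaling access can ask."""
        return self._serving[imsi]

hlr = HLR()
hlr.location_update("310150123456789", "MSC-DENVER-03")
# A "sufficiently sophisticated adversary" with signaling access:
print(hlr.any_time_interrogation("310150123456789"))  # MSC-DENVER-03
```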
Somewhat of a tangent: does anyone have any resources on designing/implementing E2E encryption for an app where users have shared "team" data? I understand the basics of how it works when there's just one user involved, but I'm hoping to learn more about how shared data scenarios (e.g. shared E2E group chats like Facebook Messenger) are implemented.
It should give you some ideas on how it's done.
[1] https://nfil.dev/coding/encryption/python/double-ratchet-exa...
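Beyond the double-ratchet link above, a common building block for shared "team" data (and roughly what "sender keys" in group chats build on) is to encrypt the payload once under a random symmetric key and then wrap that key separately for each member. A minimal static sketch using the `cryptography` package; real designs such as Signal groups or MLS add ratcheting and membership changes on top.

```python
"""Static sketch of group E2E: one symmetric group key, wrapped per
member via X25519 + HKDF + AES-GCM. Requires: pip install cryptography
"""
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def _kek(priv, peer_pub) -> bytes:
    """Derive a key-encryption key from an X25519 agreement."""
    shared = priv.exchange(peer_pub)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"group-key-wrap").derive(shared)

def wrap_key(sender_priv, recipient_pub, group_key: bytes):
    nonce = os.urandom(12)
    return nonce, AESGCM(_kek(sender_priv, recipient_pub)).encrypt(nonce, group_key, None)

def unwrap_key(recipient_priv, sender_pub, nonce, wrapped) -> bytes:
    return AESGCM(_kek(recipient_priv, sender_pub)).decrypt(nonce, wrapped, None)

# Alice creates a team and shares the group key with Bob.
alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()
group_key = AESGCM.generate_key(bit_length=256)
nonce, wrapped = wrap_key(alice, bob.public_key(), group_key)

# Any member encrypts team data once, for everyone.
msg_nonce = os.urandom(12)
ciphertext = AESGCM(group_key).encrypt(msg_nonce, b"team roadmap", None)

# Bob unwraps the group key and reads the shared data.
bobs_key = unwrap_key(bob, alice.public_key(), nonce, wrapped)
assert AESGCM(bobs_key).decrypt(msg_nonce, ciphertext, None) == b"team roadmap"
```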
If they had the option the telecommunication companies would love to encrypt traffic and obscure it so much that they have no plausible way of figuring out what is going on. Then they can take customer money and throw their hands up in honest confusion when anyone wants them to moderate their customer's behaviour.
They don't because that would be super-illegal. The police and intelligence services demand that they snoop, log, and avoid data-minimisation techniques. It is entirely a question of regulatory demand and time before these sorts of breaches happen; if the US government demands the data, then sooner or later the Chinese government will get a copy too. I assume that is a trade-off the US government is happy to make.
Intelligence agencies also stockpile software vulnerabilities that they don't report to the vendor because they want to exploit the security flaw themselves.
We'll never have a secure internet when it's being constantly and systematically undermined.
Currently, with proprietary software, there's an incentive for companies to not even acknowledge bugs and it costs them money to fix issues, so they often rely on security through obscurity which is not much of a solution.
If it is an LI attack the answer to which networks are compromised is: All of them that support automated LI.
That's a nasty attack because LI is designed to not be easily detectable because of worries about network operators knowing who is being tapped.
They aren't saying that more have been hacked, they are saying that more have been discovered related to that hack. Any adversary at this level would be monitoring the news, and would take appropriate actions (for gain) or roll up the network rather than allow reverse engineering of IOCs.
More than likely this was not an LI based attack, but rather they don't know for sure how they got in. Nearly all of the guidance is standard cybersecurity best practices for monitoring and visibility, and lowering attack surface with few exceptions (in the CISA guidance).
The major changes appear to be the requirements to no longer use TFTP, and the referral to the manufacturer for source of truth hashes (which have not necessarily been provided in the past). A firmware based attack for egress/ingress seems very likely.
For reference, TFTP servers are what send out the ISP configuration for endpoints in their network, the modems (customers), and that includes firmware images (which have no AAA). Additionally, as far as I know, the hardware involved lacks the ability to properly audit changes to these devices (by design), and TR-47 is rarely used appropriately; the related encryption is also required by law to be backward compatible with known-broken encryption. There was a good conference talk on this a few years ago, at Cyphercon 6.
https://www.youtube.com/watch?v=_hk2DsCWGXs
The particular emphasis on TLS 1.3 (while now standard practice) suggests that connections may be being downgraded, and that the hardware/firmware at the CPE bridge may be transparently performing MITM against public sites on earlier TLS versions, if that is the case (it's a commonly needed capability).
The emphasis on using specific DH groups may point to breaks in key exchanges using groups not publicly known to be broken (but which are), which may or may not be a factor as well.
If the adversary can control, and insert malicious code into traffic on-the-fly targeting sensitive individuals who have access already, they can easily use information that passes through to break into highly sensitive systems.
The alternative theory, while fringe, is that maybe they've come up with a way to break Feistel networks (in terms of cryptographic breaks).
A while back the NSA said they had a breakthrough in cryptography. If that breakthrough was related to attacks on Feistel network structures (which a great deal of modern cryptography is built on), that might explain another way (although this is arguably wild speculation at this point). Nearly every computer has a backdoor co-processor built in, in the form of TrustZone, the Management Engine, or AMD's PSP. It's largely only secured by crypto, without proper audit trails.
It presents low-hanging, concentrated fruit in almost every computation platform on earth, and by design it's largely not auditable or visible. Food for thought.
Quantum computer breaks a single signing key for said systems, acting like a golden key back door to everything. All the eggs in one basket. Not out of the realm of possibility at the nation state level. No visibility means no perception or ability to react, or isolate the issues except indirectly.
The problem with the shared secret model isn’t that it can be stolen, it’s that it is globally shared within a provider network. You can’t root it in a hardware device. You can’t do forensics to see from what node it was stolen.
We are talking about an industry where they still connect console servers, often to serial terminal aggregators that are on the internal network alongside the management Ethernet ports, which have dumb guessable passwords, often the same one on every box, that all their bottom tier overseas contractors know.
It’s just sad.
It's true that those protocols are basically running shared secrets, but those areas all have some visibility with auditing and monitoring.
You crack a root or signing key at the co-processor level and you can effectively warp and control what anyone sees or does with almost no forensics being possible.
It fundamentally allows a malevolent entity the ability to alter what you see on the fly, with no defense possible. Such is the problem with embedded vulnerabilities; it's just like that Newag train thing.
Antitrust and bricking for monopolistic benefit is far more newsworthy than, say, embedding a remote radio-controlled off switch with no plausible cover, one that can brick the trains as they move harvests, foodstuffs, or military equipment.
It's corruption, not national security. Would many believe that it's the latter rather than the former when it does both?
It is sad that our societal systems have become so brittle that they cannot check or effectively stop the structural defects and destructive influences within themselves.
Anyone who has ever worked in networking will understand what I mean.
The networking industry is comically bad. They use ssh but never ever verify host keys, use agent forwarding, use protocols like RADIUS or SNMP which are completely insecure once you pop a single box and use the almost always global shared secret. Likewise the other protocols.
Do they use secure boot in a meaningful way? Do they verify the file system? I have news for you if you think yes.
It’s kind of a joke how bad the situation is.
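To make the host-key complaint concrete, a short sketch with paramiko (host and user names are placeholders; the connect calls are commented out since the hosts don't exist). The first client is what too much network automation does; the second is what strict verification looks like.

```python
"""Host-key verification, the wrong way and the right way.
Requires: pip install paramiko
"""
import paramiko

HOST, USER = "router1.example.net", "netops"  # illustrative names

# What too much automation does: silently trust whatever key the box
# presents, which is exactly what enables a MITM.
insecure = paramiko.SSHClient()
insecure.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # DON'T
# insecure.connect(HOST, username=USER)

# What it should do: only talk to hosts whose keys are already pinned.
strict = paramiko.SSHClient()
strict.load_system_host_keys()  # pre-distributed, pinned known_hosts
strict.set_missing_host_key_policy(paramiko.RejectPolicy())  # fail closed
# strict.connect(HOST, username=USER)  # raises on an unknown host key
```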
Twenty years ago someone discovered you could inject forged tcp resets to blow up BGP connections. What did the network industry do? Did they institute BGP over TLS? They did not. Instead they added TCP MD5 hashing (rfc: https://datatracker.ietf.org/doc/html/rfc2385 in 1999) using a shared secret because no one in networking could dream of using PKI. Still true today. If deployed at all, which it usually isn’t. 2010!!
If you want to understand the networking industry consider only this: instead of acknowledging how dumb the situation is and just using tls, instead we got this - https://datatracker.ietf.org/doc/html/rfc5925 - which is almost as dumb as 2385 and just as bad in actual deployment because they just keep using the same deployment model (the shared tuple). Not all vendors that “support” 5925 support the whole RFC.
As an aside this situation is well known. People have talked about it for literal decades. The vendors have shown little to no interest in making security better except point fixes for the kind of dumb shit they get caught on. Very few security researchers look at networking gear or only look at low end junk that doesn’t really matter.
Sounds like the root of the issue.