In qBittorrent, the DownloadManager class has ignored every SSL certificate validation error that has ever happened, on every platform, for 14 years and 6 months, ever since commit 9824d86 on April 6, 2010.
This looks quite serious.

    void downloadThread::ignoreSslErrors(QNetworkReply* reply, QList<QSslError> errors) {
        // Ignore all SSL errors
        reply->ignoreSslErrors(errors);
    }
https://github.com/qbittorrent/qBittorrent/commit/9824d86a3c...

EDIT: The author of the PR[0] (who is one of the top qBittorrent contributors according to GitHub[1]) that fixed this also came to this conclusion:
> I presume that it was a quick'n'dirty way to get SSL going which persisted to this day. It's also possible that back in the day Qt4 (?) didn't support autoloading ca root certificates from the OS's store.
[0]: https://github.com/qbittorrent/qBittorrent/pull/21364 [1]: https://github.com/qbittorrent/qBittorrent/graphs/contributo...
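For contrast with the snippet above, here is a minimal sketch of what non-suppressing handling could look like in Qt. The handler name is hypothetical and this is not the actual change from PR 21364:

    // Hypothetical sketch: surface SSL errors and let the connection fail,
    // instead of blanket-ignoring them. Connect this to
    // QNetworkAccessManager::sslErrors. Not the actual fix from PR 21364.
    #include <QNetworkReply>
    #include <QSslError>
    #include <QDebug>

    void handleSslErrors(QNetworkReply *reply, const QList<QSslError> &errors)
    {
        for (const QSslError &error : errors)
            qWarning() << "SSL error for" << reply->url() << ":" << error.errorString();
        // By not calling reply->ignoreSslErrors(), the handshake fails and the
        // reply finishes with QNetworkReply::SslHandshakeFailedError.
        reply->abort();
    }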
Much more likely is that someone knew they had implemented this temporary solution while they implemented OpenSSL in a project which previously never had SSL support - a major change with a lot of work involved - and every programmer knows that there is nothing more permanent than a temporary solution. Especially in this case. I can understand how such code would make it into the repo (I think you do too), and it's very easy for us to say we would then have immediately amended it in the next version to properly verify certs.
Having been in contact with the maintainers, I have to say I was disappointed in how seriously they took the issue. I don't want to say any more than that.
Source: author of the article
As such, compared to the alternative (bad actors having even more time to leverage and amplify the information asymmetry), a timely public disclosure is preferable, even with some unfortunate and unavoidable fallout. Typically security researchers are reasonable and want to do the right thing with regard to responsible disclosure.
On average, the "bigger party" inherently has more resources to respond compared to the reporter. This remains true even in open source software.
> the vast majority of security vulnerabilities in software are not actively exploited
However, I'd say your explanation that it's
> because no one knows about them
is not necessarily the reason why.
If the vendor or developer isn’t fixing things, going public is the correct option. (I agree some lead time / attempt at coordinated disclosure is preferable here.)
Then I think we are in agreement overall. I took your initial comment to mean that as soon as you discover a vulnerability, you should make it public. If we agree that the process should always be to disclose it to the project, wait some amount of time, and only then make it public - then I think we are actually on the exact same page.
Now, for the specific amount of time: ideally, you'd wait until the project has a patch available, if they are collaborating and prioritizing things appropriately. However, if they are dragging their feet and/or not even acknowledging that a fix is needed, then I also agree that you should set a fixed time as a last ditch attempt to get them to fix it (say, "2 weeks from today"), and then make it public as a 0-day.
Wouldn’t a “good criminal” just exploit it forever without getting caught? Your timeline has no ceiling.
However, if you don't know that it is being actively exploited, then the right course of action is to disclose it secretly to the creators, and work with them to coordinate on a timely patch before any public disclosure. Exactly how timely will depend on your and their judgement of many factors. Even if the team is showing very bad judgement from your point of view, and acting dismissively; even if you have a history with them of doing this - you still owe it to the users of the code to at least try, and to at least give some unilateral but reasonable timeline in which you will disclose.
Even if you don't want to do this free work, the alternative is not to publicly disclose: it's to do nothing. In general, the users are still safer with an unknown vulnerability than they are with a known one that the developers aren't fixing. You don't have any responsibility to waste your own time to try to work with disagreeable people, but you also don't have the right to put users at risk just because you found an issue.
It’s unethical to users who are at risk to withhold critical information.
If McDonald's had an E. coli outbreak and a keen doctor picked up on it, you wouldn't withhold that information from the public while McD developed a nice PR strategy and quietly waited for the storm to pass, would you?
Why is security, which seriously is a public safety issue, any different?
The point of a disclosure window is to allow a fix before _all_ bad actors get access to the vulnerability.
Medicine and biosafety are PvE. Cybersecurity is PvP.
Initially I had said 90 days from the initial report, but it seemed like they were expanding the work to fill that time. I asked a number of times for them to make a security advisory and got no answer. Some discussions on the repo showed they were considering this as a theoretical issue. Now it's CVE-2024-51774, which got assigned within 48 hours of disclosing.
That's hilarious. It's all theoretical until it's getting exploited in the wild...
Maybe immediate disclosure would cause a few users to change their behavior, but no one is tracking security disclosures on all the software they use and changing their behavior based on them.
The caveat here: if you have evidence of active exploitation, then immediate disclosure makes sense.
The real problem came later when the next generation of developers saw this HTTPClient class and thought, "Hey, what a nifty little helper!", and soon they were using it to talk to pretty much everything, including financial systems. I was shocked when I discovered it. An inconsequential temporary workaround had turned into a huge security hole.
Edit (just noticed this was the author): I'm curious, what torrent client do you prefer? I like Deluge but mostly go to it because it's familiar.
See a full list: https://doc.qt.io/qt-6/index.html
I understand temporary, but 14 years seems a bit... too long.
“Oh, it’ll have millions of eyes on it”… except no one looks.
Even companies that are supposed to get security right have constant screw ups that are only fixed when someone goes poking around where they probably shouldn't and thankfully happens to not be malicious.
I don't think it replies to what the user asks though. It seems reasonable to expect widely used open source software to be studied by many people. If that's true, it would be good to question why this wasn't caught by anyone. Ignoring all SSL errors is not something you need to be an expert to know is bad...
From a security perspective there are only two kinds of code bases: open & closed. By deduction one of those will have more eyeballs on the codebase than the other even if "nobody looks".
Case in point: it may have taken 14 years, but someone looked. Had the code base been closed source, that might never have happened, because it might never have been possible to happen. It's also very easy to point to the number of security issues that never made it into production because they were caught in an open source code review by passersby and other contributors while the PR was waiting to be merged.
The fact it was caught at all is a point for open source security - not against it. Even if it took 14 years.
Is that the classification that matters? I'd think that there are only the following two kinds of code bases: those that come with no warranty or guarantee whatsoever, and those attached to a contract (actual or implied) that gives users legal recourse against a specific party in case of damages caused by issues with that code (security or otherwise).
Guess which kind of code, proprietary or FLOSS, tends to come with legal guarantees attached? Hint: it's usually the one you pay for.
I say that because it's how safety and security work everywhere else - they're created and guaranteed through legal liability.
The publicly known lawsuits seem to come from data breaches, and the large majority of those data breaches are due to non-code lapses in security: leaked credentials, phished employees, social engineering, setting something Public that should be Internal-only, etc.
In fact, many proprietary products rely on FLOSS code; when that code results in an exploit, the company owning the product may be sued for the resulting data breaches. But that's an issue with their product contract and their use of FLOSS code without code review. As it turns out, many proprietary products aren't code reviewing the FLOSS projects they rely on either, despite their supposed potential legal liability to do so.
> I say that because it's how safety and security work everywhere else - they're created and guaranteed through legal liability.
I don't think the legal enforcement or guarantees are anywhere near as strong as other fields, such as say... actual engineering or the medical field. If a doctor fucks up badly enough they can no longer practice medicine. If a SWE fucks up bad enough they might get fired? But they can certainly keep producing new code and may find a job elsewhere if they are let go. Software isn't a licensed field and so is missing a lot of safety and security checks that licensed fields have.
Reheating already cooked food to sell to the public requires a food handler's card, which is already a higher bar than exists in the world of software development. Cybersecurity isn't taken all that seriously by seemingly anyone. I wouldn't have nearly as many conversations with my coworkers or clients about potential HIPAA violations if it were.
Crowdstrike comes to mind? Quick web search tells me there's a bunch of lawsuits in flight, some aimed at Crowdstrike itself, others just between parties caught in the fallout. Hell, Delta Airlines and Crowdstrike are apparently suing each other over the whole mess.
> The publicly known lawsuits seem to come from data breaches and the large majority of those data breaches are due to non-code lapses in security.
Data breaches don't matter IMO; there rarely if ever is any obvious, real damage to the victims, so unless the stock price is at risk, or data protection authorities in some EU countries start making noises, nobody cares. But the bit about "non-code lapses", that's an important point.
For many reasons, software really sucks at being a product, so as much as possible, it's seen and trades as a service. "Code lapses" and "non-code lapses" are not the units of interest. The vendor you license some SDK from isn't going to promise you the code is flawless - but they do promise you a certain level of support, responsiveness, or service availability, and are incentivized to fulfill it if they want to keep the money flowing.
When I mentioned lawsuits, that was a bit of a shorthand for an illustration. Of course you don't see that many of them happening - lawsuits in the business world are like military actions in international politics; all cooperation ultimately is backed by threat of force, but if that threat has to actually be made good on, it means everyone in the room screwed up real bad.
99% of the time, things get talked out without much noise. Angry e-mails are exchanged, lawyers get CC'd, people get put on planes and sent to do some emergency fixing, contractual penalties are brought up. Everyone has an incentive to get themselves out of trouble, which may or may not involve fixing things, but at least it involves some predictable outcomes. It's not perfect, but nothing is.
> I don't think the legal enforcement or guarantees are anywhere near as strong as other fields, such as say... actual engineering or the medical field. If a doctor fucks up badly enough they can no longer practice medicine. If a SWE fucks up bad enough they might get fired? But they can certainly keep producing new code and may find a job elsewhere if they are let go. Software isn't a licensed field and so is missing a lot of safety and security checks that licensed fields have.
Fair. But then, SWEs aren't usually doing blowtorch surgery on live gas lines. They're usually a part of an organization, which means processes are involved (or the org isn't going to be on the market very long (unless they're a critical defense contractor)).
On the other hand, let's be honest:
> Cybersecurity isn't taken all that seriously by seemingly anyone.
Cybersecurity isn't taken all that seriously by seemingly anyone, because it mostly isn't a big problem. For most companies, the only real threat is a dip in the stock price, and that's if they're publicly traded. Your random web SaaS isn't really doing anything important, so their cybersecurity lapses don't do any meaningful damage to anyone either. For better or worse, what the system understands is money. Blowing up a gas pipe, or poisoning some people, or wiping some retirement accounts, translates to a lot of $$$. Having your e-mail account pop up on HIBP translates to approximately $0.
The point I'm trying to make is, in the proprietary world, software is an artifact of a mesh of companies, bound together by contracts. Down the link flows software, up the link flows liability. In between there's a lot of people whose main concern is to keep their jobs. It's not perfect, and corporate world is really good at shifting liability around, but it's doing the job.
In this world, FLOSS is a terminating node. FLOSS authors have no actual skin in the game - they're releasing their code for free and disclaiming responsibility. So while "given enough eyeballs, all bugs are shallow", most of those eyes belong to volunteers. FLOSS security relies on good will and care of individuals. Proprietary security relies on individual self-preservation - but you have to be in a position to threaten the provider to benefit from it.
No, I don't think that's what they were saying.
If someone wants to rob you - a door lock isn't going to stop them. Likewise if someone wants to pwn you - a little obfuscation isn't going to stop them.
Security by obscurity only works in the case that you aren't known to be worth the effort to target specifically and so nobody bothers. Much like very few people bother to beat my CTF. I'm sure if I offered a $1,000 reward for beating it the number would increase tenfold because it is suddenly worth the effort to spend a bit of time attacking. But as it stands with no monetary incentive the vast majority (>99%) give up after a few days.
How many fifteen-plus-year-old problems exist in closed-source code bases?
Ignoring bad SSL certs in particular is one issue that can be reliably and easily tested regardless of how available the source of a given software is. It's a staple in Android app security testing even.
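To illustrate how black-box such a test can be, here is a sketch of a tiny harness (written against Qt here, since that's qBittorrent's stack) that points an HTTPS client at one of the public badssl.com test hosts; a correctly-validating client must fail:

    // Sketch: a validating HTTPS client must refuse this endpoint.
    // self-signed.badssl.com is a public test host; network access required.
    #include <QCoreApplication>
    #include <QNetworkAccessManager>
    #include <QNetworkRequest>
    #include <QNetworkReply>
    #include <QUrl>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);
        QNetworkAccessManager manager;
        QNetworkReply *reply = manager.get(
                QNetworkRequest(QUrl(QStringLiteral("https://self-signed.badssl.com/"))));
        QObject::connect(reply, &QNetworkReply::finished, [&]() {
            if (reply->error() == QNetworkReply::SslHandshakeFailedError)
                qInfo() << "PASS: self-signed certificate rejected";
            else
                qInfo() << "FAIL: request completed despite the bad certificate";
            app.quit();
        });
        return app.exec();
    }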
1. https://dev.deluge-torrent.org/ticket/3201
2. https://deluge.readthedocs.io/en/latest/intro/01-install.htm...
So much "just works" because no one is paying attention. Of course now that the spotlight is on the issue it's all downhill from here for anyone who doesn't [auto-]update.
IMHO close to 0 - and for those who were affected, it would've likely been a targeted attack.
[1]: Because the only other way to exploit it would be noticed by everyone else: the python.org domain would need to be hijacked or something similar.
I'd still guess zero times though.
Still, I doubt anyone noticed this, and you'd also still need the victim to use qBittorrent and go through this flow that downloads Python.
Zero seems pretty likely, yeah.
Still the easiest way to MitM random people is to set up your own free WiFi. I've done that in the past, and it works, but HSTS and certificate caching mean it's pretty useless.
I think there's a kind of vaccination effect - nobody is going to put much effort into MitMs because it's useless most of the time, so it isn't as critical when people don't validate certificates.
Fucking hell, how often do you use torrents in coffee shops, let alone install a new torrent client while you're at it?
Any public wifi network not set up by a complete idiot today has fully isolated clients.
https://news.ycombinator.com/item?id=37961166
Read this and tell me if you really think whoever performed the MITM there wouldn't be able to, or interested enough to, do similar things to known seedbox hosts, distributors, or just whoever is distributing information they'd rather not be.
qBittorrent is one of the most popular choices for hosted BitTorrent seeders across the world. This was trivially exploitable by anyone with access to the right network path for >10 years. Sure, it'd have to be targeted at qBittorrent users, but I don't think much individual targeting is needed if you aim for dozens, hundreds, thousands, or just as many of them as you can.
Besides sketchy government-related entities with legal wiretapping capabilities, you also have well-funded private interest groups on the malicious side.
Generally not. Seedbox services are heavily cost-driven; running a Windows install for each client would add a lot of unnecessary hardware and licensing costs.
Second, the attacker here had a valid certificate; it was only noticed when the certificate expired (so 6 months after, since it was an LE cert).
> Besides sketchy government-related entities with legal wiretapping capabilities, you also have well-funded private interest groups on the malicious side.
If you're targeted by government-related entities you probably shouldn't run Windows and torrent software.
My comment was about python.org, and I think that it wouldn't be unusual for a student to start doing some work in a coffee shop and get MITM'd.
However, it'd be quite easy for someone to have set up qBittorrent to auto-start on their laptop and then to forget about it when they're doing something else at an airport, coffee shop or other place where you would expect to use someone's wifi. Note that it doesn't even have to be wifi set up by the business - it could be a bad actor setting up an access point that just looks like it belongs there.
Again, this vulnerability can't be exploited unless the attacker is able to MITM you or python.org is hijacked.
It's very hard to exploit in real life en masse. A targeted attack is possible, but it requires that:
1) The attacker is able to MITM you in the first place
2) You use qBittorrent
3) You use Windows
4) You don't already have a Python version installed that qBittorrent supports
Without all 4, this can't be exploited.
Worked well enough then we promptly forgot how to do it again when we needed it.
And DCC being Direct Cable Connection.
??
S/He did reply "I am" though.
That said, I think torrents are still a great way to share files, and perhaps IPFS[2] as well. I use LocalSend[3], too, at times, although not for large files.
Has an optional step to password-protect the contents if you have any qualms with security-by-obscurity of using an unlisted torrent on a public tracker.
I use a web browser for web browser stuff... and I'll only open a torrent application when I want to open a manually downloaded .torrent file.
Not sure about the details, but a decade ago I used to seed all files below 100MB on many private trackers for seed bonus points, and yea, deluge ui (might have been the web ui, not sure) became very slow. :D
>https://www.qbittorrent.org/news
There should be a security notice IMO.
Nearly all the time, the tool doesn't accept the certificate format or it wants a chain instead of just the root because the other side doesn't supply a chain or the CA bundle doesn't match the CA you used or it doesn't use the CA system at all or the fingerprint format is the wrong hash or it wants a file instead of just a command-line fingerprint or there isn't an "at least do TOFU" flag so for testing you resort to "okay then just accept everything"... it's very rarely smooth sailing from the point of "okay I'm ssh'd into the server, now what do I run here to give this tool something it can use to verify the connection"
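(For what it's worth, the missing "at least do TOFU" mode is only a few lines in Qt; a sketch, with fingerprint persistence left to the caller:)

    // Sketch: trust-on-first-use by certificate fingerprint. The caller records
    // the fingerprint on first contact; later contacts accept only a match.
    #include <QNetworkReply>
    #include <QSslCertificate>
    #include <QSslConfiguration>
    #include <QCryptographicHash>

    void tofuCheck(QNetworkReply *reply, const QByteArray &storedFingerprint)
    {
        const QSslCertificate peer = reply->sslConfiguration().peerCertificate();
        const QByteArray fingerprint = peer.digest(QCryptographicHash::Sha256).toHex();
        if (!storedFingerprint.isEmpty() && fingerprint == storedFingerprint)
            reply->ignoreSslErrors();  // accept only this previously-seen certificate
    }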
Makes me think of how hard PGP is considered to be. Perhaps key distribution in any asynchronous cryptographic system is simply hard
9 is 3^2, 27 is 3^3
So many questions were about SSL issues; people would just ask how to disable the errors/warnings caused by not having the correct certificate chain installed. It was insane how many "helpful" people would assist in turning them off instead of simply fixing the problem.
I started showing people the correct way to fix the issue and also created documentation to install the internal certificate server on our Ubuntu servers (I think they had it working on some of the RHEL machines). I was a contractor so I received an $80 bonus for my efforts.
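(The "correct way" in this situation generally amounts to adding the internal root CA to the trust store instead of disabling verification. In a Qt application, for example, a sketch could look like this; "internal-root-ca.pem" is a hypothetical path:)

    // Sketch: extend the default trust store with an internal root CA.
    // Validation still happens, just against the extended set of roots.
    #include <QSslConfiguration>
    #include <QSslCertificate>

    void trustInternalRootCa()
    {
        QSslConfiguration config = QSslConfiguration::defaultConfiguration();
        QList<QSslCertificate> roots = config.caCertificates();
        roots += QSslCertificate::fromPath(QStringLiteral("internal-root-ca.pem"));
        config.setCaCertificates(roots);
        QSslConfiguration::setDefaultConfiguration(config);
    }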
Your view is probably skewed because you were the expert but I can assure you that fixing certificate issues is not a simple process for the vast majority of us, especially 15 years ago.
See the sibling comment by lucb1e for a description of what the typical experience is like when trying to solve such an issue.
I learned the other day that Python doesn't support AIA chasing natively.
https://bugs.python.org/issue18617
(Certs configured that way are technically incomplete, but because browsers and other software handle it, it's now a "Python breaks for certificates that work for other pieces of software" situation.)
I can understand why it wouldn't be supported, but you also see why users and developers experience this as just "SSL/TLS just gives you these weird errors sometimes" and pass around solutions to turn off verification.
No such thing when certificates are involved.
You basically have two options to do it "correctly":
1) Walk a path of broken glass and razorblades, on your naked knees, through the depths of hell, trying to get a complex set of ancient tools and policies that no one truly understands to work together. One misstep, the whole thing seizes up, and good luck debugging or fixing it across organizational boundaries.
2) Throw in the towel and expose the insides of your org, and everyone you come into contact with, on the public Internet, so you can leverage "Internet-standard" tools and practices.
One of the fundamental issues is that doing SSL properly breaks a basic engineering assumption of locality/isolation. That is, if I'm making a tool that talks to another tool (that may or may not be made by me too) directly, I should only care about the two tools and the link between them. Not the goddamn public Internet. Alas, setting SSL means either entangling your tool with the corporate universe, or replicating a facsimile of the entire world locally, just so nothing in the stack starts whining about CAs, or that self-signed certs smell like poop, or something.
Like seriously. You make a dumb internal tool for yourself, with a web interface. You figure you want to do HTTPS because browsers whine. Apparently the correct way of doing this is... to buy a domain and get a cert from LetsEncrypt. WTF.
The whole philosophy around certificates is not designed to facilitate development. And guess what, I too sometimes get requests to give ability for a tool to skip some checks to make product testing possible, and it turns out that the whole communication stack already has flags for exactly that, for exactly that reason.
EDIT:
Imagine an arm broke off your coat hanger. You figure you'll take a metal bracket and two screws and fix it right there. But as you try, your power drill refuses to work and flashes some error about "insecure environment". You go on-line, and everyone tells you you need to go to the city council and register the drill and the coat hanger on a free Let's Construct build permit.
This is how dealing with SSL "correctly" feels.
Also, not everything is - or should be - on the Internet; there exists more than one network. Different systems have different needs and risk profiles. Failing to recognize that fact, and trying to apply the same most strict security standards to everything doesn't lead to more security - it leads to people caring less, and getting creative with workarounds.
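(A third option the rant above hints at, sketched here: pin the exact certificate you expect from the peer tool, so nothing else is trusted and nothing public is involved. "peer-cert.pem" is a hypothetical copy of that certificate:)

    // Sketch: accept exactly one known (e.g. self-signed) certificate and
    // nothing else: a deliberate, scoped exception, not a blanket ignore.
    #include <QNetworkReply>
    #include <QSslCertificate>
    #include <QSslConfiguration>
    #include <QSslError>

    void pinPeerCertificate(QNetworkReply *reply)
    {
        const QList<QSslCertificate> pinned =
                QSslCertificate::fromPath(QStringLiteral("peer-cert.pem"));
        QObject::connect(reply, &QNetworkReply::sslErrors,
                         [reply, pinned](const QList<QSslError> &errors) {
            Q_UNUSED(errors);
            if (!pinned.isEmpty()
                    && reply->sslConfiguration().peerCertificate() == pinned.first())
                reply->ignoreSslErrors();  // only for this exact certificate
        });
    }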
That solves the problem!
https://github.com/kubernetes-sigs/metrics-server/issues/196
The number of comments and blogs/guides that recommend this is astonishing. And the lack of a proper solution is frustrating.
> 1. Malicious Executable loader with stealth functionality
TL;DR the client downloads Python from python.org over HTTPS. This isn't great (especially since it's hard-coded to 3.12.4), but there's no obvious exploit path which doesn't involve both MITM and user interaction.
> 2. Browser Hijacking + Executable Download (Software Upgrade Context)
TL;DR the client downloads an RSS file over HTTPS and will conditionally prompt the user to open a URL found in that file. This is even lower risk than #1; even if you can MITM the user and get them to click "update", all you get to do with that is show the user a web page.
> 3. RSS Feeds (Arbitrary URL injection)
The researcher seems confused by the expected behavior of an RSS client.
> 4. Decompression library attack surface (0-click)
If you can find an exploit in zlib, there are much worse things you can do with that than attacking a torrent client. Decompressing input is assumed to be safe by default.
Any (e.g. http) server supporting stream compression comes to mind.
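(Tangentially: even with a memory-safe decompressor, unbounded output is a resource-exhaustion vector, the classic decompression bomb. A sketch of a capped inflate loop; maxOut is an assumed policy limit, not anything qBittorrent or libtorrent actually use:)

    // Sketch: decompress with a hard cap on output size, so a tiny malicious
    // payload can't balloon into gigabytes (a "decompression bomb").
    #include <zlib.h>
    #include <vector>
    #include <stdexcept>
    #include <cstddef>

    std::vector<unsigned char> inflateCapped(const std::vector<unsigned char> &in,
                                             std::size_t maxOut)  // assumed policy limit
    {
        z_stream zs{};
        if (inflateInit(&zs) != Z_OK)
            throw std::runtime_error("inflateInit failed");

        std::vector<unsigned char> out;
        unsigned char buf[16384];
        zs.next_in = const_cast<Bytef *>(in.data());
        zs.avail_in = static_cast<uInt>(in.size());

        int ret = Z_OK;
        do {
            zs.next_out = buf;
            zs.avail_out = sizeof(buf);
            ret = inflate(&zs, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END) {
                inflateEnd(&zs);
                throw std::runtime_error("corrupt or truncated stream");
            }
            out.insert(out.end(), buf, buf + (sizeof(buf) - zs.avail_out));
            if (out.size() > maxOut) {  // the cap: refuse to keep growing
                inflateEnd(&zs);
                throw std::runtime_error("decompressed output exceeds limit");
            }
        } while (ret != Z_STREAM_END);

        inflateEnd(&zs);
        return out;
    }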
Since the original issue is that SSL errors are ignored, all of those HTTPS downloads are in practice downgraded to the security of plain HTTP (a man-in-the-middle doesn't need to defeat TLS to attack them).
Or to say it another way: because SSL errors were ignored, all those HTTPS URLs gave a false sense of security, since reviewers would think them secure when they were not (due to the lack of certificate validation).
What's next, are we going to declare web browsers to have a "RCE Vulnerability" because they allow you to download programs from any site which may or may not be secure and then execute them?
Or, hey everyone, did you know that if you live in an authoritarian state, the Government can do bad things to you?
Syncthing does this too (though presumably with a certificate check). Automatic unattended autoupdate is logically indistinguishable from a RAT/trojan.
It literally is not.
What matters is: does the automatic unattended autoupdate come from the same people you downloaded the original program from, or not?
Think at the scale of years, and think of e.g. Microsoft or Adobe when pondering this question.
That said, you really shouldn't be running outdated torrent clients, like any network-connected programs. Case in point - the topic of this thread.
If I download source and build and run it, and it downloads binaries from Microsoft and runs those, that isn’t remotely “the same people”.
But also, SSL certificates don't certify the people you are connecting to but instead certify control over a domain which can change hands for various reasons.
I have to disagree here, the vulnerability part is that it can be exploited by a third party. Auto-update itself isn’t really an RCE vulnerability because the party you get the software from has to be trusted anyways.
Which is a big problem in itself, that's rarely talked about in such terms.
Me getting some software only means I trust the party I got it from at that moment of time, for that particular version of the software. It doesn't imply I trust that party indefinitely. This is the reason why so many people hate automatic updates (and often disable them when possible): they don't trust the vendor beyond the point of initial installation. They don't trust the vendor won't screw them up with UX "improvements" or license changes or countless other things that actively make users' life miserable.
Think about Windows and Microsoft. You can't simultaneously say you don't trust them because of their track record of screwing with their users and enshittifying their products, and say they're a trusted first party in your Windows installation. They aren't - they can and will screw you over with some update.
In this sense, it's not a stretch to compare unattended updates with an RCE vulnerability. Just because the attacker is the product vendor doesn't mean they're not going to pwn your machine and make you miserable. And just because their actions are legal doesn't make them less painful.
https://userdocs.github.io/qbittorrent-nox-static/artifact-a...
Here's a verification of the latest build:
    gh attestation verify x86_64-qbittorrent-nox -o userdocs

    Loaded digest sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 for file://x86_64-qbittorrent-nox
    Loaded 1 attestation from GitHub API
    Verification succeeded!

    sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 was attested by:
    REPO                             PREDICATE_TYPE                  WORKFLOW
    userdocs/qbittorrent-nox-static  https://slsa.dev/provenance/v1  .github/workflows/matrix_multi_build_and_release_qbt_workflow_files.yml@refs/heads/master
No shit.
Yet another case of "security" people making a mountain out of a molehill to make a name for themselves.
Linus was right :p
Automatic updates and/or checks to a domain from a desktop app are a security angle that doesn't seem to be given much attention overall. There are other scenarios, like a hostile domain takeover (e.g. the original author stops renewing), which I haven't found a good solution to.
If the old key expires before a new key is delivered, then you have a problem. This has happened to me a few times and it is a pain in the butt. You basically have to disable key checking in order to receive the new key, which breaks the security.
so auto-updaters are out?
Edit to note I don't quite agree with GP either, I see their point but cert-based security is pretty much the best we've got as far as I'm aware, likely what I'd use if designing this system.
Even Deluge, which is written in Python, relies on libtorrent which is written in C++.
I don't suppose there is a modern fork of the old Java-based Azureus client? Many BitTorrent clients nowadays split the GUI from the daemon process handling the actual torrenting, so using Java for the daemon and connecting it to a native GUI could strike a good balance between security, performance and user experience.
Where would one start in building an alternative to libtorrent? Have there been any attempts (or successes)? Any functional clients that use other implementations?
But it does include, I see now, exactly what I was asking for – apparently there's an actively developed fork of Azureus called BiglyBT[1].
[0] https://en.wikipedia.org/wiki/Comparison_of_BitTorrent_clien...
But I do also generally expect it to be safer than C++. The race detector prevents a lot of badness quite effectively because it's so widely used.
Qubes OS: Shut it, I'm LARPing (in minecraft)!
Pages-long un-commented functions, single-spaced, appearing most like the prison notebooks of a wrongly-incarcerated psychotic. No testing of any return values at all (in the small part - a few packed pages - of the code that I looked at).
There was some field, and if it got a correct 3-char (instead of the usual correct 2-char) value, the program would crash or something a minute or so later (I forget). As I was paid to program C++ ~~once~~ twice about 20 years ago, and prompted by a "why don't _you_ have a look at it" message from a maintainer (which was 100% fair enough, I thought), I ran it in a debugger. I got to the wrong & correct value(s) being read in from the GUI... and started following it/them... and then... so now there's a -1 being passed around, and now everything just carries on, for a while.
Eventually the wrong-valued run would crash in some somewhat remote function with a wrongly-incarcerated psychotic's error message.
One of the real qBittorrent programmers did then fix it in the next release. But anyhow...