RCE Vulnerability in QBittorrent
375 points by udev4096 7 days ago | 184 comments
  • alecco 6 days ago |

      In qBittorrent, the DownloadManager class has ignored every SSL certificate validation error that has ever happened, on every platform, for 14 years and 6 months, since April 6, 2010 (commit 9824d86).
    
    This looks quite serious.
    • SushiHippie 5 days ago |
      Noteworthy that this wasn't a bug, but a "feature":

        void downloadThread::ignoreSslErrors(QNetworkReply* reply,QList<QSslError> errors) {
          // Ignore all SSL errors
          reply->ignoreSslErrors(errors);
        }
      
      https://github.com/qbittorrent/qBittorrent/commit/9824d86a3c...
      • perching_aix 5 days ago |
        Is the motivation behind this known?
        • SushiHippie 5 days ago |
          As the commit message was "Fix HTTPS protocol support in torrent/rss downloader", I suppose it was a quick fix to make things work, and since things worked, no one ever took a look at it until now.

          EDIT: The author of the PR[0] (who is one of the top qBittorrent contributors according to GitHub[1]) that fixed this also came to this conclusion:

          > I presume that it was a quick'n'dirty way to get SSL going which persisted to this day. It's also possible that back in the day Qt4 (?) didn't support autoloading ca root certificates from the OS's store.

          [0]: https://github.com/qbittorrent/qBittorrent/pull/21364 [1]: https://github.com/qbittorrent/qBittorrent/graphs/contributo...

          • 0xsee4 5 days ago |
            To be fair, this function, ignoreSslErrors, is not from the authors of qBittorrent; it comes from the Qt framework. The idea behind the function is that you provide it with a small whitelist of errors you wish to ignore; for example, in a dev build you may well want to ignore self-signed-certificate errors for your dev environment. The trouble is, you can call it with no arguments, which means you will ignore every error. This may have been misunderstood by the qBittorrent maintainers, maybe not.
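
            For illustration, a minimal sketch of the two overloads, inside a handler like the one quoted above (devCert is a hypothetical QSslCertificate loaded for a dev environment):

                // Whitelist form: ignore only specific, expected errors
                // (devCert: hypothetical cert for the dev environment).
                QList<QSslError> expected;
                expected << QSslError(QSslError::SelfSignedCertificate, devCert);
                reply->ignoreSslErrors(expected);

                // Blanket form: no arguments, ignores *every* SSL error.
                reply->ignoreSslErrors();

            Note that the handler quoted above passes back the very list of errors that just occurred, so although it uses the whitelist overload, the effect is the blanket form.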

            Much more likely is that someone knew they had implemented this temporary solution while they implemented OpenSSL in a project which previously never had SSL support - a major change with a lot of work involved - and every programmer knows that there is nothing more permanent than a temporary solution. Especially in this case. I can understand how such code would make it into the repo (I think you do too), and it's very easy for us to say we would then have immediately amended it in the next version to properly verify certs.

            Having been in contact with the maintainers, I have to say I was disappointed in how seriously they took the issue. I don't want to say any more than that.

            Source: author of the article

            • bdelay 5 days ago |
              How much notification did you give the developers before you disclosed? Did you enforce a timeline?
              • metadat 5 days ago |
                Warning shots across the bow in private are the polite and responsible way, but malicious actors don't typically extend such courtesies to their victims.

                As such, compared to the alternative (bad actors having even more time to leverage and amplify the information asymmetry), a timely public disclosure is preferable, even with some unfortunate and unavoidable fallout. Typically security researchers are reasonable and want to do the right thing with regard to responsible disclosure.

                On average, the "bigger party" inherently has more resources to respond compared to the reporter. This remains true even in open source software.

                • tsimionescu 5 days ago |
                  This is a pretty dangerous take. The reality is that the vast majority of security vulnerabilities in software are not actively exploited, because no one knows about them. Unless you have proof of active exploitation, you are much more likely to hurt users by publicly disclosing a 0-day than by responsibly disclosing it to the developer and giving them a reasonable amount of time to come out with a patch. Even if the developers are acting badly. Making a vulnerability public is putting a target on every user, not on the developer.
                  • wisemang 5 days ago |
                    Your take is the dangerous one. I don’t disagree that

                    > the vast majority of security vulnerabilities in software are not actively exploited

                    However I’d say your explanation that it’s

                    > because no one knows about them

                    is not necessarily the reason why.

                    If the vendor or developer isn’t fixing things, going public is the correct option. (I agree some lead time / attempt at coordinated disclosure is preferable here.)

                    • tsimionescu 4 days ago |
                      > (I agree some lead time / attempt at coordinated disclosure is preferable here.)

                      Then I think we are in agreement overall. I took your initial comment to mean that as soon as you discover a vulnerability, you should make it public. If we agree that the process should always be to disclose it to the project, wait some amount of time, and only then make it public - then I think we are actually on the exact same page.

                      Now, for the specific amount of time: ideally, you'd wait until the project has a patch available, if they are collaborating and prioritizing things appropriately. However, if they are dragging their feet and/or not even acknowledging that a fix is needed, then I also agree that you should set a fixed time as a last ditch attempt to get them to fix it (say, "2 weeks from today"), and then make it public as a 0-day.

                  • dgfitz 5 days ago |
                    > Unless you have proof of active exploitation

                    Wouldn’t a “good criminal” just exploit it forever without getting caught? Your timeline has no ceiling.

                    • tsimionescu 4 days ago |
                      My point is: if you found a vulnerability and know that it is actively being exploited (say, you find out through contacts, or see it on your own systems, or whatever), then I would agree that it is ethical to publicize it immediately, maybe without even giving the creators prior notice: the vulnerability is already known by at least some bad actors, and users should be made aware immediately and take action.

                      However, if you don't know that it is being actively exploited, then the right course of action is to disclose it secretly to the creators, and work with them to coordinate on a timely patch before any public disclosure. Exactly how timely will depend on your and their judgement of many factors. Even if the team is showing very bad judgement from your point of view, and acting dismissively; even if you have a history with them of doing this - you still owe it to the users of the code to at least try, and to at least give some unilateral but reasonable timeline in which you will disclose.

                      Even if you don't want to do this free work, the alternative is not to publicly disclose: it's to do nothing. In general, the users are still safer with an unknown vulnerability than they are with a known one that the developers aren't fixing. You don't have any responsibility to waste your own time to try to work with disagreeable people, but you also don't have the right to put users at risk just because you found an issue.

                • dcow 5 days ago |
                  100%

                  It’s unethical to withhold critical information from users who are at risk.

                  If McDonald's had an E. coli outbreak and a keen doctor picked up on it, you wouldn't withhold that information from the public while McD developed a nice PR strategy and quietly waited for the storm to pass, would you?

                  Why is security, which seriously is a public safety issue, any different?

                  • dinosaurdynasty 5 days ago |
                    It's different because bad actors can take advantage of the now-public information.

                    The point of a disclosure window is to allow a fix before _all_ bad actors get access to the vulnerability.

                    • dcow 5 days ago |
                      And some may already be taking advantage. This is a perfect example where users are empowered to self-mitigate. You’re relatively okay on private networks but definitely not on public networks. If I know when the bad actors know, then I can e.g. not run qBittorrent at a coffee shop until it’s patched.
                  • TeMPOraL 5 days ago |
                    What about a pre-digital bank? If you came across knowledge of a security issue potentially allowing anyone to steal stuff from their vault, would you release that information to the public? Would everyone knowing how to break in make everyone's valuables safer?

                    Medicine and biosafety are PvE. Cybersecurity is PvP.

              • 0xsee4 5 days ago |
                In total it was about 45 days or so from the initial conversation. I waited for a patched version to be released, because the next important milestone after that would be finished backports to older versions still in use, which is clearly going to take a long time as it is not being prioritized, so I wanted to inform users.

                Initially I had said 90 days from the initial report, but it seemed like they were expanding the work to fill that time. I asked a number of times for them to make a security advisory and got no answer. Some discussions on the repo showed they were considering this as a theoretical issue. Now it's CVE-2024-51774, which was assigned within 48 hours of disclosure.

                • perching_aix 5 days ago |
                  > Some discussions on the repo showed they were considering this as a theoretical issue.

                  That's hilarious. It's all theoretical until it's getting exploited in the wild...

                  • hsbauauvhabzb 5 days ago |
                    Any proof that actually happened, or are you just wearing a tinfoil hat? Enforcing crypto en masse matters; intercepting highly specific targets using BitTorrent does not.
                    • tga_d 4 days ago |
                      I feel as though there is a generational gap developing between people who do and do not remember how prolific Firesheep used to be.
                    • perching_aix 4 days ago |
                      I think a better question is: why are you demanding evidence (not proof!) from me for something you are supposing?
                    • Jerrrrrrry 4 days ago |
                      Lol wait til you get personally targeted by a 0-day in extremely popular software for that sentiment to make you look stupid both ways.
                • dcow 5 days ago |
                  Honestly I think full disclosure with a courtesy heads-up to the project maintainers/company is the most ethical strategy for everyone involved. “I found a thing. I will disclose it on Monday. No hard feelings.” With ridiculous 45-90 day windows it’s the users that take on almost all the risk, and in many ways that’s just as unethical as some script kids catching wind before a patch is out, if not more so. Every deployment of software is different, and downstream consumers should be able to make an immediate call as to how to handle vulns that pop up.
                  • MikeHolman 5 days ago |
                    Strongly disagree. 45 days to allow the authors to fix a bug that has been present for over a decade is not really much added risk for users. In this case, 45 days is about 1% additional time for the bug to be around. Maybe someone was exploiting it, but this extra time risk is a drop in the bucket, whereas releasing the bug immediately puts all users at high risk until a patch can be developed/released, and users update their software.

                    Maybe immediate disclosure would cause a few users to change their behavior, but no one is tracking security disclosures on all the software they use and changing their behavior based on them.

                    The caveat: if you have evidence of active exploitation, then immediate disclosure makes sense.

                  • rustcleaner 4 days ago |
                    What if we changed the fundamental equation of the game: no more "responsible" disclosures, or define responsible as immediate and as widely published as possible (ideally with PoC). If anything, embargoes and timelines are irresponsible, as they create unacceptable information asymmetry. An embargo is also an opportunity to quietly back-room sell the facts of the embargo to the NSA or other national security apparatus. An embargoed vulnerability will likely have a premium valuation model following something which rhymes with Black-Scholes. Really, really think about it...
            • kgeist 5 days ago |
              Temporary solutions can become more dangerous with time. Years ago, in one of our projects, someone wrote a small helper class, HTTPClient, to talk to one of our internal subsystems. The subsystem in the dev environment used self-signed certificates, so one of the devs just disabled SSL validation. Whether SSL errors were ignored or not was specified in a config. Later, someone messed up while editing the configs, and SSL validation got disabled in the live environment, too. No one noticed, because nobody writes tests to check that SSL validation is enabled. But that's only part of the story: this HTTPClient class was still only used to communicate with our internal subsystem on our own network.

              The real problem came later when the next generation of developers saw this HTTPClient class and thought, "Hey, what a nifty little helper!", and soon they were using it to talk to pretty much everything, including financial systems. I was shocked when I discovered it. An inconsequential temporary workaround had turned into a huge security hole.
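
              The missing test is small, too. A sketch in Qt terms (not their actual stack; the badssl.com hosts publicly serve deliberately broken certificates for exactly this kind of check):

                  #include <QCoreApplication>
                  #include <QEventLoop>
                  #include <QNetworkAccessManager>
                  #include <QNetworkReply>
                  #include <QNetworkRequest>
                  #include <QUrl>

                  // Sketch of the missing regression test: a request to a host with a
                  // known-bad certificate must fail while validation is enabled.
                  int main(int argc, char *argv[]) {
                      QCoreApplication app(argc, argv);
                      QNetworkAccessManager manager;
                      // self-signed.badssl.com serves a deliberately self-signed cert.
                      QNetworkReply *reply = manager.get(
                          QNetworkRequest(QUrl("https://self-signed.badssl.com/")));
                      QEventLoop loop;
                      QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
                      loop.exec();
                      // Anything other than a handshake failure means validation is off.
                      Q_ASSERT(reply->error() == QNetworkReply::SslHandshakeFailedError);
                      return 0;
                  }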

            • xelamonster 5 days ago |
              This is interesting. I haven't ever used the Qt framework, but I'm surprised that it would even have an SSL implementation; that sounds a bit out of scope for a GUI toolkit. I think I'd prefer to do all my networking separately and provide the fetched data to Qt.

              Edit (just noticed this was the author): I'm curious, what torrent client do you prefer? I like Deluge but mostly go to it because it's familiar.

              • ripdog 5 days ago |
                Qt isn't just a GUI toolkit - it's an everything toolkit. It's somewhat intended to be used (potentially) alone with C++ to allow the creation of a wide variety of apps. It includes modules like Bluetooth, Network, Multimedia, OAuth, Threading and XML.

                See a full list: https://doc.qt.io/qt-6/index.html

                • pjmlp 2 days ago |
                  It's in the spirit of the C++ compiler frameworks that were quite common during the 1990s, before C++98. Then we got quite a thin standard library instead, and a mess around managing third-party code that is still being sorted out.
            • tagyro 4 days ago |
              (Most?) programming languages have a way to handle these scenarios, something like `#warning | TODO | FIXME` ...

              I understand temporary, but 14 years seems a bit... too long.
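
              In C++, that can be as blunt as a compile-time nag sitting next to the hack (a sketch; #warning is a long-standing GCC/Clang extension, only standardized in C++23):

                  // Shows up in every single build log until someone deletes the hack:
                  #warning "TEMPORARY: all SSL errors are ignored - remove before release"

                  void downloadThread::ignoreSslErrors(QNetworkReply *reply, QList<QSslError> errors)
                  {
                      // FIXME(security, 2010-04-06): quick hack to get HTTPS going; do NOT ship.
                      reply->ignoreSslErrors(errors);
                  }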

          • SV_BubbleTime 5 days ago |
            Another point against the “security” of open source software.

            “Oh, it’ll have millions of eyes on it”… except no one looks.

            • Nadya 5 days ago |
              As opposed to the “security” of closed source software? Where severe vulns are left in as long as they aren't publicized because it would take too much development time to justify fixing and the company doesn't make money fixing vulns - it makes money creating new features. And since it isn't a security-related product any lapses in security are an "Oopsy woopsy we screwed up" and everyone moves on with their lives?

              Even companies that are supposed to get security right have constant screw ups that are only fixed when someone goes poking around where they probably shouldn't and thankfully happens to not be malicious.

              • LittleShaman 5 days ago |
                I think your comment works as a reply to claiming that closed source is more secure than open source - you try to bring them both to the same level.

                I don't think it replies to what the user asks, though. It seems reasonable to expect widely used open source software to be studied by many people. If that's true, it would be good to question why this wasn't caught by anyone. Ignoring all SSL errors is not something you need to be an expert to know is bad...

                • Nadya 5 days ago |
                  Codebases outside of security contexts are rarely audited, much less professionally so. The culture of code-reviewing PRs 14 years ago was a little different from today's, which is also why any "quick hacks to make things work" should always have some form of "//HACK: REVIEW OR REMOVE BY <DATE>" attached to make them easy to find.

                  From a security perspective there are only two kinds of code bases: open & closed. By deduction one of those will have more eyeballs on the codebase than the other even if "nobody looks".

                  Case in point: it may have taken 14 years, but someone looked. Had the code base been closed source, that may never have happened, because it might not have been possible to happen at all. It's also very easy to point to the number of security issues that never made it into production because they were caught in an open source code review by passersby and other contributors while the PR was waiting to be merged.

                  The fact it was caught at all is a point for open source security - not against it. Even if it took 14 years.

                  • TeMPOraL 5 days ago |
                    > From a security perspective there are only two kinds of code bases: open & closed. By deduction one of those will have more eyeballs on the codebase than the other even if "nobody looks".

                    Is that the classification that matters? I'd think that there are only the following two kinds of code bases: those that come with no warranty or guarantee whatsoever, and those attached to a contract (actual or implied) that gives users legal recourse against a specific party in case of damages caused by issues with that code (security or otherwise).

                    Guess which kind of code, proprietary or FLOSS, tends to come with legal guarantees attached? Hint: it's usually the one you pay for.

                    I say that because it's how safety and security work everywhere else - they're created and guaranteed through legal liability.

                    • Nadya 4 days ago |
                      Can you cite an example where a company was sued over bad code? I want to agree with you and agree with your reasoning (which is why I upvoted you as I think it is a good argument) but cannot think of any example where this has been the case. Perhaps in medical/aviation/government niches but not in any niche I've worked in or can find an example of.

                      The publicly known lawsuits seem to come from data breaches, and the large majority of those data breaches are due to non-code lapses in security. Leaked credentials, phished employees, social engineering, setting something Public that should be Internal-only, etc.

                      In fact, many proprietary products rely on FLOSS code; if that code results in an exploit, the company owning the product may be sued for the resulting data breaches. But that's an issue with their product contract and their use of FLOSS code without code review. As it turns out, many proprietary products aren't code-reviewing the FLOSS projects they rely on either, despite their supposed potential legal liability to do so.

                      > I say that because it's how safety and security work everywhere else - they're created and guaranteed through legal liability.

                      I don't think the legal enforcement or guarantees are anywhere near as strong as other fields, such as say... actual engineering or the medical field. If a doctor fucks up badly enough they can no longer practice medicine. If a SWE fucks up bad enough they might get fired? But they can certainly keep producing new code and may find a job elsewhere if they are let go. Software isn't a licensed field and so is missing a lot of safety and security checks that licensed fields have.

                      Reheating already-cooked food to sell to the public requires a food handler's card, which is already a higher bar than exists in the world of software development. Cybersecurity isn't taken all that seriously by seemingly anyone. I wouldn't have nearly as many conversations with my coworkers or clients about potential HIPAA violations if it were.

                      • TeMPOraL 4 days ago |
                        > Can you cite an example where a company was sued over bad code?

                        CrowdStrike comes to mind? A quick web search tells me there's a bunch of lawsuits in flight, some aimed at CrowdStrike itself, others just between parties caught in the fallout. Hell, Delta Airlines and CrowdStrike are apparently suing each other over the whole mess.

                        > The publicly known lawsuits seem to come from data breaches, and the large majority of those data breaches are due to non-code lapses in security.

                        Data breaches don't matter IMO; there rarely if ever is any obvious, real damage to the victims, so unless the stock price is at risk, or data protection authorities in some EU countries start making noises, nobody cares. But the bit about "non-code lapses", that's an important point.

                        For many reasons, software really sucks at being a product, so as much as possible, it's seen and traded as a service. "Code lapses" and "non-code lapses" are not the units of interest. The vendor you license some SDK from isn't going to promise you the code is flawless - but they do promise you a certain level of support, responsiveness, or service availability, and are incentivized to fulfill it if they want to keep the money flowing.

                        When I mentioned lawsuits, that was a bit of a shorthand for an illustration. Of course you don't see that many of them happening - lawsuits in the business world are like military actions in international politics; all cooperation ultimately is backed by threat of force, but if that threat has to actually be made good on, it means everyone in the room screwed up real bad.

                        99% of the time, things get talked out without much noise. Angry e-mails are exchanged, lawyers get CC'd, people get put on planes and sent to do some emergency fixing, contractual penalties are brought up. Everyone has an incentive to get themselves out of trouble, which may or may not involve fixing things, but at least it involves some predictable outcomes. It's not perfect, but nothing is.

                        > I don't think the legal enforcement or guarantees are anywhere near as strong as other fields, such as say... actual engineering or the medical field. If a doctor fucks up badly enough they can no longer practice medicine. If a SWE fucks up bad enough they might get fired? But they can certainly keep producing new code and may find a job elsewhere if they are let go. Software isn't a licensed field and so is missing a lot of safety and security checks that licensed fields have.

                        Fair. But then, SWEs aren't usually doing blowtorch surgery on live gas lines. They're usually a part of an organization, which means processes are involved (or the org isn't going to be on the market very long (unless they're a critical defense contractor)).

                        On the other hand, let's be honest:

                        > Cybersecurity isn't taken all that seriously by seemingly anyone.

                        Cybersecurity isn't taken all that seriously by seemingly anyone, because it mostly isn't a big problem. For most companies, the only real threat is a dip in the stock price, and that's only if they're publicly traded. Your random web SaaS isn't really doing anything important, so their cybersecurity lapses don't do any meaningful damage to anyone either. For better or worse, what the system understands is money. Blowing up a gas pipe, or poisoning some people, or wiping some retirement accounts, translates to a lot of $$$. Having your e-mail account pop up on HIBP translates to approximately $0.

                        The point I'm trying to make is: in the proprietary world, software is an artifact of a mesh of companies bound together by contracts. Down the link flows software, up the link flows liability. In between, there are a lot of people whose main concern is to keep their jobs. It's not perfect, and the corporate world is really good at shifting liability around, but it's doing the job.

                        In this world, FLOSS is a terminating node. FLOSS authors have no actual skin in the game - they're releasing their code for free and disclaiming responsibility. So while "given enough eyeballs, all bugs are shallow", most of those eyes belong to volunteers. FLOSS security relies on good will and care of individuals. Proprietary security relies on individual self-preservation - but you have to be in a position to threaten the provider to benefit from it.

              • perching_aix 5 days ago |
                > As opposed to the “security” of closed source software?

                No, I don't think that's what they were saying.

              • lofaszvanitt 5 days ago |
                Security by obscurity works, it works, no matter how hard people regurgitate the bs that it's not working.
                • Nadya 4 days ago |
                  The context for security by obscurity is usually data that would attract people who would specifically target you as a mark that will make them a lot of money, rather than opportunistically target you because you are an easy mark promising a quick profit of unknown value.

                  If someone wants to rob you - a door lock isn't going to stop them. Likewise if someone wants to pwn you - a little obfuscation isn't going to stop them.

                  Security by obscurity only works in the case that you aren't known to be worth the effort to target specifically and so nobody bothers. Much like very few people bother to beat my CTF. I'm sure if I offered a $1,000 reward for beating it the number would increase tenfold because it is suddenly worth the effort to spend a bit of time attacking. But as it stands with no monetary incentive the vast majority (>99%) give up after a few days.

                  • lofaszvanitt 19 hours ago |
                    Yeah, but how will an attacker know to target you if they don't even know you have anything valuable, and you are flying under the radar, hm?
            • hildolfr 5 days ago |
              Except this was found eventually.

              How many fifteen-plus-year-old problems exist in closed-source code bases?

              • perching_aix 5 days ago |
                You mean those that too "get found eventually"?

                Ignoring bad SSL certs in particular is one issue that can be reliably and easily tested regardless of how available the source of a given software is. It's a staple in Android app security testing even.

              • EasyMark 5 days ago |
                Seems like something like this might be searchable by regexes, e.g. "/.*ignore.*ssl/i", at least in reasonably popular packages like qBittorrent or Transmission. I'm sure some regex gurus could come up with some good ones.
        • beeboobaa3 5 days ago |
          A guess that's probably correct: many torrent sites (from which the client can download .torrent files when given a URL) have infra that sucks. This includes expired certificates. Users don't want to deal with that shit. Developers don't want to deal with users complaining. It's not really considered a risk because lots of those torrent sites (used to) just use HTTP to begin with, so who cares, right?
  • atomicnumber3 5 days ago |
    I've used Deluge for longer than I've used almost any other program, I think. I've been pretty happy with their track record (from the perspective of: I've never seen a private tracker ban specific versions of Deluge or anything to that effect, which they've done for many other clients when big vulns drop for them).
  • password4321 5 days ago |
    It would be incredible to learn how many have actually been affected by this issue in the past ~15 years... how important is SSL validation to those able to blend in with the crowd, even on the sketchy-ish side of the internet?

    So much "just works" because no one is paying attention. Of course now that the spotlight is on the issue it's all downhill from here for anyone who doesn't [auto-]update.

    • userbinator 5 days ago |
      > It would be incredible to learn how many have actually been affected by this issue in the past ~15 years

      IMHO close to 0 --- and for those who were affected, it would've likely been a targeted attack.

    • dgfitz 5 days ago |
      I had the exact same thought. Actually having the data seems almost impossible, it sure would be fun to see.
    • 0x457 5 days ago |
      Probably zero? That thing was responsible for downloading Python from python.org. It's possible to exploit, but it would need to be pretty targeted and would already require some access to the target[1].

      [1]: Because the only other way to exploit it would be noticed by everyone else, e.g. the python.org domain would need to be hijacked or something similar.

      • mcmcmc 5 days ago |
        You don’t need to hijack the whole domain to poison DNS for a given client
        • 0x457 5 days ago |
          Yes, that's what I meant by the other way requiring _some_ access to the target.
      • crtasm 5 days ago |
        It makes a MITM attack possible, which doesn't require access to the target or the website it's contacting.

        I'd still guess zero times though.

        • account42 3 days ago |
          A MITM attack requires some kind of access to the target or the server. You can't just intercept connections of whoever you want on the Internet.
      • TheDong 5 days ago |
        The "some access to the target" bit could just being on the same unsecure wifi network as them, such as a coffee shop or library.

        Still, I doubt anyone noticed this, and you'd also still need the victim to use qBittorrent and go through the flow that downloads Python.

        Zero seems pretty likely, yeah.

        • IshKebab 5 days ago |
          Does ARP spoofing still actually work? I would have assumed that modern routers block it.

          Still the easiest way to MitM random people is to set up your own free WiFi. I've done that in the past, and it works, but HSTS and certificate caching mean it's pretty useless.

          I think there's a kind of vaccination effect - nobody is going to put much effort into MitMs because it's useless most of the time, so it isn't as critical when people don't validate certificates.

        • 0x457 5 days ago |
          > The "some access to the target" bit could just being on the same unsecure wifi network as them, such as a coffee shop or library.

          Fucking hell, how often do you use torrents in coffee shops, let alone install a new torrent client while you're at it?

          Any public wifi network set up by anyone other than a complete idiot today has fully isolated clients.

      • sneak 5 days ago |
        No. The lack of certificate checking means anyone with access to the network in between can attack; a rogue AP is sufficient.
        • 0x457 5 days ago |
          If you're connecting to a rogue AP, then you are already lost.
          • detaro 5 days ago |
            Only if software you use is badly broken, like QBittorrent here. For the majority of applications today, a rogue AP can't do much interesting that won't immediately cause alerts.
      • baobun 5 days ago |
        That is an extremely naive take.

        https://news.ycombinator.com/item?id=37961166

        Read this and tell me if you really think it unlikely that whoever performed the MITM there would be able to, or interested enough in, doing similar things to known seedbox hosts, distributors, or just whoever is distributing information they'd rather not be distributed.

        qBittorrent is one of the most popular choices for hosted BitTorrent seeders across the world. This was trivially exploitable for anyone with access to the right network path for >10 years. Sure, it'd have to be targeted at qBittorrent users, but I don't think much individual targeting is needed if you aim for dozens, hundreds, thousands, or just as many of them as you can.

        Besides sketchy government-related entities with legal wiretapping capabilities, you also have well-funded private interest groups on the malicious side.

        • ndriscoll 5 days ago |
          Are hosted servers typically running Windows? The Linux version doesn't download Python (generally your package manager would do that). I would expect updates to qBittorrent are also handled by the package manager on Linux.
          • duskwuff 4 days ago |
            > Are hosted servers typically running Windows?

            Generally not. Seedbox services are heavily cost-driven; running a Windows install for each client would add a lot of unnecessary hardware and licensing costs.

        • 0x457 5 days ago |
          First of all, those are Linux boxes, which are not affected by this.

          Second, the attacker there had a valid certificate; it was only noticed when the certificate expired (so 6 months after, since it was an LE cert).

          > Besides sketchy government-related entities with legal wiretapping capabilities, you also have well-funded private interest groups on the malicious side.

          If you're targeted by government-related entities, you probably shouldn't run Windows and torrent software.

      • ndsipa_pomu 5 days ago |
        It's perfectly feasible for someone to set up a poisoned DNS in a place like an airport or a coffee shop and MITM anyone who's not using a VPN etc.
        • 0x457 5 days ago |
          Yes, I fucking love going to the coffee shop and the airport, then proceeding to download qBittorrent to download some Linux ISOs. Because those places always have highly reliable, high-speed WiFi that definitely isn't filtering traffic.
          • ndsipa_pomu 5 days ago |
            Fine strawman you're building there.

            My comment was about python.org, and I think it wouldn't be unusual for a student to start doing some work in a coffee shop and get MITMed.

            However, it'd be quite easy for someone to have set up qBittorrent to auto-start on their laptop and then forget about it while doing something else at an airport, coffee shop, or other place where you would expect to use someone else's wifi. Note that it doesn't even have to be wifi set up by the business - it could be a bad actor setting up an access point that just looks like it belongs there.

          • eptcyka 5 days ago |
            You wouldn’t download qBittorrent, you would use qBittorrent on unsafe networks, which is not far-fetched at all.
            • duskwuff 4 days ago |
              But ordinary use of qBittorrent is fine. The only part with a clear path to code execution (assuming MITM and no certificate verification) is the initial install of Python - which is only required for certain features, only installs once, and requires user confirmation to start.
            • 0x457 4 days ago |
              The download with an unverified certificate is only triggered on Windows if there isn't a "good enough" version of Python installed. If it's already installed, then nothing needs to be downloaded.

              Again, this vulnerability can't be exploited unless the attacker is able to MitM you or python.org is hijacked.

              It's very hard to exploit in real life en masse. A targeted attack is possible, but all of the following must hold:

              1) The attacker is able to MitM you in the first place

              2) You use qBittorrent

              3) You use Windows

              4) You don't have a Python version installed that qBittorrent considers good enough

              Without all 4, this can't be exploited.

    • sieabahlpark 5 days ago |
    I think torrenting is one of those things that people understand is sketchy without it actually being sketchy. People also don't just leave it open forever; they're usually leeching or seeding and then close the program when it's done. You're probably more likely to get a virus from the pirated exe. (Save me the reply that explains you can use torrenting legally, I already know.)
      • IshKebab 5 days ago |
        Yeah I've been surprised by how unsketchy torrenting is compared to how sketchy it should be. You'd think even just for videos, there must be absolutely tons of RCEs in VLC or whatever. Yet I've never seen one actually used.
      • sixothree 5 days ago |
    I needed to get a 100+ GB image to a remote coworker once, and after fighting with it for a while we just said screw it and created a torrent. No third parties. No relays. Just us.

    Worked well enough, then we promptly forgot how to do it by the time we needed it again.

        • Aerroon 5 days ago |
          I've run into the same problem: if you want to share large files with a friend, you need to either find a filehost that accepts very large files or use torrents (maybe something like an IRC transfer works too).
          • johnisgood 5 days ago |
            IRC transfer? I hope you are not referring to DCC. :P
            • Aerroon 5 days ago |
              I am
            • pbhjpbhj 4 days ago |
              IRC transfer being some form of InfraRed Communication.

              And DCC being Direct Cable Connection.

              ??

        • loganhood 5 days ago |
          I've done this with friends/family a couple times and wrote up a tutorial that I use as reference every couple months.

          Has an optional step to password-protect the contents if you have any qualms with security-by-obscurity of using an unlisted torrent on a public tracker.

          https://loganhood.com/2023/12/14/everyday-7zip-bittorrent

    • nubinetwork 5 days ago |
      /shrug

      I use a web browser for web browser stuff... and I'll only open a torrent application when I want to open a manually downloaded .torrent file.

      • basilgohar 5 days ago |
        Torrents can use webseeds, which result in HTTP requests, so torrenting now includes HTTP requests as well.
    • result2vino 5 days ago |
      Also chiming in to say…zero. A lot of this post feels…trumped up. There’s certainly something there, but “qBittorrent RCE”, whilst technically true, is alarmist.
  • logical_person 5 days ago |
    It's shocking that such basic issues exist in a client that is otherwise 1000x more performant than the other options listed in the article.
    • coppsilgold 5 days ago |
      Deluge performs just as well as qBittorrent. libtorrent-rasterbar (libtorrent.org) is what is performant.
      • magxnta 5 days ago |
        I found the Deluge (web?) UI becoming unusable after adding tens (or hundreds?) of thousands of torrents.

        Not sure about the details, but a decade ago I used to seed all files below 100MB on many private trackers for seed bonus points, and yeah, the Deluge UI (might have been the web UI, not sure) became very slow. :D

        • dawnerd 5 days ago |
          Same, Deluge and qBittorrent would start to have issues with very large or lots of torrents. Ended up with Transmission with the trguiNG UI and it's handled everything. It's not perfect and often slow, but it hasn't crashed.
        • treyd 5 days ago |
          I ran into slowdowns in the remote control after just a few hundred. I switched to Transmission shortly after. I had a great time using Deluge for probably 6-7 years, but Transmission is more performant and has more tooling support.
      • 1oooqooq 5 days ago |
        moved to transmission
  • thomas34298 5 days ago |
    > BUGFIX: Don't ignore SSL errors (sledgehammer999)

    > https://www.qbittorrent.org/news

    There should be a security notice IMO.

  • rgovostes 5 days ago |
    Any time someone asks about certificate validation errors on StackOverflow, half of the answers show how to disable validation rather than fix the issue. The API calls should be explicit, e.g., youWillBeFiredForFacilitatingManInTheMiddleAttacks().
    • lucb1e 5 days ago |
      Or it should be easier to supply an expected certificate

      Nearly all the time, the tool doesn't accept the certificate format or it wants a chain instead of just the root because the other side doesn't supply a chain or the CA bundle doesn't match the CA you used or it doesn't use the CA system at all or the fingerprint format is the wrong hash or it wants a file instead of just a command-line fingerprint or there isn't an "at least do TOFU" flag so for testing you resort to "okay then just accept everything"... it's very rarely smooth sailing from the point of "okay I'm ssh'd into the server, now what do I run here to give this tool something it can use to verify the connection"
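
      For comparison, here's roughly what "supply the expected certificate" can look like in Qt when it does go smoothly (a sketch assuming a self-signed expected.pem exported from the server; the path and URL are made up):

          // Trust exactly the expected (self-signed) certificate,
          // instead of disabling validation altogether.
          QSslConfiguration conf = QSslConfiguration::defaultConfiguration();
          conf.setCaCertificates(QSslCertificate::fromPath("expected.pem"));

          QNetworkRequest request(QUrl("https://internal.example/feed"));
          request.setSslConfiguration(conf);

          QNetworkAccessManager manager;
          QNetworkReply *reply = manager.get(request);  // any other cert now fails validation

      And true to form, QSslCertificate::fromPath() silently returns an empty list if the file or format is wrong - exactly the kind of failure mode described above.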

      Makes me think of how hard PGP is considered to be. Perhaps key distribution in any asynchronous cryptographic system is simply hard.

      • shermantanktop 5 days ago |
        Key distribution and revocation is pretty much the hard problem, at least in pragmatic terms. The details of cryptographic operations in code get a lot of scrutiny, and even then there are issues. But key management combines crypto complexity with distributed system complexity, and mixes that with human propensity for operational error.
      • chx 5 days ago |
        > Makes me think of how hard PGP is considered to be

        https://www.usenix.org/system/files/1401_08-12_mickens.pdf

      • IshKebab 5 days ago |
        Yeah the fact that on Linux the certificate bundle can be in literally 10 different locations depending on the distro is pretty embarrassing too.
        • lyu07282 5 days ago |
          10? Ridiculous! We need to develop one universal standard that covers everyone's use cases.
          • TheSpiceIsLife 5 days ago |
            Obligatory XKCD link
            • fragmede 5 days ago |
              927.

              9 is 3^2, 27 is 3^3

    • bluedino 5 days ago |
      A large company I worked at a few years ago had an internal Python channel in Teams for coding support.

      So many questions were about SSL issues; people would just ask how to disable the errors/warnings caused by not having the correct certificate chain installed. It was insane how many "helpful" people would assist in turning them off instead of simply fixing the problem.

      I started showing people the correct way to fix the issue and also created documentation to install the internal certificate server on our Ubuntu servers (I think they had it working on some of the RHEL machines). I was a contractor so I received an $80 bonus for my efforts.

      • gertop 5 days ago |
        > instead of simply fixing the problem.

        Your view is probably skewed because you were the expert but I can assure you that fixing certificate issues is not a simple process for the vast majority of us, especially 15 years ago.

      See the sibling comment by lucb1e for a description of what the typical experience is like when trying to solve such an issue.

      • ethbr1 5 days ago |
        > Python channel in Teams for coding support. So many questions were about SSL issues

        I learned the other day that Python doesn't support AIA chasing natively.

        https://bugs.python.org/issue18617

        (Certs configured that way are technically incomplete, but because other browsers etc. handle it, it's now a "python breaks for certificates that work for other pieces of software" situation)

        • anttihaapala 5 days ago |
          The issue was migrated to GitHub, so the more up-to-date discussion is in https://github.com/python/cpython/issues/62817
          • consp 5 days ago |
            This discussion is just "do it because some browsers do it" without any reasoning about why (or why not) you should do it. The Firefox approach is, I guess, the best compromise between user annoyance and developer annoyance, but it's still a compromise against proper TLS.
        • richm44 5 days ago |
          Downloading things from the AIA fields would mean triggering HTTP/HTTPS requests to an untrusted URL from a certificate you haven't verified - not a good idea. What Firefox does is cache intermediates that it has seen elsewhere; the Windows TLS stack can fetch additional certs from Windows Update on demand (and actually starts with only a small bundle of trusted roots). There is no good solution for incomplete chains other than getting the sites fixed (or using a provider like Cloudflare that solves it for them).
        • zerocrates 5 days ago |
          I don't think I've seen anything but a browser ever do this, fixing an incomplete chain. curl, wget, several different programming languages, everything just fails to verify.

          I can understand why it wouldn't be supported, but you also see why users and developers experience this as just "SSL/TLS just gives you these weird errors sometimes" and pass around solutions to turn off verification.

      • TechDebtDevin 5 days ago |
        You'd be surprised how many companies with insanely valuable IP (especially in the startup space) do not use vaults/secret managers and store keys in plain text files. It's pretty astonishing tbh.
        • xvector 5 days ago |
          Even at large companies. Secrets management was not even being done across large swaths of FAANG companies until ~2020. I know some people who made a very lucrative career out of enabling secrets management at these orgs from 2010-2020.
      • TeMPOraL 5 days ago |
        > instead of simply fixing the problem.

        No such thing when certificates are involved.

        You basically have two options to do it "correctly":

        1) Walk a path of broken glass and razorblades, on your naked knees, through the depths of hell, trying to get a complex set of ancient tools and policies that no one truly understands to work together. One misstep, the whole thing seizes up, and good luck debugging or fixing it across organizational boundaries.

        2) Throw in the towel and expose the insides of your org, and everyone you come into contact with, on the public Internet, so you can leverage "Internet-standard" tools and practices.

        One of the fundamental issues is that doing SSL properly breaks a basic engineering assumption of locality/isolation. That is, if I'm making a tool that talks to another tool (that may or may not be made by me too) directly, I should only care about the two tools and the link between them. Not the goddamn public Internet. Alas, setting SSL means either entangling your tool with the corporate universe, or replicating a facsimile of the entire world locally, just so nothing in the stack starts whining about CAs, or that self-signed certs smell like poop, or something.

        Like, seriously. You make a dumb internal tool for yourself, with a web interface. You figure you want to do HTTPS because browsers whine. Apparently the correct way of doing this is... to buy a domain and get a cert from LetsEncrypt. WTF.

        The whole philosophy around certificates is not designed to facilitate development. And guess what: I too sometimes get requests to give a tool the ability to skip some checks to make product testing possible, and it turns out that the whole communication stack already has flags for exactly that, for exactly that reason.

        EDIT:

        Imagine an arm broke off your coat hanger. You figure you'll take a metal bracket and two screws and fix it right there. But as you try, your power drill refuses to work and flashes some error about "insecure environment". You go on-line, and everyone tells you you need to go to the city council and register the drill and the coat hanger on a free Let's Construct build permit.

        This is how dealing with SSL "correctly" feels.

        • arccy 5 days ago |
          the network is never secure, that's why there's all this stuff going on about "zero trust"
          • ninkendo 5 days ago |
            Please explain to me about how the “network” between my browser and my kubernetes dev installation on the same computer is insecure.
          • TeMPOraL 5 days ago |
            Nothing in life is ever secure. "All this stuff going on about ''zero trust''" is a broad and diverse mix of good practices, hot air, fear, misconceptions about reality, and power seeking. I'd dare say that in a big way, its practical effects are, intentionally or otherwise, disenfranchising workers, screwing with their ability to do their jobs, and generating huge costs and threat exposure across the board. But it's sure nice if you're a supplier in the "zero trust" market.

            Also, not everything is - or should be - on the Internet; there exists more than one network. Different systems have different needs and risk profiles. Failing to recognize that fact, and trying to apply the same most strict security standards to everything doesn't lead to more security - it leads to people caring less, and getting creative with workarounds.

        • im3w1l 5 days ago |
          Regarding your example, it really does seem like the direction the world is moving.
          • TeMPOraL 4 days ago |
            It's not a coincidence. Same incentives are at play, same justifications given - except when it comes to computers, even tech people seem much less willing to question them than their equivalents in other areas of policy and enterprise.
    • ahoka 5 days ago |
      The number of times I have to make this comment on code reviews, or undo the madness and just add the certificate to the script/container and enable validation, is insane.
    • concerndc1tizen 5 days ago |
      Just add `--kubelet-insecure-tls`

      that solves the problem!

      https://github.com/kubernetes-sigs/metrics-server/issues/196

      The number of comments and blogs/guides that recommend this is astonishing. And the lack of a proper solution is frustrating.

  • immibis 5 days ago |
    (if you have MITM)
  • duskwuff 5 days ago |
    This seems a little overblown, especially towards the later points.

    > 1. Malicious Executable loader with stealth functionality

    TL;DR the client downloads Python from python.org over HTTPS. This isn't great (especially since it's hard-coded to 3.12.4), but there's no obvious exploit path which doesn't involve both MITM and user interaction.

    > 2. Browser Hijacking + Executable Download (Software Upgrade Context)

    TL;DR the client downloads an RSS file over HTTPS and will conditionally prompt the user to open a URL found in that file. This is even lower risk than #1; even if you can MITM the user and get them to click "update", all you get to do with that is show the user a web page.

    > 3. RSS Feeds (Arbitrary URL injection)

    The researcher seems confused by the expected behavior of an RSS client.

    > 4. Decompression library attack surface (0-click)

    If you can find an exploit in zlib, there are much worse things you can do with that than attacking a torrent client. Decompressing input is assumed to be safe by default.

    • consp 5 days ago |
      > If you can find an exploit in zlib, there are much worse things you can do with that than attacking a torrent client. Decompressing input is assumed to be safe by default.

      Any (e.g. HTTP) server supporting stream compression comes to mind.

      • duskwuff 5 days ago |
        Or, on the client side, any software that uses libpng to render PNG images (since that's using deflate on the inside). There are probably even more direct exploits against qBittorrent than MITMing the GeoIP database download.
    • sdefresne 5 days ago |
      Those are minor if certificate errors are not ignored.

      Since the original issue is that SSL errors are ignored, all those HTTPS downloads were effectively downgraded to HTTP downloads in practice (no need to defeat TLS to attack).

      Or to say it another way: due to the ignored SSL errors, all those HTTPS URLs gave a false sense of security, as reviewers would think them secure when they were not (due to the lack of SSL validation).

      • notpushkin 5 days ago |
        You still need to MITM the connection, though. I think this is more of a risk if you live in a dictatorship, but even a rogue ISP or Wi-Fi hotspot would do. So yeah, definitely not theoretical.
    • ufmace 5 days ago |
      I agree. Calling this an "RCE Vulnerability" is ridiculously exaggerated.

      What's next, are we going to declare that web browsers have an "RCE Vulnerability" because they allow you to download programs from any site, which may or may not be secure, and then execute them?

      Or, hey everyone, did you know that if you live in an authoritarian state, the Government can do bad things to you?

    • result2vino 5 days ago |
      Yep. I hate to be this negative, but... Christ, security ‘researchers’ will really grasp at the most remote straws for a bit of notoriety. I’d respect this more if it were documented honestly. How it’s been done here, however, has just left me rolling my eyes.
  • EVa5I7bHFq9mnYK 5 days ago |
    Thank you. Uninstalled.
    • cbg0 5 days ago |
      If you knew how common a thing this is, you'd probably just uninstall everything.
      • EVa5I7bHFq9mnYK 5 days ago |
        Thank you, BRB.
        • ykonstant 5 days ago |
          and that was the last time anyone saw EVa5I7bHFq9mnYK online
          • EasyMark 5 days ago |
            R.I.P. bruv. I wonder why people always overreact to stuff like this. qBittorrent is a great piece of software and I pay homage to the developers.
    • EasyMark 5 days ago |
      You might as well uninstall everything on your computer. Rust isn't immune to stuff like this either; this is a logic/security mistake.
  • sneak 5 days ago |
    Even with a proper certificate check, downloading and running a remote executable is by definition an RCE vulnerability.

    Syncthing does this too (though presumably with a certificate check). Automatic unattended autoupdate is logically indistinguishable from a RAT/trojan.

    • gertop 5 days ago |
      > Even with a proper certificate check, downloading and running a remote executable is by definition an RCE vulnerability.

      It literally is not.

    • bmacho 5 days ago |
      > Automatic unattended autoupdate is logically indistinguishable from a RAT/trojan.

      What about whether the automatic unattended autoupdate comes from the same people you downloaded the original program from, or not?

      • TeMPOraL 5 days ago |
        Does it matter? Do you consider them a trusted party indefinitely?

        Think at the scale of years, and think of e.g. Microsoft or Adobe when pondering this question.

        • ripdog 4 days ago |
          Then just turn it off. qBT isn't Windows; it doesn't demand autoupdate.

          That said, you really shouldn't be running outdated torrent clients, like any network-connected programs. Case in point - the topic of this thread.

      • sneak 4 days ago |
        Absolutely not. GitHub is usually used as a CDN for updates distributed in binary form; it is run by Microsoft.

        If I download source and build and run it, and it downloads binaries from Microsoft and runs those, that isn’t remotely “the same people”.

      • account42 3 days ago |
        As others already have pointed out, people can change and trusting them during installation doesn't mean you want to have to trust those same people for as long as you use the software.

        But also, SSL certificates don't certify the people you are connecting to; they certify control over a domain, which can change hands for various reasons.

    • echoangle 5 days ago |
      > Even with a proper certificate check, downloading and running a remote executable is by definition an RCE vulnerability.

      I have to disagree here; the vulnerability part is that it can be exploited by a third party. Auto-update itself isn’t really an RCE vulnerability because the party you get the software from has to be trusted anyways.

      • TeMPOraL 5 days ago |
        > the party you get the software from has to be trusted anyways.

        Which is a big problem in itself, one that's rarely talked about in such terms.

        Me getting some software only means I trust the party I got it from at that moment in time, for that particular version of the software. It doesn't imply I trust that party indefinitely. This is the reason why so many people hate automatic updates (and often disable them when possible): they don't trust the vendor beyond the point of initial installation. They don't trust the vendor not to screw them over with UX "improvements" or license changes or countless other things that actively make users' lives miserable.

        Think about Windows and Microsoft. You can't simultaneously say you don't trust them because of their track record of screwing with their users and enshittifying their products, and say they're a trusted first party in your Windows installation. They aren't - they can and will screw you over with some update.

        In this sense, it's not a stretch to compare unattended updates with an RCE vulnerability. Just because the attacker is the product vendor doesn't mean they're not going to pwn your machine and make you miserable. And just because their actions are legal doesn't make them less painful.

    • dist-epoch 5 days ago |
      Clicking "Yes" on a "Do you want to upgrade to the latest version?" is not fundamentally different.
  • fulafel 5 days ago |
    What's considered the most secure BitTorrent app?
    • niceguy4 5 days ago |
      qBittorrent after the most recent update...
      • johnisgood 5 days ago |
        Why not Transmission?
        • EasyMark 5 days ago |
          Transmission is great if you're just getting Linux images, but it's much easier to configure qBittorrent for stuff like VPN lockout and such.
          • ndsipa_pomu 5 days ago |
            It's pretty easy to combine Docker containers for torrenting and a VPN so that the torrenting doesn't get any network access until the VPN successfully connects. However, I use qBittorrent myself (containerised, of course).
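
            For illustration, here's a minimal compose-file sketch of that kind of lockout (assuming the community qmcgaw/gluetun VPN container and the linuxserver qBittorrent image; the provider settings are placeholders):

              services:
                gluetun:
                  image: qmcgaw/gluetun
                  cap_add:
                    - NET_ADMIN
                  environment:
                    - VPN_SERVICE_PROVIDER=mullvad   # placeholder: use your provider
                    - WIREGUARD_PRIVATE_KEY=...      # deliberately elided
                qbittorrent:
                  image: lscr.io/linuxserver/qbittorrent
                  # Shares gluetun's network namespace: if the VPN isn't up,
                  # this container has no route out at all.
                  network_mode: "service:gluetun"
                  depends_on:
                    - gluetun

            The network_mode line is what gives you the lockout: the torrent container never gets a network stack of its own, so a dead VPN means no traffic rather than leaked traffic.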
          • johnisgood 4 days ago |
            Why for Linux images only? I use it with everything. You do not even need to use the GUI; there is transmission-cli. There is transmission-daemon as well, controlled by transmission-remote (or Transmission's web interface), meaning that you can use it on a seedbox.
    • 0points 5 days ago |
      The one in a restricted container.
      • steelframe 5 days ago |
        This is exactly what I do with any software that talks to the Internet. However, I'd still really, really like an advanced adversary not to have arbitrary RCE on my machine, whether it's in a container or not. Any zero-days in my kernel that said adversary may have in their back pocket are then exposed for exploitation.
      • fulafel 5 days ago |
        Containers aren't strong security boundaries, so the question still remains. If you get RCE in a containerized app, you can tickle e.g. host kernel bugs, container runtime bugs, etc.
    • ripdog 4 days ago |
      Without a formal audit of a variety of BT clients, this isn't really an answerable question. Just because this one issue was discovered in qBT doesn't mean that there are hundreds more in it, or that Transmission, say, has none.
    • concinds 4 days ago |
      There are none. They connect to thousands of untrusted peers, accept incoming connections, all in C++ code, and none of them are sandboxed. It's laughable.
  • 0x38B 5 days ago |
    For compiling and running the latest version, https://github.com/userdocs/qbittorrent-nox-static is a nice helper script to build a static binary using Docker - I wanted to run 5.0.0 using libtorrent 1.2, and found the script by far the easiest way.
    • TechDebtDevin 5 days ago |
      *inserts backdoor*
      • agartner 5 days ago |
        There are attestations that the binaries were built via CI:

        https://userdocs.github.io/qbittorrent-nox-static/artifact-a...

        Here's a verification of the latest build:

          gh attestation verify x86_64-qbittorrent-nox -o userdocs
          Loaded digest sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 for file://x86_64-qbittorrent-nox
          Loaded 1 attestation from GitHub API
          ✓ Verification succeeded!
        
          sha256:af9ceace3fea1418ebd98cd7a8a7b6e06c3d8d768f4d9c9ce0d3e9a3256f40d9 was attested by:
          REPO                             PREDICATE_TYPE                  WORKFLOW
          userdocs/qbittorrent-nox-static  https://slsa.dev/provenance/v1  .github/workflows/matrix_multi_build_and_release_qbt_workflow_files.yml@refs/heads/master
  • kasabali 5 days ago |
    > If you click or hit enter on the auto-selected ‘Yes’ option, qBittorrent will then *download, execute the .exe*

    No shit.

    Yet another case of "security" people making a mountain out of a molehill to make a name for themselves.

    Linus was right :p

  • peanut-walrus 5 days ago |
    Not to downplay this vulnerability, but I feel like relying on the (valid TLS cert + domain name) combination as the only line of defense for code paths that allow remote code exec is a recipe for disaster. At a minimum, if your application is downloading and executing some artifact from the internet, it should always be pinned to a particular version of the artifact and should verify the hash of the downloaded artifact before executing.
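
    To make that concrete, here's a minimal Qt-flavored sketch (illustrative names of my own, not qBittorrent's actual code), where the expected SHA-256 is pinned into the client ahead of time rather than fetched over the same channel as the artifact:

      #include <QCryptographicHash>
      #include <QFile>

      // Illustrative only: refuse to run a downloaded artifact unless it
      // matches a SHA-256 hash pinned at build/release time.
      bool artifactMatchesPin(const QString &path, const QByteArray &expectedSha256Hex)
      {
          QFile f(path);
          if (!f.open(QIODevice::ReadOnly))
              return false;

          QCryptographicHash hasher(QCryptographicHash::Sha256);
          if (!hasher.addData(&f))  // streams the whole file through the hasher
              return false;

          return hasher.result().toHex() == expectedSha256Hex.toLower();
      }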
    • hypeatei 5 days ago |
      I think that would be challenging due to the nature of a potential man-in-the-middle attack here. An attacker could view and change the contents of the request, making the hash check useless (other than for integrity).

      Automatic updates and/or checks against a domain from a desktop app are a security angle that doesn't seem to get much attention overall. There are other scenarios, like a hostile domain takeover (e.g. the original author stops renewing), for which I haven't found a good solution.

      • dinosaurdynasty 5 days ago |
        You can sign updates with an offline key (ideally a hardware key); this is what APT-based repositories do/allow.
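
        Roughly like this (a libsodium sketch using Ed25519 detached signatures; the function and key names are mine, not APT's or any particular project's):

          #include <sodium.h>

          // The publisher signs each release with a key kept offline; only
          // the public key ships with the client, so a compromised download
          // server or CA can't forge an update.
          bool updateSignatureValid(const unsigned char *artifact, unsigned long long len,
                                    const unsigned char sig[crypto_sign_BYTES],
                                    const unsigned char publisherPk[crypto_sign_PUBLICKEYBYTES])
          {
              if (sodium_init() < 0)  // safe to call more than once
                  return false;
              return crypto_sign_verify_detached(sig, artifact, len, publisherPk) == 0;
          }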
        • hypeatei 5 days ago |
          Sure, but how do you deal with expiration or revocation of that key? If someone is using an old version which doesn't know about the new key, then you're back at square one, right?
          • freedomben 5 days ago |
            Linux package managers like apt and dnf deal with this as well. When a key is getting old, you generate a new key with an updated expiration and push it out signed with the old key, so clients can verify it.

            If the old key expires before a new key is delivered, then you have a problem. This has happened to me a few times and it is a pain in the butt. You basically have to disable key checking in order to receive the new key, which breaks the security.

            • Sleaker 5 days ago |
              I would say it doesn't break it; it means you must manually inspect the key to verify it is indeed being published by the source you expect. But that's kind of the point, right? If automated checks don't work, then you have to rely on the user doing a manual inspection.
          • ramchip 4 days ago |
            A good solution to this is to have multiple roles and use threshold signatures: https://theupdateframework.io/
    • gruez 5 days ago |
      >At a minimum, if your application is downloading and executing some artifact from the internet, it should always be fixed to a particular version of the artifact and it should verify the hash of the downloaded artifact before executing.

      so auto-updaters are out?

      • xelamonster 5 days ago |
        My gut instinct was to agree with you, but actually yes - there are some places where I definitely want to be aware and/or involved when software updates, and the torrent client is one of them. Not that it should force you to go download and install your own updates; I'd just prefer it to notify me and wait for approval.

        Edit to note I don't quite agree with GP either: I see their point, but cert-based security is pretty much the best we've got as far as I'm aware, and likely what I'd use if designing this system.

      • im3w1l 5 days ago |
        A cryptographic signature (e.g. PGP) seems prudent. In addition to TLS, I mean.
    • bjoli 5 days ago |
      I think verifying a signature is the lowest bar. If you update the software often enough you should have plenty of chances to do key rotation.
    • Negitivefrags 4 days ago |
      On Windows at least, you can use a code-signing certificate in your build tooling and ask the OS to verify any binaries that you download. Just make sure you use a timestamping server for your code signing, or things will break when the certificate expires.
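
      For the curious, something like this with the WinVerifyTrust API (a sketch; note it only proves the file carries a valid, trusted Authenticode signature, not that it was signed by you specifically, so a real updater should also pin the expected signer):

        #include <windows.h>
        #include <wintrust.h>
        #include <softpub.h>
        #pragma comment(lib, "wintrust")

        // Ask Windows to validate the Authenticode signature of a file we
        // just downloaded, before we ever execute it.
        bool hasValidAuthenticodeSignature(const wchar_t *path)
        {
            WINTRUST_FILE_INFO fileInfo = {};
            fileInfo.cbStruct = sizeof(fileInfo);
            fileInfo.pcwszFilePath = path;

            WINTRUST_DATA wtd = {};
            wtd.cbStruct = sizeof(wtd);
            wtd.dwUIChoice = WTD_UI_NONE;               // never pop dialogs
            wtd.fdwRevocationChecks = WTD_REVOKE_WHOLECHAIN;
            wtd.dwUnionChoice = WTD_CHOICE_FILE;
            wtd.pFile = &fileInfo;
            wtd.dwStateAction = WTD_STATEACTION_VERIFY;

            GUID policy = WINTRUST_ACTION_GENERIC_VERIFY_V2;
            const LONG status = WinVerifyTrust(nullptr, &policy, &wtd);

            wtd.dwStateAction = WTD_STATEACTION_CLOSE;  // release verifier state
            WinVerifyTrust(nullptr, &policy, &wtd);

            return status == ERROR_SUCCESS;
        }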
  • dgfitz 5 days ago |
    Ask ChatGPT: which open-source codebases have active SSL vulnerabilities?
  • ptx 5 days ago |
    Although it wasn't the cause of this particular vulnerability, this kind of application, which communicates with large numbers of potentially malicious nodes, seems like it would really benefit from memory safety, but all the current implementations seem to be written in C++. (The article does mention the potential for this kind of vulnerability in point 4.)

    Even Deluge, which is written in Python, relies on libtorrent which is written in C++.

    I don't suppose there is a modern fork of the old Java-based Azureus client? Many BitTorrent clients nowadays split the GUI from the daemon process handling the actual torrenting, so using Java for the daemon and connecting it to a native GUI could strike a good balance between security, performance and user experience.

    • xelamonster 5 days ago |
      I'm going to ask a lazy question for the sake of discussion instead of figuring it out myself; feel free to ignore this if I should just Google it:

      Where would one start in building an alternative to libtorrent? Have there been any attempts (or successes)? Any functional clients that use other implementations?

      • result2vino 5 days ago |
        libtorrent’s configuration documentation gives a glimpse into the massive hidden complexity of writing a good, performant, resilient client.
    • 65a 5 days ago |
      There are several pure-Go BitTorrent libraries, going by a cursory search.
      • ptx 5 days ago |
        Are there any GUI clients based on those libraries? Wikipedia's list[0] doesn't include any.

        But it does include, I see now, exactly what I was asking for – apparently there's an actively developed fork of Azureus called BiglyBT[1].

        [0] https://en.wikipedia.org/wiki/Comparison_of_BitTorrent_clien...

        [1] https://github.com/BiglySoftware/BiglyBT

        • iso8859-1 4 days ago |
          It's heartwarming to see that the spirit behind Azureus is still alive. SWT might not be what the Duke himself wants in a Java GUI framework, but it's practical, and I remember the "chunks bar" in the Azureus GUI fondly. I'll enjoy firing up BiglyBT after all these years. Using a largely memory-safe language makes a lot of sense for P2P software.
      • Groxx 5 days ago |
        Potentially worth pointing out that Go is memory safe only when single threaded (races can corrupt memory), and this kind of application is very likely to use multiple threads.

        But I do also generally expect it to be safer than C++. The race detector prevents a lot of badness quite effectively because it's so widely used.

        • 65a 4 days ago |
          Go is safe from the perspective of RCEs due to buffer overflow, which is what matters here. Happy to be enlightened otherwise, but "I broke your (poorly implemented, non-idiomatic, please use locks or channels ffs) state machine" is a lot better than "I am the return instruction pointer now"
    • Svenskunganka 4 days ago |
      There is rqbit, which is written in Rust and does not rely on libtorrent: https://github.com/ikatson/rqbit
  • rustcleaner 4 days ago |
    This, folks, is why when I LARP as a QBittorrent-wielding copyright infringer, I LARP using Qubes OS!

    Qubes OS: Shut it, I'm LARPing (in minecraft)!

  • PenisBanana 4 days ago |
    If you had seen the qBittorrent code . . . it's _awful_.

    Pages-long uncommented functions, single-spaced, appearing most like the prison notebooks of a wrongly-incarcerated psychotic. No testing of any return values at all (in the small part - a few packed pages - of the code that I looked at).

    There was some field, and if it got a correct 3-char (instead of the usual correct 2-char) value, the program would crash or something a minute or so later (I forget). As I was paid to program C++ ~~once~~ twice about 20 years ago, and prompted by a "why don't _you_ have a look at it" message from a maintainer (which was 100% fair enough, I thought), I ran it in a debugger. I got to the wrong & correct value(s) being read in from the GUI . . . and started following it/them . . . and then . . . so now there's a -1 being passed around, and now everything just carries on, for a while.

    Eventually the wrong-valued run would crash in some fairly remote function, with a wrongly-incarcerated psychotic's error message.

    One of the real qBittorrent programmers did, then, fix it in the next release. But anyhow ...