• DeepYogurt a day ago |
    Interesting. This is probably a good thing to have around as a baseline for all the IoT crap out there.
  • silisili a day ago |
    What's to stop the bad actors from just printing the logo on their gear anyways? Like they do with UL and N95?
    • anotherhue a day ago |
      The vendor is supposed to check, I think. Not that that makes sense in a world of commingled inventory.
      • likeabatterycar a day ago |
        I'm sure Amazon - whose store is mostly generic Chinese schlock nowadays - will check.

        Not that it matters, posters on this very site who claim to care will continue buying stuff off AliExpress, proud they got it for pennies on the dollar.

        Look ma, a mini PC for $22! And they didn't even charge for the preinstalled malware!

        Has anyone ever considered this junk is sold at a loss as a price of doing business, to expand a PRC-controlled botnet?

        • simne 20 hours ago |
          Looks like you just haven't dealt with bad traders on these platforms. I once found a product on a local aggregator that was too cheap to be good (unfortunately, at that moment only two stores were selling it).

          I saved their number in my notebook, and while out shopping I called them from a bus stop; they answered me with some nonsense.

          I finished my shopping, walked around a bit, then opened the platform again, and those shops had already disappeared. Less than an hour.

          In another case I managed to place an order and paid by card, and that shop disappeared too. A week later I received an SMS from the bank: "your payment has been returned to your account".

          • likeabatterycar 20 hours ago |
            Sorry, my post was sarcasm. If English is a second language, I can see where that could be lost in translation. I don't expect any of these vendors, especially Amazon third-party sellers, to check.
            • simne 20 hours ago |
              You are welcome! I just take these things very seriously and try to use my knowledge to make better products/services. And sometimes I'm really surprised, in a good way.
        • knowitnone 16 hours ago |
          Assertion without proof is paranoia. Not saying you are wrong, but go get some proof first.
    • ehaskins a day ago |
      You have to have rules before you can enforce them...

      It looks like part of the label [1] will include a QR or link to a public registry, so in theory you can easily confirm the device has actually been certified.

      [1] https://docs.fcc.gov/public/attachments/FCC-23-65A1.pdf point 42

    • Jtsummers a day ago |
      They describe it as being like Energy Star, which suggests they'll have a consumer-accessible registry, as described here:

      https://www.ul.com/news/ul-solutions-named-lead-administrato...

      > UL Solutions will also work with the FCC and program stakeholders to develop a national registry of certified products that consumers can access via QR code on the label. The registry will have more detailed information about each product. Additionally, UL Solutions will serve as liaison between the FCC and other CLAs, as well as other key stakeholders. [emphasis added]

      and here:

      https://www.fcc.gov/CyberTrustMark

      > The logo will be accompanied by a QR code that consumers can scan, linking to a registry of information with easy-to-understand details about the security of the product, such as the support period for the product and whether software patches and security updates are automatic.

      This doesn't block full-blown counterfeit products (recreating certified devices including the label), but does address non-compliant devices trying to pose as compliant.
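
      In practice that presumably boils down to something like the following sketch (the registry endpoint and JSON fields are entirely hypothetical, since the registry doesn't exist yet):

          # Hypothetical sketch of "scan the label, check the registry". The URL
          # and field names below are invented for illustration.
          import requests

          def check_label(qr_payload: str) -> None:
              # Assume the QR code just encodes a registry URL for the product.
              entry = requests.get(qr_payload, timeout=5).json()
              print("Product:", entry.get("name"))
              print("Certified:", entry.get("certified"))
              print("Support ends:", entry.get("support_end_date"))
              print("Automatic updates:", entry.get("automatic_updates"))

          check_label("https://registry.example.gov/products/ABC123")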

      • likeabatterycar a day ago |
        > They describe it as being like EnergyStar which suggests they'll have a consumer accessible registry

        I've seen Energy Star logos for 30 years and never knew there was a public database, never thought to verify, and I don't think anyone else has either. The only thing Energy Star has been useful for is extracting rebates from utility companies and buying shitty dishwashers which were certain to be worse than what they were replacing.

        Verification is useless if no one knows about it, or if the data isn't actionable. I have verified UL mark numbers for questionable products, but they often resolve to some Chinese ODM you've never heard of, like 'Xionshang Industrial Electric Company', whose name certainly doesn't match the product label. Do you know the components haven't been swapped out since certification was achieved? Was the product actually sourced from there, or is it counterfeit? You have no way to verify any of that.

        UL issues holographic stickers, but I've seen those maybe 10% of the time, and they're probably just as easily faked.

        • Jtsummers a day ago |
          https://www.energystar.gov/ - Here's the registry.

          And I'm not saying this will be that useful, just that it's not going to be a sticker and nothing else. That would be truly useless and pretty much just make money for sticker makers.

          • timewizard 15 hours ago |
            "Find all the information you need to start shopping for ENERGY STAR certified products, including product details, rebates, and retailers near you."

            So, the product search works like a shopping cart site, and has no historical products, only new ones, and helpfully lists the prices.

            Who is this meant to benefit?

      • knowitnone 16 hours ago |
        How many people search the registries when they buy stuff? I've only done it once, and the product (made in China) looked janky and as cheap as can be.
    • simne 21 hours ago |
      > What's to stop the bad actors from just printing the logo on their gear anyways?

      This is a federal offense, like document falsification.

      So if somebody is caught doing it, they could go to jail.

      • knowitnone 16 hours ago |
        Except they are not in your country, so how can they go to jail, and why do they care about your laws?
        • simne 3 hours ago |
          Have you heard of customs?
  • Terr_ a day ago |
    Digging for more details, but a lot of the technical requirements (e.g. encryption, password handling, etc.) are still unclear.

    https://www.fcc.gov/CyberTrustMark

    • krunck a day ago |
      "Which products will be included in the program? The program applies to consumer wireless IoT products.

      Examples of eligible products include internet-connected home security cameras, voice-activated shopping devices, smart appliances, fitness trackers, garage door openers, and baby monitors."

      Ok, nothing I use then. I hope this comes to home and SMB network gear.

  • schnable a day ago |
    Probably overlaps with the EU RED Cybersecurity requirements for IoT devices that are supposed to go into effect this year: https://www.ul.com/services/ul-solutions-cybersecurity-advis...
  • jzebedee a day ago |
    The combination of government purchasing being required to carry the mark and major US surveillance tech manufacturers like Amazon leading the rollout makes this seem less like a cybersecurity concern and more like a protectionist carve-out.
    • readyplayernull a day ago |
      Laser safety glasses on Amazon are so fake that anyone could come up with a conspiracy theory about some country trying to blind the population of another.
      • knowitnone 16 hours ago |
        Cool, you get to sue Amazon for millions for the loss of one eye.
        • SketchySeaBeast 2 hours ago |
          You mean get stuck in a legal battle which takes years where Amazon denies responsibility and pushes everything to the original vendor, who for some reason went out of business 15 minutes before the suit was filed?
  • vessenes a day ago |
    Interesting. I'm not sure if the public comment period is over (the original proposal is dated August 2023), but this stands out to me from their paper:

        We propose to focus the scope of our program on intentional radiators that generate and emit RF energy by radiation or induction. Such devices – if exploited by a vulnerability – could be manipulated to generate and emit RF energy to cause harmful interference. While we observe that any IoT device may emit RF energy (whether intentionally, incidentally, or unintentionally), in the case of incidental and unintentional radiators, the RF energy emitted because of exploitation may not be enough to be likely to cause harmful interference to radio transmissions.
    
    I guess it is the FCC so this makes sense from their point of view. From my perspective, I'd like to see marks indicating:

    * If the device can be pointed to an alternate API provider if the company stops supporting it

    * If firmware has been escrowed / will be made available if the company stops supporting it

    * If device data is stored by the company

    * If that data is certified as end to end encrypted

    * Some marks for who / how the data is used

    • kube-system 21 hours ago |
      You might be getting a bit too far ahead of where the industry is at with some of those wishlist items. NIST's requirements are best practices that everyone agrees on, like:

      * data stored/transmitted is secured by some kind of means

      * the device supports software updates

      * the device requires users to authenticate

      * the device has documentation

      * you can report security vulnerabilities to the developer

      And even these are things that many devices fail to do, today. We gotta get the basics fixed first.
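
      As a concrete example of what even the "supports software updates" basic implies in practice, here's a minimal sketch of verifying a signed update image before flashing it (the file names and key are placeholders; Ed25519 via the Python cryptography package):

          # Sketch: refuse to apply a firmware image unless its signature verifies
          # against a key baked into the device. Names and the key are placeholders.
          from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
          from cryptography.exceptions import InvalidSignature

          VENDOR_PUBKEY = Ed25519PublicKey.from_public_bytes(bytes.fromhex(
              "d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"))

          def update_is_authentic(image: bytes, signature: bytes) -> bool:
              try:
                  VENDOR_PUBKEY.verify(signature, image)  # raises if tampered with
                  return True
              except InvalidSignature:
                  return False

          with open("update.bin", "rb") as f, open("update.bin.sig", "rb") as s:
              if update_is_authentic(f.read(), s.read()):
                  print("signature OK, safe to flash")
              else:
                  print("rejecting unsigned or tampered image")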

      But for now, you can presume the Netflix button on your TV remote can't be configured to point to an alternative API if Netflix goes away. :)

      • vessenes 21 hours ago |
        Oh, I'm with you 100%. The labels for my list will all be like black with a big X through them. But I'd propose that consumer behavior has a better shot of changing with labels.
      • saltminer 20 hours ago |
        > But for now, you can presume the Netflix button on your TV remote can't be configured to point to an alternative API if Netflix goes away. :)

        At least for Android TV devices, Button Mapper works for some.

        https://play.google.com/store/apps/details?id=flar2.homebutt...

      • rkagerer 16 hours ago |
        > the device supports software updates

        'Cause they need somewhere to load in those exploits!

        A hypothetical device which is all read-only (except perhaps for a very carefully crafted, limited set of configurable parameters) might in some cases be more secure than the bulk of what's on the shelves today. After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only')?

        • moritonal 7 hours ago |
          An interesting thought is that when devices couldn't auto-update, they had to work right out of the gate. I imagine this encouraged companies to do much better testing to reach a gold master before deploying.
          • jonhohle 3 hours ago |
            Less critical, but video games are the same way. Companies will press and ship discs with known-broken games and then issue a day-one patch of dozens of GB. The whole point of having a disc is that when all the servers are offline and the stores have shut down, the game is still playable.
          • helsinkiandrew 2 hours ago |
            Assuming these are network devices, it can be harder to certify that they will keep working if the network services they rely on become unavailable, or when the failure only occurs at scale.

            Case in point: 700,000 Netgear routers pinged the University of Wisconsin–Madison NTP server (hardcoded IP address) every second.

            https://en.wikipedia.org/wiki/NTP_server_misuse_and_abuse#Ne...

          • kube-system 42 minutes ago |
            This is true, but mostly only relevant to the expected user functionality. User acceptance testing in waterfall development doesn't often identify security vulnerabilities.
        • ocdtrekkie 3 hours ago |
          Yep! My Insteon home automation devices have no firmware update capabilities. They also have an extremely simple local RF protocol. They allow smart device behavior but are too stupid to be "compromised".
          • wpm 3 hours ago |
            Local RF? What protocol? Proprietary?
            • ocdtrekkie 3 hours ago |
              It's proprietary but relatively easy to reverse engineer; the details are out there. In the US it uses something like 914.5 MHz, and I can send instructions to devices with an extremely simple serial protocol on my computer.

              No Bluetooth, no Wi-Fi, no protocol sophisticated enough to distribute code. Just locally transmitted instructions.
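
              Roughly like this, for what it's worth (a sketch I haven't run as typed; the port and the device address are placeholders, and the frame layout is from the commonly circulated PLM serial docs):

                  # Minimal sketch: turn an Insteon dimmer on via a serial PowerLinc modem.
                  # Port name and the 3-byte device address are placeholders.
                  import serial  # pyserial

                  PORT = "/dev/ttyUSB0"
                  DEVICE = bytes([0xAA, 0xBB, 0xCC])  # hypothetical Insteon address

                  def send_standard(cmd1: int, cmd2: int) -> bytes:
                      # 0x02 = start of frame, 0x62 = send standard-length message,
                      # 0x0F = message flags (direct message, max hops)
                      frame = bytes([0x02, 0x62]) + DEVICE + bytes([0x0F, cmd1, cmd2])
                      with serial.Serial(PORT, 19200, timeout=1) as plm:
                          plm.write(frame)
                          return plm.read(9)  # the modem echoes the frame plus an ACK/NAK byte

                  send_standard(0x11, 0xFF)  # 0x11 = ON, 0xFF = full brightness
                  # send_standard(0x13, 0x00) would turn it off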

              • kube-system an hour ago |
                Do they require auth and use solid crypto? If not, they are vulnerable; it's just that the vulnerability requires the attacker to be within range.

                People thought old analog 900 MHz cordless phones were fine until others realized you could just tune a radio to that frequency and listen to your neighbors.

                • ocdtrekkie 21 minutes ago |
                  They're not vulnerable in any way that matters. If you manage to get a device in range you can... turn my lights on and off? You can't program them to do malicious things over the Internet. They don't have any sensitive information you can access. There's no damage you can do to them.

                  The problem with saying you need auth and crypto is that now you've added a bunch of complexity you have to maintain and update, and hence you've introduced vulnerabilities.

        • jayd16 2 hours ago |
          One thing to realize about read-only devices is that once they are cracked, they are cracked forever. The devs have finite time to secure the device; the hackers have infinite time to crack it. Once it's done, the game is up: all instances are now easily exploited.

          The more popular the device, the greater the known upside to an exploit. If the device can be updated, the exploitable timeframe is usually limited and it's unknown whether the attempt is even worthwhile.

          > After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only').

          Well, basically because any device of consequence has been trivially hacked by now. Think about game consoles or anything that would have DRM today.

          • mjevans 19 minutes ago |
            A really dumb camera that just has an interface that's polled for data by a remote host is more likely to be used in a secure way than a 'smart' camera that tries to remember state and talk to an external server itself.
            • kube-system 9 minutes ago |
              Not at all. Many old mjpeg IP cameras worked this way and they ended up on the open internet. Shodan is full of them, still.
        • kube-system an hour ago |
          Most software vulnerabilities aren't intentionally added backdoors, but flaws in the software that shipped on a device.

          > After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only').

          Quite a lot -- these are some of the easiest devices to hack. The only saving grace is that most of them are not connected to the internet so they are only vulnerable to local attacks. But garage doors, cordless phones, keyless entry, smart locks, smart home protocols, etc are notoriously vulnerable.

          The reason you don't hear about new vulnerabilities each week is precisely because they aren't updatable. The fact that they don't get updates that might introduce new vulnerabilities is not an advantage when they permanently keep their old ones.

          • ndriscoll 28 minutes ago |
            > Most software vulnerabilities aren't intentionally added backdoors, but flaws in the software that shipped on a device.

            Disagree, it is extremely common for e.g. TVs and smart phones to ship with malware included. In fact it is almost impossible to buy some classes of devices that aren't intentionally compromised.

            Having the thing never connect to the Internet at all and never receive updates is a far better security posture, and is the common recommendation among knowledgeable people for e.g. TVs.

            In practice, your neighbors are almost certainly quite a bit less malicious than whatever a smart device might talk to on the Internet. Your neighbor isn't going to hack your cordless phone. Your TV manufacturer is definitely going to drop malware onto it, disable functionality (i.e. damage it), etc.

        • hn_throwaway_99 an hour ago |
          > After all, how many widespread hacks do you read about on old, single-purpose fixed analog or digital devices (which in a sense are similarly 'read-only').

          Tons and tons? I don't understand this viewpoint at all. As the saying goes, "There is no 'Internet of Things', just an Internet of unpatched Linux devices." That is, the primary vector for malware is devices that aren't (or can't be) patched after vulnerabilities are discovered.

        • nyrikki an hour ago |
          You don't need access to persistent storage, especially with multi-call binaries like BusyBox, which often amount to built-in rootkits today.

          I actually just spent time last week getting rid of tftpd, telnetd, netcat, etc. on some IP cameras.

          You only need a few KB of RAM to have a bot, especially since it is almost the rule that embedded systems run everything as root.

          If you have the ability to do firmware extraction, look at just how bad the industry is now.
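
          A crude way to see this for yourself on an extracted image (the paths and the suspect list here are just illustrative):

              # Quick scan of an extracted firmware rootfs (e.g. the output of binwalk -e)
              # for leftover network daemons. Illustrative, not exhaustive.
              import os

              SUSPECTS = {"telnetd", "utelnetd", "tftpd", "nc", "netcat"}

              def scan(rootfs: str) -> None:
                  for dirpath, _, files in os.walk(rootfs):
                      for name in files:
                          if name in SUSPECTS:
                              print("found:", os.path.join(dirpath, name))

              scan("_firmware.bin.extracted")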

      • TeMPOraL 11 hours ago |
        The software update angle has already been commented on, but I'm not sure this one is a good idea either:

        > the device requires users to authenticate

        • kube-system an hour ago |
          No, it is a good idea. If someone is operating or changing settings on your baby monitor, doorbell camera, garage door opener, smart switch, light bulb, etc -- the developer should check to make sure that the actor doing so is authorized to do it.

          Why in the world would anyone want unauthenticated access to these devices?

          • ndriscoll 35 minutes ago |
            All of those things might have an authentication-free use case for e.g. a babysitter (maybe not to change settings, but to use). For personal networks, being on your local LAN is in practice a decent form of authentication, given the tradeoffs of otherwise having to manage credentials.
      • godelski 16 minutes ago |

          > But for now, you can presume the Netflix button on your TV remote can't be configured to point to an alternative API if Netflix goes away. :)
        
        It is Hacker News, so your statement is true UNLESS you're willing to hack your TV (but this shouldn't be a thing people _have_ to do):

        Warning: I haven't personally tried these.

        https://askanydifference.com/how-to-root-samsung-tv/

        https://wiki.samygo.tv/index.php?title=SamyGO_for_DUMMIES

    • drdaeman 21 hours ago |
      > If that data is certified as end to end encrypted

      This needs better and more detailed clarification. I've reverse-engineered a camera-equipped pet feeder, and the videos sent to the cloud (or to my emulating server, in my case) were only partially encrypted - I-frames were, P-frames were NOT. Someone ticked a "videos are encrypted" checkbox and still left the thing wide open.
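
      For contrast, actually encrypting the stream means every frame goes through an authenticated cipher before it leaves the device - roughly this sketch (key exchange and rotation omitted, which is of course the hard part):

          # Sketch: authenticated encryption of *every* frame, I and P alike.
          # Key distribution and rotation are deliberately out of scope here.
          import os
          from cryptography.hazmat.primitives.ciphers.aead import AESGCM

          key = AESGCM.generate_key(bit_length=256)
          aead = AESGCM(key)

          def seal_frame(frame: bytes, frame_type: bytes) -> bytes:
              nonce = os.urandom(12)                       # unique per frame
              ct = aead.encrypt(nonce, frame, frame_type)  # frame type bound as AAD
              return nonce + ct

          packet_i = seal_frame(b"...idr slice bytes...", b"I")
          packet_p = seal_frame(b"...p slice bytes...", b"P")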

      Then, of course, it's also a matter of ciphers and modes, authentication, key generation, transmission and storage, etc etc.

      Feels like encrypted storage and transmission alone require a whole label of their own, like the FCC's broadband facts label or the FDA's nutrition facts label, which outlines what data exists in the system, where the data is stored, how it's encrypted, how it's authenticated, and so on.

      Which is probably not happening until cryptography 101 becomes part of the general school curriculum and laypeople start to understand the basics. Without people asking real questions and refusing to purchase products from sloppy engineering companies (aka voting with their wallets*), companies will always wave it away with tried-and-proven "military-grade security" bullshit.

      ___

      *) That is, if there's even competition. When no one does things right (because consumers don't know and thus don't ask for it), there's nothing to pick from.

      • ttyprintk 5 hours ago |
        2023 FDA labeling for networked medical devices is conscious that a nurse might need to use the device on a network he/she doesn’t trust.
    • mystified5016 21 hours ago |
      Seems to me that this would wholesale rule out projects like the ESP32 open Wi-Fi driver. Or rather, in order to comply, Espressif would have to retool their chips to make "unauthorized" access to the raw radio hardware impossible. Sort of like how cellular modems are now.

      Seems reasonable from the FCC's perspective, but I'm not sure how I'd feel about it.

    • simne 21 hours ago |
      Some questions are already answered in the article - on the government side NIST and the FCC are responsible, and on the industry side deputies from large companies have agreed to participate, so now they will hold meetings and create some documents.

      So now any interested party (any person or entity, even a "group of hackers") can ask those responsible, or talk with the deputies, as their contact information should appear soon.

    • toddmorey 4 hours ago |
      The device doesn’t ship with a known, unchangeable admin password. The device doesn’t needlessly require Wi-Fi access for basic local functionality. [my wish list]
    • ok123456 4 hours ago |
      Can we add that it's self-repairable domestically?
      • dylan604 3 hours ago |
        What could we do to make something self-repairable domestically that would also make it not repairable otherwise? Like if you bought it here, but then took it with you internationally, would it suddenly not be repairable?
        • lcnPylGDnU4H9OF an hour ago |
          Curious whether a manufacturer could do hardware attestation similar to what's done for iPhones, while allowing the hardware and its attestation key to be swapped for a different set only if one has a certain private key.

          I don't do hardware at all, so this may be infeasible or a misunderstanding, but I imagine a scheme whereby one needs that key in order to properly change the key the hardware attestation firmware is expecting. The new attestation key is signed with a separate private key and verified by the firmware with the corresponding public key.

          Presuming that's feasible, it would only really work until that private key is leaked and our hostile trade partners pinky promise not to use it. Perhaps some licensing could be used to make the people who own the device responsible for repairing it at an approved repair shop, but that still has to be enforced.
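
          In software terms the shape of it might be something like this sketch (all keys and names made up; a real device would do the verification inside a secure element):

              # Sketch of the idea: firmware accepts a replacement attestation key only
              # if it arrives signed by a key the device already trusts. Hypothetical names.
              from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
              from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat
              from cryptography.exceptions import InvalidSignature

              repair_authority = Ed25519PrivateKey.generate()   # held by the authorized party
              TRUSTED_PUB = repair_authority.public_key()       # baked into the firmware

              def install_attestation_key(new_key_bytes: bytes, signature: bytes) -> bool:
                  try:
                      TRUSTED_PUB.verify(signature, new_key_bytes)
                      # ...write new_key_bytes into the secure element here...
                      return True
                  except InvalidSignature:
                      return False

              # What an authorized repair would do:
              replacement = Ed25519PrivateKey.generate().public_key().public_bytes(
                  Encoding.Raw, PublicFormat.Raw)
              assert install_attestation_key(replacement, repair_authority.sign(replacement))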

    • nimbius 3 hours ago |
      Another question: how does this work with open-source technology? The Banana Pi, for example, is often considered an IoT device.
  • floxy a day ago |
    Seems like good fodder for a tongue twister. Try saying it 10 times fast:

    - Must the Cyber Truck (Musk) bear the Cyber Trust Mark?

  • beams_of_light a day ago |
    Things like this are useless, in my mind, because hackers are always going to innovate and find ways around protection mechanisms. Today's "locked down" IoT device could easily become tomorrow's "vulnerable to an easily exploitable pre-auth RCE".

    What the government probably _should_ do is begin establishing a record of manufacturers/vendors that tracks how secure their products have been over a long period of time, along with an indication of how secure and consumer-friendly their products should be considered in the future. This would take the form of something like the existing travel advisories Homeland Security provides.

    Should you go to the Bahamas? Well, there's a level 2 travel advisory stating that jet ski operators there get kinda rapey sometimes.

    Should you buy Cisco products? Well, they have a track record of deciding to EOL stuff instead of fixing it when it's expensive or inconvenient to do the right thing.

    Should you buy Lenovo products? Well, they're built in a country that regularly tries and succeeds in hacking our infrastructure and has a history of including rootkits in their laptops.

    • svnt a day ago |
      Picking and choosing companies like that could work if it could somehow remain apolitical. This registry can work despite the tendency for these things to become political.

      What you’ve described is maybe more possible if provided by a Consumer Reports-style org that consumers could subscribe to.

      • Greyfoscam 21 hours ago |
        Wouldn't it be simpler to have a QR code below the symbol with anything relevant to make this work?
    • kube-system 21 hours ago |
      NIST isn't a bunch of dummies who don't know this. The requirements posed are not micromanagement of device design; some address your concern exactly... like a requirement that developers provide contact information to report vulnerabilities, and that device makers just can't ignore authentication entirely.

      But this is IoT stuff we're talking about here, not Lenovo/Cisco... but ReoLink/PETLIBRO/eufy/roborock/FOSCAM/Ring/iRobot/etc. Security (or the lack of it) in the IoT world is a whole different ball game. It isn't uncommon for IoT devices to be EOL on release date, or just lack authentication or encryption entirely.

      • timewizard 15 hours ago |
        > NIST isn't a bunch of dummies that don't know this

        They've provided thorough definitions and a label that implies they've all been understood by the manufacturer. It doesn't mean that this solves any real-world problem.

        > Security (or the lack of it) in the IoT world is a whole different ball game.

        Those can be described as IoT devices, but they're more appropriately categorized as "consumer electronics", and they often have a firmware update right out of the box. That's what makes this badging program an absurd idea with no meaningful outcome. This segment is not going to care.

        This isn't "Energy Star", where the purchased product doesn't have additional functionality that can be exposed or exploited through software. And no third-party testing can be exhaustive enough to prevent the obvious exploit from occurring.

        Even to the extent they can, it then enforces a product design that cannot be upgraded or modified by the user under any circumstances. Worse, the design frustrates the user's ability to do their own verification of the device's security.

        It's a good idea applied to the wrong category of products and users.

        • kube-system an hour ago |
          > Those can be described as IoT devices. They're more appropriately categorized as "consumer electronics"

          IoT devices are a subset of a much broader 'consumer electronics' category.

          > and often have a firmware update right out of the box.

          From major, established, mature companies, yes. Many device manufacturers in this category never issue firmware updates. Which is precisely why this is one of the requirements.

          > This segment is not going to care.

          Some may, some may not. The federal government will care, because they will be forced by law to comply.

          > no third party testing can be exhaustive enough to prevent the obvious exploit from occurring.

          Of course, no cybersecurity compliance plan can prevent exploits from occurring. If you try to address cybersecurity in that way, you will fail anyway. The point is to put controls in place that are achievable, measurable, and help to mitigate risk.

          > Even to the extent they can it then enforces a product design which cannot be upgraded or modified by the user under any circumstances.

          NIST's requirements require the opposite of this.

    • ryandrake 21 hours ago |
      When I buy technology today, I'm 10X more worried about the manufacturer deliberately changing, killing, or nerfing the product after I bought it than I am about hackers compromising it. This goes for connected hardware, IoT devices, and software.
      • elcritch 35 minutes ago |
        Oddly "hackers" are the ones who often revive defunct hardware or give users back control over their devices. Things like DRM laws seem to only enhance corporate interests.
  • JohnMakin 21 hours ago |
    Cool. I'd rather have a stamp that indicates a company will support their product for X number of years, and that if they don't, they will release the software as OSS so you can maintain it yourself. I have an extremely expensive scale that came with Wi-Fi support and an app; I only bought it 3 years ago, and half the features already don't work because they nuked the app and stopped supporting the scale. Did I need a smart scale? Absolutely not, and the more I think about stuff like this, the less I need any other "smart" devices, so now I seek to buy "stupid" devices as much as possible. I'm not sure what such security stamps are supposed to provide other than a false sense of security, as most things can be hacked eventually with enough determination or some unknown zero-day.
    • mxuribe 20 hours ago |
      Yeah, nowadays I try to buy many things that are "not smart" in order to avoid what you experienced with the smart scale. That being said, I wonder if what you're asking for is more on the warranty side, rather than the security/promise side? To clarify, I am 100% in agreement with you that after a company stops supporting a product, they should open source it (which could create a secondary ecosystem of techs who offer services to support said open source software if a person is not inclined to manage the OSS themselves, etc.)... However, technically, wouldn't a company's "promise" to support software be more like a warranty? And in that case, whatever government agency oversees warranties would need to nudge businesses to comply... Nevertheless, this cyber mark, a warranty on the software lifecycle, and other things are the LEAST that should exist nowadays.
      • heresie-dabord 3 hours ago |
        > try to buy many things that are "not smart"

        This is the best strategy, but let's be clear... consumers who make a purchase have a reasonable expectation of owning a durable product that does not increase the threat surface of their lives.

        This means that the product requirements should be clear and the supply chain must be secure.

        Until a "trust label" can guarantee these principles, the proposal is just another prop in a grand security theatre.

  • gibibit 21 hours ago |
    I wonder how much this is going to add to the cost/effort of creating a new IoT product for startups/small businesses?
    • simne 21 hours ago |
      I'm sure it will depend on how large a part of the product is made abroad, and in which country.

      I wonder how strict the regulations on Chinese software components will be. For parts originating in the EU/US/Australia/Korea they should be less strict, if the source can be proven.

    • duskwuff 20 hours ago |
      Based on the sorts of recommendations in [1], probably not to any meaningful degree, if at all. Much of what it's asking for is table-stakes functionality, along the lines of "have a factory reset feature", "use encryption when transmitting data", or "have a product support page" - things that any responsible developer should have been doing already.

      [1]: https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8425.pdf

    • mxuribe 20 hours ago |
      I honestly don't know... but isn't this sort of like when toasters first came out? I don't know for sure, but I guess toasters maybe didn't have the UL symbol at first... and probably some accidents happened, maybe house fires and such? Fast forward to nowadays and toasters tend to be pretty safe - well, if used properly and purchased from a reputable manufacturer that has been tested by entities like UL, etc. So yeah, maybe a little extra cost... but wouldn't we want at least some modicum of a signal of quality assurance for IoT devices, like we have for things like toasters?
  • jmclnx 21 hours ago |
    This is all well and good. You can have thousands of "marks of approval", but is the most important item required?

    User upgradability if the company folds or sunsets the product. When that happens, the user will need to buy a new device or live with a compromised device. Most will live with the compromised device.

    So, IMO, the product should be fully open source and easily upgraded in order to get the Cyber Trust Mark.

    • duskwuff 20 hours ago |
      > User upgradability if the Company Folds or Sunsets the product.

      This isn't something which a company can meaningfully guarantee to consumers. Even if it's technically possible for users to install their own software on a device - for that matter, even if the company goes out of their way to support it by releasing documentation and source code - there simply isn't enough interest from developers to build and maintain custom software for those devices. And the same goes for devices which depend on online services - those services cost money to run, and the number of users capable of and willing to run their own is minuscule.

  • crazygringo 21 hours ago |
    I'm interested in the actual details here --

    1) What are the requirements for the mark? E.g. no passwords stored in plaintext on servers, no blank/default passwords on devices for SSH or anything else, a process for security updates, etc.?

    2) Who is inspecting the code, both server-side and device-side?

    3) What are the processes for inspecting the code? How do we know it's actually being done and not just being rubber-stamped? After all, discovering that there's an accidental open port with a default password isn't easy.

    • kube-system 21 hours ago |
      https://nvlpubs.nist.gov/nistpubs/ir/2022/NIST.IR.8425.pdf

      Yep, pretty basic stuff, like 'require authentication', 'support software updates', etc
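
      For instance, the "no plaintext passwords on servers" part of (1) is about as basic as it gets - roughly this on the server side (a sketch; the scrypt parameters are just illustrative):

          # Sketch of the server-side half of "no plaintext passwords": store a
          # salted, memory-hard hash instead. Parameters are illustrative only.
          import hashlib, hmac, os

          def hash_password(password: str) -> tuple[bytes, bytes]:
              salt = os.urandom(16)
              digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
              return salt, digest

          def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
              candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
              return hmac.compare_digest(candidate, digest)

          salt, digest = hash_password("correct horse battery staple")
          assert verify_password("correct horse battery staple", salt, digest)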

      > 2) Who is inspecting the code, both server-side and device-side?

      UL is administering the program and they're going to come up with the requirements

      > UL Solutions will work with stakeholders to make recommendations to the FCC on a number of important program details, like applicable technical standards and testing procedures, post-market surveillance requirements, the product registry, and a consumer education campaign.

      https://www.ul.com/insights/us-cyber-trust-mark

    • simne 21 hours ago |
      Good questions. As I understand it, they spent months deciding who will be responsible and who will pay (and how much). The announcement happened after the budget passed Congress, so now they can put together a staffing table and hire people for the next steps.
      • simne 21 hours ago |
        Some questions are already answered in the article - on the government side NIST and the FCC are responsible, and on the industry side deputies from large companies have agreed to participate, so now they will hold meetings and create some documents.

        So now any interested party (any person or entity, even a "group of hackers") can ask those responsible, or talk with the deputies, as their contact information should appear soon.

    • bloomingkales 11 hours ago |
      Here are the requirements if you follow the China-bad politics:

      1) Don’t be a select Chinese product

      2) Be a select American product

      It’s not reaaaally 3D chess, but a relatively crude stand-in for a “Made in America” stamp, or “It’s American and definitely not Chinese.”

      The security practices are probably the same across products; it’s just the wrong time and the wrong presidency for China.

  • surfpel 21 hours ago |
    Only Gov approved spyware included!
  • rkagerer 15 hours ago |
    The real problem is that very few vendors are inclined to spend the time and money to make their products truly stable & secure. Instead we churn out a firehose of crap code for a sewage dump of cheap IoT products. I'm not sure how much a government-conceived seal will raise the bar of consumer expectations.

    I'd still put my faith in other indicators like a company's track record, third party audits, robustness of open source library choices where applicable, my own analysis of their stack and engineering choices based on signs I can observe about their product / interface / etc (there are usually several present), my own testing and so forth.

    I'd argue the generally accepted pace of consumer product development these days is reckless, and not sustainable if you want truly robust results.

    I would have been glad to see this step in the right direction if I weren't convinced all it will likely amount to in practice is security theatre. Here's hoping my skepticism is unwarranted.

  • 0xbadcafebee 15 hours ago |
    This is a bit scary. Knowing how software is developed, I know there's no government program that could actually ensure a device is secure. It's one thing to measure an electronic device's EMI or pump it full of power and see if it catches fire. But black box testing of software is itself a black art, as software security is a lot more complex than [typical] electronic design.

    The scary bit is that this label is going to be found to be ineffective, and then consumers may lose trust in government-issued safety stamps.

    • cookiengineer 14 hours ago |
      In Germany we had something like this from TÜV Süd, where they certified online shops and online banking websites for their security.

      Suffice it to say, those certification keywords became a Google dork for finding easy-to-hack pentesting victims.

      Now the BSI (the German cybersecurity agency, similar to CISA) has also started to push out certifications for BSI Grundschutz, which is an absolutely meaningless certificate and literally tests the bare minimum.

      The problem here is that there is no market; this cybersecurity crisis cannot be solved economically, because customers want a certificate without having to do further work. So they'll get it from whatever auditor accepts their money.

      This is how it's done, even for ISO 27001 and SOC 2 certifications. Nobody gives a damn if a single working student has 20+ role descriptions lying on their table. Findings are always ignored and never corrected.

      Cybersecurity policies and their effects over time need to be measurable before there can be meaningful certification processes.

      Additionally, there needs to be legislation that cannot be reinterpreted at will. Phrases like "reasonably modern" cannot be used in law text because they don't mean anything; instead, standardized practices have to be made mandatory requirements - preferably by a committee that is not self-policing, maybe even something like the EFF, FSF, OWASP, or the Linux Foundation.

    • est 12 hours ago |
      > I know there's no government program that could actually ensure a device is secure

      Well, there's SELinux and Tor.

      • LinuxBender 4 hours ago |
        I would be more specific than that. SELinux can be running while intentionally poorly written policies allow absolutely anything to happen. The risk being a checkbox mentality: [X] SELinux is technically running.
  • mikewarot 14 hours ago |
    This is equivalent to requiring an Underwriters Laboratories (UL) approval on every electrical appliance before settling on requirements for fuses or circuit breakers.

    No matter how good everyone in this trust mark program is, you're only one confused deputy[1] away from disaster.

    [1] https://en.wikipedia.org/wiki/Confused_deputy_problem

  • fulafel 13 hours ago |
    Many countries have been doing this already (usually based on this ETSI spec: https://www.etsi.org/deliver/etsi_en/303600_303699/303645/03...)
  • ngneer 13 hours ago |
    Who are these UL Solutions? They seem to have come out of nowhere and hit the jackpot, inserting themselves as arbiters for security. Smells a bit like how Common Criteria proffered independent certification labs, which were no panacea either.
    • amaterasu 13 hours ago |
      Underwriters Laboratories, UL. Look at the back of pretty much any mains powered device and you'll see their mark. They were founded 130 years ago, and test and warrant devices (typically high voltage) to be safe. Security is a new thing for them, but they're well suited to provide the services.
      • cesarb an hour ago |
        > Underwriters Laboratories, UL. Look at the back of pretty much any mains powered device and you'll see their mark.

        I just looked at the closest mains powered device I have here (a fancy humidifier/fan), and only saw an Inmetro mark, there's no UL mark at all.

        (My point is: plenty of people are not from the USA. I happen to have already heard that the UL is sort of the USA equivalent of our Inmetro, though like many things in the USA it's a private entity instead of a government entity, but the parent poster probably hadn't heard of that.)

  • trod1234 9 hours ago |
    This is doomed to failure.

    Cybersecurity best practices are a point-in-time snapshot, and the label will reflect the state at purchase time. How will that help people who have bought second-hand, or cases where items already on shelves suddenly have a vulnerability discovered? You really think they are going to go through the cost of sending those back?

    All software bugs can potentially be security bugs. This follows classic shock doctrine.

  • magic_smoke_ee 6 hours ago |
    NIST is involved (Dual_EC_DRBG).

    Verdict: nope.

    This is something that an independent, international cybersecurity nonprofit should be in charge of, not a standards org that shills for what we think may have been the NSA (BULLRUN).

  • delfinom 5 hours ago |
    This is basically going to become a monopoly program. Stores and governments will start mandating it for sales, like Energy Star. Then, because UL is the administrator, the costs to certify will skyrocket. Basically, this is going to ensure the only devices you can buy are those made by a select few megacorps.
  • mattmaroon 4 hours ago |
    It's as if the federal government doesn't realize nobody trusts it. Whether due to ineptitude or dishonesty, the only thing we can be sure of here is that we can't be sure about it.

    We need a blue ribbon commission on transparency, honesty, and good governance desperately. Let's reduce any federal agencies that make any sort of direct-to-citizen recommendations by 100% and instead spend that on rooting out bad incentives, misinformation, etc.

  • devwastaken 4 hours ago |
    The FCC doesn't do testing themselves; they just trust submitted paperwork. Tech companies get the "good one" certified, then swap the parts for cheaper ones.

    There is no regulation in tech. They own the fed.