• NavinF 15 hours ago |
    The fake audio in question: https://www.youtube.com/watch?v=WT-2p832IMk

    This was distributed by news sites. Of course they didn't bother contacting anyone involved to check if it was real.

    • unsnap_biceps 15 hours ago |
      Assume they did and the principal claimed it was fake. What do you do?

      I'm asking for real here. I don't know the right answer. Do you not publish anything that someone involved calls a fake, or do you publish it anyway?

      • samatman 15 hours ago |
        The real answer to this question is quite fundamental to journalistic ethics: you reach out to the source for comment, and include it in the reporting. In this case that would have been "I never said that, or anything like it, and I think it's an AI deepfake".

        Or words to that effect.

      • llm_trw 15 hours ago |
        That depends, do you assume people are innocent until proven guilty or do you enjoy lynchings?
        • mcphage 13 hours ago |
          Proven guilty by whom, and of what? Had the clip been real, it would have been shitty, but not a crime.
          • Brimaldo 12 hours ago |
            In the judeo-West, racism is a greater sin than any legally-defined crime. Many will literally cheer your violent murder if you publicly express views that violate our state-enforced globohomo mythology.
      • hyrix 14 hours ago |
        Journalists don't just publish whatever comes into their mailbox--at least, good journalists understand the potential that they are being manipulated. We should be wondering to what extent these journalists vetted the source of the clip. It's not that journalists need to publish an identity, but they ought to serve as a filter as much as they do as a megaphone. Without a credible story about how the sender got ahold of the clip and its provenance, why would they even believe that it's real? With so little context, how could a responsible journalist publish this kind of character assassination without any more research than one phone call to the purported speaker?

        Perhaps it would be different if there were a pattern of behavior behind the clip, supported by testimony from the speaker's colleagues or by previous incidents (not that it's the responsibility of journalists to conduct a trial in the court of public opinion, presenting the strongest arguments from each side). But lacking any actual effort to establish the credibility of the source, this is just lazy, click-counting ragebait, not journalism.

        Partly, it's also on the laziness and gullibility of the general public--but these are well-established features of the audience that should be no surprise to a trained journalist. To publish such clips without any real work to validate them is basic negligence.

    • nar001 15 hours ago |
      It took a police investigation to get to the truth. This was the first time something like this had happened, and even knowing it's AI, the clip sounds legit. I don't blame journalists for believing it.
      • beej71 15 hours ago |
        And I just will never trust these particular "journalists" again.
        • mcphage 13 hours ago |
          Which particular ones do you mean?
    • defrost 14 hours ago |
      > Of course they didn't bother contacting anyone involved to check if it was real.

          When the clip landed on the desk of Kristen Griffith, an education reporter at the Baltimore Banner, she thought it was going to be a relatively straightforward story of a teacher being exposed for making offensive remarks.
      
          But as is best-practice in journalism, Ms Griffith wanted to give the principal the chance to comment and tell his side of the story. So, she reached out to his union representative, who said not only did Mr Eiswert condemn the comments, but he didn’t make them.
      
          “He said right away, oh, we think this is fake… We believe it's AI,” she told the BBC. “I hadn't heard that angle” before.
      
          But when she published that explanation, her readers were not convinced. Far from raising questions about the clip’s veracity, it just fuelled backlash from people who thought the allegation of fakery was just an excuse or an attempt to evade accountability.
      • NavinF 14 hours ago |
        That's the exception that proves the rule and happened long after other news sites published the video I linked.
        • defrost 14 hours ago |
          That's the ground zero prime source for all other news feeds.

          It proves a completely different rule that's more common: "news sites" (aka Fox et al.) are usually well aware that the spin they put on stories is patently false, based as it is on isolated reports, on X/Twitter posts, on actual small-town newspapers, etc.

          They know full well that the immigrants are working in Ohio, that the real issue is a housing crisis, and that no one is eating the cats, dogs, or ducks.

          They knowingly propagate sensationalist and pandering takes on breadcrumb sized stories hoping they'll blow up and attract eyeballs in their target demographics.

          It's not that they don't bother to check veracity - they wilfully ignore anything in sources that might downplay the drama of stories.

          I wouldn't call these "News sites", they're pretty obviously infotainment industries.

  • jauntywundrkind 7 hours ago |
    It scares me a bit that AI has been so destabilizing so quickly, such a tool for a would-be "axis of upheaval" or smaller antagonists to help inflame and agitate the world.

    And it scares me just as much what the ever-so-likely "someone has to do something about this" reactions will be. There are ideas floating around like Originator Profile & Content Authenticity, which seem like voluntary ways to start letting sites vouch for their content, and that seems ok: letting sites create verifiable statements (a rough sketch of that idea follows the links below)...

    https://github.com/w3c/tpac2024-breakouts/issues/70 https://github.com/w3c/tpac2024-breakouts/issues/90
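
    To make "verifiable statements" concrete, here is a minimal sketch of the general idea rather than any particular spec: a publisher signs a hash of the content it vouches for, and anyone holding the publisher's public key can check the claim. The field names, the Ed25519 choice, and the "example-news-site.org" publisher are all illustrative assumptions, not part of Originator Profile or Content Authenticity.

        # Sketch of a publisher vouching for a clip via a signed statement.
        # Assumes the `cryptography` package; all names here are illustrative.
        import hashlib, json
        from cryptography.hazmat.primitives.asymmetric import ed25519

        # Publisher's signing key. In practice this would be long-lived and
        # discoverable (e.g. published via the site's domain), not generated ad hoc.
        private_key = ed25519.Ed25519PrivateKey.generate()
        public_key = private_key.public_key()

        def make_statement(content: bytes, publisher: str) -> dict:
            """Create a signed claim that `publisher` vouches for `content`."""
            claim = {
                "publisher": publisher,
                "sha256": hashlib.sha256(content).hexdigest(),
            }
            payload = json.dumps(claim, sort_keys=True).encode()
            return {"claim": claim, "signature": private_key.sign(payload).hex()}

        def verify_statement(statement: dict, content: bytes) -> bool:
            """Check the signature, then check the hash against the content we have."""
            payload = json.dumps(statement["claim"], sort_keys=True).encode()
            try:
                public_key.verify(bytes.fromhex(statement["signature"]), payload)
            except Exception:
                return False
            return statement["claim"]["sha256"] == hashlib.sha256(content).hexdigest()

        audio = b"...bytes of the published clip..."
        stmt = make_statement(audio, "example-news-site.org")
        print(verify_statement(stmt, audio))             # True
        print(verify_statement(stmt, b"tampered clip"))  # False

    The signature itself is the easy part; the hard parts are key distribution, deciding who counts as a trustworthy publisher, and getting platforms to surface the check at all, which is where the governance worries below come in.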

    ...but it feels like there is rapidly growing angst about mis- and disinformation, and that eventually some nations are going to start demanding truth, and only truth, online. That seems implausible to achieve except by letting only a very few speak at all. Trying to enforce stability seems like almost a bigger danger than letting things play out organically, than letting society grapple with this info-chaos on its own.

    It also bothers me that HN seems to be sweeping these issues with AI under the rug. It feels like many of these deepfake articles get flagged almost immediately. Yesterday's article on the widely shared AI-generated image of Trump in blue jeans walking through flood water got flagged quite quickly too. This is exactly what the current moment of technology and its role in society is, right here, and malfeasants projecting their petty narrow view via suppression seem to be winning the day already. https://futurism.com/the-byte/donald-trump-hurricane-ai