This was distributed by news sites. Of course they didn't bother contacting anyone involved to check if it was real.
I'm asking for real here. I don't know the right answer. Do you not publish anything that someone involved calls a fake, or do you publish it anyway?
Or words to that effect.
Perhaps it would be different if there were a pattern of behavior behind the clip, supported by other testimony from the speaker's colleagues or by previous incidents (not that it's the responsibility of journalists to conduct a trial in the court of public opinion, presenting the strongest arguments from each side). But lacking any actual effort to establish the credibility of the source, this is just lazy, click-counting ragebait, not journalism.
Part of the blame also lies with the laziness and gullibility of the general public, but these are well-established features of the audience and should be no surprise to a trained journalist. Publishing such clips without any real work to validate them is basic negligence.
When the clip landed on the desk of Kristen Griffith, an education reporter at the Baltimore Banner, she thought it was going to be a relatively straightforward story of a teacher being exposed for making offensive remarks.
But as is best-practice in journalism, Ms Griffith wanted to give the principal the chance to comment and tell his side of the story. So, she reached out to his union representative, who said not only did Mr Eiswert condemn the comments, but he didn’t make them.
“He said right away, oh, we think this is fake… We believe it's AI,” she told the BBC. “I hadn't heard that angle” before.
But when she published that explanation, her readers were not convinced. Far from raising questions about the clip’s veracity, it just fuelled backlash from people who thought the allegation of fakery was just an excuse or an attempt to evade accountability.
It proves a completely different rule that's more common: "news sites" (aka Fox et al.) are usually well aware that the spin they put on stories is patently false, based as it is on isolated reports, X/Twitter posts, small-town newspapers, etc.
They know full well that immigrants are working in Ohio, that there's a housing crisis, and that no one is eating the cats, dogs, or ducks.
They knowingly propagate sensationalist and pandering takes on breadcrumb-sized stories, hoping they'll blow up and attract eyeballs in their target demographics.
It's not that they don't bother to check veracity - they wilfully ignore anything in sources that might downplay the drama of stories.
I wouldn't call these "News sites", they're pretty obviously infotainment industries.
What scares me just as much is what the all-too-likely "someone has to do something about this" reactions will be. There are ideas floating around like Originator Profile and Content Authenticity, which seem like voluntary ways for sites to start vouching for their content, and that seems okay: let sites create verifiable statements...
https://github.com/w3c/tpac2024-breakouts/issues/70 https://github.com/w3c/tpac2024-breakouts/issues/90
..but it feels like there is a rapidly growing angst about mis- and disinformation, and that eventually some nations are going to start demanding truth, and only truth, online. That feels implausible except by letting only a very few speak at all. Trying to enforce stability seems almost a bigger danger than letting things play out, than letting society grapple with this info-chaos organically.
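The "verifiable statements" idea above can be sketched in a few lines. This is a hypothetical toy, not the actual Originator Profile or Content Authenticity (C2PA) format: a site publishes a manifest binding a media file's hash to an origin claim and signs it. Real schemes use public-key signatures so anyone can verify; here a shared-secret HMAC stands in purely to show the shape of the check (hash binds the bytes, signature binds the claim).

```python
import hashlib
import hmac
import json

# Stand-in for the site's private signing key (real schemes use
# public-key crypto, e.g. Ed25519, so verification needs no secret).
SITE_KEY = b"demo-signing-secret"

def make_manifest(media: bytes, origin: str) -> dict:
    """Site-side: bind the media bytes to an origin claim and sign it."""
    claim = {"sha256": hashlib.sha256(media).hexdigest(), "origin": origin}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SITE_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}

def verify_manifest(media: bytes, manifest: dict) -> bool:
    """Consumer-side: reject if the media or the claim was tampered with."""
    claim = manifest["claim"]
    if hashlib.sha256(media).hexdigest() != claim["sha256"]:
        return False  # media altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SITE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["sig"])

clip = b"original audio bytes"
manifest = make_manifest(clip, "example-news-site.org")
```

Verifying `clip` against `manifest` succeeds, while any edited bytes fail the hash check. Note what this does and doesn't buy you: it proves a named site vouched for these exact bytes, not that the content is true, which is exactly why it's a provenance tool rather than a truth mandate.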
It also bothers me that HN seems to be sweeping these issues with AI under the rug. Many of these deepfake articles seem to get flagged almost immediately. Yesterday the article on the AI-generated image of Trump in blue jeans wading through flood water, which was being widely shared, got flagged quite quickly too. This is exactly what the current moment of technology and its role in society is, right here, and malfeasants projecting their petty, narrow view via suppression seem to be winning the day already. https://futurism.com/the-byte/donald-trump-hurricane-ai