• mbfg 2 days ago |
    Seems like the obvious response is to create a deepfake of the judge giving a monstrously offensive answer as to why he overruled the law.
    • olliej 2 days ago |
      He didn’t overrule the law - that’s the whole reason the US separates the legislative and judicial branches (at least in theory), and the whole reason for having a constitution (a constitution - any constitution - is irrelevant if the government is allowed to pass and enforce a law that violates it).

      The judge also has not said “this is an unconstitutional law”.

      What they’ve issued is a preliminary injunction, and that seems reasonably well justified: if the law is too vague, it covers actual satire and a slew of other possible cases that are not objectively awful, and hence it would violate the constitution. If it is a violation of the constitution, then leaving it on the books during a trial would be manifestly harmful, which is why the judge blocked the law temporarily until the CA government can make its arguments in court and the opposition can make theirs.

      If the CA government can demonstrate the law passes constitutional muster, then the injunction is rescinded and CA can enforce it; if they lose in court, then they cannot enforce it, because by definition the law itself is illegal.

      That’s literally how this is meant to work. If you want the alternative where judges work to support the government regardless of the constitution you can look at the current Supreme Court to see how that goes.

      Now, to the topic at hand: it does sound like this statute is too vague, but it’s also difficult to see how you could word something like this to avoid that. I think the core issue is that the statute ties the legality of an AI-generated image to whether the person publishing it is doing so in a way that is knowingly false/deceptive/offensive.

      That brings out a bunch of problems:

      * Does the person actually know it’s false? Think of all the old FB posts where people seemed to legitimately believe Onion articles were real (see abortionplex)

      * What if the author knows it’s false (abortionplex once more)? That’s obviously satire, but plenty of people believed it - do satirical sources now need to cater to the most credulous people in society?

      * What is offensive? Pro-life people might say abortionplex is offensive because it makes a mockery of them, and pro-choice people might find it offensive because it makes light of a serious and hard choice they have to make

      * Obviously the above don’t involve AI images, but let’s say it’s an AI-rendered video sketch - how would its being done by AI meaningfully make it different from, say, an SNL sketch with people pretending to be other people? Is the only thing that matters the quality of the rendering?

      And so on.

      I think targeting the message content is just too challenging.

  • JohnMakin 2 days ago |
    I think we're going to learn some uncomfortable lessons about how our current interpretation of free speech is incompatible with current generative AI technology. Strictly, I think the judge's ruling is correct - this is definitely a constitutional free-speech issue. However, one thing people frequently miss in these debates is that not all forms of speech are protected by law - for instance, you can't scream "Bomb" on an airplane without repercussions.

    However, I don't see how or when the law ever catches up with this technology. It's never been illegal to present misinformation about a political candidate. Photoshopping a candidate has never been illegal. Clearly, this is something new and different, but what exactly? How does this get legislated without trampling all over existing precedent?

    • rangestransform 2 days ago |
      If photoshopping a candidate was legal and protected as free speech, why should this be any different? Talented Photoshop artists could create convincing fakes in a matter of hours in the 2000s; this just lets the average Joe express his political opinions with similar ease. There has never been any regulation of free speech based on how easy it is to produce; if there were, we would've banned free speech with electronic amplification, with a telegraph, on the radio, with a printing press, with a printer, and on social media, allowing only communication methods that existed when the Constitution was signed.
      • JohnMakin 2 days ago |
        It wouldn’t be different under the current interpretation of the law; I wasn’t saying that it is. In reality, though, the scale and impact of this technology are far more significant, with far more potential to disrupt society, than Photoshop - I think that much is very obvious. So in that case it is something new that the law has not quite anticipated.
        • rangestransform 2 days ago |
          In the 2000s, we would've said the same thing about the ease of use and scale of Photoshop for image doctoring too
          • JohnMakin 2 days ago |
            The scale and pervasiveness with which these things can spread is now many orders of magnitude greater. It isn’t an accurate comparison. Also, I was there in the 2000s; there wasn’t remotely the same level of concern then, because these things are not the same
            • rangestransform 2 days ago |
              Each technological leap was an order of magnitude in its day, and still the world hasn’t ended, despite the handwringers at every stage of technological development
        • jjk166 2 days ago |
          That's debatable. There was a time when having something written on official-looking letterhead was considered authoritative; then it became trivially easy to generate letterhead and we found new ways to authenticate claims. Likewise for numerous other forms of evidence. Our present reliance on photos and videos and related media as the default presumed-authoritative form of evidence is a new phenomenon, and if it becomes unreliable we will move on to the next thing.

          It's kind of ironic that the last major tech-bro fad was entirely about cryptographically secure, trustless verification of public data.
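          For what it's worth, the basic primitive behind that kind of verification is simple. A minimal sketch in Python, using plain SHA-256 content hashing (an illustration only, not any particular provenance standard): a published digest lets anyone check that a copy of a clip is byte-identical to the original, though it says nothing about whether the original itself was authentic - real provenance schemes layer digital signatures on top of this.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # Content-addressed fingerprint: any change to the bytes changes the digest.
    return hashlib.sha256(data).hexdigest()

# A publisher releases the digest of the authentic clip alongside it.
original = b"frame bytes of the authentic video clip"
published_digest = fingerprint(original)

# Anyone holding a copy can check it against the published digest.
doctored = b"frame bytes of a doctored video clip"
print(fingerprint(original) == published_digest)  # untouched copy matches
print(fingerprint(doctored) == published_digest)  # altered copy does not
```

          The digest only proves integrity relative to whatever was published, which is why it's a building block rather than a full answer to the authenticity problem.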

    • opello 2 days ago |
      There are also existing restrictions on political speech: if the objective is to influence the outcome of an election, it must be paid for with campaign funds. And while the current design is rather laughable (the "magic words" test, e.g. "vote for," "elect," etc.), maybe this shift in technology presents an opportunity to refine the definition of "influence the outcome" to the overall benefit of the electorate.
    • reginald78 a day ago |
      I don't think there is anything fundamentally different. The only argument is scale and ease of use, but you could apply that to hiring poorly paid artists in a third-world country, as existing troll armies do. A better question is: would creating a law actually stop this? There's tons of ad fraud and misinformation already, and my impression is the war is already lost. Most large platforms never had the kind of moderation needed to keep control of this kind of thing. Small communities often do, but they aren't usually the target and have less effect.