The judge also has not said “this is an unconstitutional law”.
What they’ve issued is a preliminary injunction, and that seems reasonably well justified: if the law is too vague, it covers actual satire and a slew of other cases that are not objectively awful, and hence would violate the constitution. If it does violate the constitution, then leaving it on the books during a trial would be manifestly harmful, which is why the judge blocked the law temporarily until the CA government and the opposition can each make their arguments in court.
If the CA government can demonstrate that the law passes constitutional muster, the injunction is lifted and CA can enforce it; if they lose in court, they cannot enforce it, because by definition the law itself is unconstitutional.
That’s literally how this is meant to work. If you want the alternative, where judges work to support the government regardless of the constitution, you can look at the current Supreme Court to see how that goes.
Now, to the topic at hand: it does sound like this statute is too vague, but it’s also difficult to see how you could word something like this to avoid that. I think the core issue is that the statute ties the legality of the AI-generated image to whether the person publishing it is doing so in a way that is knowingly false/deceptive/offensive.
That raises a bunch of problems:
* Does the person actually know it’s false? Think of all the old FB posts where people seemed to genuinely believe Onion articles were real (see Abortionplex).
* What if the author knows it’s false (Abortionplex once more)? That’s obviously satire, but plenty of people believed it; do satirical sources now need to tailor their output to the most credulous people in society?
* What is offensive? Pro-life people might say Abortionplex is offensive because it makes a mockery of them, and pro-choice people might find it offensive because it makes fun of a serious and hard choice they have to make.
* Obviously the above don’t involve AI images, but let’s say it’s an AI-rendered video sketch: how would its being made by AI meaningfully differ from, say, an SNL sketch with actors pretending to be real people? Is the only thing that matters the quality of the rendering?
And so on.
I think targeting the message content is just too challenging.
However, I don't see how or when the law ever catches up with this technology. It's never been illegal to present misinformation about a political candidate. Photoshopping a candidate has never been illegal. Clearly, this is something new and different, but what exactly? How does this get legislated without trampling all over existing precedent?
It's kind of ironic that the last major tech-bro fad was entirely about cryptographically secure, trustless verification of public data.
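As a rough illustration of what that kind of verification looks like in practice, here's a minimal sketch of a publisher signing a media file and a viewer checking the signature, using Python's third-party cryptography package. The file contents, keys, and workflow are hypothetical and obviously nothing the statute contemplates; it only shows the basic "did this come unaltered from the keyholder" check.

```python
# Sketch: public-key signing and verification of a media file's bytes.
# Uses the third-party "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Publisher side: generate a keypair and sign the raw bytes of the file.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw bytes of campaign_ad.mp4..."  # placeholder content
signature = private_key.sign(media_bytes)

# Viewer side: verify the received bytes against the publisher's public key.
try:
    public_key.verify(signature, media_bytes)
    print("Signature valid: file matches what the publisher signed.")
except InvalidSignature:
    print("Signature invalid: file was altered or not signed by this key.")
```

Of course, this only tells you a given key signed a given file; it says nothing about whether the content itself is true, satirical, or deceptive, which is exactly the part the statute is struggling with.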