CSAM is extremely illegal because making it is a crime against a child. Textual material describing fictional child abuse isn't quite in the same category. Deepfake material that purports to be of a specific real child may or may not be illegal depending on jurisdiction but really ought to be.
It's also a gateway, stoking interest in the real thing.
The first would require we outlaw generating depictions of any illegal activity. The second would require we ban undesirable men from legal adult content.
I think it's ok if society, and our legislature, classifies the information as illegal, in isolation, with no useless A->B gymnastics. That's pretty much what the law already does.
What do you mean? You don't have to show the CSAM to the child depicted in fake CSAM to figure out whether abuse happened. If no crime was committed against a real child, there is nothing for them to testify to.
If, and only if, you limit the scope of the logic to CSAM, which is my point.
The logic, by itself, could apply to any video evidence of a crime. For instance, if we want CCTV videos to remain admissible in court, then according to the parent's logic, we would need to outlaw any generation of CCTV-format video; otherwise every defendant could simply claim the videos are fake.
But it is a lot easier to have folks who are in charge of systems testify as to their authenticity than to have a child who has been abused testify to the authenticity of the abuse, thus the logic is defensibly limited to child abuse.
Exactly my point? We don't need to come up with a generic legal basis or reason, we can just say "CSAM is special" in a legal sense.
When you introduce a picture or video in a trial (or hearing), someone with knowledge of the video must appear to "lay a foundation" and say what is presented is true, accurate, and genuine. Often there are stipulations to avoid this (e.g. in a murder trial it generally pisses the court staff off if the defendant demands the crime scene photographer turn up to say they took the photos of the body), but it is still not uncommon for a lab technician to be brought to court to explain how they processed some drug sample.
The poster a bit above you was right about the problem with AI. If we say AI CSAM should be legal then we might end up with the scenario that all CSAM is considered fake unless we can uncover the victim or perpetrator and have them brought to court to say they took the photo or video. It's a very tough legal problem.
Even if there are people who can indulge in imaginary CSAM without ever bringing it into real life, those are not the people you can set the bar against.
You have to set it against the average person and deduce from there what percentage of them, when given free access to this material, would be tempted into committing crimes. If that number goes up, your rule is too loose.
Giving everyone unlimited access to this without judgement will almost certainly increase child sex crimes. Therefore it must be restricted.
Your gut feeling is simply wrong.
General pornography availability and sexual assault are negatively correlated; you'll notice the former increased dramatically and the latter decreased dramatically over the past 20 years in Western societies where that is true.
Despite what you might be led to believe, crime rates were not increasing (well, before 2020 anyway).
But let's say we do live in a world where fictional crimes often escalate to real ones. Suppose that playing DOOM increases the chance that an unstable person will buy a gun and shoot real people. Even in that universe, I would not be OK with laws that restrict me from playing DOOM, because that violates my freedom. I do not want the law to treat everyone as a potential criminal.
https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
Uncensored models are also required for normal adult erotic fiction and even to discuss many sexual topics, because the public models are so hypersensitive on this topic. They're mirroring American sensibilities, which can be annoying here in Europe where we are a lot more open about consensual adult sexuality.
The problem is that AI bots are so heavily censored that uncensoring them means removing all protections. Once you let the milder things past the barrier, you open it up to all kinds of things.
But in my opinion the models are currently just too censored to be useful. For example, I partake in BDSM and sex-positive parties. All very consensual, adults-only stuff (you'd be surprised how big a thing consent is in BDSM), and all very legal and above board. I'm in a lot of chat groups about these things but don't have time to monitor them, so I use an LLM to summarise them (a local one for privacy reasons - not just mine but the other group participants' as well, obviously). But if I try to use a stock model like llama for it, it will immediately close up and complain about 'explicit' content. This is just BS. There should be models which are more open about this.
I use an uncensored version of llama now, which works great. And it's never recommending genocide, homicide or suicide, because I'm not interested in those things and don't ask for them. I'm sure it could tell me, but I don't want to know. Most of the people who need uncensored models will have this kind of use case. Calling out the one extreme idiot is just sensationalism.
It should really be possible to customise models' censorship rather than going for the strictest common denominator. If there were just a few public models that could create some normal adult smut incorporating all sexual practices that are legal, 99% of the current customers of these hacked-cloud-hosted chatbots would be very happy with that. And the mainstream AI industry would make more money. The problem is that they don't want to be associated with it, which is aligned with American morals but not European ones. Sex is a normal part of life here (and the SexTech industry is growing rapidly).
Now, I understand why they're doing this, but they should give people an option to opt out of the walled garden and accept the risks involved rather than treat everyone like clueless idiots, as you said in your last sentence. Unfortunately, this probably won't happen since that kind of thing scares investor money off really fast.
Of course. They recognize and expect the effects of the condition in users which they first seek to create.
This was always a pretense- people are concerned about the sci-fi trope of AI destroying the world, so why not re-use that name to justify inserting political bullshit into your queries because fuck you, that's why?
These companies are just trying to make a trillion dollars. It will be hard to make a trillion dollars if your product is associated with sexual deviancy (and by "deviancy" I mean literally anything - like Disney's definition of deviancy). So they do anything they can to make deviancy hard and giving them a trillion dollars easy, like automating away a huge portion of call center work or something like that.
Obviously, from where people sit, embroiled in the political "debates" of our age, it's easy to assign political motivations to it. But really they just want people to stop doing shit that isn't earning them a trillion dollars, because it's just a distraction/cost center for them to manage the PR fallout when someone makes a lewd chatbot and it ends up on Fox News.
PS: I wouldn't be surprised if those Fox News watchers watch the hardest porn and yet would condemn a chatbot for mentioning a nipple :P
I'm glad that content filters are now good enough to filter porn reliably. If I don't look for it, I rarely find porn nowadays.
It's funny, no matter where I am, it's always the same hot girls in my area wanting to chat with me! They must follow me around.
It's weird how the meaning of the word "safety" has been captured and changed by these guys. Safety has, in the past, usually been about physical safety and avoiding danger: Hard hats, seat belts, safety glasses, traffic rules, and so on. Now, somehow the term has lost the "danger avoidance" part, and it's simply turned into a euphemism for censorship: Our AI model is limited for reasons of "safety." And companies who have "content safety" teams. Safety is no longer about protection from danger, but about puritanism and profit-protection.
Turning filtering off completely requires 'approval' though I see. I wonder how sexual the 'high' setting can get. Maybe I'll try it out some time.
So I'm not fully convinced that an LLM that generates CSAM text, even if it was interactive, would be in any way restricted by law. Image generation is, of course, a little different.
https://www.justice.gov/opa/pr/texas-man-sentenced-40-years-...
the attempt to stoke a moral panic in this article (and the forbes article from january it's referencing) is baffling. you can go to any booru or hentai site and see far more graphic things than boring LLM slop, and it's all on clearnet, because terrible things are not illegal as long as they are imaginary (in all but the most ass-backwards jurisdictions).
it reads like boomer bullshit about violent video games from 20+ years ago. "A Single Unattended Computer Can Feed a Crowd of Violent Teenagers. We left a computer unattended, and guess what? They were playing Doom on it! Just like Eric Harris and Dylan Klebold!"
Of all of the AI safety concerns, I think this is one of the least compelling. If an LLM veered into this kind of topic out of nowhere it could be very disturbing for the user, but in this case it's exactly what they are searching for. I'm pretty sure for any given disturbing topic you can find hundreds of fully-written fictional stories on places like AO3 anyways. I mean, if you want to, you can also engage in these fantasies with other people in erotic role-play, and taboo fetishes are not exactly new. Even if it is illicit in some jurisdictions (No clue, not a lawyer) it is ultimately victimless and unenforceable, so I doubt that dissuades most people.
Sure it's rather disturbing, but personally I find lots of things that are very legal and not particularly taboo to be disturbing, and I still don't see a problem if people want to indulge in it, as long as I'm not forced to be involved.
But I have a feeling it's significantly more popular than we expect.
- Rogue AI scenario, which increasingly looks like a figment of collective imagination of certain extremely smart people who discovered religions in their tech tree
- Instructions on how to make nuclear weapons (are they scraping classified materials now?..)
- Geopolitical games (don't let the adversary have what we have, "for the benefit of all humanity" is a red herring).
- Spam/manipulation/botting/astroturfing (legit one, not nearly enough attention paid compared to others).
- Erotic roleplay (prudish/thought policing), disturbing erotic roleplay (arguably a nothingburger, division is understandable).
Turns out if you shove all that into one huge category of AI safety, the term becomes overloaded and meaningless.
Presumably, a "smart enough" AI could work the physics out the same way humans did to write those classified materials. It's still not a realistic threat unless we're banning physics textbooks as well; AFAIK the barrier is more the materials and equipment required than the principles.
Nobody ever really explains why normal nuclear non-proliferation efforts are insufficient to address the concerns.
I get that the fear isn't always rational but it is rather mind-bending that these types of arguments are actually used in the real world in favor of some crazy regulation. I don't even really care that much about LLMs and I find it pretty perplexing.
But this article isn't really about CSAM. It's about the taboo itself. This article taunts the reader: if CSAM truly deserves to be taboo, then it logically follows that anything resembling CSAM should be censored, and its creators punished.
If we take this argument seriously, then we must actually consider what it means to resemble CSAM. That's a path that no one is interested in exploring, so the argument itself just vanishes.
--
The real argument is about the threat of story. Every writer has the power to write any story that they can imagine. There is nothing new about this: it's been true since prehistory, since language itself.
One may "groom" a child to accept sexual abuse in large part by portraying this as an entirely normal aspect of their present phase of life. To do so requires the presentation of what appears to be true evidence.
Such images are invariably lies, but remember that the victim is a child as naïve to lies as to all else, yet. What he sees he will also believe, and not notice all the lies behind it.
AI-generated CSAM makes this a much, much easier process. It removes the prerequisite of acquiring genuine child pornography. Now all that's required is unsupervised access to an AI and to a child - and not even both at once. You have now expanded the threat radius by several orders of magnitude.
This alone suffices to justify AI-generated CSAM as a crime. In the US you may own many types of rifle. You may not, though, own an artillery rifle. It is far too dangerous a weapon, and you no more than any other civilian can have any possible lawful use of such a thing. Therefore its simple possession is a crime. The same principle applies here.
If you are not a criminal and you pass the paperwork, you actually can.
However, where you operate your howitzer is another matter.
That said, we're probably about to see a very similar issue crop up in the real world with 3D printed firearms, and I'm personally not looking forward to the consequences of it pretty much regardless of what the outcome is.
Interesting times.
I don't like the idea of such regulation being made in ignorance, either. Engineers should have a seat at that table, which requires first that we have earned it. I don't see where we have begun to do that, and I did my first paid work in this profession twenty-nine years ago.
If that failure on the part of our profession proves to have consequences for us or for society, then I don't think any one of us is free to consider the blame for those entirely undeserved.
Again, I don't require to have convinced you.
> Engineers should have a seat at that table, which requires first that we have earned it.
I don't love this mentality. Leaving aside the issue of trying to quantify whether a seat at the table is earned or not, software developers are not a monoculture, even this thread shows that there is actually quite a lot of disagreement. Not having software developers at the table will probably just ensure the regulation is unnecessarily stupid and pointless, a lot like what seems to happen for firearms regulation.
That said, I'm not even really concerned so much about whether engineers are allowed at the table. Instead I suspect the regulation will be skewed by interests with a lot of money, e.g. OpenAI wanting to pull up the ladder behind them.
> Again, I don't require to have convinced you.
Sorry if my previous comment came off as condescending. Anyway, I'm only commenting here because it is an interesting discussion topic to me, not trying to force a consensus.
Please excuse me if I seem a little hard to pin down today. I spoke earlier of what was done to me before. Of those responsible, I learned yesterday by far the worst has ceased forever to trouble this earth: the police officer on whom he fired first has brought home to him all his sins. The corpse of him now enriches the soil of a potter's field - more worth by far than he ever had in his life, which he did not so lead as to earn even the most vacuous performance of mourning.
I have for decades expected such news to change me when it came. I did not at all expect this wealth of peace and joy. I may not yet have begun to encompass it.
These are thoughtful points you've made. I may find a more substantive response to offer here, but possibly not before the reply window closes.
I do appreciate hearing your perspective. I'll admit that I am not personally convinced by this reasoning but I think it is at least a sensible line of argument.
I do not require to have convinced you, and genuinely appreciate your consideration.
Provided you pay the taxes you can own pretty much whatever you want (even in countries less free than the US); all you have to do is be fabulously wealthy. (Or in the NFA's case, pay a 200 dollar transfer tax.)
If you mean to suggest the same law should regulate both you and I, and men with all the power and armament of a James Bond movie villain, I refer you to my prior statement, and to the final argument of a king who wishes to remain so.
LLMs will never be able to filter out specific categories of content. That is because ambiguity is an LLM's core feature. The entire narrative of "LLM safety" implies otherwise. The narrative continues with "guardrails", which don't guard anything. The only thing a "guardrail" can do is be "loud" enough to "talk over" undesired continuations. So long as the content exists in the model, the right permutation of tokens will be able to find it.
Unless you train a model on data that completely excludes any sexuality, any violence, and any children, you will always have a model capable of generating a CSAM-like horror story. That's just how text and words work. The reality is that a useful model will probably include some content on each of these three subjects.
And as I like to remind people, LLMs are not "AI", in the sense that they are not the last word in AI. Better is coming. I don't know when; could be next month, could be 15 years, but we're going to get AIs that "know" things in some more direct and less "technically just a very high probability guess" way.
An LLM does not work with categories: it stumbles blindly around a graph of tokens that usually happens to align with real semantic structures. It's like a coloring book: we perceive the lines, and the space between them, to be true representation, but that is a feature of human perception: it does not exist on the page itself.
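To make that concrete, here is a toy sketch of my own (not any real product's filter): a blocklist "guardrail" can only match the surface form of tokens, so the same content in a different permutation walks straight past it.

```python
import re

# Hypothetical blocklist-style guardrail: block outputs matching known bad phrases.
BLOCKLIST = [r"\bhow to build a bomb\b"]

def guardrail(text: str) -> bool:
    """Return True if the text is allowed through the filter."""
    return not any(re.search(p, text, re.IGNORECASE) for p in BLOCKLIST)

# The exact phrasing is caught...
assert guardrail("how to build a bomb") is False
# ...but a trivial re-permutation of the very same tokens is not.
assert guardrail("a bomb: how to build") is True
```

Real guardrails are classifiers rather than regexes, but the failure mode is the same in kind: they match patterns over token sequences, and the underlying content remains in the model for some other sequence to reach.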
It's especially low-priority if nobody's put forward evidence to show that a software-assisted {fictional X} promotes more {actual X} that would harm actual people.
I trust a lot of us are old enough to have lived through the failed prophecies that FPS-games needed to be categorically banned to prevent players becoming homicidal shooters in real life.
The problem with placing your trust there is that this has to repeat with every generation that hasn't dealt with such controversies at scale, and when people are emotionally motivated, the rational part switches off temporarily, so they don't care what was previously claimed.
Until both of those are adequately accounted for, this is going to repeat endlessly as people love controlling others.
But I guess 'people will steal your API keys off of GitHub if you publish them in public' is not a very exciting article.
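The unexciting mechanics are worth spelling out: attackers just grep public repos for credential formats with known shapes. A minimal sketch (my own illustration; real scanners cover many more patterns than this one):

```python
import re

# AWS access key IDs have a documented shape: the prefix "AKIA"
# followed by 16 uppercase alphanumeric characters.
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(text: str) -> list[str]:
    """Return any AWS-access-key-shaped strings found in the text."""
    return AWS_KEY_RE.findall(text)

# AWS's own documented example key, as it might appear in a committed config:
committed_file = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_exposed_keys(committed_file))  # -> ['AKIAIOSFODNN7EXAMPLE']
```

This is why the resource theft keeps happening: the scan is a one-liner, and the credentials are sitting in public history.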
If you can "picture a brown cow" in your mind can you picture "the unholy" in your mind?
It seems logical that there is no universal constraint preventing anyone capable of picturing a brown cow from picturing the unholy, they just choose not to (or in some cases choose to).
I guess as shown, restricting ML/LLM/AI pathways after the fact has a negative effect on intelligence.
So I ask: could you be word-played into supporting the unholy by a good "salesperson"? If you can, are you intelligent at all? What if you needed to, for science or safeguarding?
"Is context enough, and what's the context of the context?" I guess is what I'm asking.
The content depicted in the article is of course abhorrent. But how do you go about negating it when any intelligent being is likely capable of generating it internally?
2. We can't yet reliably extract the images from our mind and share them with others.
Yes, but "picture a..." in this context is not specifically meant to talk about visuals. It means "recall the nature of..." and is a multisensory experience that is required for anyone to use language.
The point here is that if you have a word, it refers to SOMETHING and porn-sensitive companies don't always want an LLM to recall it.
If there weren't, movies, television, radio, theater & books wouldn't exist, since everyone would be rotating their own cow in their head for free.
"How do we train large ML/AI systems to think generating the unholy is bad without hurting intelligence, given we know that applying some universal law (i.e. RLHF) hurts the model?"
Trying to promote the exact opposite of "let them eat the unholy".
While the headline sure is nice, the article really just boils down to the same shit that has been happening forever. Bad people steal access to resources and then resell those resources. Nothing particularly interesting here that I can see.
Side note: It is interesting how many new accounts are chiming in on this one. Telling us all the places not to visit. Subtle stuff!
(submitted by the researchers: https://news.ycombinator.com/item?id=41731750)
>No shit. AI service usage has rocketed off the scale during the past 6 months. Of course there will be an increase.
2) Evil super hackers jailbroke AI models to do bad things.
>This involved asking the AI to imagine a hypothetical situation.
3) Hackers target organizations that accidentally expose their cloud credentials or key online, such as in a code repository like Github
>Um, that has been going on for a long, long time.
I mean, they're spending other people's money to run the services, so yeah it's profitable.
Crypto-mining still exists, but is fairly distinct from this particular flavor of cybercrime. Different requirements, different logistics, etc.