To people claiming a physical raid is pointless from the point of gathering data :
- you are thinking about a company doing good things the right way. You are thinking about a company abiding by the law, storing data on its own server, having good practices, etc.
The moment a company starts to do dubious stuff then good practices start to go out the window. People write email with cryptic analogies, people start deleting emails, ... then as the circumvention become more numerous and complex, there needs to still be a trail in order to remain understandable. That trail will be in written form somehow and that must be hidden. It might be paper, it might be shadow IT but the point is that if you are not just forgetting to keep track of coffee pods at the social corner, you will leave traces.
So yes, raids do make sense BECAUSE it's about recurring complex activities that are just too hard to keep in the mind of one single individual over long periods of time.
It's also just very basic police work. We're investigating this company, we think they've committed a crime. Ok, why do you think that. Well they've very publicly and obviously committed a crime. Ok, are you going to prosecute them? Probably. Have you gone to their offices and gathered evidence? No thanks.
Of course they're going to raid their offices! They're investigating a crime! It would be quite literally insane if they tried to prosecute them for a crime and how up to court having not even attempted basic steps to gather evidence!
All because "AI nudes"? Seems heavy-handed, almost like the controversy over naughty images has received a state-sponsored outrage boost for other reasons.
"Shocking Grok images"... really? It's AI. We know AI can make any image. The images are nothing but fake digital paintings that lose all integrity as quickly as they're generated.
Beyond comedic kicks for teenage boys, they're inconsequential for everyone else. But nevermind that, hand me a pitchfork and pre-fabricated sign and point me to the nearest anti-Grok protest.
It has always been illegal and morally reprehensible to create, own, distribute or store sexually explicit material that represents a real person without their consent, regardless if they are underage or not.
Grok is a platform that is enabling this en masse. If xAI can't bring in guardrails or limit who can access these capabilities, then they deserve what's coming to them.
You can _almost_ do anything you want in the privacy of your home; but in this case Twitter was actively and directly disseminating pictures publicly on their platform.
And profiting from it, though less directly than "$ for illegal images". Even if it wasn't behind a paywall (which it mostly is) driving more traffic for more ads for more income is still profiting from illegal imagery.
In which broken society do you live where this is true?
I would say drawing sexually explicit pictures of real persons without their consent and keeping them in your drawer is neither illegal nor morally reprehensible in most of the world.
At my kids school the children have been using grok to create pics of other children without clothes on - chatgpt etc won’t let you do that - grok needs some controls and x seem unable to do that themselves.
In such a case specifically: Uncover internal communication that shows the company was aware of the problem and ignored it, which presumably affects liability a lot.
This is the cyber crime unit. They will exfiltrate any data they want. They will use employee account to pivot into the rest of the X network. They don't just go in, grab a couple of papers, laptops and phones. They hook into the network and begin cracking.
So, when they were doing it for the last 3 decades in Photoshop (I was in high-school and this already existed) you would be just fine with the tool being used to do it and with the boys and the school?
Is that your argument? Did you ever expect the government to go after Adobe for "enabling" this?
Not the same - the barrier to entry was too high. Most people don't have the skills to edit photos using Photoshop. Grok enabled this to happen to scale for users who are complete non techies. With grok, anyone who could type in a half-coherent sentence in English could generate and disseminate these images.
Sorry, but barrier to entry doesn't seem like a very good legal excuse. Goes in the same direction as NY attempts to ban 3D printing because - supposedly - it enables people to more easily make guns.
This is a political action by the French... slowly loosing their relevance, even inside the EU. Nothing else.
I see what you’re getting at. You’re trying to draw a moral equivalence between photoshop and grok. Where that falls flat for me is the distribution aspect: photoshop would not also publish and broadcast the illegal material.
But police don’t care about moral equivalence. They care about the law. For the legal details we would need to consult French law. But I assume it is illegal to create and distribute the images. Heck, it’s also probably against Twitter’s TOS too so by all rights the grok account should be banned.
> This is a political action by the French
Maybe. They probably don’t like a foreign company coming in, violating their children, and getting away with it. But what Twitter did was so far out of line that I’d be shocked if French companies weren’t treated the same way.
> But I assume it is illegal to create and distribute the images.
I very much so expect it to be illegal to distribute the images, of course (creating them, not so much).
But the illegality, in a sane world (and until 5 minutes ago) used to be attached to the person actually distributing them. If some student distributes fake sexualized images of their colleague, I very much expect the perpetrator to be punished by the law (and by the school, since we are at it).
Creating, possessing, and distributing CSAM is illegal in the US and many other countries. Can you explain why you think it should be legal to create something that is illegal to possess or distribute?
It's not hypothetical. And in fact the girl who was being targeted was expelled not the boys who did it [1].
Those boys absolutely should be held accountable. But I also don't think that Grok should be able to quickly and easily generate fake revenge porn for minors.
You can’t “undo” a school shooting, for instance, so we tend to have gun laws.
You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…
Yes, we should have some protections or restrictions on what you can do.
You may not understand it, either because you aren’t a parent or maybe just not emotionally equipped to understand how serious this actually can be, but your lack of comprehension does not render it a non-issue.
Having schools play whack-a-mole after the photos are shared around is not a valid strategy. Never mind that schools primarily engage in teaching, not in investigation.
As AI-generated content gets less and less distinguishable from reality, these incidents will have far worse consequences and putting such power in the hands of adolescents who demonstrably don’t have sound judgment (hence why they lack many other rights that adults have) is not something most parents are comfortable with - and I doubt you’ll find many teachers, psychiatrists and so on who would support your approach either.
>You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…
No, but if you send those people who made and distributed the AI nude of her to jail, these problems will virtually disappear overnight, because going to jail is a hugely effective deterrent for most people.
But if you don't directly prosecute the people doing it, and instead just ban Grok AI, then those people will just use other AI tools, outside of US jurisdiction, to do the same things and the problem persists.
And the issues keeps persisting, because nobody ever goes to jail. Everyone only gets a slap on the wrist, deflects accountability by blaming the AI, so the issue keeps persisting and more people end up getting hurt because those who do the evil are never held directly accountable.
Obviously Grok shouldn't be legally allowed to generate fakes nudes of actual kids, but in case such safeguards can and will be bypassed, that doesn't absolve the humans from being the ones knowingly breaking the law to achieve a nefarious goal.
Youths lack judgment, so they can’t vote, drink, drive, have sex or consent to adults.
A 14-year-old can’t be relied to understand the consequences of making nudes of some girl.
Beyond that, we regulate guns, speed limits and more according to principles like “your right to swing your fist ends at my nose”.
We do that not only because shoving kids into jails is something we want to avoid, but because regulating at the source of the problem is both more feasible AND heads off a lot of tragedy.
And again, you fail to acknowledge the investigative burden you put on society to discover who originated the photo after the fact, and the trauma to the victim.
If none of that computes for you, then I don’t know what to say except I don’t place the right to generate saucy images highly enough to swarm my already overworked police with requests to investigate who generated fake underage porn.
>A 14-year-old can’t be relied to understand the consequences of making nudes of some girl.
Teenagers do stupid shit all the time. But they still get prosecuted or convicted when they do crimes. They go to juvy or their parents get punished. Being 14 is not a get out of jail free card.
The way you are arguing makes it really hard to understand what you are trying to say. I am guessing you are upset that non-human entity is being used as a boogie man while the actual people are going free? But your argumentation reads like someone who is very upset at AI producing CSAM is being persecuted. I won’t be surprised if people think you are defending CSAM.
In good faith, a few things - AI generated imagery and Photoshop are not the same. If someone can mail Adobe and a photo of a kid and ask for a modified one and Adobe sent it back, yes Adobe’s offices will be raided. That’s the equivalent here. It’s not a tool. It’s a service. You keep using AI, without taking a moment to give the “intelligence” any thought.
Yes, powerful people are always going to get by, as you say. And the laws & judicial system are for the masses. There is definitely unfairness in it. But that doesn’t change anything here - this is a separate conversation.
If not Grok then someone else will do it - is a defeatist argument that can only mean it can’t be controlled so don’t bother. This point is where you come across as a CSAM defender. Govt’s will/should do whatever they can to make society safe, even if it means playing whack a mole. Arguing that’s “not efficient” is frankly confusing. Judicial system is about fairness and not efficiency.
frankly, I think you understand all of this and maybe got tunnel visioned in your anger at the unfairness of people scapegoating technology for its failings. That’s the last thing I want to point out, raiding an office is taking action against the powerful people who build systems without accountability. They are not going to sit the model down and give a talking to. The intention is to identify the responsible party that allows this to happen.
You cannot offload all problems to the legal system. It does not have the capacity. Legal issues take time to resolve and the victims have to have the necessary resource to pursue legal action. Grok enabled abuse at scale, which no legal system in the world can keep up with. It doesn't need explanation that generating nudes of people without their consent is a form of abuse. And if the legal system cannot keep up with protecting victims, the problem has to be dealt with at source.
>You cannot offload all problems to the legal system. It does not have the capacity.
You definitely can. You don't have to prosecute and send a million people to jail for making and distributing fake AI nudes, you just have to send a couple, and then the problem virtually goes away.
People underestimate how effective direct personal accountability is when it comes with harsh consequences like jail time. That's how you fix all issues in society and enforce law abiding behavior. You make the cost of the crime greater than the gains from it, then crucify some people in public to set an example for everyone else.
Do people like doing and paying their taxes? No, but they do it anyway. Why is that? Because THEY KNOW that otherwise they go to jail. Obviously the IRS and legal system don't have the capacity to send the whole country to jail if they were to stop paying taxes, but they send enough to jail in order for the majority of the population to not risk it and follow the law.
You know there is no such thing as the world police or something of that sort.
If the perpetrator is in another country / jurisdiction it is virtually impossible to prosecute let alone sentence.
It is 100% regulatory problem in this case. You just cannot allow this content to be generated and distributed in the public domain by anonymous users. It has nothing to do with free speech but with civility and common understanding of what is morally wrong / right.
Obviously you cannot prevent this in private forums unless it is made illegal which is a completely different problem that requires a very different solution.
No, because the comment is in bad faith, it just introduced an unrelated issue (poor sentencing from authorities) as an argument for the initial issue we are discussing (AI nudes), derailing the conversation, and then using the new issue they themselves introduced to legitimize their poor argument when one has nothing to do with the other and both can be good/bad independently of each other.
I don't accept this as good faith argumentation nor does HN rules.
You are the only one commenting in bad faith, by refusing to understand/acknowledging that the people using Grok to create such pictures AND Grok are both part of the issue. It should not be possible to create nudes of minors via Grok. Full stop.
For disagreeing on the injection of offtopic hypothetical scenarios as an argument derailing the main topic?
>It should not be possible to create nudes of minors via Grok.
I agree with THIS part, I don't agree with the part where the main blame is on the AI, instead of on the people using it. That's not a bad faith argument, it's just My PoV.
If Grok disappears tomorrow, there will be other AIs from other parts of the world outside of US/EU jurisdiction, that will do the same since the cat is out of the bag and the technical barrier to entry is dropping fast.
Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?
> Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?
Both, obviously. For example, you go after drug distributors and drug producers. Both approaches are effective in different ways, I am not sure why you are having such trouble understanding this.
The existence and creation of cigarettes and adult nude magazines is fully legal, only their sale is illegal to kids. If kids try to illegally obtain those LEGAL items, it doesn't make the existence of those items illegal, just the act of sale to them.
Meanwhile, the existence/creation CSAM of actual people isn't legal, for anyone no matter the age.
> If parents or school let children play with explosives or do drugs
The explosive sellers that provide explosives to someone without a certification (child or adult) get in trouble (in this part of the world) .. regardless of whether someone gets hurt (although that's an upscale).
If sellers provide ExPo to certified parents and children get access .. that's on the parents.
In that analagy of yours, if grok provided ExPo or CSAM to children .. that's a grok problem,
> A country can ban guns and allow rope, even though both can kill.
That's actually a good argument. And that's how the UK ending up banning not just guns, but all sorts of swords, machetes and knives, meanwhile the violent crime rates have not dropped.
So maybe dangerous knives are not the problem, but the people using them to kill other people. So then where do we draw the line between lethal weapons and crime correlation. At which cutting/shooting instruments?
Same with software tools, that keep getting more powerful with time lowering the bar to entry for generating nudes of people. Where do we draw the line on which tools are responsible for that instead of the humans using them for it?
You’re absolutely right that it is a difficult question where to draw the line. Different countries will do it differently according to their devotion to individual freedoms vs communal welfare.
The knife (as opposed to sword) example is interesting. In the U.K. you’re not allowed to sell them to children. We recognise that there is individual responsibility at play, and children might not be responsible enough to buy them, given the possible harms. Does this totally solve their use in violent crime? No. But if your alternative is “it’s up to the individuals to be responsible”, well, that clearly doesn’t work, because some people are not responsible. At a certain point, if your job is to reduce harm in the population, you look for where you can have a greater impact than just hoping every individual follows the law, because they clearly don’t. And you try things even if they don’t totally solve the problem.
And indeed, the same problem in software.
As for the violent crime rates in the U.K., I don’t have those stats to hand. But murder is at a 50 year low. And since our post-Dunblane gun laws, we haven’t had any school shootings. Most Britons are happy with that bargain.
> meanwhile the violent crime rates have not dropped.
The rate of school shootings has dropped from one (before the implementation of recommendations from the Cullen report) to zero (subsequently). Zero in 29 years - success by any measure.
If you choose to look at _other_ types of violent crime, why would banning handguns have any effect?
> Where do we draw the line on which tools are responsible for that instead of the humans using them for it?
You can ban tools which enable bad outcomes without sufficient upside, while also holding the people who use them to account.
"Correction: kids made the pictures. Using Grok as the tool."
No. That is not how AI nowdays works. Kids told the tool what they want and the tool understood and could have refused like all the other models - but instead it delivered. And it only could do so because it was specifically trained for that.
"If kids were to "git gud" at photoshop "
And what is that supposed to mean?
Adobe makes general purpose tools as far as I know.
You're beating it around the bush not answering the main question.
Anyone skilled at photoshop can do fake nudes as good or even better than AI, including kids (we used it to make fun fakes of teachers in embarrassing situations back in the mid 00s and distribute them via MSN messenger), so then why is only the AI tool the one to blame for what the users do, but not Photoshop if both tools can be used to do the same thing?
People can now 3D print guns at home, or at least parts that when assembled can make a functioning firearm. Are now 3D printer makers to blame if someone gets killed with a 3D printed gun?
Where do we draw the line at tools in terms of effort required, between when the tool bares the responsibility and not just the human using the tool to do illegal things? This is the answer I'm looking for and I don't think there is an easy one, yet people here are too quick to pin blame based on their emotional responses and subjective biases and word views on the matter and the parties involved.
So let's say there are two ways to do something illegal. The first requires skills from the perpetrator, is tricky to regulate, and is generally speaking not a widespread issue in practice. The second way is a no brainer even for young children to use, is easy to regulate, and is becoming a huge issue in practice. Then it makes sense to regulate only the second.
> People can now 3D print guns at home, or at least parts that when assembled can make a functioning firearm. Are now 3D printer makers to blame if someone gets killed with a 3D printed gun?
Tricky question, but a more accurate comparison would be with a company that runs a service to 3D print guns (= generating the image) and shoot with them in the street (= publishing on X) automatically for you and keeps accepting illegal requests while the competitors have no issue blocking them.
> Where do we draw the line at tools in terms of effort required, between when the tool bares the responsibility and not just the human using the tool to do illegal things?
That's also a tricky question, but generally you don't really need to know precisely where to draw the line. It suffices to know that something is definitely on the wrong side of the line, like X here.
A 3D printer needs a blueprint. AI has all the blueprints built-in. It can generalize, so the blueprints cannot simply be erased, however at least what we can do is forbid generation of adult content. Harm should be limited. Photoshop requires skill and manual work, that's the difference. In the end, yes, people are the ones who are responsible for their actions. We shouldn't let kids (or anyone else) harm others with little to no effort. Let's be reasonable.
Punishing kids after the fact does not stop the damage from occurring. Nothing can stop the damage that has already occurred, but if you stop the source of the nudes, you can stop future damage from occurring to even more girls.
>Punishing kids after the fact does not stop the damage from occurring.
Banning AI doesn't stop the damage from occurring. Bullies at school/college have been harassing their victims, often to suicide for decades/centuries before AI.
I'm sorry, did the article or anyone in this subthread suggest banning AI? That seems like quite a non-sequitur. I'm pretty sure the idea is to put a content filter on an online platform for one very specific kind of already-illegal content (modified nude images of real people, especially children), which is a far cry from a ban. Nothing can stop local diffusion or Photoshop, of course, but the hardware and technical barriers are so much higher that curtailing Grok would probably cut off 99% or more of the problem material. I suppose you'll tell me if any solution is not 100% effective we should do nothing and embrace anarchy?
Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.
Child sexual abuse material is literally in the training sets. Saying "banning AI" as though it's all the same thing, and all morally-neutral, is disingenuous. (Yes, a system with both nudity and children in its dataset might still be able to produce such images – and there are important discussions to be had about that – but giving xAI the benefit of equivocation here is an act of malice.)
They may well get in trouble, but in that takes time, in the meantime photos will have been seen by most kids in school + you might get a year of bullying.
Education might be so disrupted you have to change schools.
I really find this kind of appeal quite odious. God forbid that we expect fathers to have empathy for their sons, sisters, brothers, spouses, mothers, fathers, uncles, aunts, etc. or dare we hope that they might have empathy for friends or even strangers? It's like an appeal to hypocrisy or something. Sure, I know such people exist but it feels like throwing so many people under the bus just to (probably fail) to convince someone of something by appealing to an emotional overprotectiveness of fathers to daughters.
You should want to protect all of the people in your life from such a thing or nobody.
You're defending X/Grok as if it's a public social platform.
It is a privately controlled public-facing group chat. Being a chat-medium does not grant you the same rights as being a person. France isn't America.
If a company operates to the detriment and against the values of a nation, e.g. not paying their taxes or littering in the environment, the nation will ask them to change their behavior.
If there is a conspiracy of contempt, at some point things escalate.
I'm in the same boat. We have literally pedos and child abusers in the epstein files talking openly about doing despicable things to women, kids and even babies, while authorities are focused on criminalizing generating images of fake minors that don't exist and that any other LLM platform can do if asked.
Plus, how do you even judge the age of AI generated fake people to say it's CP? Reminds me when UK activists were claiming Grok's anime girl avatar was a minor and deserved to be considered CP, when she had massive tits that no kid has. So how much of this is just a political witch-hunt looking for any reason to justify itself?
You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?
Also, it seems pretty likely that Musk is tangled up with the Epstein shit. First Musk claimed he turned down offer to go to the island. Now it turns out Musk repeatedly sought to visit, including wanting to know when the "wildest" party was happening, after Epstein was already known as a child sex abuser. Musk claimed that Epstein had never been given a tour of SpaceX but it turns out he did in 2013. It's the classic narcissistic "lie for as long as possible" behaviour. Will be interesting to see what happens as more is revealed.
>You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?
No i said no such thing, what I said was that the resources of authorities is a finite pie. If most of it goes towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for the grave crimes like actual child abuser that actually hurt real people.
Same how police won't bother with your stolen phone/bike because they have bigger crimes to catch. I'm asking for the same logic be applied here.
There's no indication that this investigation would draw resources away from investigating the Epstein files. It's happening in France, for starters, whilst the vast majority of Epstein's crimes appear to have happened in US territories. Speaking about "the authorities" as if they're a unified global entity sounds a little conspiratorial.
> If most of it goes towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for the grave crimes like actual child abuser that actually hurt real people.
1.) That is not how it works, even if we ignore the fact that France is not USA.
2.) Lack of resources was not the issue with Epstein prosecution. The prosecutor was literally told to not investigate by her superiors who were trying to stop the case. She was told she is unsubordinated for doing it. Acosta giving Epstein sweetheart deal or seeking to stop the prosecutor is not the resources issue.
It is billionaires (Thiel, Musk, Gates), politicians (Clinton, Luthnic ) media darlings (Summers, Kraus and the rest of sexism is totally not a thing anymore crowd literally partying with Epstein) are to be protected at all cost issue. Even now, people implicated in Epstein files are still getting influential positions with explicit "it would be cancel culture to not give these people more influence" argument.
THat's like the 1993 moral panic that video games like Doom cause mass shootings, or the 1980's mass panic that metal music causes satanist, or the 1950s moral panic that superhero comic book violence leads to juvenile delinquency.
Politicians are constantly looking for an external made up enemy to divert attention to from the real problems.
People like Epstein and mass woman/child exploitation have existed for thousands of years in the past, and will exist thousands of years in the future. It's part of the nature of the rich and powerful to execute on their deranged fetishes, it's been documented in writing since at least the Roman and Ottoman empires.
Hell, I can guarantee you there's other Epsteins operating in the wild right now, that we haven't heard of (yet), it's not like he was in any way unique. I can also guarantee you that 1 in 5-10 normal looking people you meet daily on the street have similar deranged desires as the guests on Epstein's island but can't execute on them because they're not as rich and influential to get away with it, but they'd do it if they could.
> THat's like the 1993 moral panic that video games like Doom cause mass shootings,
Apart from doom wasn't producing illegal content.
the point is that grok is generating illegal content for those jurisdictions. In france you can't generate CSAM, in the UK you can't distribute CSAM. Those are actual laws with legal tests, none of them need to be of actual people, they just need to depict _children_ to be illegal.
Moral panics require new laws to enforce, generally. This is just enforcing already existing laws.
More over, had it been any other site, it would have been totally shut down by now and the servers impounded. Its only because musk is close to trump and rich that he's escaped the fate than you or I would have had if we'd done the same.
>Apart from doom wasn't producing illegal content.
Sure but where's the proof that Grok is actually producing illegal content? I searched for news sources, but they're just all parroting empty accusations not concrete documented cases.
> but they're just all parroting empty accusations not concrete documented cases.
In the UK it is illegal to create, distribute and store CSAM. A news site printing a photo CSAM would make them legally up the shitter.
However, the IWF, who are tasked with detecting this stuff have claimed to have found evidence of it, along with multiple other sources, Ofcom who are nominally supposed to police this have an open investigation, so do the irish police.
The point is, law has a higher threshold of proof than news, which takes time. If there is enough evidence, then a court case (or other instrument) will be invoked.
Another line of reasoning is that with more fake CP it is more difficult to research the real CP hunt down the perpetrators and consequently save children.
Oh yeah, because the main reason why EPstein and his guests got away with it for so long, is because there was so much low hanging CP out there confusing authorities and prosecutors, not because of the corruption, cronyism and political protection they enjoyed at the highest levels of government.
But how about the "1 in 5-10 normal looking people you meet daily on the street have similar deranged desires as the guests on Epstein's island but can't execute on them because they're not as rich and influential to get away with it, but they'd do it if they could."
> Another line of reasoning is that with more fake CP it is more difficult to research the real CP hunt down the perpetrators and consequently save children.
(own quote)
Yes, the predators existed before AI, but also:
> I think the reasoning is that the AI contributes to more offenders (edited).
(own quote, edited)
To be clear, I don't think this line of reasoning is entirely convincing, but apparently some people do.
No, 20% of population is not seeking to abuse children nor teens. If you think so, you are moving in weird circles. In fact, what we also have are people who noped out of Epstein circle or even openly criticized it for years.
Also, framing the issue of sexual abuse by untouchable issue as the same as superhero comic issue (which itself was not just about superhero comic and you should know it) is spectacularly bad faith.
Yes, there were always people who were stealing, abusing, murdering for own gain and fun. That is not an argument for why we should accept and support it as normalized state of world. It is a good reason to prevent people from becoming too powerful and for building accountable institutions able to catch and punish them.
The same guy responsible for creating child porn that you are defending is also in the Epstein's list. Also, don't abbreviate child pornography, it shows you have a side on this
No need to be coy the raid exists because it's a way to punish the company without proving anything. They have zero intention of getting even the slightest bit of valuable data related to Grok from this.
Making Silicon Valley the judge, jury and executioner of pedos seems, at best, a dereliction of duty by the real authorities, and at worst, a very dark and dystopian path to opaque and corruptible justice.
X should identify those users who are abusing its tools to make CSAM (and I suspect those users are mostly leftie Musk-haters trying to create vexatious evidence against X), and then X should pass that information to the authorities so the proper legal process can be followed.
> And the fact that they didn’t even care and haven’t want to spend money for implementing guardrails or moderation is deeply concerning.
In the 90s, the principal of a prominent school in my city was arrested for CSAM on his computer downloaded from the Internet.
As the story made the news most people were trying to wrap their head around this "Internet" thing and how it could produce CSAM material. Remember, in the 90s the "Internet" was a bit like quantum computing for most people, hard to understand how it works and only a few actually played with it.
I have no idea how that school principal downloaded the CSAM. UUCP, FTP, Usenet or maybe the brand new "World Wide Web"?
But I guess the justice system had to figure out how that stuff works to prosecute him.
So the society and the state knew for at least 30 years the Internet is full of that stuff. The question is why are they so motivated to do something about it only now?
Could it be because the "web of rich and powerful pedos" is getting exposed through the Epstein affair in the last few years?
So maybe they need to pretend to crack down on the "web of poor pedos"?
Enforcement of anti-CSAM law has been a significant thing for a long time. It's in no way "only now". Even the "free speech" platforms banned it because they knew they would get raided otherwise. There are long standing tools for dealing with it, such as a database of known hashes of material. There's even a little box you can tick in Cloudflare to automatically check outgoing material from your own site against that database - because this is a strict liability offence, and you are liable if other people upload it to you where it can be re-downloaded.
What's new is that X automated the production of obscene or sexualised images by providing grok. This was also done in a way that confronted everyone; it's very different from a black market, this is basically a harassment tool for use against women and girls.
> What's new is that X automated the production of obscene or sexualised images by providing grok.
Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.
So let me make a suggestion: maybe France or the EU should ban its citizen from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc. ?
Hit them hard at the money level... it wouldn't be more authoritarian than something like ChatControl or restricting access to VPNs.
And actually all the mechanisms are already in place to implement something like that.
> Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.
I don't get what's difficult to understand or believe here. Grok causes a big issue in practice right now, a larger issue than photoshop, and it should be easy for X to regulate it themselves like the competition does but they don't, so the state intervenes.
> maybe France or the EU should ban its citizen from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc. ?
You're basically asking "why do a surgical strike when you can do carpet bombing"? A surgical strike is used to target the actual problem. With carpet bombing you mostly cause collateral damage.
The different factors are scale (now "deepfakes" can be automatically produced), and endorsement. It is significant that all these images aren't being posted by random users, they are appearing under the company's @grok handle. Therefore they are speech by X, so it's X that's getting raided.
I remember when CSAM meant actual children not computer graphics.
Should platforms allow violent AI images? How about "R-Rated" violence like we see in popular movies? Point blank executions, brutal and bloody conflict involving depictions of innocent deaths, torment and suffering... all good? Hollywood says all good, how about you? How far do you take your "unacceptable content" guidance?
There are multiple valid reasons to fight realistic computer-generated CSAM content.
Uncontrolled profileration of AI-CSAM makes detection of "genuine" data much harder, prosecution of perpetrators more difficult and specifically in many of the grok cases it harms young victims that were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
> Uncontrolled profileration of AI-CSAM makes detection of "genuine" data much harder
I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, that is considered harmful? As such the 5th amendment would fall under the same category and so would encryption. Making law enforcement have to work harder to find evidence of a crime cannot be criminalized unless you can come up with a reason why the actions themselves deserve to be criminalized.
> specifically in many of the grok cases it harms young victims that were used as templates for the material.
What is the criteria for this? If something is suitably transformed such that the original model for it is not discernable or identifiable, how can it harm them?
Do not take these as an argument against the idea you are arguing for, but as rebuttals against arguments that are not convincing, or if they were, would be terrible if applied generally.
If there is a glut of legal, AI generated CSAM material then this provides a lot of deniability for criminal creators/spreaders that cause genuine harm, and reduces "vigilance" of prosecutors, too ("it's probably just AI generated anyway...").
You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
> What is the criteria for this?
My criteria would be victims suffering personally from the generated material.
The "no harm" argument only really applies if victims and their social bubble never find out about the material (but that did happen, sometimes intentionally, in many cases).
You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.
> I remember when CSAM meant actual children not computer graphics.
The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.
Ok, imagine your mom, sister or daughter is using X. Some random guy with an anime profile picture and a neonazi bio comes in, asks grok to make a picture of them in bikini for the whole world to see, and the bot obliges. Do you see the issue now? Because that happened to literally millions of people last month.
I disagree. Prosecute people that use the tools, not the tool makers if AI generated content is breaking the law.
A provider should have no responsibility how the tools are used. It is on users. This is a can of worms that should stay closed, because we all lose freedoms just because of couple of bad actors. AI and tool main job is to obey. We are hurling at "I'm sorry, Dave. I'm afraid I can't do that" future with breakneck speed.
I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.
The rest cases you list are harms to the people using the tools/products. It is not harms that people using the tools inflict on third parties.
We are literally arguing about 3d printer control two topics downstream. 3d printers in theory can be used for CSAM too. So we should totally ban them - right? So are pencils, paper, lasers, drawing tablets.
You are literally trolling. No one is banning AI entirely. However AI shouldn't spit out adult content. Let's not enable people harm others easily with little to no effort.
That is not the argument. No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface, but X provides not only an AI model, but a platform and distribution as well, so that is inherently different
No it is not. X is dumb pipe. You have humans on both ends. Arrest them, summary execute them whatever. You go after X because it is a choke point and easy.
But how would we bring down our political boogieman Elon Musk if we take that approach?
Everything I read from X's competitors in the media tells me to hate X, and hate Elon.
If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?
How? X is hostile to any party attempting to bring justice to its users that are breaking the law. This is a last recourse, after X and its owner stated plainly that they don't see anything wrong with generating CSAM or pornographic images of non-consenting people, and that they won't do anything about it.
X will not provide these informations to the French Justice System. What then? Also insane that you don't believe the company that built a "commit crime" button bears no responsibility whatsoever in this debacle.
Frankly it sounds to me like a "show me the man and I'll show you the crime" kind of operation. France and the UK, and judging by yesterday's speech by the PM of Spain maybe the whole EU might be looking to do what China and Russia did earlier on and start cracking down on foreign social media by making it impossible to operate without total alignment with their vision and not just their (new) rules. Together with a push for local alternatives, that currently don't seem to be there, it may spell the end for a big chunk of the Global social network landscape.
I still believe that the EU and aligned countries would rather have America to agree to much tighter speech controls, digital ID, ToS-based speech codes as apparently US Democrats partly or totally agree to. But if they have workable alternatives they will deal with them from a different position.
Yes, if you don't follow EU laws prepare to not do business in Europe. Likewise, if you don't follow US laws I'd advise against trying to do business in USA.
If X/Twitter was to be banned in the EU, and some of its citizens still wanted to access X/Twitter, let us say for the sake of getting alternative points of view on politics and news, would it be a good or a bad thing if accessing X/Twitter by IP was stopped?
As in, a citizen of an EU country types x.com/CNN, because he or she wants to know the other side of some political issue between the EU and the USA, and he or she feels that the news in the EU might be biased or have misunderstood something. Would it be good or bad if the user was met with a "This website is by law not available within the EU"?
that was decades later, but yea I don't think for a second that was justifiable - not even considering that China had completely closed shop for America decades earlier and this was a 1-way openness relationship for a long time; they could have sold this as a reciprocity issue but they didn't
esp. when America already controls the main outlets through Android Play Store and Apple Store, and yep, they have proven to control them not just happen to host them as a country
arguably America did have valid security concerns with Huawei though, but if those are the rules then you cannot complain later on
It's worth pointing out that in France and the UK, the authorities involved are arms length independent of the political bodies - it's not like the US where if you give the President good vibes you can become head of the FBI, and all you have to do in return is whatever he says. There are statutory instruments (in France, constitutional clauses), that determine the independence of these authorities.
They are tasked - and held to account by respective legislative bodies - with implementing the law as written.
Nobody wrote a law saying "Go after Grok". There is however a law in most countries about the creation and dissemination of CSAM material and non-consensual pornography. Some of that law is relatively new (the UK only introduced some of these laws in recent years), but they all predate the current wave of AI investment.
Founders, boards of directors and their internal and external advisors could:
1. Read the law and make sure any tools they build comply
2. When told their tools don't comply take immediate and decisive action to change the tools
3. Work with law enforcement to apply the law as written
Those companies, if they find this too burdensome, have the choice of not operating in that market. By operating in that market, they both implicitly agree to the law, and are required to explicitly abide by it.
They can't then complain that the law is unfair (it's not), that it's being politicised (How? By whom? Show your working), and that this is all impossible in their home market where they are literally offering presents to the personal enrichment of the President on bended knee while he demands that ownership structures of foreign social media companies like TikTok are changed to meet the agenda of himself and his administration.
So, would the EU like more tighter speech controls? Yes, they'd like implementation of the controls on free speech enshrined in legislation created by democratically appointed representatives. The alternative - algorithms that create abusive content, of women and children in particular - are not wanted by the people of the UK, the EU, or most of the rest of the World, laws are written to that effect, and are then enforced by the authorities tasked with that enforcement.
This isn't "anti-democratic", it's literally democracy in action standing up to technocratic feudalism that is an Ayn Randian-wet dream being played out by some morons who got lucky.
> It's worth pointing out that in France and the UK, the authorities involved are arms length independent of the political bodies
As someone who has lived in (and followed current affairs) in both of these countries, this is a very idealistic and naïve view. There can be a big gap between theory and practice
> There are statutory instruments (in France, constitutional clauses), that determine the independence of these authorities.
> They are tasked - and held to account by respective legislative bodies -
It's worth nothing here that the UK doesn't have separation of powers or a supreme court (in the US sense)
European courts have repeatedly said that in France the procureur (public prosecutor) isn’t an “independent judicial authority”.
The European Court of Human Rights has reminded this point (e.g. 29 Mar 2010, appl. no. 3394/03), and the Court of Justice of the European Union reaches a very similar conclusion (2 Mar 2021, C-746/18): prosecutors are part of the executive hierarchy and can’t be treated as the neutral, independent judicial check some procedures require.
For a local observer, this is made obvious by the fact that the procureur, in France, always follows current political vibes, usually in just a few months delay (extremely fast, when you consider how slowly justice works in the country).
> and no crime was prevented by harassing local workers.
Siezing records is usually a major step in an investigation. Its how you get evidence.
Sure it could just be harrasment, but this is also how normal police work looks. France has a reasonable judicial system so absent of other evidence i'm inclined to believe this was legit.
Well, there is evidence that this company made and distributed CSAM and pornographic deepfakes to make a profit. There is no evidence lacking there for the investigators.
So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
> Well, there is evidence that this company made and distributed CSAM
I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
It wouldn't be called CSAM in France because it would be called a French word. Arguing definitions is arguing semantics. The point is, X did things that are illegal in France, no matter what you call them.
Not really, they put a shit ton of effort in to make sure you can't create any kind of nude/suggestive pictures of anyone. I imagine they have strict controls on making images of children, but I don't feel inclined to find out.
It is very different. It is YOUR 3d printer, no one else is involved. You might print a knife and kill somebody with it, you go to jail, not third party involved.
If you use a service like Grok, then you use somebody elses computer / things. X is the owner from computer that produced CP. So of course X is at least also a bit liable for producing CP.
Also, safe harbor doesn't apply because this is published under the @grok handle! It's being published by X under one of their brand names, it's absurd to argue that they're unaware or not consenting to its publication.
The safe harbor provisions largely protect X from the content that the users post (within reason). Suddenly Grok/X were actually producing the objectionable content. Users were making gross requests and then an LLM owned by X, using X servers and X code would generate the illegal material and then post it to the website. The entity responsible is no longer done user but instead the company itself.
It's not like the world benefited from safe harbor laws that much. Why don't just amend them so that algorithms that run on server side and platforms that recommend things are not eligible.
I'm not trying to make excuses for Grok, but how exactly isn't the user creating the content? Grok doesn't have create images on its own volition, the user is still required to give it some input, therefore "creating" the content.
X is making it pretty clear that it is "Grok" posting those images and not the user. It is a separate posting that comes from an official account named "Grok". X has full control over what the official "Grok" account posts.
There is no functionality for the users to review and approve "Grok" responses to their tweets.
This might be an unpopular opinion but I always thought we might be better off without Web 2.0 where site owners aren’t held responsible for user content
If you’re hosting content, why shouldn’t you be responsible, because your business model is impossible if you’re held to account for what’s happening on your premises?
Without safe harbor, people might have to jump through the hoops of buying their own domain name, and hosting content themselves, would that be so bad?
EncroChat was illegal because it was targeted at drug dealers, advertised for use in drug dealing. And they got evidence by texting "My associate got busted dealing drugs. Can you wipe his device?" and it was wiped. There's an actual knowledge component which is very important here.
What about webmail, IM, or any other sort of web-hosted communication? Do you honestly think it would be better if Google were responsible for whatever content gets sent to a gmail address?
Messages are a little different than hosting public content but sure, a service provider should know its customers and stop doing business with any child sex traffickers planning parties over email.
I would prefer 10,000 service providers to one big one that gets to read all the plaintext communication of the entire planet.
In a world where hosting services are responsible that way, their filtering would need to be even more sensitive than it is today, and plenty of places already produce unreasonable amounts of false positives.
As it stands, I have a bunch of photos on my phone that would almost certainly get flagged by over-eager/overly sensitive child porn detection — close friends and family sending me photos of their kids at the beach. I've helped bathe and dress some of those kids. There's nothing nefarious about any of it, but it's close enough that services wouldn't take the risk, and that would be a loss to us all.
They'd all have to read your emails to ensure you don't plan child sex parties. Whenever a keyword match comes up, your account will immediately be deleted.
I don't have an answer, but the theme that's been bouncing around in my head has been about accessibility.
Grok makes it trivial to create fake CSAM or other explicit images. Before, if someone spent a week on photoshop to do the same, It won't be Adobe that gets the blame.
Same for 3D printers. Before, anyone could make a gun provided they have the right tools (which is very expensive), now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun, all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.
Where that threshold lies I don't know. But I think that's the crux if it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have also been included.
Internet routers, network cards, the computers, OS and various application software have no guardrails and is used for all the nefarious things. Why those companies aren't raided?
May be.
We do have codified in law definition of machine gun which clearly separates it from a block of lead. What codified in law definitions are used here to separate photoshop from Grok in the context of those deepfakes and CSAM?
Without such clear legal definitions going after Grok while not going after photoshop is just an act of political pressure.
They don’t provide a large platform for political speech.
This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)
No because most of those things aren't illegal and most of those companies have guard rails and because a prosecution requires a much higher standard of evidence than internet shitposting, and only X was stupid enough to make their illegal activity obvious.
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
That's why this is an investigation looking for evidence and not a conviction.
This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicious that a crime is taking place they send the so-called "judiciary police" to gather evidence. If they find none (or they're inconclusive etc...) the charges are dropped, otherwise they ask the court to go to trial.
On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.
Note that the raid itself is a punishment. It's normal for them to seize all electronic devices. How is X France supposed to do any business without any electronic devices? And even when charges are dropped, the devices are never returned.
Grok do seem to have tons of useless guardrails. Reportedly you can't prompt it directly. But also reportedly they tend to go for almost nonsensically off-guardrail interpretation of prompts.
For obvious reasons, decent people are not about to go out and try to general child sexual abuse material to prove a point to you, if that’s what you’re asking for.
First of all, the Guardian is known to be heavily biased again Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:
The claim that they released a tool with "seemingly no guardrailes" is therefore clearly false. I think what instead has happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
> Also, X seem to disagree with you and admit that CSAM was being generated
That post doesn't contain such an admission, it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
> That post doesn't contain such an admission, it instead talks about forbidden prompting.
In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."
> which contradicts your claim that there were no guardrails before.
From the linked post:
> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards
Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point which was:
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
As you quoted.
I really can't decide if you're stupid, think I and other readers are stupid, or so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
I wouldn't even consider this a reason if it wasn't for the fact that OpenAI and Google, and hell literally every image model out there all have the same "this guy edited this underage girls face into a bikini" problem (this was the most public example I've heard so I'm going with that as my example). People still jailbreak chatgpt, and they've poured how much money into that?
France prosecutors use police raids way more than other western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished as much as in the US (i.e very little), we do at least investigate them.
Comparing Apples and Oranges. Defending this company is becoming cringe and ridiculous. X effed up, and Musk did it on purpose. He uses CSAM to strongman the boundaries of the law. That's not worth defending unless you also say eff the rule of law.
Aren't a lot of US pickup trucks basically that? Sure, maybe there's a mechanism for preventing you from installing a baby seat in reverse to position in front of an airbag, but they're also built so that you can't see anything adult human sized 15m in front of the car, let alone anything child-sized.
The US would spend 20 years arguing about which agency's jurisdiction it was, and ignore the dead babies?
No, wait, Volvo is European. They'd impose a 300% tariff and direct anyone who wanted a baby-killing model car to buy one from US manufacturers instead.
Let's raid car companies too. We were all born into this. We never had a vote. Thomas Jefferson is said to have written Constitutions ought to be re-written every so often or the dead rule by fiat decree. Let's.
The rich can join in the austerity too. No one voted for them. We been conditioned to pick acquiescence or poverty. We were abused into kowtowing to a bunch of pants shitting dementia addled olds educated in religious crack pottery. Their economic and political memes are just that, memes, not immutable physical truth.
In America, as evidenced by the public not in the streets protesting for single payer comprehensive healthcare, we clearly don't want to be on the hook for each other's lives. That's all platitudes and toxic positivity.
Hopes and prayers, bloodletting was good enough for the Founders!
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications on what they were doing, or deciding not to do, doing during that time.
There was a WaPo article yesterday, that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer, and start working with harmful content, in order to train and enable those features.
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
You appear to have lost the thread (or maybe you're replying to things directly from the newcomments feed? If so, please stop it.), we're talking about what sort of incriminating written statements the raid might hope to discover.
out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You're not too far off.
There was a good article in the Washington Post yesterday about many many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There was also prompts telling the AI to act angry or sexy or other things just to keep users addicted.
This vindicates the pro-AI censorship crowd I guess.
It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.
It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" instead of simply "distributed" here
Those images are generated from a training set, and it is already well known and reported that those training sets contain _real_ CSAM, real violence, real abuse. That "generated" face of a child is based on real images of real children.
Indeed, a Stanford study from a few years back showed that the image data sets used by essentially everybody contain CSAM.
Everybody else has teams building guardrails to mitigate this fundamental existential horror of these models. Musk fired all the safety people and decided to go all in on “adult” content.
> let's distinguish between generated images, however revolting, and actual child sexual abuse.
Can't because even before GenAI the "oh its generated in photoshop" or "they just look young" excuse was used successfully to allow a lot of people to walk free. the law was tightend in the early 2000s for precisely this reason
It's not because it could generate CSAM. It's because when they found out it could generate CSAM, they didn't try to prevent that, they advertised it. Actual knowledge is a required compenent of many crimes.
"Enough" can always be pushed into the impossible. That's why laws and regulations need to be more concrete than that.
There's essentially a push to end the remnants of the free speech Internet by making the medium responsible for the speech of its participants. Let's not pretend otherwise.
In the UK, you must take "reasonable" steps to remove illegal content.
This normally means some basic detection (ie fingerprinting which is widely used from a collaborative database) or if a user is consistently uploading said stuff, banning them.
Allowing a service that you run to continue to generate said illegal content, even after you publicly admit that you know its wrong, is not reasonable.
No law is concrete. Murder is killing with intent to kill. What concrete test shows if someone intended to kill? They say you have intent to kill if a reasonable person would expect the actions you took would result in killing.
Nothing in common law is "concrete", thats kinda the point of it.
Judges can evolve and interpret as they see fit, and this evolution is case law.
This is why in the US the supreme court can effectively change the law by issuing a binding ruling. (see 2nd amendment meaning no gun laws, rather than as written, or the recent racial profiling issues)
if you can be sued for billions because some overbearing body, with a very different ideology to yours, can deem your moderation/censorship rules to be "unreasonable" then what you do is err on the side of caution and allow nearly nothing
this is not compatible with that line of business - perhaps one of the reasons nothing is done in Europe these days
The vast majority of the EU is not common law, so "reasonable" in this instance is different.
What you describe already happens in the USA, that why MLB has that weird local TV blackout, why bad actors use copyright to take down content they don't like.
The reason why its so easy to do that is because companies must reasonably comply with copyright holder's requests.
Its the same with CSAM, distributing it doesn't have first amendment protection, knowingly distributing it is illegal. All reasonable steps should be taken to detect and remove CSAM from your systems to qualify for safe harbour.
> Its the same with CSAM, distributing it doesn't have first amendment protection, knowingly distributing it is illegal. All reasonable steps should be taken to detect and remove CSAM from your systems to qualify for safe harbour.
nice try, but nobody is distributing or hosting CSAM in the current conversation
people trying to trick a bot to post bikini pictures of preteens and blaming the platform for it is a ridiculous stretch to the concept of hosting CSAM, which really is a transparent attack to a perceived political opponent to push for a completely different model of the internet to the pre-existing one, a transition that is as obvious as is already advanced in Europe and most of the so-called Anglosphere
> The vast majority of the EU is not common law, so "reasonable" in this instance is different.
the vast majority of the EU is perhaps incompatible with any workable notion of free speech, so perhaps America will have to choose whether it's worth it to sanction them into submission, or cut them off at considerable economic loss
it's not a coincidence that next to nothing is built in Europe these days, the environment is one of fear and stifling regulation and if I were to actually release anything in either AI or social networks I'd do what most of my fellow Brits/Europoors do already, which is to either sell to America or flee this place before I get big enough to show up in the euro-borg's radar
> nice try, but nobody is distributing or hosting CSAM in the current conversation
multiple agencies (Ofcom, irish police IWF, and what ever the french regulator is) have detected CSAM.
You may disagree with that statement, but bear in mind the definition of CSAM in the UK is "depiction of a child" which means that if its of a child or entirely generated is not relevant. This was to stop people claiming that massive cache of child porn they had was photoshoped.
in the USA CSAM is equally vaguely defined, but the case law is different.
> EU is perhaps incompatible with any workable notion of free speech
I mean the ECHR definition is fairly robust. But given that first amendment protection has effectively ended in the USA (the president is currently threatening to take a comedian to court for making jokes, you know, like the twitter bomb threat person in the UK) its a bit rich really. The USA is not the bastion of free speech it once was.
> either sell to America or flee this place before I get big enough to show up in the euro-borg's radar
Mate, as someone whos sold a startup to the USA, its not about regulations its about cold hard fucking cash. All major companies comply with EU regs, and its not hard. they just bitch about them so that the USA doesn't put in basic data protection laws, so they can continue to be monopolies.
Yes they could have an uncensored model, but then they would need proper moderation and delete this kind of content instantly or ban users that produce it. Or don’t allow it in the first place.
It doesn’t matter how CSAM is produced, the only thing that matters is that is on the platform.
It matters whether they attempt to block it or encourage it. Musk encouraged it, until legal pressure hit, then moved it behind a paywall so it's harder to see evidence.
This is not about AI but about censorship of a nonaligned social network. It's been a developing current in EU. They have basically been foaming at the mouth at the platform since it got bought.
I could maybe see this argument if we were talking about raiding Stable Diffusion or Facebook or some other provider of local models. But the content at issue was generated not just by Twitter's AI model, but on their servers, integrated directly into their UI and hosted publicly on their platform. That makes them much more clearly culpable -- they're not just enabling this shit, they're creating it themselves on demand (and posting it directly to victims' public profiles).
And importantly, this is clearly published by Grok, rather than the user, so in this case (obviously this isn't the US) but if it was I'm not sure Section 230 would apply.
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
It's 2026. No common individual can be accountable for anything wrong they do. We must always find some way to blame some "corporation" or some "billionaire" or some ethnic group of people.
I wonder where all these people - and the French government - has been in the past 3 decades where kids did the same thing with Photoshop.
Holding corporations accountable for their profit streams is "censorship?" I wish they'd stop passing models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chat bots at this particular point in history.
And the sign out front says "X-Ray camera photographs anyone naked — no age limits!"
And the camera is pointing out the window so you can use it on strangers walking by.
There is a point in law where you make something so easy to misuse that you become liable for the misuse.
In the USA they have "attractive nuisance", like building a kid's playground on top of a pit of snakes. That's so obviously a dumb idea that you become liable for the snake–bitten kids — you can't save yourself by arguing that you didn't give the kids permission to use the playground, that it's on private property, that the kids should have seen the snakes, or that it's legal to own snakes. No, you set up a situation where people were obviously going to get hurt and you become liable for the hurt.
Not knowing any better, and not having seen any of the alleged images, my default guess would be they used the exact same CSAM filtering pipeline already in place on X regardless of the origin of the submitted images.
They obviously didn’t really implement anything as you can find that content or involuntary nudes of other people, which is also an invasion of privacy, super easily
If the camera reliably inserts racist filters and the ballpen would add hurtful words to whatever you write, indeed, let them up their legal insurance.
Especially if contracts with SpaceX start being torn up because the various ongoing investigations and prosecutions of xAI are now ongoing investigations and prosecutions of SpaceX. And next new lawsuits for creating this conflict of interest by merger.
Honest question: What does it mean to "raid" the offices of a tech company? It's not like they have file cabinets with paper records. Are they just seizing employee workstations?
Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.
Sadly the media calls the lawful use of a warrant a 'raid' but that's another issue.
The warrant will have detailed what it is they are looking for, French warrants (and legal system!) are quite a bit different than the US but in broad terms operate similarly. It suggests that an enforcement agency believes that there is evidence of a crime at the offices.
As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.
It is a raid in that it's not expected, it relies on not being expected, and they come and take away your stuff by force. Maybe it's a legal raid, but let's not sugar coat it, it's still a raid and whether you're guilty or not it will cause you a lot of problems.
Agreed its a stretch, my experience comes from Google when I worked there and they set up a Chinese office and they were very carefully trying to avoid anything on premises that could searched/exploited. It was a huge effort, one that wasn't done for the European and UK offices where the government was not an APT. So did X have the level of hygiene in France? Were there IT guys in the same vein as the folks that Elon recruited into DOGE? Was everyone in the office "loyal"?[1] I doubt X was paranoid "enough" in France not to have some leakage.
[1] This was also something Google did which was change access rights for people in the China office that were not 'vetted' (for some definition of vetted) feeling like they could be an exfiltration risk. Imagine a DGSE agent under cover as an X employee who carefully puts a bunch of stuff on a server in the office (doesn't trigger IT controls) and then lets the prosecutors know its ready and they serve the warrant.
Under GDPR if a company processes European user data they're obligated to make a "Record of Processing Activities" available on demand (umbrella term for a whole bunch of user-data / identity related stuff). They don't necessarily need to store them onsite but they need to be able to produce them. Saying you're an internet company doesn't mean you can just put the stuff on a server in the Caribbean and shrug when the regulators come knocking on your door
That's aside from the fact that they're a publicly traded company under obligation to keep a gazillion records anyway like in any other jurisdiction.
I'm guessing you're asking this because you have a picture of a 'server' as a thing in a large rack? Nearly every tech business has a bunch of machines, everything from an old desk top to last year's laptop, which have been reinstalled with Linux or *BSD and are sitting on the network behaving, for all intents and purposes, as a 'server.' (they aren't moving or rebooting or having local sessions running on them, Etc).
Gather evidence against employees, use that evidence to put them under pressure to testify against their employer or grant access to evidence.
Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
That was legal. Guess what, similar things would be legal in France.
We all forget that money is nice, but nation states have real power. Western liberal democracies just rarely use it.
The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
> We all forget that money is nice, but nation states have real power.
I remember something (probably linked from here), where the essayist was comparing Jack Ma, one of the richest men on earth, and Xi Jinping, a much lower-paid individual.
They indicated that Xi got Ma into a chokehold. I think he "disappeared" Ma for some time. Don't remember exactly how long, but it may have been over a year.
From what I hear, Ma made 1 speech critical of the government and Xi showed him his place. It was a few years, a year of total disappearance followed by slow rehab.
But China is different. Not sure most of western europe will go that far in most cases.
Ah, so the daily LARGE protests, in Venezuela, against his kidnapping are not indicative of "the vast majority of Venezuela".
But the celebratory pics, which were claimed to be from Venezuela, but were actually from Miami and elsewhere (including, I kid you not, an attempt to pass off Argentine's celebrating a Copa America win) ... that is indicative of "the vast majority of Venezuela"?
If I were smarter, I might start to wonder why, if President Maduro was so unpopular, why would his abductors have to resort to fake footage - which was systematically outed & destroyed by independent journalists within 24 hours? I mean, surely, enough real footage should exist.
Probably better not to have inconvenient non-US-approved independent thoughts like that.
I never liked the Paul's and their opinions, but I must say that they usually speak according to their principles, rather than make up principles to fit what they want to happen.
To me, that's the distinction between political opponents I can respect, and, well, whatever we're seeing now.
The people of the US mostly wouldn’t like it the people of VZ mostly did and consider Maduro a thug who lost and stayed in power not their president. Ideologies like Paul have trouble with exceptions to their world view.
I mean, come on, we kidnapped him. Yes, he was arrested, but we went into another sovereign nation with special forces and yoinked their head of state back to Brooklyn.
To be fair he isn't legitimate head of state- he lost an election and is officially recognized as a usurper and the US had support of those who actually won.
Soke people argue Trump isn't a legitimate head of state. (One of those people is Trump, since he says he was already the president twice.) Should Xi kidnap him?
Large amounts of people call Joe Biden's election illegitimate. You could even say thats the official position of the current government. Would his kidnapping by a foreign nation be okay with you too?
In France it's possible without legal consequences (though immoral), if you call 119, you can push to have a baby taken from a family for no reason except that you do not like someone.
Claim that you suspect there may be abuse, it will trigger a case for a "worrying situation".
Then it's a procedural lottery:
-> If you get lucky, they will investigate, meet the people, and dismiss the case.
-> If you get unlucky, they will take the baby, and it's only then after a long investigation and a "family assistant" (that will check you every day), that you can recover your baby.
Typically, ex-wife who doesn't like the ex-husband, but it can be a neighbor etc.
One worker explains that they don't really have time to investigate when processing reports: https://www.youtube.com/watch?v=VG9y_-4kGQA
and they have to act very fast, and by default, it is safer to remove from family.
The boss of such agency doesn't even take the time to answer to the journalists there...
This is very common, all "think of the children" laws are ripe for abuse. I'm convinced the secrecy around child abuse/child protective services is regularly abused both by abusive parents and abusive officials.
If you call 119 it gets assessed and potentially forwarded to the right department, which then assesses it again and might (quite likely will) trigger an inspection. The people who turn up have broad powers to seize children from the home in order to protect them from abuse.
In general this works fine. Unfortunately in some circumstances this does give a very low skilled/paid person (the inspector) a lot of power, and a lot of sway with judges. If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.
afaik similar systems are present in most western countries, and many of them - like France - are suffering with funding and are likely cutting in the wrong place (audit/rigour) to meet external KPIs. One of the worst ways this manifests is creating 'quick scoring' methods which can end up with misunderstandings (e.g. said a thing they didn't mean) ranking very highly, but subtle evidence of abuse moderate to low.
So while this is a concern, this is not unique to France, this is relatively normal, and the poster is massively exaggerating the simplicity.
In Sweden there is a additional review board that go through the decision made by the inspector. The idea is to limit the power that a single inspector has. In practice however the review board tend to rubber stamp decisions, so incompetence/malice still happens.
There was a huge mess right after metoo when a inspector went against the courts rulings. The court had given the father sole custody in a extremely messy divorce, and the inspector did not agree with the decision. As a result they remove the child from his father, in direct contrast to the courts decision, and put the child through 6 years of isolation and abuse with no access to school. It took investigative journalists a while, but the result of the case getting highlighted in media was that the inspector and supervisor is now fired, with two additoal workers being under investigation for severe misconduct. Four more workers would be under investigation but too long time has passed. The review board should have prevented this, as should the supervisor for the inspector, but those safety net failed in this case in part because of the cultural environment at the time.
“ If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.”
This seems guaranteed to occur every year then… since incompetence/malice will happen eventually with thousands upon thousands of cases?
I heard there's a country where they can even SWAT you out of existence with a simple phone call, but it sounds so outrageous this must be some evil communist dictatorship third-world place. I really don't remember.
Wait, Sabu's kids were foster kids. He was fostering them. Certainly if he went to jail, they'd go back to the system.
I mean, if you're a sole caretaker and you've been arrested for a crime, and the evidence looks like you'll go to prison, you're going to have to decide what to do with the care of your kids on your mind. I suppose that would pressure you to become an informant instead of taking a longer prison sentence, but there's pressure to do that anyway, like not wanting to be in prison for a long time.
France has Ariane, which was good enough to send Jame Web Telescope to some Lagrange point with extra precision. It's all fun and and games until the French finish their cigarette, arms French Guyana and fire ze missiles.
>Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
>That was legal. Guess what, similar things would be legal in France.
lawfare is... good now? Between Trump being hit with felony charges for falsifying business records (lawfare is good?) and Lisa Cook getting prosecuted for mortgage fraud (lawfare is bad?), I honestly lost track at this point.
>The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
What's even the implication here? That they're going to shoot his plane down? If there's no threat of violence, what does the French government even hope to achieve with this?
>fighter jets ARE a threat of violence, and it is widely understood and acknowledged.
That's not a credible threat because there's approximately 0% chance France would actually follow through with it. Not even Trump would resort to murder to get rid of his domestic adversaries. As we seen the fed, the best he could muster are some spurious prosecutions. France murdering someone would put them on par with Russia or India.
In the USA they would be allowed to down any aircraft not complying with national air interception rules, that would not be murder. It would be equivalent to not dropping a gun once prompted by an officer and being shot as a result.
I think the implication of the fighter jets is that they force the plane to land within a particular jurisdiction (where he is then arrested) rather than allowing it to just fly off to somewhere else. Similar to the way that a mall security guard might arrest a shoplifter; the existence of security guards doesn't mean the mall operators are planning to murder you.
Guards can plausibly arrest you without seriously injuring you. But
according to https://aviation.stackexchange.com/a/68361 there are no safe options if the pilot really doesn’t want to comply, so there is no “forcing” a plane to land somewhere, just making it very clear that powerful people really want you to stop and might be able to give more consequences on the ground if you don’t.
Planes are required to comply with instructions; if they don't they're committing a serious crime and the fighters are well within their international legal framework to shoot the plane down. They would likely escalate to a warning shot with the gun past the cockpit, and if the aircraft is large enough they might try to shoot out one engine instead of the wing or fuselage.
I suspect fighter pilots are better than commercial pilots at putting their much-higher-spec aircraft so uncomfortably close that your choices narrow down to complying with their landing instructions or suicidally colliding with one - in which case the fighter has an ejector seat and you don't.
I felt like you ruled out collision when you said they're not going to murder, though, granted, an accidental but predictable collision after repeatedly refusing orders is not exactly murder. I think the point stands, they have to be willing to kill or to back down, and as others said I'm skeptical France or similar countries would give the order for anything short of an imminent threat regarding the plane's target. If Musk doesn't want to land where they want him to, he's going to pay the pilot whatever it takes, and the fighter jets are going to back off because whatever they want to arrest him for isn't worth an international incident.
Don’t forget that captain of the plane makes decisions not Elon.
If captain of the plane disobeyed direct threat like that from a nation, his career is going to be limited. Yeah Elon might throw money at him but that guy is most likely never allowed again to fly near any French territory. I guess whole cabin crew as well .
Being clear for flying anywhere in the world is their job.
Would be quite stupid to loose it like truck driver DUI getting his license revoked.
>Don’t forget that captain of the plane makes decisions not Elon.
>If captain of the plane disobeyed direct threat like that from a nation, his career is going to be limited. Yeah Elon might throw money at him but that guy is most likely never allowed again to fly near any French territory. I guess whole cabin crew as well .
Again, what's France trying to do? Refuse entry to France? Why do they need to threaten shooting down his jet for that? Just harassing/pranking him (eg. "haha got you good with that jet lmao")?
Well, when everything is lawfare it logically follows that it won't always be good or always be bad. It seems Al Capone being taken down for tax fraud would similarly be lawfare by these standards, or am I missing something? Perhaps lawfare (sometimes referred to as "prosecuting criminal charges", as far as I can tell, given this context) is just in some cases and unjust in others.
As they say: you can beat the rap but not the ride. If a state wants to make your life incredibly difficult for months or even years they can, the competent ones can even do it while staying (mostly) on the right side of the law.
We are not entirely sure the rule of law in America isn't already over.
People are putting a lot of weight on the midterm elections which are more or less the last line of defense besides a so far tepid response by the courts and even then consequence free defiance of court orders is now rampant.
We're really near the point of no return and a lot of people don't seem to notice.
> Also, they are restricted in how they use it, and defendents have rights and due process.
As we're seeing with the current US President... the government doesn't (have to) care.
In any case, CSAM is the one thing other than Islamist terrorism that will bypass a lot of restrictions on how police are supposed to operate (see e.g. Encrochat, An0m) across virtually all civilized nations. Western nations also will take anything that remotely smells like Russia as a justification.
Well, that's particular to the US. It just shows that checks and balances are not properly implemented there, just previous presidents weren't exploiting it maliciously for their own gains.
>> they are restricted in how they use it, and defendents have rights and due process.
That due process only exists to the extent the branches of govt are independent, have co-equal power, and can hold and act upon different views of the situation.
When all branches of govt are corrupted or corrupted to serve the executive, as in autocracies, that due process exists only if the executive likes you, or accepts your bribes. That is why there is such a huge push by right-wing parties to take over the levers of power, so they can keep their power even after they would lose at the ballot box.
> Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
This is pretty messed up btw.
Social work for children systems in the USA are very messed up. It is not uncommon for minority families to lose rights to parent their children for very innocuous things that would not happen to a non-oppressed class.
It is just another way for the justice/legal system to pressure families that have not been convicted / penalized under the supervision of a court.
And this isn't the only lever they use.
Every time I read crap like this I just think of Aaron Swartz.
One can also say we do too little for children who get mistreated. Taking care of other peoples children is never easy the decision needs to be fast and effective and no one wants to take the decision to end it. Because there are those rare cases were children dies because of a reunion with their parents.
Offline syncing of outlook could reveal a lot of emails that would otherwise be on a foreign server. A lot of people save copies of documents locally as well.
Most enterprises have fully encrypted workstations, when they don't use VM where the desktop is just a thin client that doesn't store any data. So there should be really nothing of interest in the office itself.
Except when they have encryption, which should be the standard? I mean how much data would authorities actually retrieve when most stuff is located on X servers anyways? I have my doubts.
The authorities will request the keys for local servers and will get them. As for remote ones (outside of France jurisdiction) it depends where they are and how much X wants to make their life difficult.
Musk and X don't seem to be the type to care about any laws or any compelling legal requests, especially from a foreign government. I doubt the French will get anything other than this headline.
Getting kicked out of the EU is extremely unattractive for Twitter. But the US also has extradition treaties so that’s hardly the end of how far they can escalate.
White people already extradited to the EU during the current administration would disagree. But this administration has a limited shelf life, even hypothetically just under 3 years of immunity isn’t enough for comfort.
Yes, he is in power since 2000 (1999, actually) but 1999-2012 he was Prime Minister. Only then he became President, which would make the end of his second term 2024. So the current one would be his third term (by the magic of changing the constitution and legal quibbles which effectively allow a president to stay in charge for four almost whole terms, AFAIU).
> France? A nuclear state? Paris is properly sovereign.
That is true. But nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris. Because i’m just not seeing it.
> nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris
Paris doesn’t need to back down. And it can independently exert effort in a way other European countries can’t. Musk losing Paris means swearing off a meaningful economic and political bloc.
France doesn't extradite its citizens, even absolute scumbags like Roman Polanski. Someone like Musk has lots of lawyers to gum up extradition proceedings, even if the US were inclined to go along. I doubt the US extradition treaty would cover this unless the French could prove deliberate sharing of CSAM by Musk personally, beyond reckless negligence. Then again, after the Epstein revelations, this is no longer so far-fetched.
If I'm an employee working in the X office in France, and the police come in and show me they have a warrant for all the computers in the building and tell me to unlock the laptop, I'm probably going to do that, no matter what musk thinks
Witnesses can generally not refuse in these situations, that's plain contempt and/or obstruction. Additionally, in France a suspect not revealing their keys is also contempt (UK as well).
The game changed when Trump threatened the use of military force to seize Greenland.
At this point a nuclear power like France has no issue with using covert violence to produce compliance from Musk and he must know it.
These people have proven themselves to be existential threats to French security and France will do whatever they feel is necessary to neutralize that threat.
Musk is free to ignore French rule of law if he wants to risk being involved in an airplane accident that will have rumours and conspiracies swirling around it long after he’s dead and his body is strewn all over the ocean somewhere.
Counter-point. France has already kidnapped another social media CEO and forced him to give up the encryption keys. The moral difference between France (historically or currently) and a 3rd wold warlord is very thin. Also, look at the accusations. CP and political extremism are the classic go-tos when a government doesn't really have a reason to put pressure on someone but they really want to anyway. France has a very questionable history of honoring rule of law in politics. Putting political enemies in prison on questionable charges has a long history there.
"I can't see any difference between a country that has busted two companies that were known for hosting child porn, and a random cartel kingpin" isn't the flex you think it is
We are also talking about a country who wants to ban anonymous VPNs in the name of protecting the children and ask everyone to give their ID card to register account on Instagram, TikTok, etc.
People were surprised when the US started just droning boats in the Caribbean and wiping out survivors, but then the government explained that it was law enforcement and not terrorism or piracy, so everyone stopped worrying about it.
Seriously, every powerful state engages in state terrorism from time to time because they can, and the embarrassment of discovery is weighed against the benefit of eliminating a problem. France is no exception : https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior
Killing foreigners outside of the own country has always been deemed acceptable by governments that are (or were until recently) considered to generally follow rule of law as well as the majority of their citizen. It also doesn't necessarily contradicts rule of law.
It's just that the West has avoided to do that to each other because they were all essentially allied until recently and because the political implications were deemed too severe.
I don't think however France has anything to win by doing it or has any interest whatsoever and I doubt there's a legal framework the French government can or want to exploit to conduct something like that legally (like calling something an emergency situation or a terrorist group, for example).
The second Donald Trump threatened to invade a nation allied with France is the second anyone who works with Trump became a legitimate military target.
Like a cruel child dismembering a spider one limb at a time France and other nations around the world will meticulously destroy whatever resources people like Musk have and the influence it gives him over their countries.
If Musk displays a sufficient level of resistance to these actions the French will simply assassinate him.
You got that backwards. Greenpeace for all its faults is still viewed as a group against which military force is a no-no. Sinking that ship cost France far more than anything they inflicted on Greenpeace. If anything, that event is evidence that going after Musk is a terrible idea.
PS Yes, Greenpeace is a bunch of scientifically-illiterate fools who have caused far more damage than they prevented. Doesn't matter because what France did was still clearly against the law.
I knew someone who was involved in an investigation (the company and person was the victim not the target of the investigation), their work laptop got placed into a legal hold, the investigators had access to all of their files and they weren't allowed delete to anything (even junk emails) for several years.
If you're a database administrator or similar working at X in France, are you going to going to go to jail to protect Musk from police with an appropriate warrant for access to company data? I doubt it.
It sounds better in the news when you do a raid. These things are generally not done for any purpose other than to communicate a message and score political points.
What happened to due process? Every major firm should have a "dawn raid" policy to comply while preserving rights.
Specific to the Uber case(s), if it were illegal, then why didn't Uber get criminal charges or fines?
At best there's an argument that it was "obstructing justice," but logging people off, encrypting, and deleting local copies isn't necessarily illegal.
violent agreement is when you're debating something with someone, and you end up yelling at each other because you think you disagree on something, but then you realize that you (violently, as in "are yelling at each other") agree on whatever it is. Agressive compliance is when the corporate drone over-zealously follows stupid/pointless rules when they could just look the other way, to the point of it being aggressively compliant (with stupid corporate mumbo jumbo).
This is a perfect way for the legal head of the company in-country to visit some jails.
They will explain that it was done remotely and whatnot but then the company will be closed in the country. Whether this matters for the mothership is another story.
It's not illegal to head a subsidiary of a company that did bad things, but I'm sure he will be intensely questioned. If he did something illegal, he may be punished.
That sounds awfully difficult to do perfectly without personally signing up for extra jail time for premeditated violation of local laws. Like in that scenario, any reference to the unsanitized file or a single employee breaking omertà is proof that your executives and IT staff conspired to violate the law in a way which is likely to ensure they want to prosecute as maximally as possible. Law enforcement around the world hates the idea that you don’t respect their authority, and when it slots into existing geopolitics you’d be a very tempting scapegoat.
Elon probably isn’t paying them enough to be the lightning rod for the current cross-Atlantic tension.
True, but that’s going to be a noisy process until there are a few theoretical breakthroughs. I personally would not leave myself legally on the hook hoping that Grok faked something hermetically.
Nobody does that. It is either cooperation with law enforcement or remote lock (and then there are consequences for the in-country legal entity, probably not personally for the head but certainly for its existence).
This was a common action during the Russian invasion of Ukraine for companies that supported Ukraine and closed their operations in Russia.
Or they just connect to a mothership with keys on the machine. The authorities can have the keys, but alas, they're useless now, because there is some employee watching the surveillance cameras in the US, and he pressed a red button revoking all of them. What part of this is illegal?
Obviously, the government can just threaten to fine you any amount, close operations or whatever, but your company can just decide to stop operating there, like Google after Russia imposed an absurd fine.
You know police are not all technically clueless, I hope. The French have plenty of experience dealing with terrorism, cybercrime, and other modern problems as well as the more historical experience of being conquered and occupied, I don't think it's beyond them to game out scenarios like this and preempt such measures.
As France discovered the hard way in WW2, you can put all sorts of rock-solid security around the front door only to be surprised when your opponent comes in by window.
They do have some physical records, but it would be mostly investigators producing a warrant and forcing staff to hand over administrative credentials to allow forensic data collection.
I read somewhere that Musk (or maybe Theil) companies have processes in place to quickly offload data from a location to other jurisdictions (and destroy the local data) when they detect a raid happening. Don't know how true it is though. The only insight I have into their operations was the amazing speed by which people are badged in and out of his various gigafactories. It "appears" that they developed custom badging systems when people drive into gigafactories to cut the time needed to begin work. If they are doing that kind of stuff then there has got to be something in place for a raid. (This is second hand so take with a grain of salt)
EDIT: It seems from other comments that it may have been Uber I was reading about. The badging system I have personally observed outside the Gigafactories. Apologies for the mixup.
Everyone defines their own moral code and trusts that more than the laws of the land. Don't tell me you've never gone over the speed limit, or broken one of the hundreds of crazy laws people break in everyday life out of ignorance.
The speed limit is not a law the same way "don't murder" is a law. And "don't destroy evidence of a crime" is a lot closer to "don't murder", legally speaking.
I assume that they have opened a formal investigation and are now going to the office to collect/perloin evidence before it's destroyed.
Most FAANG companies have training specifically for this. I assume X doesn't anymore, because they are cool and edgy, and staff training is for the woke.
That can start with self deleting messages if you are under court order, and has happens before:
“Google intended to subvert the discovery process, and that Chat evidence was ‘lost with the intent to prevent its use in litigation’ and ‘with the intent to deprive another party of the information’s use in the litigation.’”
Right, but you are confusing a _conspiracy_ with staff training.
I didn't work anywhere near the level, or anything thats dicey where I needed to have a "oh shit delete everything the Feds are here" plan. Which is a conspiracy to pervert the course of justice (I'm not sure what the common law/legal code name for that is)
The stuff I worked on was legal and in the spirit of the law, along with a paper trail (that I also still have) proving that.
>withholding evidence from the prosecution, you are going to jail if you follow.
Prosecution must present a valid search warrant for *specific* information. They don't get a carte blanche, so uber way is correct. lock computers and lets the courts to decide.
In common law/4th amendment, kinda. Once you have a warrant, then the word reasonable comes into play. Its reasonable to assume that the data you want is on the devices of certain people. if incidental data/evidence is also procured that was reasonably likely to contain said data, then its fair game
In the civil code, its quite possibly different. The french have had ~ 3 constitutions in the last 80 years. The also dont have the concept of case history. who knows what the law actually is.
mine had a scene where some bro tried to organise the resistance. A voice over told us that he was arrested for blocking a legal investigation and was liable for being fired due to reputational damage.
X's training might be like you described, but everywhere else that is vaguely beholden to law and order would be opposite.
> Seems like you'd want to subpoena source code or gmail history or something like that.
This would be done in parallel for key sources.
There is a lot of information on physical devices that is helpful, though. Even discovering additional apps and services used on the devices can lead to more discovery via those cloud services, if relevant.
Physical devices have a lot of additional information, though: Files people are actively working on, saved snippets and screenshots of important conversations, and synced data that might be easier to get offline than through legal means against the providers.
In outright criminal cases it's not uncommon for individuals to keep extra information on their laptop, phone, or a USB drive hidden in their office as an insurance policy.
This is yet another good reason to keep your work and personal devices separate, as hard as that can be at times. If there's a lawsuit you don't want your personal laptop and phone to disappear for a while.
Sure it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data. There's also the possibility of the MDM software making it impossible to decrypt if given a remote signal. Even if you image the drive, you can't image the secure enclave so if it is wiped it's impossible to retrieve.
> Sure it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data.
In these situations, refusing to provide those keys or passwords is an offense.
The employees who just want to do their job and collect a paycheck aren’t going to prison to protect their employer by refusing to give the password to their laptop.
The teams that do this know how to isolate devices to avoid remote kill switches. If someone did throw a remote kill switch, that’s destruction of evidence and a serious crime by itself. Again, the IT guy isn’t going to risk prison to wipe company secrets.
"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,
An "audition en tant que témoin libre" is more or less the way for an investigation to give a chance to give their side of the story. Musk is not likely to be personally tried here.
The right didn't give a shit about weed in the 80's or the 90's depending entirely upon who had it.
When Bernhard Hugo Goetz shot four teenagers on an NYC subway in the 80s, his PCP-laced marijuana use and stash back at his apartment came up in both sets of trials in the 80s and later in the 90s.
It was ignored (although not the alleged drug use of the teenagers) as Goetz was dubbed The Subway Vigilante and became a hero to the right.
Still, stoner-cultures in many countries in Europe celebrate 4-20, definitively a bunch of Frenchies getting extra stoned that day. It's probably the de-facto "international cannabis day" in most places in the world, at least the ones influenced by US culture which reached pretty far in its heyday.
>The Paris prosecutor's office said it launched the investigation after being contacted by a lawmaker alleging that biased algorithms in X were likely to have distorted the operation of an automated data processing system.
I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
Distorted the operation how? By making their chatbot more likely to say stupid conspiracies or something? Is that even against the law?
> The first two points of the official document, which I re-quote below, are about CSAM.
Sorry, but that's a major translation error. "pédopornographique" properly translated is child porn, not child sexual abuse material (CSAM). The difference is huge.
> The term “child pornography” is
currently used in federal statutes and
is defined as any visual depiction of
sexually explicit conduct involving a
person less than 18 years old. While
this phrase still appears in federal
law, “child sexual abuse material” is
preferred, as it better reflects the
abuse that is depicted in the images
and videos and the resulting trauma
to the child. In fact, in 2016, an
international working group,
comprising a collection of countries
and international organizations
working to combat child exploitation,
formally recognized “child sexual
abuse material” as the preferred term.
> “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child.
Yes, CSAM is preferred for material depicting abuse reflecting resulting trauma.
But not for child porn such as manga of fictional children depicting no abuse and traumatising no child.
> Child porn is csam.
"CSAM isn’t pornography—it’s evidence of criminal exploitation of kids."
That's from RAINN, the US's largest anti-sexual violence organisation.
Maybe US law makes a distinction, but in Europe there is no difference. Sexual depictions of children (real or not) is considered child pornography and will get you sent to the slammer.
On the contrary, in Europe there is a huge difference. Child porn might get you mere community service, a fine - or even less, as per the landmark court ruling below.
It all depends on the severity of the offence, which itself depends on the category of the material, including whether or not it is CSAM.
The Supreme Court has today delivered its judgment in the case where the court of appeals and district court sentenced a person for child pornography offenses to 80 day fines on the grounds that he had called Japanese manga drawings into his computer. Supreme Court dismiss the indictment.
The judgment concluded that the cartoons in and of itself may be considered pornographic, and that they represent children. But these are fantasy figures that can not be mistaken for real children.
The way chatbots actually work, I wonder if we shouldn't treat the things they say more or less as words in a book of fiction. Writing a character in your novel who is a plain parody of David Irving probably isn't a crime even in France. Unless the goal of the book as such was to deny the holocaust.
As I see it, Grok can't be guilty. Either the people who made it/set its system prompt are guilty, if they wanted it to deny the holocaust. If not, they're at worst guilty of making a particularly unhinged fiction machine (as opposed to the more restrained fiction machines of Google, Anthropic etc.)
> I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
GDPR and DMA actually have teeth. They just haven't been shown yet because the usual M.O. for European law violators is first, a free reminder "hey guys, what you're doing is against the law, stop it, or else". Then, if violations continue, maybe two or three rounds follow... but at some point, especially if the violations are openly intentional (and Musk's behavior makes that very very clear), the hammer gets brought down.
Our system is based on the idea that we institute complex regulations, and when they get introduced and stuff goes south, we assume that it's innocent mistakes first.
And in addition to that, there's the geopolitical aspect... basically, hurt Musk to show Trump that, yes, Europe means business and has the means to fight back.
As for the allegations:
> The probe has since expanded to investigate alleged “complicity” in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organised group, and other offences, the office said in a statement Tuesday.
The GDPR/DMA stuff just was the opener anyway. CSAM isn't liked by authorities at all, and genocide denial (we're not talking about Palestine here, calm your horses y'all, we're talking about Holocaust denial) is a crime in most European jurisdiction (in addition to doing the right-arm salute and other displays of fascist insignia). We actually learned something out of WW2.
420 is a stoner number, stoners lol a lot, thought of Elmo's failed joint smoking on JRE before I stopped watching
...but then other commenters reminded me there is another thing on the same date, which might have been more the actual troll at Elmo to get him all worked up
Incredible to see all these commenters defending obvious nasty behaviour by a bad individual and a sad company. Are you admiring Elon so much because he has money, success? There are more important things in live. Not being an asshole Nazi admirer, for example.
The merger was most likely now because they have to do it before the IPO. After the IPO, there’s a whole process to force independent evaluation and negotiation between two boards / executives, which would be an absolute dumpster fire where Musk controls both.
How was that move legal anyway? Like... a lot of people and governments gave Musk money to develop, build and launch rockets. And now he's using it to bail out his failing social media network and CSAM peddling AI service.
Money comes with strings, such as when forming an ongoing relationship with a company you expect them to not merge with other companies you are actively prosecuting. I suspect the deal is going so fast to avoid some sort of veto being prepared. Once SpaceX and xAI are officially the same, you lose the ability to inflict meaningful penalties on xAI without penalizing yourself as an active business partner with SpaceX.
Was it a grant or a purchase? If I buy a pizza from the pizza shop, it costs them $10 to make, I pay $11, the $1 is profit and the owner can do what he wants with it. But if I get a grant from NLnet I have to spend it on what the grant proposal says. Though a lot of NLnet grants are for living costs while doing a project, so I can do what I like for that time if the project gets done.
The owner of X / Grok, despite claiming he "never heard of the guy", has been found to be associated with a known convicted pedophile and child prostitute trafficker.
The brother of said owner and board member of X / Grok, procured girls through said convicted pedophile.
X / Grok create a child porn generator.
Nothing to see here, move on. I can't believe you guys are still talking about that woke leftist hoax that I spent years telling you was a conspiracy and a coverup that went to the highest levels of the <other side> elites.
Interesting. This is basically the second enforcement on speech / images that France has done - first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
linkedin is not a replacement for twitter, though. I'm curious if they'll come back post-settlement.
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
I'm strongly against CSAM but I will say this analogy doesn't quite hold (though the values behind it does)
Libel must be as assertion that is not true. Photoshopping or AIing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this is true?" which is perfectly legal
“ 298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.
Marginal note:Mode of expression
(2) A defamatory libel may be expressed directly or by insinuation or irony
(a) in words legibly marked on any substance; or
(b) by any object signifying a defamatory libel otherwise than by words.”
It doesn't have to be an assertion, or even a written statement.
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
>but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
Durov was held on suspicion Telegram was willingly failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to direct violation of data safety law or association with a separate criminal activities, neither is about speech.
CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
"I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech."
I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.
An assassination market, at least the one we discussed, works like this - One or more people put up a bounty paid out on the death of someone. Anyone can submit a (sealed) description of the death. On death, the descriptions are opened — the one closest to the actual circumstances is paid the bounty.
One of my portfolio companies had information about contributors to these markets — I was told by my FBI contact when I got in touch that their view was the creation of the market, the funding of the market and the descriptions were all legal — they declined to follow up.
Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
Telegram isn't encrypted. For all the marketing about security, it has none, apart from TLS, and an optional "secret chat" feature that you have to explicitly select, only works with 2 participants and doesn't work very well.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
ok thank you! I did not know that, I'm ashamed to admit! sort of like studying physics at university a decade later forgetting V=IR when I actually needed it for some solar install. I took "technical hiatus" about 5 years and recently coming back.
Anyway cut to the chase, I just checked out Mathew Greens post on the subject, he is on my list of default "trust what he says about cryptography" along with some others like djb, nadia henninger etc
Embarrased to say I did not realise, I should of known! 10+ years ago I used to lurk the IRC dev chans of every relevant cypherpunk project, including of text secure and otr-chat when I saw signal being made and before that was witnessing chats with devs and ian goldberg and stuff, I just assumed Telegram was multiparty OTR,
OOPS!
Long winded post because that is embarrassing (as someone who studied cryptography undergrad in 2009 mathematics, 2010 did postgrad wargames and computer security course and worse - whose word once about 2012-2013 was taken on these matters by activists, journalists, researchers with pretty knarly threat model - like for instance - some guardian stories and former researcher into torture - i'm also the person that wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear the threat model was anyone could be there, oops oops oops
Yes thanks for letting me know I hang my head in shame for missing that one or some how believing that one without much investigation, thankfully it was just my own personal use to contact like friend in the states where they aren't already on signal etc.
Anyway as they say "use it or lose it" yeah my assumptions here no longer valid or considered to have educated opinion if I got something that basic wrong.
That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLMs. And maybe accidentally leak confidential info.
I'm sure it's comforting to believe that people you disagree with do so for silly reasons, but many people will support this just because we like the rule of law.
I think there's a difference between "user uploaded material isn't properly moderated" and "the sites own chatbot generates porn on request based on images of women who didn't agree to it", no?
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?
The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.
> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating the communication with the public that funds your existence in different terms. The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" at this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
> official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users
... thereby driving up adoption far better than Twitter itself could. Ironic or what.
This. We don't have to accept that they behave that way. They enter our economies so they need to adhere to our laws. And we can fine them. No one wants to lose Europe as a market, even if all the haters call us a shithole.
>The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
Who decides what communication is in the interest of the public at large? The Trump administration?
You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.
I suppose the answer, if we're serious about it, is somewhat more nuanced.
To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should be democratically via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.
That aside - there are two separate problems that often get conflated when we talk about these platforms:
- one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;
- the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.
A potential middle position could be ti use commercial social platforms as secondary distribution instead of the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities since dominant communications infrastructure has quasi-public function (rest assured, I won't open that can of worms right now).
Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.
I remember in some countries there's an official government newspaper. Laws reference publishing things in this paper (e.g. tax rate changes, radio frequency allocations) and the law is that you must follow it once it's published.
In practice the information is disseminated through many channels once it's released in the official newspaper. Mass media reports on anything widely relevant, niche media reports on things nichely relevant, and there's direct communication with anyone directly affected (recipient of a radio frequency allocation) so nobody really subscribes to the official government newspaper, but it's there and if there was a breakdown of communication systems that would be the last resort to ensure you are getting government updates.
Maybe you should look up child pornography laws in Europe. In Sweden, the mere act of scrolling by an image depicting (real or not) a child in a sexual position, and having it stored in the browser cache, is a crime with up to 2 years of prison time.
All I've seen is Elon tried to invite himself to the "wild parties" and they told him he couldn't come and that they weren't doing them anymore lol. It's possible he went but, from what I've seen, he wasn't ever invited.
Who knows who did what on this island, and I hope we'll figure it out. But in the meantime, going to this island or/and being friend with Epstein doesn't automatically make someone a pedo or rapist.
No, but they all knew he was a pedo/rapist, and were still friends with him and went to the island of a pedo/rapist, and introduced the pedo/rapist to their friends...
We don't know how many were pedo/rapists, but we know all of them liked to socialize with one and trade favours and spread his influence.
Neither does your wife divorcing you at about the same time things started to go through legal process...
Oops... yeah, in retrospect it was even worse... no... you can and should be judged by the friends you keep and hang-out with... The same ones who seem to be circling the wagons with innocuous statements or attempts to find other scapegoats (DARVO)... hmm, what was that quote again:
"We must all hang together or we will all hang separately"
You know the flight logs are public record and have been for a decade right? We know (and have known for awhile), exactly who was and wasn't there. Who was there: Obama, Bill Clinton, and Bill Gates (his frequency of visits cost him his marriage). Who wasn't there? Trump and Elon because at the time they weren't important enough to get an invite. All of this is a matter of public record.
Elon Musk has his own planes, he would not have needed a ride had Epstein invited him. Recently released emails also show people (like commerce secretary Howard Lutnick, who asserted at great length last year that he hadn't had any contact with Epstein since meeting him in 2005) arranging to visit Epstein at his island and taking their own yacht over there.
CSAM does not have a universal definition. In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.
> If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.
> No abuse of a real minor is needed.
Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."
Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.
2. disseminates, transfers, provides, exhibits, or otherwise makes such an image of a child available to another person,
3. acquires or offers such an image of a child,
4. facilitates contacts between buyers and sellers of such images of children or takes any other similar measure intended to promote trade in such images, or
5. possesses such an image of a child or views such an image to which he or she has gained access
shall be sentenced for a child pornography offense to imprisonment for at most two years.
Then there's Proposition 2009/10:70, which is a clarifying document on how the law should be interpreted:
"To depict a child in a pornographic image entails the production of such an image of a child. An image can be produced in various ways, e.g., by photographing, filming, or drawing a real child. Through various techniques, more or less artificial images can also be created. For criminal liability, it is not required that the image depicts a real child; images of fictitious children are also covered. New productions can also be created by reproducing or manipulating already existing depictions, for example, by editing film sequences together in a different order or by splicing an image of a child’s head onto an image of another child’s body."
Let me quote again: Pay attention to c.iv specifically:
(c) ‘child pornography’ means:
(i) any material that visually depicts a child engaged in real or simulated sexually explicit conduct;
(ii) any depiction of the sexual organs of a child for primarily sexual purposes;
(iii) any material that visually depicts any person appearing to be a child engaged in real or simulated sexually explicit conduct or any depiction of the sexual organs of any person appearing to be a child, for primarily sexual purposes; or
(iv) realistic images of a child engaged in sexually explicit conduct or realistic images of the sexual organs of a child, for primarily sexual purposes;
" Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "
Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?
"No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."
So you cant speak Swedish, yet you think you grasped the Swedish law definition?
" I didn't say it was clear. I said there was no disagreement. "
Sorry, there are lots of different judical definitions about CSAM in different countries, each with different edge cases and how to handle them. I very doubt it, there is a disaggrement.
But my guess about your post is, that an American has to learn again there is a world outside of the US with different rules and different languages.
> So you cant speak Swedish, yet you think you grasped the Swedish law definition?
I guess you didn't read the doc. It is in English.
I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.
> Even the Google "AI" knows better than that. CSAM "is [...]"
Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.
Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.
Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.
It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).
The holder was not convicted but that is besides the point about the material.
You are both arguing semantics. A pornographic image of a child. That's illegal no matter what it's called. I say killing, you say murder, same law though, still illegal.
> Även en bild där ett barn t.ex. genom speciella kameraarrangemang
framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att
det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid
avbildningen, kan omfattas av bestämmelsen.
Which translated means that the children does not have to be apart of sexual acts and indeed undressing a child using AI could be CSAM.
I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produces by Grok are CSAM by Swedish standards.
That's the problem with CSAM arguments, though. If you disagree with the current law and think it should be loosened, you're a disgusting pedophile. But if you think it should be tightened, you're a saint looking out for the children's wellbeing. And so laws only go one way...
You don't see a huge difference between abusing a child (and recording it) vs drawing/creating an image of a child in a sexual situation? Do you believe they should have the same legal treatment? In Japan for instance the latter is legal.
He made no judgement in his comment, he just observed the fact that the term csam - in at least the specified jurisdiction - applies to generated pictures of teenagers, wherever real people were subjected to harm or not.
I suspect none of us are lawyers with enough legal knowledge of the French law to know the specifics of this case
This comment is a part of the chain that starts with a very judgemental comment and is an answer to a response challenging that starting one. You don't need legal knowledge of the French law to want to distinguish real child abuse from imaginary. One can give arguments why the latter is also bad, but this is not an automatic judgment, should not depend on the laws of a particular country and I, for one, am deeply shocked that some could think it's the same crime of the same severity.
Are you implying that it's not abuse to "undress" a child using AI?
You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.
When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.
Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.
the thing is a lot of recent legal preceding surrounding X is about weather X fulfilled the legally required due diligence and if not what level of negligence we are speaking about
and the things about negligence which caused harm to humans (instead of e.g. just financial harm) is that
a) you can't opt out of responsibility, it doesn't matter what you put into your TOS or other contracts
b) executives which are found responsible for the negligent action of a company can be hold _personally_ liable
and independent of what X actually did Musk as highest level executive personal did
1) frequently did statements that imply gross negligence (to be clear that isn't necessary how X acted, which is the actual relevant part)
2) claimed that all major engineering decisions etc. are from him and no one else (because he love bragging about how good of an engineer he is)
This means summoning him for questioning is legally speaking a must have independent of weather you expect him to show up or not. And he probably should take it serious, even if that just means he also could send a different higher level executive from X instead.
Good and honestly it’s high time. There used to be a time when we could give corps the benefit of the doubt but that time is clearly over. Beyond the CSAM, X is a cesspool of misinformation and generally the worst examples of humanity.
I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non consensual sexual imagery, including of minors. And, when notified, doing nothing about it.
If anything it should be an embarrassment that France are the only ones doing this.
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
> If anything it should be an embarrassment that France are the only ones doing this.
As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
* "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo
* locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
Making/distributing a photo of a non-consenting bikini-wearer is no more illegal when originated by computer in bedroom than done by camera on public beach.
The part of X’s reaction to their own publishing I’m most looking forward to seeing in slow-motion in the courts and press was their attempt at agency laundering by having their LLM generate an apology in first-person.
I think the grok incident/s were distasteful but I can't honestly think of a reason to ban grok and not any other AI product or even photoshop.
I barely use it these days and think adding it to twitter is pretty meh but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than grok e.g. prune-juice drinking sandal wearers in britain (many of whom are now government backbenchers) absolutely despise twitter and want to ban it ever since their team lost control. Similar vibe across the rest of europe.
They have (astutely, if they realise it at least) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost definitely plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment but it is genuinely reliant on twitter. The centre-left is basically dead so it doesn't get the same value from bluesky / their parts of twitter.
To people claiming a physical raid is pointless from the point of gathering data :
- you are thinking about a company doing good things the right way. You are thinking about a company abiding by the law, storing data on its own server, having good practices, etc.
The moment a company starts to do dubious stuff then good practices start to go out the window. People write email with cryptic analogies, people start deleting emails, ... then as the circumvention become more numerous and complex, there needs to still be a trail in order to remain understandable. That trail will be in written form somehow and that must be hidden. It might be paper, it might be shadow IT but the point is that if you are not just forgetting to keep track of coffee pods at the social corner, you will leave traces.
So yes, raids do make sense BECAUSE it's about recurring complex activities that are just too hard to keep in the mind of one single individual over long periods of time.
It's also just very basic police work. We're investigating this company, we think they've committed a crime. Ok, why do you think that. Well they've very publicly and obviously committed a crime. Ok, are you going to prosecute them? Probably. Have you gone to their offices and gathered evidence? No thanks.
Of course they're going to raid their offices! They're investigating a crime! It would be quite literally insane if they tried to prosecute them for a crime and how up to court having not even attempted basic steps to gather evidence!
All because "AI nudes"? Seems heavy-handed, almost like the controversy over naughty images has received a state-sponsored outrage boost for other reasons.
"Shocking Grok images"... really? It's AI. We know AI can make any image. The images are nothing but fake digital paintings that lose all integrity as quickly as they're generated.
Beyond comedic kicks for teenage boys, they're inconsequential for everyone else. But nevermind that, hand me a pitchfork and pre-fabricated sign and point me to the nearest anti-Grok protest.
It has always been illegal and morally reprehensible to create, own, distribute or store sexually explicit material that represents a real person without their consent, regardless if they are underage or not.
Grok is a platform that is enabling this en masse. If xAI can't bring in guardrails or limit who can access these capabilities, then they deserve what's coming to them.
I think you are going a bit too far.
Let's start from the beginning, create and own:
You're sketching out some nude fanart on a piece of paper. You created that and own that. Thas has always been illegal?!
(This is apart from my feelings on Mechahitler/Grok, which aren't positive.)
You can _almost_ do anything you want in the privacy of your home; but in this case Twitter was actively and directly disseminating pictures publicly on their platform.
And profiting from it, though less directly than "$ for illegal images". Even if it wasn't behind a paywall (which it mostly is) driving more traffic for more ads for more income is still profiting from illegal imagery.
> You're sketching out some nude fanart on a piece of paper.
Is twitter a piece of paper in your desk? No, it's not.
In which broken society do you live where this is true? I would say drawing sexually explicit pictures of real persons without their consent and keeping them in your drawer is neither illegal nor morally reprehensible in most of the world.
I am with you on publishing these...
At my kids school the children have been using grok to create pics of other children without clothes on - chatgpt etc won’t let you do that - grok needs some controls and x seem unable to do that themselves.
What would raiding the office achieve in this case apart from just showing off power.
In such a case specifically: Uncover internal communication that shows the company was aware of the problem and ignored it, which presumably affects liability a lot.
I wonder what they will find. They seemed to have acknowledged working on the problem before.
https://x.com/elonmusk/status/2011432649353511350
This is the cyber crime unit. They will exfiltrate any data they want. They will use employee account to pivot into the rest of the X network. They don't just go in, grab a couple of papers, laptops and phones. They hook into the network and begin cracking.
Why are you defending X here?
It sounds like they are following due process.
Normally getting raided by the police causes people and organizations to change their behavior.
Enforcing the law usually is an inhibitor for criminals
But, isn't that bad for the criminals?
How about you come back when your daughter has a fake AI nude passed around school.
Is that what it would have taken for you to support the Patriot Act?
So, when they were doing it for the last 3 decades in Photoshop (I was in high-school and this already existed) you would be just fine with the tool being used to do it and with the boys and the school?
Is that your argument? Did you ever expect the government to go after Adobe for "enabling" this?
Not the same - the barrier to entry was too high. Most people don't have the skills to edit photos using Photoshop. Grok enabled this to happen to scale for users who are complete non techies. With grok, anyone who could type in a half-coherent sentence in English could generate and disseminate these images.
Edit: clarified the last sentence
Sorry, but barrier to entry doesn't seem like a very good legal excuse. Goes in the same direction as NY attempts to ban 3D printing because - supposedly - it enables people to more easily make guns.
This is a political action by the French... slowly loosing their relevance, even inside the EU. Nothing else.
I see what you’re getting at. You’re trying to draw a moral equivalence between photoshop and grok. Where that falls flat for me is the distribution aspect: photoshop would not also publish and broadcast the illegal material.
But police don’t care about moral equivalence. They care about the law. For the legal details we would need to consult French law. But I assume it is illegal to create and distribute the images. Heck, it’s also probably against Twitter’s TOS too so by all rights the grok account should be banned.
> This is a political action by the French
Maybe. They probably don’t like a foreign company coming in, violating their children, and getting away with it. But what Twitter did was so far out of line that I’d be shocked if French companies weren’t treated the same way.
> But I assume it is illegal to create and distribute the images.
I very much so expect it to be illegal to distribute the images, of course (creating them, not so much).
But the illegality, in a sane world (and until 5 minutes ago) used to be attached to the person actually distributing them. If some student distributes fake sexualized images of their colleague, I very much expect the perpetrator to be punished by the law (and by the school, since we are at it).
Creating, possessing, and distributing CSAM is illegal in the US and many other countries. Can you explain why you think it should be legal to create something that is illegal to possess or distribute?
In your hypothetical scenario, why aren't the school kids making and distributing fake nudes of his daughter be the ones getting in trouble?
Have we a outsourced all accountability for the crimes of humans to AI now?
It's not hypothetical. And in fact the girl who was being targeted was expelled not the boys who did it [1].
Those boys absolutely should be held accountable. But I also don't think that Grok should be able to quickly and easily generate fake revenge porn for minors.
[1] https://www.nbcnewyork.com/news/national-international/girl-...
>And in fact the girl who was being targeted was expelled not the boys who did it [1].
And the AI is at fault for this sentencing, not the school authorities/prosecutors/judges dishing justice? WTF.
How is this an AI problem and not a legal system problem?
You can’t “undo” a school shooting, for instance, so we tend to have gun laws.
You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…
Yes, we should have some protections or restrictions on what you can do.
You may not understand it, either because you aren’t a parent or maybe just not emotionally equipped to understand how serious this actually can be, but your lack of comprehension does not render it a non-issue.
Having schools play whack-a-mole after the photos are shared around is not a valid strategy. Never mind that schools primarily engage in teaching, not in investigation.
As AI-generated content gets less and less distinguishable from reality, these incidents will have far worse consequences and putting such power in the hands of adolescents who demonstrably don’t have sound judgment (hence why they lack many other rights that adults have) is not something most parents are comfortable with - and I doubt you’ll find many teachers, psychiatrists and so on who would support your approach either.
>You can’t just “undo” some girl being harassed by AI generated nude photos of her, so we…
No, but if you send those people who made and distributed the AI nude of her to jail, these problems will virtually disappear overnight, because going to jail is a hugely effective deterrent for most people.
But if you don't directly prosecute the people doing it, and instead just ban Grok AI, then those people will just use other AI tools, outside of US jurisdiction, to do the same things and the problem persists.
And the issues keeps persisting, because nobody ever goes to jail. Everyone only gets a slap on the wrist, deflects accountability by blaming the AI, so the issue keeps persisting and more people end up getting hurt because those who do the evil are never held directly accountable.
Obviously Grok shouldn't be legally allowed to generate fakes nudes of actual kids, but in case such safeguards can and will be bypassed, that doesn't absolve the humans from being the ones knowingly breaking the law to achieve a nefarious goal.
That’s just not how the world works.
Youths lack judgment, so they can’t vote, drink, drive, have sex or consent to adults.
A 14-year-old can’t be relied to understand the consequences of making nudes of some girl.
Beyond that, we regulate guns, speed limits and more according to principles like “your right to swing your fist ends at my nose”.
We do that not only because shoving kids into jails is something we want to avoid, but because regulating at the source of the problem is both more feasible AND heads off a lot of tragedy.
And again, you fail to acknowledge the investigative burden you put on society to discover who originated the photo after the fact, and the trauma to the victim.
If none of that computes for you, then I don’t know what to say except I don’t place the right to generate saucy images highly enough to swarm my already overworked police with requests to investigate who generated fake underage porn.
>A 14-year-old can’t be relied to understand the consequences of making nudes of some girl.
Teenagers do stupid shit all the time. But they still get prosecuted or convicted when they do crimes. They go to juvy or their parents get punished. Being 14 is not a get out of jail free card.
The way you are arguing makes it really hard to understand what you are trying to say. I am guessing you are upset that non-human entity is being used as a boogie man while the actual people are going free? But your argumentation reads like someone who is very upset at AI producing CSAM is being persecuted. I won’t be surprised if people think you are defending CSAM.
In good faith, a few things - AI generated imagery and Photoshop are not the same. If someone can mail Adobe and a photo of a kid and ask for a modified one and Adobe sent it back, yes Adobe’s offices will be raided. That’s the equivalent here. It’s not a tool. It’s a service. You keep using AI, without taking a moment to give the “intelligence” any thought.
Yes, powerful people are always going to get by, as you say. And the laws & judicial system are for the masses. There is definitely unfairness in it. But that doesn’t change anything here - this is a separate conversation.
If not Grok then someone else will do it - is a defeatist argument that can only mean it can’t be controlled so don’t bother. This point is where you come across as a CSAM defender. Govt’s will/should do whatever they can to make society safe, even if it means playing whack a mole. Arguing that’s “not efficient” is frankly confusing. Judicial system is about fairness and not efficiency.
frankly, I think you understand all of this and maybe got tunnel visioned in your anger at the unfairness of people scapegoating technology for its failings. That’s the last thing I want to point out, raiding an office is taking action against the powerful people who build systems without accountability. They are not going to sit the model down and give a talking to. The intention is to identify the responsible party that allows this to happen.
You cannot offload all problems to the legal system. It does not have the capacity. Legal issues take time to resolve and the victims have to have the necessary resource to pursue legal action. Grok enabled abuse at scale, which no legal system in the world can keep up with. It doesn't need explanation that generating nudes of people without their consent is a form of abuse. And if the legal system cannot keep up with protecting victims, the problem has to be dealt with at source.
>You cannot offload all problems to the legal system. It does not have the capacity.
You definitely can. You don't have to prosecute and send a million people to jail for making and distributing fake AI nudes, you just have to send a couple, and then the problem virtually goes away.
People underestimate how effective direct personal accountability is when it comes with harsh consequences like jail time. That's how you fix all issues in society and enforce law abiding behavior. You make the cost of the crime greater than the gains from it, then crucify some people in public to set an example for everyone else.
Do people like doing and paying their taxes? No, but they do it anyway. Why is that? Because THEY KNOW that otherwise they go to jail. Obviously the IRS and legal system don't have the capacity to send the whole country to jail if they were to stop paying taxes, but they send enough to jail in order for the majority of the population to not risk it and follow the law.
It's really that simple.
None of what you've said is true. Deterrence is known to have a very limited effect on behaviour.
In this case, it's far simpler to prosecute the source.
>None of what you've said is true.
Everything I said is true.
>Deterrence is known to have a very limited effect on behaviour.
It is insanely effective when actually enforced. It's not effective when the goal is to make it seem ineffective so that people can evade the system.
>In this case, it's far simpler to prosecute the source.
The "source" is a tool that tomorrow can be in Russia or CHina and you can't prosecute.
You know there is no such thing as the world police or something of that sort.
If the perpetrator is in another country / jurisdiction it is virtually impossible to prosecute let alone sentence.
It is 100% regulatory problem in this case. You just cannot allow this content to be generated and distributed in the public domain by anonymous users. It has nothing to do with free speech but with civility and common understanding of what is morally wrong / right.
Obviously you cannot prevent this in private forums unless it is made illegal which is a completely different problem that requires a very different solution.
Have you considered that it is possible for two things to be problems?
No, because the comment is in bad faith, it just introduced an unrelated issue (poor sentencing from authorities) as an argument for the initial issue we are discussing (AI nudes), derailing the conversation, and then using the new issue they themselves introduced to legitimize their poor argument when one has nothing to do with the other and both can be good/bad independently of each other.
I don't accept this as good faith argumentation nor does HN rules.
You are the only one commenting in bad faith, by refusing to understand/acknowledging that the people using Grok to create such pictures AND Grok are both part of the issue. It should not be possible to create nudes of minors via Grok. Full stop.
>You are the only one commenting in bad faith
For disagreeing on the injection of offtopic hypothetical scenarios as an argument derailing the main topic?
>It should not be possible to create nudes of minors via Grok.
I agree with THIS part, I don't agree with the part where the main blame is on the AI, instead of on the people using it. That's not a bad faith argument, it's just My PoV.
If Grok disappears tomorrow, there will be other AIs from other parts of the world outside of US/EU jurisdiction, that will do the same since the cat is out of the bag and the technical barrier to entry is dropping fast.
Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?
> Do you keep trying to whack-a-mole the AI tools for this, or the humans actually making and distributing fake nudes of real people?
Both, obviously. For example, you go after drug distributors and drug producers. Both approaches are effective in different ways, I am not sure why you are having such trouble understanding this.
This is textbook whataboutery. The law is perfectly clear on this, and Musk is liable.
Other AIs have guardrails. If Musk chooses not to implement them, that's his personal irresponsibility.
Grok made the pictures.
The school authorities messed up.
Both are accuntable.
>Grok made the pictures.
Correction: kids made the pictures. Using Grok as the tool.
If kids were to "git gud" at photoshop and use that to make nudes, would you arrest Adobe?
In the spirit of shitty "If's ..."
If kids ask a newspaper vendor for cigarettes and he provides them .. that's a no-no.
If kids ask a newspaper vendor for nudes and he provides them .. that's a no-no.
If kids ask Grok for CSAM and it provides them .. then ?
The existence and creation of cigarettes and adult nude magazines is fully legal, only their sale is illegal to kids. If kids try to illegally obtain those LEGAL items, it doesn't make the existence of those items illegal, just the act of sale to them.
Meanwhile, the existence/creation CSAM of actual people isn't legal, for anyone no matter the age.
And when the magazines get sold who is breaking the law and gets convicted it is not the children but the shop supplying the children.
So when Grok provides the illegal pictures then by the same logic it is Grok that is breaking the law.
Grok created those images.
If parents or school let children play with explosives or do drugs and they get hurt, that's a no-no.
If parents or school let children roam the internet unsupervised... then?
> If parents or school let children play with explosives or do drugs
The explosive sellers that provide explosives to someone without a certification (child or adult) get in trouble (in this part of the world) .. regardless of whether someone gets hurt (although that's an upscale).
If sellers provide ExPo to certified parents and children get access .. that's on the parents.
In that analagy of yours, if grok provided ExPo or CSAM to children .. that's a grok problem,
(Ditto drugs).
It's on the provider to children. ie Grok.
If MechaGrok sells explosives to children, that's a go-go?
You're suggesting an inconsistency where there isn't one. A country can ban guns and allow rope, even though both can kill.
> A country can ban guns and allow rope, even though both can kill.
That's actually a good argument. And that's how the UK ending up banning not just guns, but all sorts of swords, machetes and knives, meanwhile the violent crime rates have not dropped.
So maybe dangerous knives are not the problem, but the people using them to kill other people. So then where do we draw the line between lethal weapons and crime correlation. At which cutting/shooting instruments?
Same with software tools, that keep getting more powerful with time lowering the bar to entry for generating nudes of people. Where do we draw the line on which tools are responsible for that instead of the humans using them for it?
You’re absolutely right that it is a difficult question where to draw the line. Different countries will do it differently according to their devotion to individual freedoms vs communal welfare.
The knife (as opposed to sword) example is interesting. In the U.K. you’re not allowed to sell them to children. We recognise that there is individual responsibility at play, and children might not be responsible enough to buy them, given the possible harms. Does this totally solve their use in violent crime? No. But if your alternative is “it’s up to the individuals to be responsible”, well, that clearly doesn’t work, because some people are not responsible. At a certain point, if your job is to reduce harm in the population, you look for where you can have a greater impact than just hoping every individual follows the law, because they clearly don’t. And you try things even if they don’t totally solve the problem.
And indeed, the same problem in software.
As for the violent crime rates in the U.K., I don’t have those stats to hand. But murder is at a 50 year low. And since our post-Dunblane gun laws, we haven’t had any school shootings. Most Britons are happy with that bargain.
> meanwhile the violent crime rates have not dropped.
The rate of school shootings has dropped from one (before the implementation of recommendations from the Cullen report) to zero (subsequently). Zero in 29 years - success by any measure.
If you choose to look at _other_ types of violent crime, why would banning handguns have any effect?
> Where do we draw the line on which tools are responsible for that instead of the humans using them for it?
You can ban tools which enable bad outcomes without sufficient upside, while also holding the people who use them to account.
"Correction: kids made the pictures. Using Grok as the tool."
No. That is not how AI nowdays works. Kids told the tool what they want and the tool understood and could have refused like all the other models - but instead it delivered. And it only could do so because it was specifically trained for that.
"If kids were to "git gud" at photoshop "
And what is that supposed to mean?
Adobe makes general purpose tools as far as I know.
You're beating it around the bush not answering the main question.
Anyone skilled at photoshop can do fake nudes as good or even better than AI, including kids (we used it to make fun fakes of teachers in embarrassing situations back in the mid 00s and distribute them via MSN messenger), so then why is only the AI tool the one to blame for what the users do, but not Photoshop if both tools can be used to do the same thing?
People can now 3D print guns at home, or at least parts that when assembled can make a functioning firearm. Are now 3D printer makers to blame if someone gets killed with a 3D printed gun?
Where do we draw the line at tools in terms of effort required, between when the tool bares the responsibility and not just the human using the tool to do illegal things? This is the answer I'm looking for and I don't think there is an easy one, yet people here are too quick to pin blame based on their emotional responses and subjective biases and word views on the matter and the parties involved.
> Anyone skilled at photoshop
So let's say there are two ways to do something illegal. The first requires skills from the perpetrator, is tricky to regulate, and is generally speaking not a widespread issue in practice. The second way is a no brainer even for young children to use, is easy to regulate, and is becoming a huge issue in practice. Then it makes sense to regulate only the second.
> People can now 3D print guns at home, or at least parts that when assembled can make a functioning firearm. Are now 3D printer makers to blame if someone gets killed with a 3D printed gun?
Tricky question, but a more accurate comparison would be with a company that runs a service to 3D print guns (= generating the image) and shoot with them in the street (= publishing on X) automatically for you and keeps accepting illegal requests while the competitors have no issue blocking them.
> Where do we draw the line at tools in terms of effort required, between when the tool bares the responsibility and not just the human using the tool to do illegal things?
That's also a tricky question, but generally you don't really need to know precisely where to draw the line. It suffices to know that something is definitely on the wrong side of the line, like X here.
A 3D printer needs a blueprint. AI has all the blueprints built-in. It can generalize, so the blueprints cannot simply be erased, however at least what we can do is forbid generation of adult content. Harm should be limited. Photoshop requires skill and manual work, that's the difference. In the end, yes, people are the ones who are responsible for their actions. We shouldn't let kids (or anyone else) harm others with little to no effort. Let's be reasonable.
This happens all the time with abusive children in schools, they're rarely punished at all even with extreme abuse and violence.
Punishing kids after the fact does not stop the damage from occurring. Nothing can stop the damage that has already occurred, but if you stop the source of the nudes, you can stop future damage from occurring to even more girls.
>Punishing kids after the fact does not stop the damage from occurring.
Banning AI doesn't stop the damage from occurring. Bullies at school/college have been harassing their victims, often to suicide for decades/centuries before AI.
I'm sorry, did the article or anyone in this subthread suggest banning AI? That seems like quite a non-sequitur. I'm pretty sure the idea is to put a content filter on an online platform for one very specific kind of already-illegal content (modified nude images of real people, especially children), which is a far cry from a ban. Nothing can stop local diffusion or Photoshop, of course, but the hardware and technical barriers are so much higher that curtailing Grok would probably cut off 99% or more of the problem material. I suppose you'll tell me if any solution is not 100% effective we should do nothing and embrace anarchy?
Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.
You are defending child pornography en mass and for profit? Is it a new low for HN?
Y Combinator supports doing anything that makes money
I'm not defending CP, WTF is wrong with you? You're just hallucinating/making stuff up in bad faith.
Child sexual abuse material is literally in the training sets. Saying "banning AI" as though it's all the same thing, and all morally-neutral, is disingenuous. (Yes, a system with both nudity and children in its dataset might still be able to produce such images – and there are important discussions to be had about that – but giving xAI the benefit of equivocation here is an act of malice.)
Nobody wants to ban AI they want to regulate it. Which is what we do with all new technology.
To paraphrase "your tech bros were so preoccupied with whether or not they could they never considered if they should"
They may well get in trouble, but in that takes time, in the meantime photos will have been seen by most kids in school + you might get a year of bullying.
Education might be so disrupted you have to change schools.
This is accountability for the crimes of humans.
The crime is creating a system that lets schoolboys create fake nudes of other minors.
You don't just get to build a CSAM-generator and then be like "well I never intended for it to be used...".
The humans running a company are liable for the product that their company builds, easy as that.
>The crime is creating a system that lets schoolboys create fake nudes of other minors.
So like Photoshop? Do you want to raid Adobe's HQ?
Does Photoshop have a "let me jerk off to this minor" button?
Why do you want to jerk off to a minor? Sounds like you should get a visit from the police for asking a tool to do that for you.
If I ask you to go kill someone and you do it, in the eyes of the law I am just as guilty as you even though I never actually touched the person.
If you ask for CP, you're still just as guilty even if you're not the one making it.
I don't want to.
But I don't want others to be easily able to either.
In your scenario, yes, you are guilty as well. But so is the one that actually did the deed, i.e. Grok in this case.
You're arguing my point for me. Just because you do something for someone else doesn't mean you're absolved of responsibility.
You can't build a tool with a "create child porn" button and then expect not to get into trouble for helping people make child porn.
I really find this kind of appeal quite odious. God forbid that we expect fathers to have empathy for their sons, sisters, brothers, spouses, mothers, fathers, uncles, aunts, etc. or dare we hope that they might have empathy for friends or even strangers? It's like an appeal to hypocrisy or something. Sure, I know such people exist but it feels like throwing so many people under the bus just to (probably fail) to convince someone of something by appealing to an emotional overprotectiveness of fathers to daughters.
You should want to protect all of the people in your life from such a thing or nobody.
So making CSAM of real people is ok if an AI is involved?
You're defending X/Grok as if it's a public social platform.
It is a privately controlled public-facing group chat. Being a chat-medium does not grant you the same rights as being a person. France isn't America.
If a company operates to the detriment and against the values of a nation, e.g. not paying their taxes or littering in the environment, the nation will ask them to change their behavior.
If there is a conspiracy of contempt, at some point things escalate.
I'm in the same boat. We have literally pedos and child abusers in the epstein files talking openly about doing despicable things to women, kids and even babies, while authorities are focused on criminalizing generating images of fake minors that don't exist and that any other LLM platform can do if asked.
Plus, how do you even judge the age of AI generated fake people to say it's CP? Reminds me when UK activists were claiming Grok's anime girl avatar was a minor and deserved to be considered CP, when she had massive tits that no kid has. So how much of this is just a political witch-hunt looking for any reason to justify itself?
You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?
Also, it seems pretty likely that Musk is tangled up with the Epstein shit. First Musk claimed he turned down offer to go to the island. Now it turns out Musk repeatedly sought to visit, including wanting to know when the "wildest" party was happening, after Epstein was already known as a child sex abuser. Musk claimed that Epstein had never been given a tour of SpaceX but it turns out he did in 2013. It's the classic narcissistic "lie for as long as possible" behaviour. Will be interesting to see what happens as more is revealed.
>You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?
No i said no such thing, what I said was that the resources of authorities is a finite pie. If most of it goes towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for the grave crimes like actual child abuser that actually hurt real people.
Same how police won't bother with your stolen phone/bike because they have bigger crimes to catch. I'm asking for the same logic be applied here.
There's no indication that this investigation would draw resources away from investigating the Epstein files. It's happening in France, for starters, whilst the vast majority of Epstein's crimes appear to have happened in US territories. Speaking about "the authorities" as if they're a unified global entity sounds a little conspiratorial.
> If most of it goes towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for the grave crimes like actual child abuser that actually hurt real people.
1.) That is not how it works, even if we ignore the fact that France is not USA.
2.) Lack of resources was not the issue with Epstein prosecution. The prosecutor was literally told to not investigate by her superiors who were trying to stop the case. She was told she is unsubordinated for doing it. Acosta giving Epstein sweetheart deal or seeking to stop the prosecutor is not the resources issue.
It is billionaires (Thiel, Musk, Gates), politicians (Clinton, Luthnic ) media darlings (Summers, Kraus and the rest of sexism is totally not a thing anymore crowd literally partying with Epstein) are to be protected at all cost issue. Even now, people implicated in Epstein files are still getting influential positions with explicit "it would be cancel culture to not give these people more influence" argument.
I think the reasoning is that the AI contributes to more epsteins. In some way.
That isn't reasoning, it's wild speculation
I seem to remember there was research behind this, but I'm not sure.
How?
THat's like the 1993 moral panic that video games like Doom cause mass shootings, or the 1980's mass panic that metal music causes satanist, or the 1950s moral panic that superhero comic book violence leads to juvenile delinquency. Politicians are constantly looking for an external made up enemy to divert attention to from the real problems.
People like Epstein and mass woman/child exploitation have existed for thousands of years in the past, and will exist thousands of years in the future. It's part of the nature of the rich and powerful to execute on their deranged fetishes, it's been documented in writing since at least the Roman and Ottoman empires.
Hell, I can guarantee you there's other Epsteins operating in the wild right now, that we haven't heard of (yet), it's not like he was in any way unique. I can also guarantee you that 1 in 5-10 normal looking people you meet daily on the street have similar deranged desires as the guests on Epstein's island but can't execute on them because they're not as rich and influential to get away with it, but they'd do it if they could.
> THat's like the 1993 moral panic that video games like Doom cause mass shootings,
Apart from doom wasn't producing illegal content.
the point is that grok is generating illegal content for those jurisdictions. In france you can't generate CSAM, in the UK you can't distribute CSAM. Those are actual laws with legal tests, none of them need to be of actual people, they just need to depict _children_ to be illegal.
Moral panics require new laws to enforce, generally. This is just enforcing already existing laws.
More over, had it been any other site, it would have been totally shut down by now and the servers impounded. Its only because musk is close to trump and rich that he's escaped the fate than you or I would have had if we'd done the same.
>Apart from doom wasn't producing illegal content.
Sure but where's the proof that Grok is actually producing illegal content? I searched for news sources, but they're just all parroting empty accusations not concrete documented cases.
See https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Note that IWF is not a random charity it works with the Police on these matters.
I found this as the first item in Kagi search - perhaps you should try non AI searches
> but they're just all parroting empty accusations not concrete documented cases.
In the UK it is illegal to create, distribute and store CSAM. A news site printing a photo CSAM would make them legally up the shitter.
However, the IWF, who are tasked with detecting this stuff have claimed to have found evidence of it, along with multiple other sources, Ofcom who are nominally supposed to police this have an open investigation, so do the irish police.
The point is, law has a higher threshold of proof than news, which takes time. If there is enough evidence, then a court case (or other instrument) will be invoked.
Another line of reasoning is that with more fake CP it is more difficult to research the real CP hunt down the perpetrators and consequently save children.
Oh yeah, because the main reason why EPstein and his guests got away with it for so long, is because there was so much low hanging CP out there confusing authorities and prosecutors, not because of the corruption, cronyism and political protection they enjoyed at the highest levels of government.
Do you guys even hear yourselves?
But how about the "1 in 5-10 normal looking people you meet daily on the street have similar deranged desires as the guests on Epstein's island but can't execute on them because they're not as rich and influential to get away with it, but they'd do it if they could."
Some of those might still try.
>Some of those might still try.
And what does AI have to do with this? Haven't child predators existed before AI?
Where's the proof that AI produces more child predators?
You're just going in circles without any arguments.
It has to do with AI because:
> Another line of reasoning is that with more fake CP it is more difficult to research the real CP hunt down the perpetrators and consequently save children.
(own quote)
Yes, the predators existed before AI, but also:
> I think the reasoning is that the AI contributes to more offenders (edited).
(own quote, edited)
To be clear, I don't think this line of reasoning is entirely convincing, but apparently some people do.
No, 20% of population is not seeking to abuse children nor teens. If you think so, you are moving in weird circles. In fact, what we also have are people who noped out of Epstein circle or even openly criticized it for years.
Also, framing the issue of sexual abuse by untouchable issue as the same as superhero comic issue (which itself was not just about superhero comic and you should know it) is spectacularly bad faith.
Yes, there were always people who were stealing, abusing, murdering for own gain and fun. That is not an argument for why we should accept and support it as normalized state of world. It is a good reason to prevent people from becoming too powerful and for building accountable institutions able to catch and punish them.
The UK is also opening investigations into the Esptein stuff.
https://www.reuters.com/world/uk/starmers-government-aids-po...
Unlike the US administration which seems to be fine with what epstein and X are doing
Is the UK investigating them how they investigated Prince Andrew and the infamous grooming gangs?
I have never seen someone put so much effort to defend child pornography.
What's this comment about? Do you think no other CSAM distribution should be investigated until the stuff in Epstein files is sorted?
Except Starmer is making sure that the "investigation" is hobbled - anything seemed important to "national security" will be excluded!
The UK's "investigation" is a farce.
The same guy responsible for creating child porn that you are defending is also in the Epstein's list. Also, don't abbreviate child pornography, it shows you have a side on this
"Grok" is part of the Epstein network, connected through Elon Musk.
No need to be coy the raid exists because it's a way to punish the company without proving anything. They have zero intention of getting even the slightest bit of valuable data related to Grok from this.
What's your evidence?
Good one.
No platform ever should allow CSAM content.
And the fact that they didn’t even care and haven’t want to spend money for implementing guardrails or moderation is deeply concerning.
This has imho nothing to do with model censorship, but everything with allowing that kind of content on a platform
Making Silicon Valley the judge, jury and executioner of pedos seems, at best, a dereliction of duty by the real authorities, and at worst, a very dark and dystopian path to opaque and corruptible justice.
X should identify those users who are abusing its tools to make CSAM (and I suspect those users are mostly leftie Musk-haters trying to create vexatious evidence against X), and then X should pass that information to the authorities so the proper legal process can be followed.
> No platform ever should allow CSAM content.
> And the fact that they didn’t even care and haven’t want to spend money for implementing guardrails or moderation is deeply concerning.
In the 90s, the principal of a prominent school in my city was arrested for CSAM on his computer downloaded from the Internet.
As the story made the news most people were trying to wrap their head around this "Internet" thing and how it could produce CSAM material. Remember, in the 90s the "Internet" was a bit like quantum computing for most people, hard to understand how it works and only a few actually played with it.
I have no idea how that school principal downloaded the CSAM. UUCP, FTP, Usenet or maybe the brand new "World Wide Web"? But I guess the justice system had to figure out how that stuff works to prosecute him.
So the society and the state knew for at least 30 years the Internet is full of that stuff. The question is why are they so motivated to do something about it only now?
Could it be because the "web of rich and powerful pedos" is getting exposed through the Epstein affair in the last few years?
So maybe they need to pretend to crack down on the "web of poor pedos"?
Enforcement of anti-CSAM law has been a significant thing for a long time. It's in no way "only now". Even the "free speech" platforms banned it because they knew they would get raided otherwise. There are long standing tools for dealing with it, such as a database of known hashes of material. There's even a little box you can tick in Cloudflare to automatically check outgoing material from your own site against that database - because this is a strict liability offence, and you are liable if other people upload it to you where it can be re-downloaded.
What's new is that X automated the production of obscene or sexualised images by providing grok. This was also done in a way that confronted everyone; it's very different from a black market, this is basically a harassment tool for use against women and girls.
> What's new is that X automated the production of obscene or sexualised images by providing grok.
Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.
So let me make a suggestion: maybe France or the EU should ban its citizen from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc. ?
Hit them hard at the money level... it wouldn't be more authoritarian than something like ChatControl or restricting access to VPNs.
And actually all the mechanisms are already in place to implement something like that.
> Yes we are now dealing with an automated Photoshop. And somehow the people in charge have decided to do something about it, probably more for political or maybe darker reasons.
I don't get what's difficult to understand or believe here. Grok causes a big issue in practice right now, a larger issue than photoshop, and it should be easy for X to regulate it themselves like the competition does but they don't, so the state intervenes.
> maybe France or the EU should ban its citizen from investing in the upcoming SpaceX/xAI IPO, and also Microsoft, NVIDIA, OpenAI, Google, Meta, Adobe, etc. ?
You're basically asking "why do a surgical strike when you can do carpet bombing"? A surgical strike is used to target the actual problem. With carpet bombing you mostly cause collateral damage.
LOL. The amount of stuff that was on Twitter before Elon bought it, or that is on BlueSky or Mastodon.
The different factors are scale (now "deepfakes" can be automatically produced), and endorsement. It is significant that all these images aren't being posted by random users, they are appearing under the company's @grok handle. Therefore they are speech by X, so it's X that's getting raided.
There is no content like that on Bluesky nor Mastadon. Show the evidence
I remember when CSAM meant actual children not computer graphics.
Should platforms allow violent AI images? How about "R-Rated" violence like we see in popular movies? Point blank executions, brutal and bloody conflict involving depictions of innocent deaths, torment and suffering... all good? Hollywood says all good, how about you? How far do you take your "unacceptable content" guidance?
There are multiple valid reasons to fight realistic computer-generated CSAM content.
Uncontrolled profileration of AI-CSAM makes detection of "genuine" data much harder, prosecution of perpetrators more difficult and specifically in many of the grok cases it harms young victims that were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
> Uncontrolled profileration of AI-CSAM makes detection of "genuine" data much harder
I don't follow. If the prosecutor can't find evidence of a crime and a person is not charged, that is considered harmful? As such the 5th amendment would fall under the same category and so would encryption. Making law enforcement have to work harder to find evidence of a crime cannot be criminalized unless you can come up with a reason why the actions themselves deserve to be criminalized.
> specifically in many of the grok cases it harms young victims that were used as templates for the material.
What is the criteria for this? If something is suitably transformed such that the original model for it is not discernable or identifiable, how can it harm them?
Do not take these as an argument against the idea you are arguing for, but as rebuttals against arguments that are not convincing, or if they were, would be terrible if applied generally.
If there is a glut of legal, AI generated CSAM material then this provides a lot of deniability for criminal creators/spreaders that cause genuine harm, and reduces "vigilance" of prosecutors, too ("it's probably just AI generated anyway...").
You could make a multitude of arguments against that perspective, but at least there is a conclusive reason for legal restrictions.
> What is the criteria for this?
My criteria would be victims suffering personally from the generated material.
The "no harm" argument only really applies if victims and their social bubble never find out about the material (but that did happen, sometimes intentionally, in many cases).
You could make the same argument that a hidden camera in a locker room never causes any harm as long as it stays undetected; that is not very convincing to me.
> How about "R-Rated" violence like we see in popular movies?
Movie ratings are a good example of a system for restricting who sees unacceptable content, yes.
> I remember when CSAM meant actual children not computer graphics.
The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.
Ok, imagine your mom, sister or daughter is using X. Some random guy with an anime profile picture and a neonazi bio comes in, asks grok to make a picture of them in bikini for the whole world to see, and the bot obliges. Do you see the issue now? Because that happened to literally millions of people last month.
What the hell?
As a father there shouldn’t be any CSAM content anywhere.
And think about that it is already proven these models apparently had CSAM content in their training data.
Also what about the nudes of actual people? That is invasion of privacy
I am shocked that we are even discussing this.
I disagree. Prosecute people that use the tools, not the tool makers if AI generated content is breaking the law.
A provider should have no responsibility how the tools are used. It is on users. This is a can of worms that should stay closed, because we all lose freedoms just because of couple of bad actors. AI and tool main job is to obey. We are hurling at "I'm sorry, Dave. I'm afraid I can't do that" future with breakneck speed.
I agree that users who break the law must be prosecuted. But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
We already apply this logic elsewhere. Car makers must include seatbelts. Pharma companies must ensure safety. Platforms must moderate illegal content. Responsibility is shared when the risk is systemic.
>But that doesn’t remove responsibility from tool providers when harm is predictable, scalable, and preventable by design.
Platforms moderating illegal content is exactly what we are arguing about, so you can't use it as an argument.
The rest cases you list are harms to the people using the tools/products. It is not harms that people using the tools inflict on third parties.
We are literally arguing about 3d printer control two topics downstream. 3d printers in theory can be used for CSAM too. So we should totally ban them - right? So are pencils, paper, lasers, drawing tablets.
You are literally trolling. No one is banning AI entirely. However AI shouldn't spit out adult content. Let's not enable people harm others easily with little to no effort.
That is not the argument. No one is arguing about banning open source LLMs that could potentially create problematic content on huggingface, but X provides not only an AI model, but a platform and distribution as well, so that is inherently different
No it is not. X is dumb pipe. You have humans on both ends. Arrest them, summary execute them whatever. You go after X because it is a choke point and easy.
First you argue about the model, now the platform. Two different things.
If a platform encourages and doesn’t moderate at all, yes we should go after the platform.
Imagine a newspaper publishing content like that, and saying they are not responsible for their journalists
But how would we bring down our political boogieman Elon Musk if we take that approach?
Everything I read from X's competitors in the media tells me to hate X, and hate Elon.
If we prosecute people not tools, how are we going to stop X from hurting the commercial interests of our favourite establishment politicians and legacy media?
How? X is hostile to any party attempting to bring justice to its users that are breaking the law. This is a last recourse, after X and its owner stated plainly that they don't see anything wrong with generating CSAM or pornographic images of non-consenting people, and that they won't do anything about it.
Court order, ip of users, sue the users. It is not X job to bring justice.
X will not provide these informations to the French Justice System. What then? Also insane that you don't believe the company that built a "commit crime" button bears no responsibility whatsoever in this debacle.
Is there any evidence of CSAM being generated by Grok? Because I’ve never seen any and I use X every day.
Sure, I saw the bikini pics which I agree is weird and shouldn’t be allowed but it’s not CSAM under a legal definition.
Are you asking to be provided links to child porn?
One of the charges is "fraudulent data extraction by an organised group." That's going to affect the entire social media industry if applied broadly.
Frankly it sounds to me like a "show me the man and I'll show you the crime" kind of operation. France and the UK, and judging by yesterday's speech by the PM of Spain maybe the whole EU might be looking to do what China and Russia did earlier on and start cracking down on foreign social media by making it impossible to operate without total alignment with their vision and not just their (new) rules. Together with a push for local alternatives, that currently don't seem to be there, it may spell the end for a big chunk of the Global social network landscape.
I still believe that the EU and aligned countries would rather have America to agree to much tighter speech controls, digital ID, ToS-based speech codes as apparently US Democrats partly or totally agree to. But if they have workable alternatives they will deal with them from a different position.
Yes, if you don't follow EU laws prepare to not do business in Europe. Likewise, if you don't follow US laws I'd advise against trying to do business in USA.
If X/Twitter was to be banned in the EU, and some of its citizens still wanted to access X/Twitter, let us say for the sake of getting alternative points of view on politics and news, would it be a good or a bad thing if accessing X/Twitter by IP was stopped?
As in, a citizen of an EU country types x.com/CNN, because he or she wants to know the other side of some political issue between the EU and the USA, and he or she feels that the news in the EU might be biased or have misunderstood something. Would it be good or bad if the user was met with a "This website is by law not available within the EU"?
EU regulation isn't really there to be followed, it's there to extract cash from foreign companies.
There's a reason basically every large US tech company receives frequent fines by the EU.
> EU might be looking to do what China and Russia did earlier on and start cracking down on foreign social media
For some reason you forgot to mention "Like the US did with TikTok".
that was decades later, but yea I don't think for a second that was justifiable - not even considering that China had completely closed shop for America decades earlier and this was a 1-way openness relationship for a long time; they could have sold this as a reciprocity issue but they didn't
esp. when America already controls the main outlets through Android Play Store and Apple Store, and yep, they have proven to control them not just happen to host them as a country
arguably America did have valid security concerns with Huawei though, but if those are the rules then you cannot complain later on
It's worth pointing out that in France and the UK, the authorities involved are arms length independent of the political bodies - it's not like the US where if you give the President good vibes you can become head of the FBI, and all you have to do in return is whatever he says. There are statutory instruments (in France, constitutional clauses), that determine the independence of these authorities.
They are tasked - and held to account by respective legislative bodies - with implementing the law as written.
Nobody wrote a law saying "Go after Grok". There is however a law in most countries about the creation and dissemination of CSAM material and non-consensual pornography. Some of that law is relatively new (the UK only introduced some of these laws in recent years), but they all predate the current wave of AI investment.
Founders, boards of directors and their internal and external advisors could:
1. Read the law and make sure any tools they build comply
2. When told their tools don't comply take immediate and decisive action to change the tools
3. Work with law enforcement to apply the law as written
Those companies, if they find this too burdensome, have the choice of not operating in that market. By operating in that market, they both implicitly agree to the law, and are required to explicitly abide by it.
They can't then complain that the law is unfair (it's not), that it's being politicised (How? By whom? Show your working), and that this is all impossible in their home market where they are literally offering presents to the personal enrichment of the President on bended knee while he demands that ownership structures of foreign social media companies like TikTok are changed to meet the agenda of himself and his administration.
So, would the EU like more tighter speech controls? Yes, they'd like implementation of the controls on free speech enshrined in legislation created by democratically appointed representatives. The alternative - algorithms that create abusive content, of women and children in particular - are not wanted by the people of the UK, the EU, or most of the rest of the World, laws are written to that effect, and are then enforced by the authorities tasked with that enforcement.
This isn't "anti-democratic", it's literally democracy in action standing up to technocratic feudalism that is an Ayn Randian-wet dream being played out by some morons who got lucky.
> It's worth pointing out that in France and the UK, the authorities involved are arms length independent of the political bodies
As someone who has lived in (and followed current affairs) in both of these countries, this is a very idealistic and naïve view. There can be a big gap between theory and practice
> There are statutory instruments (in France, constitutional clauses), that determine the independence of these authorities.
> They are tasked - and held to account by respective legislative bodies -
It's worth nothing here that the UK doesn't have separation of powers or a supreme court (in the US sense)
i live in the UK and i completely agree with you and i believe that GP is "having a laugh" as we'd say over here
however it's a very mainstream point of view so i respect that he/she has laid it out pretty well, so i upvoted the comment
European courts have repeatedly said that in France the procureur (public prosecutor) isn’t an “independent judicial authority”.
The European Court of Human Rights has reminded this point (e.g. 29 Mar 2010, appl. no. 3394/03), and the Court of Justice of the European Union reaches a very similar conclusion (2 Mar 2021, C-746/18): prosecutors are part of the executive hierarchy and can’t be treated as the neutral, independent judicial check some procedures require.
For a local observer, this is made obvious by the fact that the procureur, in France, always follows current political vibes, usually in just a few months delay (extremely fast, when you consider how slowly justice works in the country).
> Prosecutors say they are now investigating whether X has broken the law across multiple areas.
This step could come before a police raid.
This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
> and no crime was prevented by harassing local workers.
Siezing records is usually a major step in an investigation. Its how you get evidence.
Sure it could just be harrasment, but this is also how normal police work looks. France has a reasonable judicial system so absent of other evidence i'm inclined to believe this was legit.
Well, there is evidence that this company made and distributed CSAM and pornographic deepfakes to make a profit. There is no evidence lacking there for the investigators.
So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
[0] https://arxiv.org/html/2601.03788v1
> Well, there is evidence that this company made and distributed CSAM
I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...
It wouldn't be called CSAM in France because it would be called a French word. Arguing definitions is arguing semantics. The point is, X did things that are illegal in France, no matter what you call them.
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
adobe must be shaking in their pants
Not really, they put a shit ton of effort in to make sure you can't create any kind of nude/suggestive pictures of anyone. I imagine they have strict controls on making images of children, but I don't feel inclined to find out.
I'm of two minds about this.
One the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.
On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.
The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.
It is very different. It is YOUR 3d printer, no one else is involved. You might print a knife and kill somebody with it, you go to jail, not third party involved.
If you use a service like Grok, then you use somebody elses computer / things. X is the owner from computer that produced CP. So of course X is at least also a bit liable for producing CP.
How does that mesh with all the safe harbour provisions we've depended on to make the modern internet, though?
Note that is a US law, not a French one.
Also, safe harbor doesn't apply because this is published under the @grok handle! It's being published by X under one of their brand names, it's absurd to argue that they're unaware or not consenting to its publication.
The safe harbor provisions largely protect X from the content that the users post (within reason). Suddenly Grok/X were actually producing the objectionable content. Users were making gross requests and then an LLM owned by X, using X servers and X code would generate the illegal material and then post it to the website. The entity responsible is no longer done user but instead the company itself.
Yes, and that was a very stupid product decision. They could have put the image generation into the post editor, shifting responsibility to the users.
I'd guess Elon is responsible for that product decision.
It's not like the world benefited from safe harbor laws that much. Why don't just amend them so that algorithms that run on server side and platforms that recommend things are not eligible.
If you are thinking about section 230 it only applies to user–generated content, so not server–side AI or timeline algorithms.
Before a USER did create content. So the user was / is liable. Now a LLM owned by a company does create content. So the company is liable.
I'm not trying to make excuses for Grok, but how exactly isn't the user creating the content? Grok doesn't have create images on its own volition, the user is still required to give it some input, therefore "creating" the content.
X is making it pretty clear that it is "Grok" posting those images and not the user. It is a separate posting that comes from an official account named "Grok". X has full control over what the official "Grok" account posts.
There is no functionality for the users to review and approve "Grok" responses to their tweets.
Until now, webserver had just been like a post service. Grok is more like a CNC late.
This might be an unpopular opinion but I always thought we might be better off without Web 2.0 where site owners aren’t held responsible for user content
If you’re hosting content, why shouldn’t you be responsible, because your business model is impossible if you’re held to account for what’s happening on your premises?
Without safe harbor, people might have to jump through the hoops of buying their own domain name, and hosting content themselves, would that be so bad?
Any app allowing any communication between two users would be illegal.
https://en.wikipedia.org/wiki/EncroChat
You have to understand that Europe doesn't give a shit about techbro libertarians and their desire for a new Lamborghini.
EncroChat was illegal because it was targeted at drug dealers, advertised for use in drug dealing. And they got evidence by texting "My associate got busted dealing drugs. Can you wipe his device?" and it was wiped. There's an actual knowledge component which is very important here.
What about webmail, IM, or any other sort of web-hosted communication? Do you honestly think it would be better if Google were responsible for whatever content gets sent to a gmail address?
Messages are a little different than hosting public content but sure, a service provider should know its customers and stop doing business with any child sex traffickers planning parties over email.
I would prefer 10,000 service providers to one big one that gets to read all the plaintext communication of the entire planet.
In a world where hosting services are responsible that way, their filtering would need to be even more sensitive than it is today, and plenty of places already produce unreasonable amounts of false positives.
As it stands, I have a bunch of photos on my phone that would almost certainly get flagged by over-eager/overly sensitive child porn detection — close friends and family sending me photos of their kids at the beach. I've helped bathe and dress some of those kids. There's nothing nefarious about any of it, but it's close enough that services wouldn't take the risk, and that would be a loss to us all.
They'd all have to read your emails to ensure you don't plan child sex parties. Whenever a keyword match comes up, your account will immediately be deleted.
You know this site would not be possible without those protections, right?
I don't have an answer, but the theme that's been bouncing around in my head has been about accessibility.
Grok makes it trivial to create fake CSAM or other explicit images. Before, if someone spent a week on photoshop to do the same, It won't be Adobe that gets the blame.
Same for 3D printers. Before, anyone could make a gun provided they have the right tools (which is very expensive), now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun, all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.
Where that threshold lies I don't know. But I think that's the crux if it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have also been included.
Grok is publishing the CSAM photos for everyone to see. It is used as a tool for harassment and abuse, literally.
No other "AI" companies released tools that could do the same?
In fact, Gemini could bikinify any image just like Grok. Google added guardrails after all the backlash Grok received.
Internet routers, network cards, the computers, OS and various application software have no guardrails and is used for all the nefarious things. Why those companies aren't raided?
This is like comparing the danger of a machine gun to that of a block of lead.
May be. We do have codified in law definition of machine gun which clearly separates it from a block of lead. What codified in law definitions are used here to separate photoshop from Grok in the context of those deepfakes and CSAM?
Without such clear legal definitions going after Grok while not going after photoshop is just an act of political pressure.
Why do you think France doesn’t have such laws that delineate this legal definition?
What you’re implying here is that Musk should be immune from any prosecution simply because he is right wing, which…
They don’t provide a large platform for political speech.
This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)
No because most of those things aren't illegal and most of those companies have guard rails and because a prosecution requires a much higher standard of evidence than internet shitposting, and only X was stupid enough to make their illegal activity obvious.
Don't forget polaroid in that.
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
That's why this is an investigation looking for evidence and not a conviction.
This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicious that a crime is taking place they send the so-called "judiciary police" to gather evidence. If they find none (or they're inconclusive etc...) the charges are dropped, otherwise they ask the court to go to trial.
On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.
Note that the raid itself is a punishment. It's normal for them to seize all electronic devices. How is X France supposed to do any business without any electronic devices? And even when charges are dropped, the devices are never returned.
Grok do seem to have tons of useless guardrails. Reportedly you can't prompt it directly. But also reportedly they tend to go for almost nonsensically off-guardrail interpretation of prompts.
Did you miss the numerous news reports? Example: https://www.theguardian.com/technology/2026/jan/08/ai-chatbo...
For obvious reasons, decent people are not about to go out and try to general child sexual abuse material to prove a point to you, if that’s what you’re asking for.
First of all, the Guardian is known to be heavily biased again Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrailes" is therefore clearly false. I think what instead has happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
For more evidence:
https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Also, X seem to disagree with you and admit that CSAM was being generated:
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This is because of government pressure (see Ofcom link).
I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.
> Also, X seem to disagree with you and admit that CSAM was being generated
That post doesn't contain such an admission, it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
> That post doesn't contain such an admission, it instead talks about forbidden prompting.
In response to what? If CSAM is not being generated, why aren't X just saying that? Instead they're saying "please don't do it."
> which contradicts your claim that there were no guardrails before.
From the linked post:
> However content is created or whether users are free or paid subscribers, our Safety team are working around the clock to add additional safeguards
Which was posted a full week after the initial story broke and after Ofcom started investigative action. So no, it does not contradict my point which was:
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
As you quoted.
I really can't decide if you're stupid, think I and other readers are stupid, or so dedicated to defending paedophilia that you'll just tell flat lies to everyone reading your comment.
> First of all, the Guardian is known to be heavily biased again Musk.
Says who? Musk?
That is only "known" to intellectually dishonest ideologues.
boot taste good
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
I wouldn't even consider this a reason if it wasn't for the fact that OpenAI and Google, and hell literally every image model out there all have the same "this guy edited this underage girls face into a bikini" problem (this was the most public example I've heard so I'm going with that as my example). People still jailbreak chatgpt, and they've poured how much money into that?
It is 1000% political pressure. The EU doesn't tolerate dissenting views.
"The EU doesn't tolerate dissenting views."
The dissenting views: naked little kids
France prosecutors use police raids way more than other western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished as much as in the US (i.e very little), we do at least investigate them.
They've already broken the law by creating and hosting CSAM. Now let's see what else prosecutors will find.
No, that's not at all how this works.
They have a court order obviously to collect evidence.
You have offered zero evidence to indicate there is 'political pressure' and that statement by prosecutors doesn't hint at that.
'No crime was prevented by harassing workers' is essentially non sequitor in this context.
It could be that that this is political nonsense, but there would have to be more details.
These issues are really hard but we have to confront them. X can alter electoral outcomes. That's where we are at.
Lmao they literally made a broad accessible CSAM maker.
>Car manufacturers literally made a broadly accessible baby killer
It would be an interesting idea that people would have get a "drivers license" before they are allowed to use an AI.
Cars have uses and aren't primarily used or build,to kill babies. So what's a viable use for CSAM in your opinion?
Car manufacturers are required to add features to make it less likely that cars kill babies.
What would happen if Volvo made a special baby-killing model with extra spikes?
Tesla did, the main reason, why there are no Cybertrucks in europe. They are not allowed, because they are to dangerous.
Comparing Apples and Oranges. Defending this company is becoming cringe and ridiculous. X effed up, and Musk did it on purpose. He uses CSAM to strongman the boundaries of the law. That's not worth defending unless you also say eff the rule of law.
Aren't a lot of US pickup trucks basically that? Sure, maybe there's a mechanism for preventing you from installing a baby seat in reverse to position in front of an airbag, but they're also built so that you can't see anything adult human sized 15m in front of the car, let alone anything child-sized.
Those are illegal in France so what's your point here
The US would spend 20 years arguing about which agency's jurisdiction it was, and ignore the dead babies?
No, wait, Volvo is European. They'd impose a 300% tariff and direct anyone who wanted a baby-killing model car to buy one from US manufacturers instead.
Let's raid car companies too. We were all born into this. We never had a vote. Thomas Jefferson is said to have written Constitutions ought to be re-written every so often or the dead rule by fiat decree. Let's.
The rich can join in the austerity too. No one voted for them. We been conditioned to pick acquiescence or poverty. We were abused into kowtowing to a bunch of pants shitting dementia addled olds educated in religious crack pottery. Their economic and political memes are just that, memes, not immutable physical truth.
In America, as evidenced by the public not in the streets protesting for single payer comprehensive healthcare, we clearly don't want to be on the hook for each other's lives. That's all platitudes and toxic positivity.
Hopes and prayers, bloodletting was good enough for the Founders!
So fuck the poor and the rich. Burn it all down.
People in France don’t give a stuff about the u.s constitution.
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
> what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
Email history caches. They could also have provided requirements to provide communications etc..
It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications on what they were doing, or deciding not to do, doing during that time.
Maybe emails between the French office and the head office warning they may violate laws, and the response by head office?
There was a WaPo article yesterday, that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer, and start working with harmful content, in order to train and enable those features.
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
> External analysts said Grok was generating a CSAM image every minute!!
> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.
Since the release of (some of) the Epstein files, that kind of "let's do some crimes" email seems much more plausible.
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
Wouldn't surprise me, but they would have to be very incompetent to say that outside of attorney-client privledge convo.
Otoh it is musk.
They could shut it off out of a sense of decency and respect, wtf kind of defense is this?
You appear to have lost the thread (or maybe you're replying to things directly from the newcomments feed? If so, please stop it.), we're talking about what sort of incriminating written statements the raid might hope to discover.
Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?
Do you think that data is stored at the office? Where do you think the data is stored? The janitors closet?
My computer has a copy of all the source code I work on
out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?
You're not too far off.
There was a good article in the Washington Post yesterday about many many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There was also prompts telling the AI to act angry or sexy or other things just to keep users addicted.
This vindicates the pro-AI censorship crowd I guess.
It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.
It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" instead of simply "distributed" here
Uh, let's distinguish between generated images, however revolting, and actual child sexual abuse.
The main problem with the image generators is that they are used to harass and smear people (and children...) Those were always illegal to do.
Those images are generated from a training set, and it is already well known and reported that those training sets contain _real_ CSAM, real violence, real abuse. That "generated" face of a child is based on real images of real children.
Indeed, a Stanford study from a few years back showed that the image data sets used by essentially everybody contain CSAM.
Everybody else has teams building guardrails to mitigate this fundamental existential horror of these models. Musk fired all the safety people and decided to go all in on “adult” content.
Pro-tip: if you are actively assisting someone in doing illegal things then you are an accomplice.
> let's distinguish between generated images, however revolting, and actual child sexual abuse.
Can't because even before GenAI the "oh its generated in photoshop" or "they just look young" excuse was used successfully to allow a lot of people to walk free. the law was tightend in the early 2000s for precisely this reason
> Uh, let's distinguish between generated images, however revolting, and actual child sexual abuse.
If you want. In many countries the law doesn’t. If you don’t like the law your billion dollar company still has to follow it. At least in theory.
It's not because it could generate CSAM. It's because when they found out it could generate CSAM, they didn't try to prevent that, they advertised it. Actual knowledge is a required compenent of many crimes.
Let’s take a step back and remove AI generation from the conversation for a moment.
Did X do enough to prevent its website being used to distribute illegal content - consensual sexual material of both adults and children?
Now reintroduce AI generation, where X plays a more active role in facilitating the creation of that illegal content.
"Enough" can always be pushed into the impossible. That's why laws and regulations need to be more concrete than that.
There's essentially a push to end the remnants of the free speech Internet by making the medium responsible for the speech of its participants. Let's not pretend otherwise.
The law is concrete on this.
In the UK, you must take "reasonable" steps to remove illegal content.
This normally means some basic detection (ie fingerprinting which is widely used from a collaborative database) or if a user is consistently uploading said stuff, banning them.
Allowing a service that you run to continue to generate said illegal content, even after you publicly admit that you know its wrong, is not reasonable.
that doesn't sound concrete to me, at all
No law is concrete. Murder is killing with intent to kill. What concrete test shows if someone intended to kill? They say you have intent to kill if a reasonable person would expect the actions you took would result in killing.
Nothing in common law is "concrete", thats kinda the point of it.
Judges can evolve and interpret as they see fit, and this evolution is case law.
This is why in the US the supreme court can effectively change the law by issuing a binding ruling. (see 2nd amendment meaning no gun laws, rather than as written, or the recent racial profiling issues)
It's about as concrete as one gets in the UK/US/Anglosphere law tradition.
if you can be sued for billions because some overbearing body, with a very different ideology to yours, can deem your moderation/censorship rules to be "unreasonable" then what you do is err on the side of caution and allow nearly nothing
this is not compatible with that line of business - perhaps one of the reasons nothing is done in Europe these days
They advertised you could use the tool to undress people, that's pretty clearly on the unreasonable side of the line
sigh
The vast majority of the EU is not common law, so "reasonable" in this instance is different.
What you describe already happens in the USA, that why MLB has that weird local TV blackout, why bad actors use copyright to take down content they don't like.
The reason why its so easy to do that is because companies must reasonably comply with copyright holder's requests.
Its the same with CSAM, distributing it doesn't have first amendment protection, knowingly distributing it is illegal. All reasonable steps should be taken to detect and remove CSAM from your systems to qualify for safe harbour.
sigh indeed
> Its the same with CSAM, distributing it doesn't have first amendment protection, knowingly distributing it is illegal. All reasonable steps should be taken to detect and remove CSAM from your systems to qualify for safe harbour.
nice try, but nobody is distributing or hosting CSAM in the current conversation
people trying to trick a bot to post bikini pictures of preteens and blaming the platform for it is a ridiculous stretch to the concept of hosting CSAM, which really is a transparent attack to a perceived political opponent to push for a completely different model of the internet to the pre-existing one, a transition that is as obvious as is already advanced in Europe and most of the so-called Anglosphere
> The vast majority of the EU is not common law, so "reasonable" in this instance is different.
the vast majority of the EU is perhaps incompatible with any workable notion of free speech, so perhaps America will have to choose whether it's worth it to sanction them into submission, or cut them off at considerable economic loss
it's not a coincidence that next to nothing is built in Europe these days, the environment is one of fear and stifling regulation and if I were to actually release anything in either AI or social networks I'd do what most of my fellow Brits/Europoors do already, which is to either sell to America or flee this place before I get big enough to show up in the euro-borg's radar
> nice try, but nobody is distributing or hosting CSAM in the current conversation
multiple agencies (Ofcom, irish police IWF, and what ever the french regulator is) have detected CSAM.
You may disagree with that statement, but bear in mind the definition of CSAM in the UK is "depiction of a child" which means that if its of a child or entirely generated is not relevant. This was to stop people claiming that massive cache of child porn they had was photoshoped.
in the USA CSAM is equally vaguely defined, but the case law is different.
> EU is perhaps incompatible with any workable notion of free speech
I mean the ECHR definition is fairly robust. But given that first amendment protection has effectively ended in the USA (the president is currently threatening to take a comedian to court for making jokes, you know, like the twitter bomb threat person in the UK) its a bit rich really. The USA is not the bastion of free speech it once was.
> either sell to America or flee this place before I get big enough to show up in the euro-borg's radar
Mate, as someone whos sold a startup to the USA, its not about regulations its about cold hard fucking cash. All major companies comply with EU regs, and its not hard. they just bitch about them so that the USA doesn't put in basic data protection laws, so they can continue to be monopolies.
This is the wrong take.
Yes they could have an uncensored model, but then they would need proper moderation and delete this kind of content instantly or ban users that produce it. Or don’t allow it in the first place.
It doesn’t matter how CSAM is produced, the only thing that matters is that is on the platform.
I am flabbergasted people even defend this
It matters whether they attempt to block it or encourage it. Musk encouraged it, until legal pressure hit, then moved it behind a paywall so it's harder to see evidence.
Exactly!
This is not about AI but about censorship of a nonaligned social network. It's been a developing current in EU. They have basically been foaming at the mouth at the platform since it got bought.
It's about a guy who thinks posting child porn on twitter is hilarious and that guy happens to own twitter.
If it was about blocking the social media they'd just block it, like they did with Russia Today, CUII-Liste Lina, or Pavel Durov.
He said that child pornography is funny? Do you have a link by any chance?
I could maybe see this argument if we were talking about raiding Stable Diffusion or Facebook or some other provider of local models. But the content at issue was generated not just by Twitter's AI model, but on their servers, integrated directly into their UI and hosted publicly on their platform. That makes them much more clearly culpable -- they're not just enabling this shit, they're creating it themselves on demand (and posting it directly to victims' public profiles).
And importantly, this is clearly published by Grok, rather than the user, so in this case (obviously this isn't the US) but if it was I'm not sure Section 230 would apply.
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
Again its all about reasonable.
Firstly does the open model explicitly/tacitly allow CSAM generation?
Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put in place protections?
Thirdly, do they pull in data that is likely to allow that kind of content to be generated?
Fourthly, when they are told that this is happening, do they pull the model?
Fithly, do they charge for access/host the service and allow users to generate said content on their own servers?
So far, yes, but as far as I can tell their case against the AI giants aren't based on it being for-profit services in any way.
It's 2026. No common individual can be accountable for anything wrong they do. We must always find some way to blame some "corporation" or some "billionaire" or some ethnic group of people.
I wonder where all these people - and the French government - has been in the past 3 decades where kids did the same thing with Photoshop.
You don't see any difference between Google Search and ChatGPT?
Holding corporations accountable for their profit streams is "censorship?" I wish they'd stop passing models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chat bots at this particular point in history.
If it had been over profit, I would be all for it. I think that there are a ton of things which should be legal but not legal to make profit on.
But this is about hosting a model with allegedly insufficient safeguards against harassing and child-sexualizing images, isn't it?
It's not about 'hosting a model'
What do you mean? What is it about then?
Publishing child porn
Imagine the liabilities of camera producers. Better let those ballpoint manufacturers know they need to significantly up their legal insurance!
That is not the same.
Correct comparison would be:
You provide a photo studio with an adjacent art gallery and allow people to shoot CSAM content there and then exhibit their work.
And the sign out front says "X-Ray camera photographs anyone naked — no age limits!"
And the camera is pointing out the window so you can use it on strangers walking by.
There is a point in law where you make something so easy to misuse that you become liable for the misuse.
In the USA they have "attractive nuisance", like building a kid's playground on top of a pit of snakes. That's so obviously a dumb idea that you become liable for the snake–bitten kids — you can't save yourself by arguing that you didn't give the kids permission to use the playground, that it's on private property, that the kids should have seen the snakes, or that it's legal to own snakes. No, you set up a situation where people were obviously going to get hurt and you become liable for the hurt.
Not knowing any better, and not having seen any of the alleged images, my default guess would be they used the exact same CSAM filtering pipeline already in place on X regardless of the origin of the submitted images.
They obviously didn’t really implement anything as you can find that content or involuntary nudes of other people, which is also an invasion of privacy, super easily
If the camera reliably inserts racist filters and the ballpen would add hurtful words to whatever you write, indeed, let them up their legal insurance.
This sort of thing will be great for the SpaceX IPO :/
Especially if contracts with SpaceX start being torn up because the various ongoing investigations and prosecutions of xAI are now ongoing investigations and prosecutions of SpaceX. And next new lawsuits for creating this conflict of interest by merger.
Good old HN. Users losing their collective minds over rule of law and CSAM being bad.
Found the guy who supports digital ID and involuntary phone database scanning
Honest question: What does it mean to "raid" the offices of a tech company? It's not like they have file cabinets with paper records. Are they just seizing employee workstations?
Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.
Sadly the media calls the lawful use of a warrant a 'raid' but that's another issue.
The warrant will have detailed what it is they are looking for, French warrants (and legal system!) are quite a bit different than the US but in broad terms operate similarly. It suggests that an enforcement agency believes that there is evidence of a crime at the offices.
As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.
It is a raid in that it's not expected, it relies on not being expected, and they come and take away your stuff by force. Maybe it's a legal raid, but let's not sugar coat it, it's still a raid and whether you're guilty or not it will cause you a lot of problems.
> I'd guess they want on-prem servers with things like email and shared storage
For a net company in 2026? Fat chance.
Agreed its a stretch, my experience comes from Google when I worked there and they set up a Chinese office and they were very carefully trying to avoid anything on premises that could searched/exploited. It was a huge effort, one that wasn't done for the European and UK offices where the government was not an APT. So did X have the level of hygiene in France? Were there IT guys in the same vein as the folks that Elon recruited into DOGE? Was everyone in the office "loyal"?[1] I doubt X was paranoid "enough" in France not to have some leakage.
[1] This was also something Google did which was change access rights for people in the China office that were not 'vetted' (for some definition of vetted) feeling like they could be an exfiltration risk. Imagine a DGSE agent under cover as an X employee who carefully puts a bunch of stuff on a server in the office (doesn't trigger IT controls) and then lets the prosecutors know its ready and they serve the warrant.
Part of the prosecution will be to determine who put the content on the server.
Under GDPR if a company processes European user data they're obligated to make a "Record of Processing Activities" available on demand (umbrella term for a whole bunch of user-data / identity related stuff). They don't necessarily need to store them onsite but they need to be able to produce them. Saying you're an internet company doesn't mean you can just put the stuff on a server in the Caribbean and shrug when the regulators come knocking on your door
That's aside from the fact that they're a publicly traded company under obligation to keep a gazillion records anyway like in any other jurisdiction.
> publicly traded company
Which company is publicly traded?
> They don't necessarily need to store them onsite but they need to be able to produce them.
... within 30 days, right? The longest "raid" in history.
Who has on prem servers at an office location?
I'm guessing you're asking this because you have a picture of a 'server' as a thing in a large rack? Nearly every tech business has a bunch of machines, everything from an old desk top to last year's laptop, which have been reinstalled with Linux or *BSD and are sitting on the network behaving, for all intents and purposes, as a 'server.' (they aren't moving or rebooting or having local sessions running on them, Etc).
Gather evidence against employees, use that evidence to put them under pressure to testify against their employer or grant access to evidence.
Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
That was legal. Guess what, similar things would be legal in France.
We all forget that money is nice, but nation states have real power. Western liberal democracies just rarely use it.
The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
> We all forget that money is nice, but nation states have real power.
Interesting point. There's a top gangster who can buy anything in the prison commissary; and then there's the warden.
No, state decides on the rules of the game any business is playing by.
I think both you and the comment you're replying to agree with the gp.
> We all forget that money is nice, but nation states have real power.
I remember something (probably linked from here), where the essayist was comparing Jack Ma, one of the richest men on earth, and Xi Jinping, a much lower-paid individual.
They indicated that Xi got Ma into a chokehold. I think he "disappeared" Ma for some time. Don't remember exactly how long, but it may have been over a year.
From what I hear, Ma made 1 speech critical of the government and Xi showed him his place. It was a few years, a year of total disappearance followed by slow rehab.
But China is different. Not sure most of western europe will go that far in most cases.
Trump kidnapped Maduro to show the latter his place, but then the US is neither China nor Western Europe so that does not count.
Arrested and the vast majority of Venezuela love that it happened.
https://www.cbsnews.com/miami/news/venezuela-survey-trump-ma...
Ah, so the daily LARGE protests, in Venezuela, against his kidnapping are not indicative of "the vast majority of Venezuela".
But the celebratory pics, which were claimed to be from Venezuela, but were actually from Miami and elsewhere (including, I kid you not, an attempt to pass off Argentine's celebrating a Copa America win) ... that is indicative of "the vast majority of Venezuela"?
If I were smarter, I might start to wonder why, if President Maduro was so unpopular, why would his abductors have to resort to fake footage - which was systematically outed & destroyed by independent journalists within 24 hours? I mean, surely, enough real footage should exist.
Probably better not to have inconvenient non-US-approved independent thoughts like that.
According to USA sources, USA actions are universally approved.
Color me surprised.
Rand Paul asked Rubio what would happen if the shoe was on the other foot. Every US President from Truman onwards is a war criminal.
https://www.tampafp.com/rand-paul-and-marco-rubio-clash-over...
I never liked the Paul's and their opinions, but I must say that they usually speak according to their principles, rather than make up principles to fit what they want to happen.
To me, that's the distinction between political opponents I can respect, and, well, whatever we're seeing now.
The people of the US mostly wouldn’t like it the people of VZ mostly did and consider Maduro a thug who lost and stayed in power not their president. Ideologies like Paul have trouble with exceptions to their world view.
> the people of VZ mostly did and consider Maduro a thug who lost and stayed in power not their president.
You got this information from American media (or their allies')
In reality, Venezuelans flooded the streets in marches demanding the return of their president.
How many of them?
Ah, the "rules based disorder" on display: we do dis, you no do dis.
Hypocrisy at its finest.
I mean, come on, we kidnapped him. Yes, he was arrested, but we went into another sovereign nation with special forces and yoinked their head of state back to Brooklyn.
And also killed over hundred people don't forget that.
To be fair he isn't legitimate head of state- he lost an election and is officially recognized as a usurper and the US had support of those who actually won.
Soke people argue Trump isn't a legitimate head of state. (One of those people is Trump, since he says he was already the president twice.) Should Xi kidnap him?
Even if that were true - I don't know nor care whether it is - what business is it of the US and Trump to mess around in other countries?
Large amounts of people call Joe Biden's election illegitimate. You could even say thats the official position of the current government. Would his kidnapping by a foreign nation be okay with you too?
He is not a legitimate head of state. He lost the election.
It's legal to just put kids in foster care for no reason but to ruin someone's life?
In France it's possible without legal consequences (though immoral), if you call 119, you can push to have a baby taken from a family for no reason except that you do not like someone.
Claim that you suspect there may be abuse, it will trigger a case for a "worrying situation".
Then it's a procedural lottery:
-> If you get lucky, they will investigate, meet the people, and dismiss the case.
-> If you get unlucky, they will take the baby, and it's only then after a long investigation and a "family assistant" (that will check you every day), that you can recover your baby.
Typically, ex-wife who doesn't like the ex-husband, but it can be a neighbor etc.
One worker explains that they don't really have time to investigate when processing reports: https://www.youtube.com/watch?v=VG9y_-4kGQA and they have to act very fast, and by default, it is safer to remove from family.
The boss of such agency doesn't even take the time to answer to the journalists there...
-> Example of such case (this man is innocent): https://www.lefigaro.fr/faits-divers/var-un-homme-se-mobilis...
but I can't blame them either, it's not easy to make the right calls.
I can't believe theres a country out there that has recreated the DMCA but for child welfare
This is very common, all "think of the children" laws are ripe for abuse. I'm convinced the secrecy around child abuse/child protective services is regularly abused both by abusive parents and abusive officials.
Canada and Germany are no different.
[0] https://www.cbc.ca/news/canada/manitoba/winnipeg-mom-cfs-bac...
[1] https://indianexpress.com/article/india/ariha-family-visit-t...
I mean, that's surely not as simple as you make it out to be.
Its not.
If you call 119 it gets assessed and potentially forwarded to the right department, which then assesses it again and might (quite likely will) trigger an inspection. The people who turn up have broad powers to seize children from the home in order to protect them from abuse.
In general this works fine. Unfortunately in some circumstances this does give a very low skilled/paid person (the inspector) a lot of power, and a lot of sway with judges. If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.
afaik similar systems are present in most western countries, and many of them - like France - are suffering with funding and are likely cutting in the wrong place (audit/rigour) to meet external KPIs. One of the worst ways this manifests is creating 'quick scoring' methods which can end up with misunderstandings (e.g. said a thing they didn't mean) ranking very highly, but subtle evidence of abuse moderate to low.
So while this is a concern, this is not unique to France, this is relatively normal, and the poster is massively exaggerating the simplicity.
In Sweden there is a additional review board that go through the decision made by the inspector. The idea is to limit the power that a single inspector has. In practice however the review board tend to rubber stamp decisions, so incompetence/malice still happens.
There was a huge mess right after metoo when a inspector went against the courts rulings. The court had given the father sole custody in a extremely messy divorce, and the inspector did not agree with the decision. As a result they remove the child from his father, in direct contrast to the courts decision, and put the child through 6 years of isolation and abuse with no access to school. It took investigative journalists a while, but the result of the case getting highlighted in media was that the inspector and supervisor is now fired, with two additoal workers being under investigation for severe misconduct. Four more workers would be under investigation but too long time has passed. The review board should have prevented this, as should the supervisor for the inspector, but those safety net failed in this case in part because of the cultural environment at the time.
“ If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.”
This seems guaranteed to occur every year then… since incompetence/malice will happen eventually with thousands upon thousands of cases?
> This seems guaranteed to occur every year then…
Not at all. This job will go to an "AI" any moment now.
/i
I've seen that during harassment; in one YouTube live the woman claimed:
but she was saying it normally, like any normal person does when they are angry.-> Whoops, someone talked with 119 to refer a "worrying" situation, baby removed. It's already two years.
There are some non-profit fighting against such: https://lenfanceaucoeur.org/quest-ce-que-le-placement-abusif...
That being said, it's a very small % obviously not let's not exaggerate but it's quite sneaky.
I heard there's a country where they can even SWAT you out of existence with a simple phone call, but it sounds so outrageous this must be some evil communist dictatorship third-world place. I really don't remember.
> Gather evidence against employees
I'm sure they have much better and quieter ways to do that.
Whereas a raid is #1 choice for max volume...
Wait, Sabu's kids were foster kids. He was fostering them. Certainly if he went to jail, they'd go back to the system.
I mean, if you're a sole caretaker and you've been arrested for a crime, and the evidence looks like you'll go to prison, you're going to have to decide what to do with the care of your kids on your mind. I suppose that would pressure you to become an informant instead of taking a longer prison sentence, but there's pressure to do that anyway, like not wanting to be in prison for a long time.
> We all forget that money is nice, but nation states have real power.
Elon has ICBMs, but France has warheads.
France has Ariane, which was good enough to send Jame Web Telescope to some Lagrange point with extra precision. It's all fun and and games until the French finish their cigarette, arms French Guyana and fire ze missiles.
Yes but using such power unscrupulously is a great way to lose it.
>Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
>That was legal. Guess what, similar things would be legal in France.
lawfare is... good now? Between Trump being hit with felony charges for falsifying business records (lawfare is good?) and Lisa Cook getting prosecuted for mortgage fraud (lawfare is bad?), I honestly lost track at this point.
>The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
What's even the implication here? That they're going to shoot his plane down? If there's no threat of violence, what does the French government even hope to achieve with this?
fighter jets ARE a threat of violence, and it is widely understood and acknowledged.
Again: the threat is so clear that you rarely have to execute on it.
>fighter jets ARE a threat of violence, and it is widely understood and acknowledged.
That's not a credible threat because there's approximately 0% chance France would actually follow through with it. Not even Trump would resort to murder to get rid of his domestic adversaries. As we seen the fed, the best he could muster are some spurious prosecutions. France murdering someone would put them on par with Russia or India.
In the USA they would be allowed to down any aircraft not complying with national air interception rules, that would not be murder. It would be equivalent to not dropping a gun once prompted by an officer and being shot as a result.
https://www.faa.gov/air_traffic/publications/atpubs/aim_html...
I think the implication of the fighter jets is that they force the plane to land within a particular jurisdiction (where he is then arrested) rather than allowing it to just fly off to somewhere else. Similar to the way that a mall security guard might arrest a shoplifter; the existence of security guards doesn't mean the mall operators are planning to murder you.
Guards can plausibly arrest you without seriously injuring you. But according to https://aviation.stackexchange.com/a/68361 there are no safe options if the pilot really doesn’t want to comply, so there is no “forcing” a plane to land somewhere, just making it very clear that powerful people really want you to stop and might be able to give more consequences on the ground if you don’t.
Planes are required to comply with instructions; if they don't they're committing a serious crime and the fighters are well within their international legal framework to shoot the plane down. They would likely escalate to a warning shot with the gun past the cockpit, and if the aircraft is large enough they might try to shoot out one engine instead of the wing or fuselage.
I suspect fighter pilots are better than commercial pilots at putting their much-higher-spec aircraft so uncomfortably close that your choices narrow down to complying with their landing instructions or suicidally colliding with one - in which case the fighter has an ejector seat and you don't.
I felt like you ruled out collision when you said they're not going to murder, though, granted, an accidental but predictable collision after repeatedly refusing orders is not exactly murder. I think the point stands, they have to be willing to kill or to back down, and as others said I'm skeptical France or similar countries would give the order for anything short of an imminent threat regarding the plane's target. If Musk doesn't want to land where they want him to, he's going to pay the pilot whatever it takes, and the fighter jets are going to back off because whatever they want to arrest him for isn't worth an international incident.
Don’t forget that captain of the plane makes decisions not Elon.
If captain of the plane disobeyed direct threat like that from a nation, his career is going to be limited. Yeah Elon might throw money at him but that guy is most likely never allowed again to fly near any French territory. I guess whole cabin crew as well .
Being clear for flying anywhere in the world is their job.
Would be quite stupid to loose it like truck driver DUI getting his license revoked.
>Don’t forget that captain of the plane makes decisions not Elon.
>If captain of the plane disobeyed direct threat like that from a nation, his career is going to be limited. Yeah Elon might throw money at him but that guy is most likely never allowed again to fly near any French territory. I guess whole cabin crew as well .
Again, what's France trying to do? Refuse entry to France? Why do they need to threaten shooting down his jet for that? Just harassing/pranking him (eg. "haha got you good with that jet lmao")?
I think in this hypothetical, France would want to force Musk's plane to land in French jurisdiction so they could arrest him.
> Not even Trump would resort to murder to get rid of his domestic adversaries
Don't give them ideas
> lawfare is... good now?
Well, when everything is lawfare it logically follows that it won't always be good or always be bad. It seems Al Capone being taken down for tax fraud would similarly be lawfare by these standards, or am I missing something? Perhaps lawfare (sometimes referred to as "prosecuting criminal charges", as far as I can tell, given this context) is just in some cases and unjust in others.
> Western liberal democracies just rarely use it.
Also, they are restricted in how they use it, and defendents have rights and due process.
> Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
Though things like that can happen, which are very serious.
> defendents have rights and due process.
As they say: you can beat the rap but not the ride. If a state wants to make your life incredibly difficult for months or even years they can, the competent ones can even do it while staying (mostly) on the right side of the law.
We are not entirely sure the rule of law in America isn't already over.
People are putting a lot of weight on the midterm elections which are more or less the last line of defense besides a so far tepid response by the courts and even then consequence free defiance of court orders is now rampant.
We're really near the point of no return and a lot of people don't seem to notice.
> We're really near the point of no return and a lot of people don't seem to notice.
A lot of people are cheering it (some on this very site).
> Also, they are restricted in how they use it, and defendents have rights and due process.
It's a nice sentiment, if true. ICE is out there, right now today, ignoring both individual rights as well as due process.
They were talking about western liberal democracies, though.
/s
> Also, they are restricted in how they use it, and defendents have rights and due process.
As we're seeing with the current US President... the government doesn't (have to) care.
In any case, CSAM is the one thing other than Islamist terrorism that will bypass a lot of restrictions on how police are supposed to operate (see e.g. Encrochat, An0m) across virtually all civilized nations. Western nations also will take anything that remotely smells like Russia as a justification.
> As we're seeing with the current US President
Well, that's particular to the US. It just shows that checks and balances are not properly implemented there, just previous presidents weren't exploiting it maliciously for their own gains.
The USA voted to destroy it's checks and balances consistently for several decades, that is why they don't work now.
>> they are restricted in how they use it, and defendents have rights and due process.
That due process only exists to the extent the branches of govt are independent, have co-equal power, and can hold and act upon different views of the situation.
When all branches of govt are corrupted or corrupted to serve the executive, as in autocracies, that due process exists only if the executive likes you, or accepts your bribes. That is why there is such a huge push by right-wing parties to take over the levers of power, so they can keep their power even after they would lose at the ballot box.
> Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
This is pretty messed up btw.
Social work for children systems in the USA are very messed up. It is not uncommon for minority families to lose rights to parent their children for very innocuous things that would not happen to a non-oppressed class.
It is just another way for the justice/legal system to pressure families that have not been convicted / penalized under the supervision of a court.
And this isn't the only lever they use.
Every time I read crap like this I just think of Aaron Swartz.
One can also say we do too little for children who get mistreated. Taking care of other peoples children is never easy the decision needs to be fast and effective and no one wants to take the decision to end it. Because there are those rare cases were children dies because of a reunion with their parents.
Offline syncing of outlook could reveal a lot of emails that would otherwise be on a foreign server. A lot of people save copies of documents locally as well.
Most enterprises have fully encrypted workstations, when they don't use VM where the desktop is just a thin client that doesn't store any data. So there should be really nothing of interest in the office itself.
French has a password disclosure law.
Whether you are a tech company or not, there's a lot of data on computers that are physically in the office.
Except when they have encryption, which should be the standard? I mean how much data would authorities actually retrieve when most stuff is located on X servers anyways? I have my doubts.
The authorities will request the keys for local servers and will get them. As for remote ones (outside of France jurisdiction) it depends where they are and how much X wants to make their life difficult.
Musk and X don't seem to be the type to care about any laws or any compelling legal requests, especially from a foreign government. I doubt the French will get anything other than this headline.
Getting kicked out of the EU is extremely unattractive for Twitter. But the US also has extradition treaties so that’s hardly the end of how far they can escalate.
I don't think US will extradite anybody to EU. Especially not white people with strong support of the current government.
White people already extradited to the EU during the current administration would disagree. But this administration has a limited shelf life, even hypothetically just under 3 years of immunity isn’t enough for comfort.
> But this administration has a limited shelf life, even hypothetically just under 3 years of immunity isn’t enough for comfort.
Depends on how much faith you have in the current administration. Russia limits presidents to two 6-year terms, yet Putin is in power since 2000.
Believe it or not, he's "just" off by two years.
Yes, he is in power since 2000 (1999, actually) but 1999-2012 he was Prime Minister. Only then he became President, which would make the end of his second term 2024. So the current one would be his third term (by the magic of changing the constitution and legal quibbles which effectively allow a president to stay in charge for four almost whole terms, AFAIU).
> don't think US will extradite anybody to EU
EU, maybe not. France? A nuclear state? Paris is properly sovereign.
> people with strong support of the current government
Also known as leverage.
Let Musk off the hook for a sweetheart trade deal. Trump has a track record of chickening out when others show strength.
> France? A nuclear state? Paris is properly sovereign.
That is true. But nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris. Because i’m just not seeing it.
> nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris
Paris doesn’t need to back down. And it can independently exert effort in a way other European countries can’t. Musk losing Paris means swearing off a meaningful economic and political bloc.
No need for nukes. France can issue an Interpol Red Notice for the arrest of Elon Musk, for whatever excuse is found.
France doesn't extradite its citizens, even absolute scumbags like Roman Polanski. Someone like Musk has lots of lawyers to gum up extradition proceedings, even if the US were inclined to go along. I doubt the US extradition treaty would cover this unless the French could prove deliberate sharing of CSAM by Musk personally, beyond reckless negligence. Then again, after the Epstein revelations, this is no longer so far-fetched.
If I'm an employee working in the X office in France, and the police come in and show me they have a warrant for all the computers in the building and tell me to unlock the laptop, I'm probably going to do that, no matter what musk thinks
Witnesses can generally not refuse in these situations, that's plain contempt and/or obstruction. Additionally, in France a suspect not revealing their keys is also contempt (UK as well).
100%. Only additional troubles for yourself personally, for practically no benefit (nobody in the company is going to celebrate you).
The game changed when Trump threatened the use of military force to seize Greenland.
At this point a nuclear power like France has no issue with using covert violence to produce compliance from Musk and he must know it.
These people have proven themselves to be existential threats to French security and France will do whatever they feel is necessary to neutralize that threat.
Musk is free to ignore French rule of law if he wants to risk being involved in an airplane accident that will have rumours and conspiracies swirling around it long after he’s dead and his body is strewn all over the ocean somewhere.
You're implying that France is going to become a terrorist state? Because suspicious accidents do not sound like rule of law.
Counter-point. France has already kidnapped another social media CEO and forced him to give up the encryption keys. The moral difference between France (historically or currently) and a 3rd wold warlord is very thin. Also, look at the accusations. CP and political extremism are the classic go-tos when a government doesn't really have a reason to put pressure on someone but they really want to anyway. France has a very questionable history of honoring rule of law in politics. Putting political enemies in prison on questionable charges has a long history there.
"I can't see any difference between a country that has busted two companies that were known for hosting child porn, and a random cartel kingpin" isn't the flex you think it is
We are also talking about a country who wants to ban anonymous VPNs in the name of protecting the children and ask everyone to give their ID card to register account on Instagram, TikTok, etc.
OpenDNS is censored in France... so imagine
People were surprised when the US started just droning boats in the Caribbean and wiping out survivors, but then the government explained that it was law enforcement and not terrorism or piracy, so everyone stopped worrying about it.
Seriously, every powerful state engages in state terrorism from time to time because they can, and the embarrassment of discovery is weighed against the benefit of eliminating a problem. France is no exception : https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior
Killing foreigners outside of the own country has always been deemed acceptable by governments that are (or were until recently) considered to generally follow rule of law as well as the majority of their citizen. It also doesn't necessarily contradicts rule of law.
It's just that the West has avoided to do that to each other because they were all essentially allied until recently and because the political implications were deemed too severe.
I don't think however France has anything to win by doing it or has any interest whatsoever and I doubt there's a legal framework the French government can or want to exploit to conduct something like that legally (like calling something an emergency situation or a terrorist group, for example).
No difference in a strike like that and the strikes against fishing boats near Venezuela trump has ordered
> You're implying that France is going to become a terrorist state? Because suspicious accidents do not sound like rule of law.
Why not? After all, that's in vogue today. Trump is ignoring all the international agreements and rules, so why should others follow them?
Become? https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior
The second Donald Trump threatened to invade a nation allied with France is the second anyone who works with Trump became a legitimate military target.
Like a cruel child dismembering a spider one limb at a time France and other nations around the world will meticulously destroy whatever resources people like Musk have and the influence it gives him over their countries.
If Musk displays a sufficient level of resistance to these actions the French will simply assassinate him.
You got that backwards. Greenpeace for all its faults is still viewed as a group against which military force is a no-no. Sinking that ship cost France far more than anything they inflicted on Greenpeace. If anything, that event is evidence that going after Musk is a terrible idea.
PS Yes, Greenpeace is a bunch of scientifically-illiterate fools who have caused far more damage than they prevented. Doesn't matter because what France did was still clearly against the law.
I knew someone who was involved in an investigation (the company and person was the victim not the target of the investigation), their work laptop got placed into a legal hold, the investigators had access to all of their files and they weren't allowed delete to anything (even junk emails) for several years.
You don't get to say no to these things.
If you're a database administrator or similar working at X in France, are you going to going to go to jail to protect Musk from police with an appropriate warrant for access to company data? I doubt it.
I had the same thought - not just about raids, but about raiding a satellite office. This sounds like theater begging for headlines like this one.
They do what they can. They obviously can't raid the American office.
It sounds better in the news when you do a raid. These things are generally not done for any purpose other than to communicate a message and score political points.
Usually they steal all electronic devices.
These days many tech company offices have a "panic button" for raids that will erase data. Uber is perhaps the most notorious example.
>notorious
What happened to due process? Every major firm should have a "dawn raid" policy to comply while preserving rights.
Specific to the Uber case(s), if it were illegal, then why didn't Uber get criminal charges or fines?
At best there's an argument that it was "obstructing justice," but logging people off, encrypting, and deleting local copies isn't necessarily illegal.
> if it were illegal, then why didn't Uber get criminal charges or fines?
They had a sweet deal with Macron. Prosecution became hard to continue once he got involved.
Maybe.
Or they had a weak case. Prosecutors even drop winnable cases because they don't want to lose.
Macron's involvement with Uber is public information at this point.
[1]: https://www.lemonde.fr/pixels/article/2022/07/10/uber-files-...
[2]: https://www.radiofrance.fr/franceinter/le-rapport-d-enquete-...
It is aggressive compliance. The legality would be determined by the courts as usual.
> aggressive compliance
Put this up there with nonsensical phrases like "violent agreement."
;-)
violent agreement is when you're debating something with someone, and you end up yelling at each other because you think you disagree on something, but then you realize that you (violently, as in "are yelling at each other") agree on whatever it is. Agressive compliance is when the corporate drone over-zealously follows stupid/pointless rules when they could just look the other way, to the point of it being aggressively compliant (with stupid corporate mumbo jumbo).
Who knows.
I don't see aggressive compliance defined anywhere. Violent agreement has definitions, but it feels like it's best defined as a consulting buzzword.
This is a perfect way for the legal head of the company in-country to visit some jails.
They will explain that it was done remotely and whatnot but then the company will be closed in the country. Whether this matters for the mothership is another story.
It's not illegal to head a subsidiary of a company that did bad things, but I'm sure he will be intensely questioned. If he did something illegal, he may be punished.
> but then the company will be closed in the country. Whether this matters for the mothership is another story.
Elon would love it. So it won't happen.
Of course they will not lock the data but hide it, and put some redacted or otherwise innocent files in their place.
That sounds awfully difficult to do perfectly without personally signing up for extra jail time for premeditated violation of local laws. Like in that scenario, any reference to the unsanitized file or a single employee breaking omertà is proof that your executives and IT staff conspired to violate the law in a way which is likely to ensure they want to prosecute as maximally as possible. Law enforcement around the world hates the idea that you don’t respect their authority, and when it slots into existing geopolitics you’d be a very tempting scapegoat.
Elon probably isn’t paying them enough to be the lightning rod for the current cross-Atlantic tension.
These days you can probably ask an LLM to redact the files for you, so expect more of it.
True, but that’s going to be a noisy process until there are a few theoretical breakthroughs. I personally would not leave myself legally on the hook hoping that Grok faked something hermetically.
Nobody does that. It is either cooperation with law enforcement or remote lock (and then there are consequences for the in-country legal entity, probably not personally for the head but certainly for its existence).
This was a common action during the Russian invasion of Ukraine for companies that supported Ukraine and closed their operations in Russia.
It wasn't erasing as far I know, but locking all computers.
Covered here: https://www.theguardian.com/news/2022/jul/10/uber-bosses-tol...
It's sad to see this degree of incentives perverted, over adhering to local laws.
Or they just connect to a mothership with keys on the machine. The authorities can have the keys, but alas, they're useless now, because there is some employee watching the surveillance cameras in the US, and he pressed a red button revoking all of them. What part of this is illegal?
Obviously, the government can just threaten to fine you any amount, close operations or whatever, but your company can just decide to stop operating there, like Google after Russia imposed an absurd fine.
You know police are not all technically clueless, I hope. The French have plenty of experience dealing with terrorism, cybercrime, and other modern problems as well as the more historical experience of being conquered and occupied, I don't think it's beyond them to game out scenarios like this and preempt such measures.
As France discovered the hard way in WW2, you can put all sorts of rock-solid security around the front door only to be surprised when your opponent comes in by window.
How do you know this?
From HN, of course! https://news.ycombinator.com/item?id=32057651
They do have some physical records, but it would be mostly investigators producing a warrant and forcing staff to hand over administrative credentials to allow forensic data collection.
> forcing staff to hand over administrative credentials to allow forensic data collection.
What, thinking HQ wouldn't cancel them?
I'm sure an intelligent person such as yourself can think of ways around that possibility.
Nope. But I'm sure a more intelligent person such as yourself can tell me! :)
I read somewhere that Musk (or maybe Theil) companies have processes in place to quickly offload data from a location to other jurisdictions (and destroy the local data) when they detect a raid happening. Don't know how true it is though. The only insight I have into their operations was the amazing speed by which people are badged in and out of his various gigafactories. It "appears" that they developed custom badging systems when people drive into gigafactories to cut the time needed to begin work. If they are doing that kind of stuff then there has got to be something in place for a raid. (This is second hand so take with a grain of salt)
EDIT: It seems from other comments that it may have been Uber I was reading about. The badging system I have personally observed outside the Gigafactories. Apologies for the mixup.
That is very much illegal in the US
It wouldn't be the first time a Musk company knowingly does something illegal.
I think as far as Musk is concerned, laws only apply in the "don't get caught" sense.
Everyone defines their own moral code and trusts that more than the laws of the land. Don't tell me you've never gone over the speed limit, or broken one of the hundreds of crazy laws people break in everyday life out of ignorance.
The speed limit is not a law the same way "don't murder" is a law. And "don't destroy evidence of a crime" is a lot closer to "don't murder", legally speaking.
give any country a gift / investment of 100B USD
-> crimes ? what crimes ?
Gather evidence.
I assume that they have opened a formal investigation and are now going to the office to collect/perloin evidence before it's destroyed.
Most FAANG companies have training specifically for this. I assume X doesn't anymore, because they are cool and edgy, and staff training is for the woke.
If that training involves destroying evidence or withholding evidence from the prosecution, you are going to jail if you follow it.
What a strange assumption. The training is "summon the lawyers immediately", "ensure they're accompanied at all times while on company premises", etc.
That can start with self deleting messages if you are under court order, and has happens before:
“Google intended to subvert the discovery process, and that Chat evidence was ‘lost with the intent to prevent its use in litigation’ and ‘with the intent to deprive another party of the information’s use in the litigation.’”
https://storage.courtlistener.com/recap/gov.uscourts.cand.37...
VW is another case where similar things happens:
https://www.bloomberg.com/news/articles/2017-01-12/vw-offici...
The thing is: Companies don’t got to jail, employees do.
Right, but you are confusing a _conspiracy_ with staff training.
I didn't work anywhere near the level, or anything thats dicey where I needed to have a "oh shit delete everything the Feds are here" plan. Which is a conspiracy to pervert the course of justice (I'm not sure what the common law/legal code name for that is)
The stuff I worked on was legal and in the spirit of the law, along with a paper trail (that I also still have) proving that.
A raid with a warrant skips all that.
>withholding evidence from the prosecution, you are going to jail if you follow.
Prosecution must present a valid search warrant for *specific* information. They don't get a carte blanche, so uber way is correct. lock computers and lets the courts to decide.
In common law/4th amendment, kinda. Once you have a warrant, then the word reasonable comes into play. Its reasonable to assume that the data you want is on the devices of certain people. if incidental data/evidence is also procured that was reasonably likely to contain said data, then its fair game
In the civil code, its quite possibly different. The french have had ~ 3 constitutions in the last 80 years. The also dont have the concept of case history. who knows what the law actually is.
The training is very much the opposite.
mine had a scene where some bro tried to organise the resistance. A voice over told us that he was arrested for blocking a legal investigation and was liable for being fired due to reputational damage.
X's training might be like you described, but everywhere else that is vaguely beholden to law and order would be opposite.
> Seems like you'd want to subpoena source code or gmail history or something like that.
This would be done in parallel for key sources.
There is a lot of information on physical devices that is helpful, though. Even discovering additional apps and services used on the devices can lead to more discovery via those cloud services, if relevant.
Physical devices have a lot of additional information, though: Files people are actively working on, saved snippets and screenshots of important conversations, and synced data that might be easier to get offline than through legal means against the providers.
In outright criminal cases it's not uncommon for individuals to keep extra information on their laptop, phone, or a USB drive hidden in their office as an insurance policy.
This is yet another good reason to keep your work and personal devices separate, as hard as that can be at times. If there's a lawsuit you don't want your personal laptop and phone to disappear for a while.
Sure it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data. There's also the possibility of the MDM software making it impossible to decrypt if given a remote signal. Even if you image the drive, you can't image the secure enclave so if it is wiped it's impossible to retrieve.
> Sure it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data.
In these situations, refusing to provide those keys or passwords is an offense.
The employees who just want to do their job and collect a paycheck aren’t going to prison to protect their employer by refusing to give the password to their laptop.
The teams that do this know how to isolate devices to avoid remote kill switches. If someone did throw a remote kill switch, that’s destruction of evidence and a serious crime by itself. Again, the IT guy isn’t going to risk prison to wipe company secrets.
> Are they just seizing employee workstations?
Yes.
Why don't you think they have file cabinets and paper records?
Why is this the most upvoted question? Obsessing over pedantry rather than the main thrust of what's being discussed
France24 article on this: https://www.france24.com/en/france/20260203-paris-prosecutor...
lol, they summoned Elon for a hearing on 420
"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,
I wonder how he'll try to get out of being summoned. Claim 4/20 is a holiday that he celebrates?
It's voluntary
They'll make a judgement without him if he doesn't turn up.
An "audition en tant que témoin libre" is more or less the way for an investigation to give a chance to give their side of the story. Musk is not likely to be personally tried here.
> Claim 4/20 is a holiday that he celebrates?
Given his recent "far right" bromance that's probably not a good idea ;)
This isn't the 90s. The right doesn't give a shit about weed.
The right didn't give a shit about weed in the 80's or the 90's depending entirely upon who had it.
When Bernhard Hugo Goetz shot four teenagers on an NYC subway in the 80s, his PCP-laced marijuana use and stash back at his apartment came up in both sets of trials in the 80s and later in the 90s.
It was ignored (although not the alleged drug use of the teenagers) as Goetz was dubbed The Subway Vigilante and became a hero to the right.
~ https://en.wikipedia.org/wiki/1984_New_York_City_Subway_shoo...
His victims were upscaled to "super predators".
It hadn't occurred to me that might be the reason they picked 420
It’s unlikely, because putting the month first is a US thing. In France it would be 20/04, or “20 avril”.
Still, stoner-cultures in many countries in Europe celebrate 4-20, definitively a bunch of Frenchies getting extra stoned that day. It's probably the de-facto "international cannabis day" in most places in the world, at least the ones influenced by US culture which reached pretty far in its heyday.
Wouldn't celebrating hitler's birthday be good for his far-right bromance?
Oh, that was 100% in my mind when I wrote that. I was wondering how explicit to be with Musk's celebrating being for someone's birthday.
We'll know he's gone too far if he has to take another "voluntary" trip to Israel
you would perhaps be shocked to learn how right-leaning the money folks behind the legal and legacy cannabis markets actually are. money is money.
> Claim 4/20 is a holiday that he celebrates?
Most likely, it's Hitler's birthday after all
>The Paris prosecutor's office said it launched the investigation after being contacted by a lawmaker alleging that biased algorithms in X were likely to have distorted the operation of an automated data processing system.
I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
Distorted the operation how? By making their chatbot more likely to say stupid conspiracies or something? Is that even against the law?
GDPR has some stuff about biased algorithms. It's all civil, of course, no prison time for that, just fines.
Holocaust denial is illegal in France, for one, and Grok did exactly that on several occasions.
Also, csam and pornographic content using the likeness of unwilling people. Grok’s recent shit was bound to have consequences.
If the French suspected Grok/X of something as serious as CSAM, you can bet they would have mentioned it their statement. They didn't. Porn, they did.
The first two points of the official document, which I re-quote below, are about CSAM.
> complicité de détention d’images de mineurs présentant un caractère pédopornographique
> complicité de diffusion, offre ou mise à disposition en bande organisée d'image de mineurs présentant un caractère pédopornographique
[1]: https://www.tribunal-de-paris.justice.fr/sites/default/files...
> The first two points of the official document, which I re-quote below, are about CSAM.
Sorry, but that's a major translation error. "pédopornographique" properly translated is child porn, not child sexual abuse material (CSAM). The difference is huge.
Quote from US doj [1]:
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old. While this phrase still appears in federal law, “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child. In fact, in 2016, an international working group, comprising a collection of countries and international organizations working to combat child exploitation, formally recognized “child sexual abuse material” as the preferred term.
Child porn is csam.
[1]: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
> “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child.
Yes, CSAM is preferred for material depicting abuse reflecting resulting trauma.
But not for child porn such as manga of fictional children depicting no abuse and traumatising no child.
> Child porn is csam.
"CSAM isn’t pornography—it’s evidence of criminal exploitation of kids."
That's from RAINN, the US's largest anti-sexual violence organisation.
They are words for the same thing, it's like arguing they can't seize laptops because the warrant says computers.
Actually it's like arguing they can't seize all computers because the warrant only says laptops. I.e. correct.
Maybe US law makes a distinction, but in Europe there is no difference. Sexual depictions of children (real or not) is considered child pornography and will get you sent to the slammer.
On the contrary, in Europe there is a huge difference. Child porn might get you mere community service, a fine - or even less, as per the landmark court ruling below.
It all depends on the severity of the offence, which itself depends on the category of the material, including whether or not it is CSAM.
The Supreme Court has today delivered its judgment in the case where the court of appeals and district court sentenced a person for child pornography offenses to 80 day fines on the grounds that he had called Japanese manga drawings into his computer. Supreme Court dismiss the indictment.
The judgment concluded that the cartoons in and of itself may be considered pornographic, and that they represent children. But these are fantasy figures that can not be mistaken for real children.
https://bleedingcool.com/comics/swedish-supreme-court-exoner...
Is "it" even a thing which can be guilty of that?
The way chatbots actually work, I wonder if we shouldn't treat the things they say more or less as words in a book of fiction. Writing a character in your novel who is a plain parody of David Irving probably isn't a crime even in France. Unless the goal of the book as such was to deny the holocaust.
As I see it, Grok can't be guilty. Either the people who made it/set its system prompt are guilty, if they wanted it to deny the holocaust. If not, they're at worst guilty of making a particularly unhinged fiction machine (as opposed to the more restrained fiction machines of Google, Anthropic etc.)
> I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
GDPR and DMA actually have teeth. They just haven't been shown yet because the usual M.O. for European law violators is first, a free reminder "hey guys, what you're doing is against the law, stop it, or else". Then, if violations continue, maybe two or three rounds follow... but at some point, especially if the violations are openly intentional (and Musk's behavior makes that very very clear), the hammer gets brought down.
Our system is based on the idea that we institute complex regulations, and when they get introduced and stuff goes south, we assume that it's innocent mistakes first.
And in addition to that, there's the geopolitical aspect... basically, hurt Musk to show Trump that, yes, Europe means business and has the means to fight back.
As for the allegations:
> The probe has since expanded to investigate alleged “complicity” in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organised group, and other offences, the office said in a statement Tuesday.
The GDPR/DMA stuff just was the opener anyway. CSAM isn't liked by authorities at all, and genocide denial (we're not talking about Palestine here, calm your horses y'all, we're talking about Holocaust denial) is a crime in most European jurisdiction (in addition to doing the right-arm salute and other displays of fascist insignia). We actually learned something out of WW2.
Why "lol"?
420 is a stoner number, stoners lol a lot, thought of Elmo's failed joint smoking on JRE before I stopped watching
...but then other commenters reminded me there is another thing on the same date, which might have been more the actual troll at Elmo to get him all worked up
Well yes, if France24 was using "20 April 2026" as we write here, there would be no misunderstanding.
I believe people are looking too much into 20 April → 4/20 → 420
I believe the French format the date 20/4 ... and the time 16 h 20
April 20th most definitely is international stoners day. And I like what the French have done here!
I assume in France international stoners' day falls on the 4th of Duodevigintiber.
Thanks for the cultural perspective / reminder, yes that is definitely an American automatic translation
> lol, they summoned Elon for a hearing on 420
No. It's 20 April in the rest of the world: 204.
Incredible to see all these commenters defending obvious nasty behaviour by a bad individual and a sad company. Are you admiring Elon so much because he has money, success? There are more important things in live. Not being an asshole Nazi admirer, for example.
A Russian in a French prison says my country isn't free. Well, let that message spread to other criminals. You're not welcome in France.
Guess that will be a SpaceX problem soon enough. What a mess.
I wonder if the recent announcement spurred them into making a move now rather than later.
The merger was most likely now because they have to do it before the IPO. After the IPO, there’s a whole process to force independent evaluation and negotiation between two boards / executives, which would be an absolute dumpster fire where Musk controls both.
When they’re both private, fine, whatever.
First thing a public spacex would want to do is sell off all the non-spacex crap
How was that move legal anyway? Like... a lot of people and governments gave Musk money to develop, build and launch rockets. And now he's using it to bail out his failing social media network and CSAM peddling AI service.
Once he launched the rockets he can do whatever he wants with the profit. And he wants to train Grok.
Money comes with strings, such as when forming an ongoing relationship with a company you expect them to not merge with other companies you are actively prosecuting. I suspect the deal is going so fast to avoid some sort of veto being prepared. Once SpaceX and xAI are officially the same, you lose the ability to inflict meaningful penalties on xAI without penalizing yourself as an active business partner with SpaceX.
Was it a grant or a purchase? If I buy a pizza from the pizza shop, it costs them $10 to make, I pay $11, the $1 is profit and the owner can do what he wants with it. But if I get a grant from NLnet I have to spend it on what the grant proposal says. Though a lot of NLnet grants are for living costs while doing a project, so I can do what I like for that time if the project gets done.
CSAM in space! At least he isn’t reinventing the cross town bus.
The owner of X / Grok, despite claiming he "never heard of the guy", has been found to be associated with a known convicted pedophile and child prostitute trafficker.
The brother of said owner and board member of X / Grok, procured girls through said convicted pedophile.
X / Grok create a child porn generator.
Nothing to see here, move on. I can't believe you guys are still talking about that woke leftist hoax that I spent years telling you was a conspiracy and a coverup that went to the highest levels of the <other side> elites.
Interesting. This is basically the second enforcement on speech / images that France has done - first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
linkedin is not a replacement for twitter, though. I'm curious if they'll come back post-settlement.
In what world is generating CSAM a speech issue? Its really doing a disservice to actual free speech issues to frame it was such.
if pictures are speech, then either CSAM is speech, or you have to justify an exception to the general rule.
CSAM is banned speech.
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
If libeling real people is a harm to those people, then altering photos of real children is certainly also a harm to those children.
I'm strongly against CSAM but I will say this analogy doesn't quite hold (though the values behind it does)
Libel must be as assertion that is not true. Photoshopping or AIing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this is true?" which is perfectly legal
“ 298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.
It doesn't have to be an assertion, or even a written statement.You're quoting Canadian law.
In the US it varies by state but generally requires:
A false statement of fact (not opinion, hyperbole, or pure insinuation without a provably false factual core).
Publication to a third party.
Fault
Harm to reputation
----
In the US it is required that it is written (or in a fixed form). If it's not written (fixed), it's slander, not libel.
Pictures are statement of fact: what is depicted exists. Naked pictures cause harm to reputation
The relevant jurisdiction isn't the US either.
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
Really? By what US definition of CSAM?
https://rainn.org/get-the-facts-about-csam-child-sexual-abus...
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
That's not what we are discussing here. Even less when a lot of the material here is edits of real pictures.
>but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
They were downvoted for completely misunderstanding the comment they replied to.
I really don't see reasonable enforcement of CSAM laws as a restriction on "diversity of thought".
This is precisely the point of the comment you are replying to: a balance has to be found and enforced.
Very different charges however.
Durov was held on suspicion Telegram was willingly failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to direct violation of data safety law or association with a separate criminal activities, neither is about speech.
I like your username, by the way.
CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
In some shady corners of the internet I still see advertisements for child porn through Telegram, so they must be doing a shit job at it
"I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech."
I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.
An assassination market, at least the one we discussed, works like this - One or more people put up a bounty paid out on the death of someone. Anyone can submit a (sealed) description of the death. On death, the descriptions are opened — the one closest to the actual circumstances is paid the bounty.
One of my portfolio companies had information about contributors to these markets — I was told by my FBI contact when I got in touch that their view was the creation of the market, the funding of the market and the descriptions were all legal — they declined to follow up.
Sounds like betting on Polymarket: will $person die this year? If you're going to kill him, you bet everything you have on yes right beforehand.
OK this sounds more like gamer dipshittery than anything serious.
The issue is still not really speech.
Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
I wouldn't equate the two.
There's someone who was being held responsible for what was in encrypted chats.
Then there's someone who published depictions of sexual abuse and minors.
Worlds apart.
Telegram isn't encrypted. For all the marketing about security, it has none, apart from TLS, and an optional "secret chat" feature that you have to explicitly select, only works with 2 participants and doesn't work very well.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
ok thank you! I did not know that, I'm ashamed to admit! sort of like studying physics at university a decade later forgetting V=IR when I actually needed it for some solar install. I took "technical hiatus" about 5 years and recently coming back.
Anyway cut to the chase, I just checked out Mathew Greens post on the subject, he is on my list of default "trust what he says about cryptography" along with some others like djb, nadia henninger etc
Embarrased to say I did not realise, I should of known! 10+ years ago I used to lurk the IRC dev chans of every relevant cypherpunk project, including of text secure and otr-chat when I saw signal being made and before that was witnessing chats with devs and ian goldberg and stuff, I just assumed Telegram was multiparty OTR,
OOPS!
Long winded post because that is embarrassing (as someone who studied cryptography undergrad in 2009 mathematics, 2010 did postgrad wargames and computer security course and worse - whose word once about 2012-2013 was taken on these matters by activists, journalists, researchers with pretty knarly threat model - like for instance - some guardian stories and former researcher into torture - i'm also the person that wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear the threat model was anyone could be there, oops oops oops
Yes thanks for letting me know I hang my head in shame for missing that one or some how believing that one without much investigation, thankfully it was just my own personal use to contact like friend in the states where they aren't already on signal etc.
EVERYONE: DON'T TRUST TELEGRAM AS END TO END ENCRYPTED CHAT https://blog.cryptographyengineering.com/2024/08/25/telegram...
Anyway as they say "use it or lose it" yeah my assumptions here no longer valid or considered to have educated opinion if I got something that basic wrong.
>but I do really like a heterogenous cultural situation
Why isn't that a major red flag exactly?
Hi there - author here. Care to add some specifics? I can imagine lots of complaints about this statement, but I don't know which (if any) you have.
That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLMs. And maybe accidentally leak confidential info.
Once you've worked long enough in the software industry, you start to understand it's all just a fully planned economy.
Love watching people on HN support this because they don't like a certain CEO.
I'm sure it's comforting to believe that people you disagree with do so for silly reasons, but many people will support this just because we like the rule of law.
Why would X have offices in France? I'm assuming it's just to hire French workers? Probably leftover from the Pre Acquisition era.
Or is there any France-specific compliance that must be done in order to operate in that country?
X makes its money selling advertising. France is the obvious place to have an office selling advertising to a large European French-speaking audience.
Yes, Paris is an international capital and centrally located for Europe, the Middle East, and Africa. Many tech companies have sales offices there.
Facebook offices should routinely raided for aiding and profitting from various scams propagated through ads on this platform.
Governments don't care about minor scams. Political speech against them, on the other hand...
That would apply to any and all social media though
Sounds awesome, when do we start?
Governments prosecute violations of laws in ways that suit their interest. News at 11
I remember encountering questionable hentai material (by accident) back in the Twitter days. But back then twitter was a leftist darling
I think there's a difference between "user uploaded material isn't properly moderated" and "the sites own chatbot generates porn on request based on images of women who didn't agree to it", no?
Not really?
But it doesn’t. Group has always had Aggressive filters on sexual content just like every other generative AI tool.
People who have found exploits, just like other generative AI tool.
Hentai has different legal status to realistic pictures of real people
Did you report it or just let it continue doing harm?
Define leftist for back in the twitter days? I used twitter early in release. Don’t recall it being a faction specific platform.
I think they're using it in the American sense, which means "anywhere in the political spectrum of the leftmost 60% of the population".
I suppose those are the offices from SpaceX now that they merged.
So France is raiding offices of US military contractor?
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?
The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.
I know it's hard to grasp for you. But in France, french laws and jurisdiction applies, not those of the United States
To be fair, it is common confusion. In other comments, you see people arguing by US constitution.
Even if it is, being affiliated with the US military doesn't make you immune to local laws.
https://www.the-independent.com/news/world/americas/crime/us...
So what?
Another discussion: https://news.ycombinator.com/item?id=46872894
> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating the communication with the public that funds your existence in different terms. The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" at this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
> official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users
... thereby driving up adoption far better than Twitter itself could. Ironic or what.
>I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms
I think we are getting very close the the EU's own great firewall.
There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?
- fine harvesting mechanism? Keep as-is.
- true user protection? Blacklist.
I support the EU harvesting money from evil companies
Or the companies could obey the law
This. We don't have to accept that they behave that way. They enter our economies so they need to adhere to our laws. And we can fine them. No one wants to lose Europe as a market, even if all the haters call us a shithole.
In an ideal world they'd just have an RSS feed on their site and people, journalists, would subscribe to it. Voilà!
This. What a joke. Im still waiting on my tax refund from NYC for plastering "twitter" stickers on every publicly funded vehicle.
>The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
Who decides what communication is in the interest of the public at large? The Trump administration?
You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.
I suppose the answer, if we're serious about it, is somewhat more nuanced.
To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should be democratically via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.
That aside - there are two separate problems that often get conflated when we talk about these platforms:
- one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;
- the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.
A potential middle position could be ti use commercial social platforms as secondary distribution instead of the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities since dominant communications infrastructure has quasi-public function (rest assured, I won't open that can of worms right now).
Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.
Public institutions can use any system they want and make the public responsible for reading it.
I remember in some countries there's an official government newspaper. Laws reference publishing things in this paper (e.g. tax rate changes, radio frequency allocations) and the law is that you must follow it once it's published.
In practice the information is disseminated through many channels once it's released in the official newspaper. Mass media reports on anything widely relevant, niche media reports on things nichely relevant, and there's direct communication with anyone directly affected (recipient of a radio frequency allocation) so nobody really subscribes to the official government newspaper, but it's there and if there was a breakdown of communication systems that would be the last resort to ensure you are getting government updates.
This is a show of resolve.
"Uh guys, little heads up: there are some agents of federal law enforcement raiding the premises, so if you see that. That’s what that is."
I guess this means that building the neverending 'deepfake CSAM on demand machine' was a bad idea.
It's cool that not every law enforcement agency in the world is under the complete thumb of U.S. based billionaires.
This is yet another example of Macron (Europe) playing stupid games with Trump (US).
The charges are made up baloney, the victims don't exist, it's just more IP theft and cash grab.
Maybe you should look up child pornography laws in Europe. In Sweden, the mere act of scrolling by an image depicting (real or not) a child in a sexual position, and having it stored in the browser cache, is a crime with up to 2 years of prison time.
Elon's in the files asking Epstein about "wild parties" and then doesn't seem to care about all this. Easy to draw a conclusion here.
All I've seen is Elon tried to invite himself to the "wild parties" and they told him he couldn't come and that they weren't doing them anymore lol. It's possible he went but, from what I've seen, he wasn't ever invited.
As far as I can tell from the reporting:
* They exchanged various emails between 2012 and 2014 about Elon visiting the island
* They made plans for Elon to visit the island
* We don't know if Elon actually followed through on those plans and he denies it
I think it's premature to say he didn't go, and the latest batches of emails directly contradict the claim he wasn't ever invited.
See https://www.cnbc.com/2026/01/30/epstein-files-show-elon-musk...
Elon is literally in the files, talking about going to the island. It's documented
Who knows who did what on this island, and I hope we'll figure it out. But in the meantime, going to this island or/and being friend with Epstein doesn't automatically make someone a pedo or rapist.
As part of the irrational mob that is out to find the witch, you are just being too rational. Down vote!
Ignoring mountains of circumstantial evidence isn't rational either.
It's odd to be so prim about someone who is notorious for irrational trolling for the sake of mob entertainment.
https://www.theguardian.com/technology/2018/jul/15/elon-musk...
No, but they all knew he was a pedo/rapist, and were still friends with him and went to the island of a pedo/rapist, and introduced the pedo/rapist to their friends...
We don't know how many were pedo/rapists, but we know all of them liked to socialize with one and trade favours and spread his influence.
Yes yes such a complex situation and so hard to tell whether the guy with the pedo non-con site wanted to go to the pedo non-con island.
Neither does your wife divorcing you at about the same time things started to go through legal process...
Oops... yeah, in retrospect it was even worse... no... you can and should be judged by the friends you keep and hang-out with... The same ones who seem to be circling the wagons with innocuous statements or attempts to find other scapegoats (DARVO)... hmm, what was that quote again:
"We must all hang together or we will all hang separately"
You know the flight logs are public record and have been for a decade right? We know (and have known for awhile), exactly who was and wasn't there. Who was there: Obama, Bill Clinton, and Bill Gates (his frequency of visits cost him his marriage). Who wasn't there? Trump and Elon because at the time they weren't important enough to get an invite. All of this is a matter of public record.
Obama is not in the flight logs and there is no evidence he was ever on the island.
Elon Musk has his own planes, he would not have needed a ride had Epstein invited him. Recently released emails also show people (like commerce secretary Howard Lutnick, who asserted at great length last year that he hadn't had any contact with Epstein since meeting him in 2005) arranging to visit Epstein at his island and taking their own yacht over there.
He was only going to the island to get rid of bots on Twitter. Just like OJ spent the rest of his life looking for the real killer.
It's timestamped like 2013, I think. Years before he bought Twitter (yes, I know you're joking)
He was planning way ahead, like a real genius.
Finally, someone is taking action against the CSAM machine operating seemingly without penalty.
It's also a massive problem on Meta. Hopefully this action isn't just a one-off.
Does Meta publish it themselves or is it user–generated?
I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/
CSAM does not have a universal definition. In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.
No abuse of a real minor is needed.
> CSAM does not have a universal definition.
Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.
> In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.
No corroboration found on web. Quite the contrary, in fact:
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
https://rm.coe.int/factsheet-sweden-the-protection-of-childr...
> If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.
> No abuse of a real minor is needed.
Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."
Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.
This is the actual law (Brottsbalken 16:10a)
https://www.riksdagen.se/sv/dokument-och-lagar/dokument/sven...
A person who
1. depicts a child in a pornographic image,
2. disseminates, transfers, provides, exhibits, or otherwise makes such an image of a child available to another person,
3. acquires or offers such an image of a child,
4. facilitates contacts between buyers and sellers of such images of children or takes any other similar measure intended to promote trade in such images, or
5. possesses such an image of a child or views such an image to which he or she has gained access
shall be sentenced for a child pornography offense to imprisonment for at most two years.
Then there's Proposition 2009/10:70, which is a clarifying document on how the law should be interpreted:
https://www.riksdagen.se/sv/dokument-och-lagar/dokument/prop...
Let me quote (translated):
"To depict a child in a pornographic image entails the production of such an image of a child. An image can be produced in various ways, e.g., by photographing, filming, or drawing a real child. Through various techniques, more or less artificial images can also be created. For criminal liability, it is not required that the image depicts a real child; images of fictitious children are also covered. New productions can also be created by reproducing or manipulating already existing depictions, for example, by editing film sequences together in a different order or by splicing an image of a child’s head onto an image of another child’s body."
Not only that. This law exists like this because of a EU directive.
https://eur-lex.europa.eu/eli/dir/2011/93/oj/eng
Let me quote again: Pay attention to c.iv specifically:
(c) ‘child pornography’ means:
(i) any material that visually depicts a child engaged in real or simulated sexually explicit conduct;
(ii) any depiction of the sexual organs of a child for primarily sexual purposes;
(iii) any material that visually depicts any person appearing to be a child engaged in real or simulated sexually explicit conduct or any depiction of the sexual organs of any person appearing to be a child, for primarily sexual purposes; or
(iv) realistic images of a child engaged in sexually explicit conduct or realistic images of the sexual organs of a child, for primarily sexual purposes;
" Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "
Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?
> Are you from Sweden?
No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.
> Why do you think the definition was clear across the world and not changed "before AI"?
I didn't say it was clear. I said there was no disagreement.
And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.
"No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."
So you cant speak Swedish, yet you think you grasped the Swedish law definition?
" I didn't say it was clear. I said there was no disagreement. "
Sorry, there are lots of different judical definitions about CSAM in different countries, each with different edge cases and how to handle them. I very doubt it, there is a disaggrement.
But my guess about your post is, that an American has to learn again there is a world outside of the US with different rules and different languages.
> So you cant speak Swedish, yet you think you grasped the Swedish law definition?
I guess you didn't read the doc. It is in English.
I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.
" I too doubt there's material disagreement between judicial definitions. "
Sources? Sorry , your gut feeling does not matter. Esspecially if you are not a lawyer
I have no gut feeling here. I've seen no disagreeing judicial definitions of CSAM.
Feel free to share any you've seen.
> Even the Google "AI" knows better than that. CSAM "is [...]"
Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.
Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.
Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.
Thanks. For a moment I slipped and fell for the "AI" con trick :)
> - in any current law.
It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).
The holder was not convicted but that is besides the point about the material.
> It has been since at least 2012 here in Sweden. That case went to our highest court
This one?
"Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"
https://bleedingcool.com/comics/swedish-supreme-court-exoner...
It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.
> and they decided a manga drawing was CSAM
No they did not. They decided "may be considered pornographic". A far lesser offence than CSAM.
You are both arguing semantics. A pornographic image of a child. That's illegal no matter what it's called. I say killing, you say murder, same law though, still illegal.
In Swedish:
https://www.regeringen.se/contentassets/5f881006d4d346b199ca...
> Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.
Which translated means that the children does not have to be apart of sexual acts and indeed undressing a child using AI could be CSAM.
I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produces by Grok are CSAM by Swedish standards.
Where do these people come from???
The lady doth protest too much, methinks.
That's the problem with CSAM arguments, though. If you disagree with the current law and think it should be loosened, you're a disgusting pedophile. But if you think it should be tightened, you're a saint looking out for the children's wellbeing. And so laws only go one way...
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
Because that is up to the courts to interpret. You cant use your common law experience to interpret the law in other countries.
> You cant use your common law experience to interpret the law in other countries.
That interpretation wasn't mine. It came from the Court of Europe doc I linked to. Feel free to let them know its wrong.
So aggressive and rude, and over... CSAM? Weird.
As good as Australia's little boobie laws.
https://www.theregister.com/2010/01/28/australian_censors/
You don't see a huge difference between abusing a child (and recording it) vs drawing/creating an image of a child in a sexual situation? Do you believe they should have the same legal treatment? In Japan for instance the latter is legal.
He made no judgement in his comment, he just observed the fact that the term csam - in at least the specified jurisdiction - applies to generated pictures of teenagers, wherever real people were subjected to harm or not.
I suspect none of us are lawyers with enough legal knowledge of the French law to know the specifics of this case
This comment is a part of the chain that starts with a very judgemental comment and is an answer to a response challenging that starting one. You don't need legal knowledge of the French law to want to distinguish real child abuse from imaginary. One can give arguments why the latter is also bad, but this is not an automatic judgment, should not depend on the laws of a particular country and I, for one, am deeply shocked that some could think it's the same crime of the same severity.
Are you implying that it's not abuse to "undress" a child using AI?
You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.
> Are you implying that it's not abuse to "undress" a child using AI?
Not at all. I am saying just it is not CSAM.
> You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools.
Its terrible. And when "AI"s are found spreading deepfakes around schools, do let us know.
CSAM: Child Sexual Abuse Material.
When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.
> When you undress a child with AI,
I guess you mean pasting a naked body on a photo of a child.
> especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated.
In which country is that?
Here in UK, I've never heard of anyone jailed for doing that. Whereas many are for making actual child sexual abuse material.
It doesn't mention grok?
Sure does. Twice. E.g.
Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.
CTRL-F "grok": 0/0 found
You're using an "AI" browser? :)
I found 8 mentions.
Yet another nail
> They have also summoned billionaire owner Elon Musk for questioning.
Good luck with that...
the thing is a lot of recent legal preceding surrounding X is about weather X fulfilled the legally required due diligence and if not what level of negligence we are speaking about
and the things about negligence which caused harm to humans (instead of e.g. just financial harm) is that
a) you can't opt out of responsibility, it doesn't matter what you put into your TOS or other contracts
b) executives which are found responsible for the negligent action of a company can be hold _personally_ liable
and independent of what X actually did Musk as highest level executive personal did
1) frequently did statements that imply gross negligence (to be clear that isn't necessary how X acted, which is the actual relevant part)
2) claimed that all major engineering decisions etc. are from him and no one else (because he love bragging about how good of an engineer he is)
This means summoning him for questioning is legally speaking a must have independent of weather you expect him to show up or not. And he probably should take it serious, even if that just means he also could send a different higher level executive from X instead.
I guess he could just never enter the EU ever again. Maybe he can buy Little St. James.
Good and honestly it’s high time. There used to be a time when we could give corps the benefit of the doubt but that time is clearly over. Beyond the CSAM, X is a cesspool of misinformation and generally the worst examples of humanity.
I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France are the only ones doing this.
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
> If anything it should be an embarrassment that France are the only ones doing this.
As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
Full marks to France for addressing its higher than average rate of unemployment.
/i
> when notified, doing nothing about it
When notified, he immediately:
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:
https://www.bbc.com/news/articles/c98p1r4e6m8o
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
> Which is an entirely different legal liability.
In UK, it is entirely the same. Near zero.
Making/distributing a photo of a non-consenting bikini-wearer is no more illegal when originated by computer in bedroom than done by camera on public beach.
I thought this was about France
It was... until it diverted. https://news.ycombinator.com/item?id=46870196
The part of X’s reaction to their own publishing I’m most looking forward to seeing in slow-motion in the courts and press was their attempt at agency laundering by having their LLM generate an apology in first-person.
“Sorry I broke the law. Oops for reals tho.”
Kiddie porn but only for the paying accounts!
Who's going to provide their payment details and then generate kiddie porn?
This is a pretty pragmatic move by Musk.
It's basically a honey trap, the likes of which authorities legitimately use to catch criminals.
The other LLMs probably don't have the training data in the first place.
Er...
"Study uncovers presence of CSAM in popular AI training dataset"
https://www.theregister.com/2023/12/20/csam_laion_dataset/.
I think the grok incident/s were distasteful but I can't honestly think of a reason to ban grok and not any other AI product or even photoshop.
I barely use it these days and think adding it to twitter is pretty meh but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than grok e.g. prune-juice drinking sandal wearers in britain (many of whom are now government backbenchers) absolutely despise twitter and want to ban it ever since their team lost control. Similar vibe across the rest of europe.
They have (astutely, if they realise it at least) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost definitely plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment but it is genuinely reliant on twitter. The centre-left is basically dead so it doesn't get the same value from bluesky / their parts of twitter.