27 comments

  • latexr 4 hours ago

    In case someone is missing context, this is Google (apparently together with Meta, Microsoft, and Snap) coming out in favour of Chat Control legislation. This is something EU citizens have so far fought tooth and nail to repel. The fact that these US companies known for spying on people and invading privacy in the name of profit are lobbying for the legislation should be a warning to us all to avoid their services.

    • jonas21 3 hours ago

      They're not coming out in favor of Chat Control -- they're coming out in favor of having some option where they can operate without violating the law.

      The problem right now is that they can be held liable for distributing CSAM content on their services and, since April 3, they can also be fined if they try to detect that content. It's an impossible situation.

      Now, I'm not claiming that these companies always have noble intentions. But there's nothing nefarious here -- they just want regulatory certainty: do X, Y, and Z and you won't be fined or sued.

      • fc417fc802 1 hour ago

        Blatantly false. They aren't liable as long as they promptly action reports, just like everyone else.

        My impression is that they don't like the bad PR currently associated with various debates surrounding use of social media by children. At the same time they don't want to implement various policies that would be popular with the general public but would hurt their bottom line (ie they don't want to do the right thing).

        So instead they make a big deal about various imperfections to justify draconian solutions that would see them able to implement all sorts of privacy violating measures. Thankfully that failed so now they're engaging in a smear campaign.

        The current conduct of these companies in this regard is openly evil.

        • fwn 2 hours ago

          Yes, Big Tech is in a tough spot here. Obviously, no one wants to host CSAM or be fined for doing so.

          Implementing end-to-end encryption on relevant communication services could mitigate many risks that come with hosting user content.

          It would protect users from Big Tech spying and still allow affected users to report if something sketchy is going on. Best of both worlds.

          In any case, it would be a good start.

          • kkfx 1 hour ago

It's not impossible; it's their centralised model that is. What's untenable is having private platforms on modern mainframes (data centers) instead of distributed, decentralised services where everyone holds a piece (DHT) or whatever they choose (e.g. Nostr/Blossom), and is responsible for what they do.

You can't have democratic societies where four fat cats know everything about everyone while most people know almost nothing about them, and where information, instead of being scattered everywhere for resilience, is concentrated in just a few hands.

        • FabCH 4 hours ago

Interesting way to frame the fact that the members of the European Parliament voted 311 to 218 yesterday to reject the companies' right to spy on you.

I'm the first person to admit the EU has a democratic deficit, but MEPs are directly elected by EU citizens, and they chose this in a democratic process. The companies are certainly making a choice with this blog post.

          • SpicyLemonZest 4 hours ago

            I dunno, man. If tech companies responded to a failure to extend interim guidance by terminating their CSAM detection programs, and claimed when challenged that the EU made them do it, I'm pretty confident there would be much more outrage about "malicious compliance". If the EU wants companies to stop detecting CSAM until the final guidance arrives, they should say so directly.

            • FabCH 3 hours ago

              They did.

              EU Commission reported that the false positive rate was 13-20%.

              German police reported that 50% of all reports were wrong.

              The system is rubbish and the EU MEPs were quite open about wanting it to go away.

              • bluGill 2 hours ago

What are the false negative rate and the total numbers? Without those we are missing too much. If the false negative rate (saying something is fine when it isn't) is high, then the whole thing is useless. If the total cases are a few hundred (either CSAM isn't a problem, or those doing it use other platforms because they know they will be caught on these), I don't care much that some are false positives - odds are it didn't get me.

                • tremon 2 hours ago

You cannot know the false negative rate without investigating 100% of all photos. You are asking for the impossible.

                  • fc417fc802 1 hour ago

                    Sure you can, random sampling should work. Don't just go making things up.

                    Of course actually carrying out that experiment would be absurd since I don't think anyone expects an appreciable percentage of clearnet material to be CSAM. The working assumption is that the goal is to find a needle in a haystack so GP's objection about needing to know the false negative rate is misguided.
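To make the sampling point concrete, here's a rough sketch of how such an estimate would work. All numbers are invented for illustration, and it assumes you have a ground-truth reviewer for every sampled item:

```python
import math
import random

random.seed(0)

# Invented numbers purely for illustration: a population where 1 in
# 100,000 items is a true positive that the scanner missed.
POPULATION = 10_000_000
TRUE_MISSES = POPULATION // 100_000  # = 100

def estimate_miss_rate(sample_size: int) -> tuple[float, float]:
    """Uniformly sample items, count misses found by manual review,
    return (point estimate, 95% normal-approximation CI half-width)."""
    hits = sum(1 for _ in range(sample_size)
               if random.randrange(POPULATION) < TRUE_MISSES)
    p = hits / sample_size
    half_width = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, half_width

p, hw = estimate_miss_rate(500_000)
print(f"estimated miss rate: {p:.6f} +/- {hw:.6f}")
```

Caveat: for very rare events the normal approximation is poor; if you see zero hits in n samples, the "rule of three" bounds the rate at roughly 3/n with 95% confidence, which is how you'd bound a needle-in-a-haystack rate in practice.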

                    • bluGill 51 minutes ago

I expect the equivalent of the FBI is investigating this using other sources and so has plenty of data without needing to randomly sample any non-suspect conversation. CSAM has been a problem since before computers.

                    • bluGill 2 hours ago

Only if you want perfection. But the EU should be doing investigations from which they can use statistics to create a good estimate.

                  • throwaway89201 2 hours ago

                    The report you're referring to by the European Commission [1] shows that the mass surveillance of Chat Control 1.0 is probably not very proportional. They even note themselves that "The available data are insufficient to provide a definitive answer to this question".

                    However, the "13-20%" that you're quoting is a dishonest propaganda number itself. It's the false positive rate that a single small company (Yubo) reported. The reported false positive rates of other companies are between 0.32% and 1.5%, which is still a high error rate in absolute numbers.
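To see what that range means in absolute terms, a back-of-envelope calculation (the report volume below is a made-up figure; only the 0.32% and 1.5% rates come from the quoted report):

```python
# Illustrative only: HYPOTHETICAL_REPORTS is invented; the two rates
# are the 0.32%..1.5% range of per-company false positive rates above.
HYPOTHETICAL_REPORTS = 1_000_000

for fpr in (0.0032, 0.015):
    wrong = HYPOTHETICAL_REPORTS * fpr
    print(f"FPR {fpr:.2%}: ~{wrong:,.0f} wrongful reports "
          f"per {HYPOTHETICAL_REPORTS:,} total")
```

Even at the low end of the range, thousands of innocent users get reported per million reports filed.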

                    Just to be clear: the report itself is full of uncertainty, convenient half truths and false causality. They for example completely rely on Big Tech platforms themselves to count false positives when a moderation decision was reversed. Microsoft apparently even claims that no user ever appealed against a decision ("No appeals reported"). There is no independent investigation into the effectiveness of the regulation at all, while it is in direct conflict with fundamental rights and required to be proportional to its goals.

                    The section about "children identified" is also a complete mess where most countries can't even report the most basic data, and it isn't clear if mass surveillance contributed anything to new cases at all. But somehow they still conclude "voluntary reporting in line with this Regulation appears to make a significant contribution to the protection of a large number of children", which seems extremely baseless.

                    [1] https://www.europarl.europa.eu/RegData/docs_autres_instituti...

                    • SpicyLemonZest 3 hours ago

                      I'm sure a lot of HN commenters would agree that a CSAM detection system with a 13-20% false positive rate should be terminated, but we're not EU regulators. And you've got a sibling comment saying this would be malicious compliance, so even on HN it's not unanimous. Is there an example of a specific EU official, MEP, etc. explicitly stating that tech companies should not perform hash-based CSAM detection or should not perform CSAM detection at all?

                      • FabCH 3 hours ago

                        Yes? The Pirate Party has MEPs, it’s not exactly difficult to find their quotes. 3 seconds of searching was enough to find the following quote from MEP Markéta Gregorová:

                        „We can now finally say with certainty that Chat Control 1.0 will end on April 3 without replacement. The European Parliament has sent a clear signal: it is time to put an end to this ineffective and disproportionate derogation from privacy rules. Under the pretext of protecting children, millions of private messages from innocent citizens were being scanned for years without delivering adequate results. This system simply did not work and had no place in a democratic society.“

                        It doesn’t have to be unanimous on HN. It wasn’t even unanimous in the EUP.

But it was legal and democratic. And the discussion in the parliament explicitly included the fact that the companies will either have to stop or find a different legal grounding.

                        The companies in this blog post are effectively admitting they are making a choice to go against the law.

                    • ceejayoz 3 hours ago

                      > I'm pretty confident there would be much more outrage about "malicious compliance".

                      As there should be.

                      The big tech companies have done that every time the EU passes some consumer protections, and have been spanked in court several times for the disingenuousness.

                      • generic92034 3 hours ago

                        Spanked? Hardly ever are there fines

                        A) actually being paid in the end and

                        B) high enough to be of any concern to the concern.

                  • throwaway89201 3 hours ago

                    So just a recap of what happened between the European Commission and the European Parliament and why the regulation has expired (it's a long story, I'm probably missing many nuances):

                    - In 2021 the European Parliament voted in favor of a temporary regulation that allowed companies to (i.e. voluntarily) scan private communications. Let's call it Chat Control 1.0. They chose to enact this because US companies were already scanning private messages in violation of the ePrivacy Directive which had come into force in the previous year. Instead of enforcing this directive, they chose to (temporarily) legalize the scanning of private messages while preparing more permanent legislation.

                    - In 2024 Chat Control 1.0 was extended for another 2 years. An amendment was adopted that explicitly noted that after this time "[the regulation] shall lapse permanently".

                    - From 2022 to 2025 the European Commission (together with member states) has proposed mandatory scanning, later updated with a proposal for client-side scanning (defeating end to end encryption), AI classification of image and text content, age verification and a lot of other invasive measures. This is what is known as Chat Control 2.0. The European Parliament has again and again voted against this proposal.

                    - In 2025/2026 the European Commission finally (temporarily) backed down from Chat Control 2.0 and instead proposed to extend Chat Control 1.0 for another 2 years, but has completely failed to negotiate with parliament to adopt a text that explicitly puts fundamental rights up front, something that a majority of the European Parliament had asked for since 2021.

- In response to this, the Civil Liberties Committee of the European Parliament tabled amendments [1] that explicitly limit the regulation to its subject matter and prevent it from being used to weaken end-to-end encryption. Many of these amendments were adopted.

- Consequently, many conservative members of the European Parliament voted down the entire extension of the regulation. They apparently felt it was better to let the regulation expire so that they gain more negotiating power to adopt a version of the regulation that has fewer safeguards or contains measures like those in Chat Control 2.0.

                    [1] https://www.europarl.europa.eu/doceo/document/LIBE-AM-784377...

                    • sebastiennight 2 hours ago

I think your recap is missing a pretty large step at the very beginning, which is that, AFAIR, the EU Parliament put together this temporary regulation to retroactively permit the scanning that was already being done, outside the law, by those US companies on EU citizens' messages; the temporary regulation was put in place until a proper framework could be agreed upon.

                      • throwaway89201 2 hours ago

                        Yes indeed, thanks for the correction. It has been a complex story, and I already forgot that chapter. I edited it into my post (also modified a wrong date of the first derogation), although I'm probably missing more nuances.

                    • mehov 3 hours ago

                      The important thing you need to know about EU Chat Control is that the politicians will be exempted from the mass surveillance they are about to build.

                      https://fightchatcontrol.eu/

                      • Geof25 2 hours ago

Set up a political party and turn every citizen into a politician. For a fee, of course.

                      • praptak 3 hours ago

                        When I see a corporation taking moral high ground I immediately assume their motives are nasty and 99% of the time I'm right.

                      • cherryteastain 51 minutes ago

We can see how monumentally important the work to stop Chat Control has been to preserving the right to privacy by the frothing anger on display here. The companies which complain to Uncle Sam every time they're fined by the EU for getting caught red-handed smothering competition now ask the EU for more regulation.

Ms. von der Leyen will need to find a way to make it up to Google et al, considering she has been pushing this regulation at their behest with fervour. That will probably take the form of even further prostration of Europe to US big tech.

                        • userbinator 2 hours ago

                          What a doublespeak title.

                          "Reaffirming our commitment to mass surveillance"

                          That's more like it.

                          • dang 3 hours ago

                            Is there a more neutral and informative third-party article? The corporate press release is not a great genre fit for this site.

                            https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...

                            • fc417fc802 47 minutes ago

                              I kind of like it in this specific case because it's straight from the horse's mouth, laying bare their intent without inviting accusations of a third party having introduced political spin.

                            • moezd 3 hours ago

                              Alright, at least now we can confidently put company symbols next to this incessant push towards Chat Control in EU parliament. Know your enemy, I guess.

                              • djoldman 2 hours ago

Seems like if it were possible to implement end-to-end encryption where Google had no way to decrypt a communication, Google could avoid liability for facilitating transmission of CSAM?

                                Shouldn't this big liability be pushing the big tech firms to do so?

                                • notrealyme123 3 hours ago

                                  How about we protect the children from Google and meta? Making children into depressed social media addicts is not great.

                                  • bypdx 2 hours ago

The world, including the internet, is not and should not be a safe place for children. The government is not and should not be a substitute for parenting.

                                    • bradley13 4 hours ago

                                      It's for the children!

                                      BS. It's for control and censorship and data harvesting.

Meta alone spent $2 billion lobbying for age-restriction laws, which they tried to hide by pumping it through third parties. We don't know how much the other tech giants spent.

                                      • eeeficus 4 hours ago

When you see the behemoths of US tech coming together, you can be sure it isn't for anything good! These assholes are supporting and enabling the orange clown (a suspected pedophile), and they want us to believe that they suddenly care about the children.

                                        • dygd 3 hours ago

                                          > This is not just a matter of law, but of protecting children.

                                          They didn't even write this themselves.

                                          • CommenterPerson 4 hours ago

                                            Came here to write this exact same thing; saw it's already done.

                                          • IncreasePosts 4 hours ago

                                            How is matching images against known hashes of child porn enabling control, censorship, and data harvesting?

                                            • whatshisface 4 hours ago

                                              It is like letting a policeman into your house to make sure you are not committing crimes. The methods (installing an AI module behind your defenses against criminal hackers that is programmed to betray you) are too invasive.

                                              • IncreasePosts 24 minutes ago

Real world analogies to tech usually don't work (I would download a car), but I think in this case it would be more like: you hire a servant, and that servant helps you out with whatever you ask, but if your servant sees something absolutely disgusting and illegal, they call the police and tattle on you.

                                                Or another analogy, back in the day, when nearly everyone was taking pictures with film cameras, the person doing the developing of your film would definitely call the cops on you if you had them develop child porn.

                                              • ceejayoz 4 hours ago

Because at some point someone in power adds the JD Vance meme that was going around as a hash.

                                                • iamnothere 2 hours ago

                                                  Or leaks related to national security failures/coverups or exposing corruption. Or copyright infringement.

                                                • exyi 4 hours ago

The same tool is very handy if you hypothetically wanted to control the spread of anything else, like anti-ICE apps for instance.

Also, hash matching is so easily bypassed that you can be sure they really want to add some "AI" detector as well.

                                                  • IncreasePosts 23 minutes ago

                                                    How is scanning hashes of photos you upload to your cloud account going to give anyone the ability to stop you from downloading an app?

                                                    • gruez 3 hours ago

                                                      >Same tool is very handy if you hypothetically wanted to control spread of anything else, like anti ice apps for instance.

                                                      That's a weak argument because they can already do that today with google's play protect and apple's app notarization.

                                                      • fc417fc802 42 minutes ago

                                                        They already have one way of doing it therefore we should make a legal carve out to give them additional ways of doing it even though we don't want them to be able to in the first place.

                                                        That doesn't make sense. It's a defeatist attitude that serves only to advantage the opponent.

                                                    • eqvinox 4 hours ago

                                                      > matching images against known hashes

                                                      That's not how that works, last I checked. AIUI it's much more fuzzy. Has to be, being scum doesn't automatically make you an idiot, and a single bit change would make plain old hashes entirely useless.

                                                      Insert your favourite dystopia to see where that ends up and how companies benefit from it.

                                                      • IncreasePosts 27 minutes ago

                                                        Hash functions don't need to be bit-level sensitive. See: "perceptual hashing"
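A toy sketch of the idea (not any real deployed system; this is a dHash-style difference hash, run on an already-downscaled 9x8 grayscale grid to avoid image library dependencies):

```python
# Minimal perceptual-hash sketch: each bit records whether a pixel is
# brighter than its right neighbour, so small value changes rarely flip bits.

def dhash(grid: list[list[int]]) -> int:
    """9 columns x 8 rows of 0-255 values -> 64-bit hash."""
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distance = perceptually similar."""
    return bin(a ^ b).count("1")

# A toy "image" and a copy with one pixel nudged by a single level:
img = [[(r * 29 + c * 31) % 256 for c in range(9)] for r in range(8)]
tweaked = [row[:] for row in img]
tweaked[0][0] += 1  # unlike a cryptographic hash, this barely matters

print(hamming(dhash(img), dhash(tweaked)))  # distance stays small (here 0)
```

This robustness to small edits is exactly why such hashes are "fuzzy" - and also why unrelated images can land close together, i.e. the collision concern raised elsewhere in the thread.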

                                                      • raverbashing 4 hours ago

I'd grant that matching hashes is probably the least bad way of going about this

                                                        Except for that pesky detail of hash collisions

                                                    • echelon 4 hours ago

                                                      > Reaffirming our commitment to child safety

                                                      "We tried to build an even deeper panopticon to enslave you. Drats, you and your Democratic process. We thought we'd pulled the wool over your eyes claiming it was for the kids. We'll get you next time you peons. It's just a matter of time."

                                                      • matheusmoreira 4 hours ago

                                                        Too accurate... I hate that they will actually keep trying to force it through until they get the outcome they want. You didn't vote correctly this time, time to hold another referendum. Do try to vote more responsibly this time around.

                                                      • oybng 3 hours ago

                                                        days since google was evil: 0

                                                        • nothinkjustai 3 hours ago

                                                          I know people say Apple’s commitment to privacy is all talk, and there are valid criticisms of Apple and their business practices, but they seem better than the other big tech companies like Meta, MS, and Google by a very wide margin when it comes to privacy.

                                                        • OrvalWintermute 2 hours ago

                                                          BigTech is quickly trying to punt their legal liabilities from their alleged actions, and transfer that risk elsewhere e.g. https://www.nbcnews.com/tech/social-media/jury-orders-meta-p...

                                                          • kkfx 1 hour ago

                                                            Coming from a company that profiles children with Classroom, sure. Coming from those pushing for age verification, just to shift the Overton window of acceptability towards mandatory logins for everything, the end of the open web and free discussion, all to better feed on slaves.

                                                            No thanks, ChatControl is a THREAT to Democracy, and the companies pushing it must be ELIMINATED from the market for reasons of national and human security, along with the politicians lobbying for them.

                                                            • Thank you, but no. We don't want mass spying. The "child safety" argument is simply lies and manipulation.

                                                              • b00ty4breakfast 3 hours ago

                                                                "We are once again sending out checks to EU commissioners to get our handcrafted legislation put into law"

                                                                • FpUser 3 hours ago

                                                                  Translation: "reaffirming our commitment to spy upon, control and censor users"

                                                                  Fuck you.

                                                                  • raverbashing 4 hours ago

                                                                    > Reaffirming our commitment to regulatory capture and mass surveillance

                                                                    FTFY

                                                                    • burnt-resistor 2 hours ago

                                                                      "Think of the children" really means "think of the government and big tech control, privacy data monetization and data brokering we're able to force on these fools".

While I want parents to be able to protect kids in a sensible manner, selling out everything and everyone else in civilization, along with our core values, isn't a price we should ever consider paying in so-called democratic societies.

                                                                      • sylos 4 hours ago

                                                                        Maybe if all of those companies hadn't paid large sums of money to one of the most famous child sex traffickers, their cries of "think of the children" wouldn't be so creepy

                                                                        • gruez 3 hours ago

                                                                          >Maybe if all of those companies hadn't paid large sums of money to one of the most famous child sex traffickers

Source? Specifically that they paid "large sums" after it came out he was a child sex trafficker? Otherwise you can't (and shouldn't) expect companies to be doing private investigations prior to donating.

                                                                          • agilob 3 hours ago

Larry Page and Mark Zuckerberg, colleagues of Jeffrey Epstein, are committed to protecting your children. From whom? Are they going to scan all emails and use AI to rat on their buddies?

                                                                          • ninjahawk1 3 hours ago

                                                                            It’s never “for the children”, it’s about control and money.

                                                                            • bypdx 2 hours ago

The world, including the internet, is a place for adults. Government is NOT an acceptable substitute for parents.

                                                                              • kubb 4 hours ago

                                                                                This is great, Google vs EU. Which one does HN hate more? Can't wait to find out.

                                                                                • ahartmetz 4 hours ago

                                                                                  Let's try to keep the conversation about the issues instead of tribal outrage bait.

                                                                                  • debugnik 3 hours ago

                                                                                    This post is Google attempting outrage bait to push for mass surveillance. The comments can't get much better than the topic.

                                                                                    • gruez 3 hours ago

                                                                                      >instead of tribal outrage bait.

                                                                                      I'd say around at least a quarter of the comments in this thread are generic tribal/populist "outrage bait".

                                                                                      • kubb 3 hours ago

                                                                                        Riiight, my comment is the problem that lowers the quality of the very focused, issue-based discussion.

                                                                                        • ahartmetz 1 hour ago

                                                                                          I didn't downvote you and I saw your post as making fun of the bad direction more so than contributing to it.

                                                                                        • hackable_sand 1 hour ago

                                                                                          Try to keep your bot from posting comments then.

                                                                                        • nothinkjustai 3 hours ago

                                                                                          Somehow it’ll be Apple