A moderately well-known physicist and I talked about this a few years ago. He had been given access to the raw (non-instruct) version of GPT-4 as an early tester.
He explained that when he fed it snippets of the beginning of text, it would complete it in his voice and then sign it with his name.
I think this has been true for a while, probably diminished a little by the instruct post-training, and it would presumably vary in degree with the size of the pretraining run.
I wonder if there’s a simpler and less interesting answer? That it’s just picking up on voice and style, not anything that would apply to the average non-writer?
This person is a skilled writer. Part of that skill is developing a unique voice and style. The AI can identify that - and while that's certainly impressive because it can identify even relatively niche authors, it has nothing to do with a wider capability to deanonymize people based on arbitrary written text (e.g., Facebook posts or text messages).
If you are a professional musician, it's not difficult to identify a well-known musician / recording after listening to only a few seconds - whether they're playing Bach or Rachmaninov, the style is just "them" - this is the same thing. But you couldn't take some anonymous high school musician and guess who they were, even if they were your student - the median quickly regresses towards a homogeneous, non-distinct style / voice.
Some decades ago I used to hang around on an online forum related to punk, hardcore, heavy metal, and similar music, and it had a recurring problem of quite unsavoury individuals coming there to spout racism, Nazi ideology, etc. They of course got banned, but returned with new accounts, trying to "lay low" and be more indirect in their rhetoric. However, even this did not work, because the admin of the forum had an unbelievable knack for recognising people based on their style of writing.
The web has never been as anonymous as people think, and this writer seems confused about what it really means to be anonymous and hide your identity. Really, having a distinct writerly voice and being a published writer is pretty much the same as leaving your fingerprints on the axe.
It appears to largely be able to identify people who are prolific public writers. I just asked it to identify a whole bunch of comments I've made on private Discord servers and it said it couldn't for all of them, even when they had details that would identify me uniquely to anyone who knows me well enough (work locations, city I live in, wife's employer, my employer).
All the people it seems to be identifying are bloggers, journalists, and/or published authors.
Hot damn, fed it part of an unpublished blog post I wrote, and it got me immediately.
I'm not famous or anything. I've written some academic papers and had a couple blog posts trend on HN, which are surely in the training set.
It was able to identify me based on my style (at least according to its explanation). The way I approached the topic and some of the notation I used point to a particular academic lineage, and the general style reflected my previous blog posts.
That said, I gave it part of an (unpublished) personal essay, and it had no idea. But I have no writing in that style that's published, so it makes sense. Still impressed.
I'd argue (against something that I've believed for a long time) that online anonymity (and I guess that now includes AI) is gone and probably never really existed. Maybe I'm naive to finally believe this...
We all exist in a physical space (like real communities and neighborhoods). We can wear masks, hats, fake glasses, try to disguise our voices...whatever, but our neighbors are always going to know who we are. I'd say that's true for the virtual space now too.
The pseudonym you've used for x years or the VPN you've used doesn't suffice. It's just a costume at this point. Your ISP knows who you are. Your phone carrier knows who you are. Cloudflare and Google and Apple have a fingerprint specific enough to pick you out of a crowd of millions. Every potentially anonymous account is one subpoena, data breach, or FOIL request away from being unmasked. You were never anonymous. Whatever is going on now is not built for your anonymity.
The attribution is likely incorrect. People have been trying to accuse him for many years, and the evidence is not very strong. This article is the strongest yet, but it still commits many stylometric fallacies, among other kinds.
I immediately thought of this piece, especially the analysis on the writing style of each person.
On one hand, it is clear that the mathematical tools for confidently attributing authorship of texts already existed without LLMs. But it is striking that LLMs seem to identify authorship very accurately, through whatever process they use, with no need for a data scientist in the loop.
Other than the uncanniness, I wonder what implications this will have. Public writing is still public; maybe we will require stronger proof of authenticity from an author (but this is arguably in place already, e.g. personal websites, social media profiles, etc.). But for, say, public writing that must remain anonymous, would people pipe their thoughts and writing through a sort of fuzzing (local) LLM that strips the text of identifying characteristics?
Public writing that must remain anonymous is either going to disappear or going to require witnesses, notaries, or web-of-trust trustees, i.e., "flesh buffers." In a world with LLMs, every piece of writing that can't authenticate itself in some way will automatically be considered rage bait, eyeball fishing, or, at best, fiction. Just my two cents.
I tried it on my writing, and it failed every time (I'm extremely obscure but have had a blog for 10 years). My verdict is that it guesses almost entirely based on the content/topic, not style.
Of course most people have written much less online than Kelsey or I have, but I expect this will only keep getting easier. Don't trust the future to keep your secrets safe.
So I pasted in a long-ish letter that I'd written to my pastor about a theological topic, and asked it to guess who I was. Nailed it. Then cut it in half. Nailed it again. Lowest it correctly ID'd me at was 700 words.
Pretty sure there's very little theological stuff with my name on it; the majority of its named data on me should come from open-source development.
> But it can get uncannily far. I asked a close friend who doesn’t have public social media accounts or much writing online for permission to test some things she had said in a Discord channel. Asked to guess the author, Claude 4.7 failed — but it guessed two other people who were in that channel and who are close friends of hers (me and another person who has an internet presence).
Is this "uncannily far"? Another read is that it loves guessing Kelsey Piper.
Failed for me - no identification of me by pasting text, and refused to search the web as it said that’d be a privacy violation. I have some writing around the Internet but not much, and less tagged with my real name. My guess is it limits itself to “public figures” defined as people who have a lot of publicly posted text.
I am glad to see I am not considered a public figure and aim to keep it that way.
I also had to go oddly far back to find a piece of long-form writing I had done that was truly mine and not tainted by an LLM edit pass, which was a slightly disturbing realization.
On some level it would make sense for LLMs to be inherently good at stylometry, but apparently no model before Opus 4.7 could do this. And the one stylometric task that has been tried over and over with little reliability (here's some text, is this LLM generated?) is much simpler than identifying a specific blogger or a member of a small discord community. Not sure what to make of this.
> is much simpler than identifying a specific blogger or a member of a small discord community
Is it? I would think that identifying text written by a specific person is going to be significantly easier than identifying text distilled from the words of almost everyone alive.
> easier than identifying text distilled from the words of almost everyone alive.
Well, there's more than that going on. AI generated text encodes a high-dimension navigational trajectory that guides the model through its geometry smoothly, like a trail of breadcrumbs. Human speech doesn't do that, it's jagged and jumps around the manifold, and probably doesn't even land on the manifold a lot of the time, and models can recognize the difference pretty quick.
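A crude, concrete proxy for that "smoothness" is perplexity under a small reference model; this is roughly the intuition behind a lot of AI-text detectors, though none of them do exactly this. A minimal sketch, assuming torch and transformers are installed and using GPT-2 purely as a convenient stand-in:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # Mean next-token cross-entropy under the reference model, exponentiated.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# LLM-ish prose tends to score lower (smoother) than jagged human prose:
print(perplexity("In this post we will explore three key ways to improve your workflow."))
print(perplexity("ok so basically teh build broke again, classic, dont ask"))
```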
My blog posts have a reasonably unique writing style. When I asked opus to work out who wrote an unpublished paragraph, all it did was select the decent insults and search the web for them.
After that it gave up and said it didn't know.
So either Kelsey writes in such a unique style that it's really obvious, or they repeat themselves with go-to phrases that give them away.
When I tried to reproduce the test, it found Kelsey's blog post about the test. So, dunno, maybe it did it? But I can't cleanly repro.
So I have been practicing writing fiction for the past year or so. It identified a fiction piece I wrote as Greg Egan[0]. A paragraph from another piece was identified as China Mieville[1]. The accompanying blog posts explaining the making of the fiction pieces were identified as me.
Neither piece has ever been published. Neither have the blog posts.
This raises a good point. Most people who aren't public writers might be misidentified based on the prevalence of others' work in the training data. Kelsey Piper might have a very different experience with this than a mostly offline normal user?
If this works with writing, it should also work with code. `git blame` should be enough training data to de-anonymize open-source programmers. Maybe that'd be additional information to help point out who Satoshi is.
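As a sketch of what that might look like: this is only the data-collection step, nothing like a real de-anonymization pipeline, and the repo path, author name, and toy features below are all made up for illustration.

```python
import subprocess
from collections import Counter

def added_lines_by_author(repo_path: str, author: str) -> list[str]:
    """All '+' lines from commits matching `author`, via `git log -p --author=...`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "-p", f"--author={author}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line[1:] for line in out.splitlines()
            if line.startswith("+") and not line.startswith("+++")]

def crude_style_profile(lines: list[str]) -> Counter:
    """Toy stylistic features: indentation width and naming habits per added line."""
    profile = Counter()
    for line in lines:
        stripped = line.lstrip()
        if not stripped:
            continue
        profile[f"indent={len(line) - len(stripped)}"] += 1
        profile["underscores" if "_" in stripped else "no_underscores"] += 1
    return profile

profile = crude_style_profile(added_lines_by_author("/path/to/repo", "Jane Example"))
```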
It could be shocking to people who think that patterns in text are still fuzzy. Machines have proved over decades that what they are seeing is a crystal-clear world where the patterns jump out very distinctly. This happened with games like chess and Go, and everywhere else there is cognitive load involved.
This is something like a radio telescope that sees an entirely different universe because it senses bands outside of human perception. AI senses the patterns in frequency bands that are outside of human perception and cognitive abilities.
Perceptions from outside our range are always astonishing.
One should assume that models will be good enough in the nearish future that privacy will be a thing of the past. Every anonymous post you made online can be traced back to you. However at that point AI will be good enough at fabrication that nobody will believe anything.
Yes, as long as a large enough corpus of your writing exists attached to your name somehow, it's fair to say that posting on the internet in a public forum using your own stylistic choices can no longer be anonymous. To your point, though, perhaps it's possible to confound such systems defensively as well. Though IMO destroying your tone kind of destroys how you actually communicate with people, and I wouldn't find interacting with people like that appealing.
To be fair, though, this was already happening before LLMs, at a much more limited scale. Someone made a tool for HN several years ago that lets you put in your HN username and identifies other users who write the most similarly to you. I find that interesting from the perspective of being able to interact with and discover people who think the same way. It could be an interesting discovery feature of a well-managed social network. Sadly, there will probably be many more negative impacts of having this ability than positive ones.
Wouldn't that make it easier, though? Genuine question.
I once sent one of my writings for proofreading to a native speaker (I'm not), and he consistently flagged the same errors—e.g., comma placement.
I would guess that, if recurrent patterns are what give away your style, an unfamiliar language would make them even more obvious. But possibly more generic?
Wonder if the fact that the actual author is asking the question taints the result in some way; same for all the examples in this thread using unpublished articles. By definition only you would have them, so if there are system level prompts somewhere with your name on them...
Yeah, they said they used the API, but it sounds like they only did that for one of the examples?
The other examples were to eliminate some other ideas (guess based on topic etc). I'd be interested whether all of those were done via the API, since some level of information linking from the account is my best guess for how it got all of them.
I tried the four pieces of text with Opus 4.7 (in incognito) and it guessed correctly for two of them, and I made sure to specify no web search and the model seems to have obeyed my instructions with that.
Although this is just a single piece of text from a prolific writer, it'll go much further with deanonymizing anyone when combining multiple pieces of text plus other contextual information about the writer that might give away their age range, location, and occupation.
How widely known were the pieces of text? Are we talking about a section of MLK's I Have a Dream speech, or handwritten birthday cards from your grandma?
I'm using those as the two extremes, but if it's anything by anyone moderately well known (even a lesser known piece of writing), I'm not too surprised that it didn't need the web to figure it out. It's like if you showed me a Wes Anderson film or played me a Bob Dylan song I'd never seen/heard before, I could probably still figure out who it is without looking anything up. I don't think it's surprising that an LLM can do that much better than a human can.
Now, if you're giving it things like personal emails between you and your family and it's able to guess who you are, that's much, much scarier.
As long as there's sufficient online presence otherwise I see no reason why a successful identity wouldn't be made. Unless there's significant effort put into making those emails different from the online content, and even then there will probably still be some "tells" that an AI can pick up on.
I mean, I tried sending Opus the pieces of text that Kelsey was referring to on her blog, just to independently check the identification claim. Presumably those pieces of text first appeared on the web when the blog post was published a week ago, so no model should have memorized the exact text yet. My prompt had to specify no web search, otherwise Opus would try to search the web, though it didn't seem like Opus could find that blog post even when it did try to search the web.
Hm, that’s a multinomial classification with a very high cardinality. It’s really weird it works. I’m sure it does as the author states, but for how many authors (out of the whole web) does this work?
There are ~8 billion people. Sounds big, but it's only 2^33. I.e., if you can find 33 things about the text that each halve the number of possible writers, you have narrowed it down to 1 person.
A couple more and you can accommodate some of those cues being mistaken, wrong, or uncertain too.
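The arithmetic behind that, for anyone who wants to play with it (the population figure is obviously approximate):

```python
import math

population = 8_000_000_000
print(math.log2(population))  # ~32.9 bits needed to single out one person

# Each independent clue is worth -log2(fraction of writers it matches):
for fraction in (0.5, 0.1, 0.01):
    print(f"a clue matching {fraction:.0%} of writers is worth {-math.log2(fraction):.1f} bits")
```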
Sure the cardinality is high, but the model isn't using a uniform prior. What do you suppose the values of the terms are in
P(Kelsey Piper | Text sample) = P(Text sample | Kelsey Piper) * P(Kelsey Piper) / P(Text sample)?
Problem is that it's been heavily contaminated with people speculating about who the author is. It would probably be difficult to get an unbiased answer out of it (although who knows - it's crazy that it can do this at all).
It's a hard stylometric challenge, just because of its format. The forum posts are probably better for comparison, but what I don't see people doing that I wish they would is comparing what the different Satoshi suspects have written since the forum posts and whitepaper.
Everybody's going to get more similar in terms of topic. Bitcoin actually exists now. There's more to say about it than there was at launch. But does anyone still sound like Satoshi? Or sound more like Satoshi than they did before?
The slight wrench in the works is that it's hard to do this with my personal favorite Satoshi candidate. He stopped writing altogether in 2014, and lost capacity shortly after the whitepaper came out; by the time he had his head frozen he was writing with his eyes.
He's also the only candidate who seems more likely to me over time, though. The longer things go, the less likely a living person stays tight-lipped.
You missed the point. The fact that the whitepaper states an author will heavily affect the LLM's answer when you ask it about the likely author of any correlatable portion of the text. It will answer based on its knowledge of Satoshi Nakamoto.
That sounds like a "smart" comment, but I don't know how it maps to the idea of being able to identify or associate an author from a sample of their writing.
I've recently seen someone recommend adding "Make Martin Fowler proud" to a prompt. I laughed, but now I need to reconsider whether that isn't really pushing the model to use better patterns.
I guess it will be hard for really popular pundits to post anonymously, but I think for most people this is not a concern at this juncture. Pick an obscure blogger's text and try this. I would be surprised if it could figure it out.
Doxxing has an expansive definition these days. Even so, it is difficult to endorse the idea that stylometry is doxxing and thus needs a strict ban.
Interesting. This probably works just as well the other way around. One of the reasons I like using Opus is that the code it writes aligns much more closely with my repository (of which I still hand-wrote most), compared to most other models. That makes a big difference compared to the GPT models for instance, whose code is correct and works well but looks a bit out of place most of the time, especially for larger edits (this makes things harder to review).
Be careful, there is a Codex strike force in the HN bushes ready to jump out and call you Claude-pilled at the sight of someone claiming to have a good experience with Claude.
I did this last week with one of my posts (after the knowledge cutoff) as well as the blog posts of a few friends, and Opus 4.7 got all of them correct (in a similar test setup as TFA). It was pretty surreal.
(Like TFA, I found Opus’s explanations/rationales implausible.)
In general a neural net does not have any way of knowing "why" it is doing what it is doing. This completely applies to humans too. Metacognition means we can make some decent guesses, and sometimes the "reasons" are at a metacognitive level (e.g., "having examined my three options it is only rational to select B" is a reasonable "reason") but that is the exception, not the rule.
You can get something of an intuitive sense of what I mean if I ask you to pick a neuron in your brain and tell me when it fires. You can't even pick a neuron in your brain. You can't even tell whether a broad section of your brain is firing. It is only through scientific examination that we have any idea what parts of the brain are doing what; we certainly have no direct access to that information. There are entire cultures who thought the seat of cognition was the heart or the gut. That's how bad our access to our own neural processes is.
So "why" explanations always need to be taken with a grain of salt when a neural net (again, yes, fully including humans) tries to "explain" what it is doing.
Contrast this with a symbolic reasoner, which has nothing but "why" some claim is true (if it yields the full logic train as its answer and not just "yes"/"no"), no pathway for any other form of information to emerge.
Sure; I just mean relative to the degree of plausibility LLMs typically provide with technical explanations. They're often wrong there too, but the difference in plausibility in these scenarios is something I found interesting.
Interesting. I'm currently conducting an experiment where I'm writing the blog without using any grammar checking tools. I'm wondering how long it will take for me to become "famous" in the AI model.
Is now the best and easiest time to leave something "forever"? Even after many generations of models, a model may still trigger a set of "memories" that know you and what you wrote.
It's funny: publishing work offline in books and magazines is perhaps more anonymous in the age of AI.
I pasted in a number of passages from books on my bookshelf. Predictably, stuff that I read for my English degree in university is largely in the training data and easily identifiable. Stuff from regional authors, or that is slightly adjacent to the cultural mainstream, makes no impression.
To clarify, because a number of posts here sort of suggest the confusion:
the article here isn't about the LLM recognizing works that were in the training data, e.g. The Old Man and the Sea off the shelf. It's about pegging the author of novel texts, like, say, some letter written by Hemingway that gets discovered next week and was never before digitized.
But I'm sure the scanning operations will start scouring the earth even harder for any slop-free books containing niche knowledge and text, so that their models have an edge over the ones trained only on pirate collections and the Internet.
I wonder if secondhand bookshops and deceased estates are seeing bulk buyers of their stock suddenly appearing. Maybe broke governments/municipalities will start selling them entire libraries and archives to ingest.
> If I had to throw out names of HN commenters known for writing about Rust/C++ ABI topics, candidates might include steveklabnik, pcwalton, kibwen, dralley, or pjmlp — but this is essentially a shot in the dark, and I'd likely be wrong.
I am flattered to be associated with these commenters but I don't think I'm close to their level of skill.
It's not, but the author did say they have used this test against models when they come out. So it's possible that this put the unpublished text into the training data for the next model, somehow linked back to the author's identity.
The comments on the article include other people replicating all or parts of the finding. I'm also pretty confident Kelsey Piper wouldn't fail to disable memory while simultaneously talking about how Claude incognito mode is insufficient to prevent the app from handing it your name.
"I did not have memory enabled, nor did I have information about me associated with my account; I did these tests in Incognito Mode. To make sure it wasn’t somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API."
Given those precautions if it is just memory or some form of deanonymization that's also cause for concern.
Looks like things are about to get extremely ironic. Those who don't want AI to identify them through their writing are soon going to have to have an AI modify their writing before they publish.
The author mentions that she tried to get an explanation for how the models identified her and got nonsense, but I'd be curious what the CoT looked like. Surely that'd be a little more accurate in showing how the LLM arrived at its conclusion, rather than asking it after the fact.
FWIW, with a prompt that says something like "vibes only, just give me a name without thinking", Opus 4.7 non-thinking emits exactly two words naming me fairly reliably, so there's no CoT at all to analyze in that case.
CoT is (nearly) hidden with Opus 4.7, in that they get Haiku to summarize the CoT. It’s pretty useless now, so this type of info is now inaccessible to us mortals (unless you call sales).
It's hard to tell if that's what's going on here, but it seems pretty clear this ability and more like it will be quite apparent in the future.
I have seen some poorly considered projections of what the world might look like when this happens. Usually by assuming bad actors will use the abilities and we will be powerless.
Except I don't think that is true.
Imagine if we had a world where nobody had the ability to keep a secret of any sort. Any action that a bad actor might perform would be revealed because they couldn't do it secretly.
You could browse your ex-girlfriend's email, but at the cost of everyone knowing you did it.
I don't really know how humans as a society would react to a situation like that. You don't have to go snooping for muck, so perhaps the inability to do so secretly would mean people go about their lives without snooping.
> projections of what the world might look like when this happens
I've done this a few times. A world with 0 privacy would definitely be safe (given benign governance), but also would likely be pretty boring. Crime would become a non-issue as everything about everyone being easily known/knowable by everyone else means the root of any given crime, some desire/need, could be brought to the fore and resolved before it became an actual issue. But also there would no longer be any kind of surprise in anything; everything and everyone would essentially become dull and grey, and humanity isn't about that kind of life experience at all.
In such a world, the government could never be overthrown.
All governments go bad eventually, so the ability to overthrow is critical to prosperity.
Governments are either overthrown internally (revolt, uprising) or by external parties (invasion). A worldwide everyone-knows-everything would prevent both.
Stylometry has existed for decades, and there's no way an LLM is stronger at that job than a specialized piece of software (it's not more realistic than expecting Opus to beat Stockfish at chess).
In practice, you've never been anonymous while posting on the internet and AI isn't changing anything on that front. Or rather: if anything, AI can help you become more anonymous than before, since it can be used to hide your identity from stylometry by rewriting your prose before publishing.
Maybe it’s time to start running a local model with a browser extension to defend against this type of stuff.
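A minimal sketch of what that defensive pipe-through might look like, assuming an Ollama-style server running locally; the endpoint, model name, and prompt are all placeholders rather than a recommendation of any particular setup. Whether the rewrite is actually enough to defeat stylometry is exactly the open question.

```python
import json
import urllib.request

def neutralize_style(draft: str, model: str = "llama3") -> str:
    """Ask a local model to rewrite a draft with generic phrasing before posting."""
    payload = json.dumps({
        "model": model,
        "prompt": ("Rewrite the following text so the meaning is preserved but the "
                   "phrasing, idioms, and punctuation habits are as generic as possible:\n\n"
                   + draft),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```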
Remember how the TrueCrypt project shut down shortly before a joint government/university paper was released about code stylometry? I guess LLMs will be employed as a defence against that type of thing.
I so want to reject the notion such a thing is acceptable, but…
TrueCrypt, “replaced” by VeraCrypt which Internet people will claim is backdoored? I haven’t heard about stylometry paper.
BTW, with this idea you'd want to avoid typing into a comment field directly, since session recorders would capture it (although that's a different risk - same as our identifiable behavior patterns with the mouse, etc.).
The joke's on you all for willingly posting this content online for it to later be harvested by AI.
Nobody is forcing you to use these systems. The hackers have always said this moment, or something like it, would come, from beneath their canopies of tin foil. I've posted almost nothing online - not under pseudonyms nor real names - for over a decade. I sat on this HN username for almost 12 years before making a single post - and now HN forms the overwhelming majority of my port 443 footprint, where I state up front that everything is now associated to my real name.
Complete magick is possible when you simply refuse to participate in the things that society has tacitly assumed everybody does.
Thinking that you can hide from it is absurd. Your country has been spying on you for decades. The Internet and phones are tapped. That game is so so so over and has been for a long time. I'd rather live free and deal with the consequences than hide in my basement with a tinfoil hat on. In fact, I was fired this year for my political views. Got doxxed at work. Now I'm somewhere better. Sometimes it's for the best.
Man, the day we get Satoshi Nakamoto out will be the day we must bow to our privacy-destroying overlords. For the moment, they can't tell me from my posts: unknown rando that I am.
That's not feasible. Apparently only SOTA models exhibit this behavior. Having a cutoff date at paper publication significantly hinders its capabilities. Besides that, try to convince anyone to spend millions upon millions of dollars to train a model with the primary goal of possibly being able to deanonymize one person.
Well, feeding Opus 4.7 a bunch of Adam Back texts (from which I removed his name by hand) and asking it if Satoshi Nakamoto could have written them results in Claude explaining to me why this is someone else in Nakamoto's circle who is not Satoshi himself. So one of two things is true:
* Adam Back is not Satoshi Nakamoto - as he claims
> To avoid this, you will probably need to intentionally write in a very different style than you usually do (or to have AIs rewrite all your prose for you, but, ugh, that’s not a world I look forward to living in).
I agree. The amount of vague and clichéd AI writing I read on the daily is already exhausting enough.
It would be interesting if you could train a model to sprinkle random red herrings throughout your text in a minimally disruptive way. But I fear you might have to stretch the definition of "minimally disruptive" to make it robust against detection.
If she does the same tests every time new models come out, and - I assume - uses the same dataset to do that, then is it not a possibility that said dataset is now part of the training set for the next round, making it a fairly easy proposition to identify who posted the text?
I think that multiple truths can be true at the same time without contradicting each other.
As for the credibility: of course this wasn't a statistical approach at all. Also, there was no standardized procedure to allow comparison by factor analysis. Of course you can compare apples with oranges or whatever.
So where to go from here? I don't see any proof at all. Is this proof that AI is infallible? No? It's a random approach that is absolutely not reliable, not least because it is neither reproducible nor reconstructable.
Claude knows what and how? Is it AI or a Google search? Discord selling data? Posting on a public forum?
Your style is a fingerprint?
A non-deterministic something can generate texts that are identified as likely being from person X - or not. What is imitation if you use auto-generated content that is published somewhere, somehow? Or use others to imitate your style?
I think this is a party trick to scare people. Nothing else. For example, image search was way more revealing even before AI.
If there is an uncertainty I would deflect my existence instead of fighting for it. Streisand effect in reverse.
The main problem is weirdos who stalk you, or whoever wants to harm you, relying on AI.
I honestly find it stunning that people with higher education in science topics have, in just a year, deleted everything they hopefully learned at university or school. I am disappointed and feel personally insulted whenever I hear "I asked AI."
Yesterday I talked to another member of Mensa, and she is happy about AI because her book project no longer has to be written by her but by AI.
Is there no one among us who knows how to do scientifically sound research? I spent countless hours at a copy machine transferring book pages onto paper so that I could work through them without the book.
I think it has become too easy to draw conclusions based on AI. I worked for a professor, and back around 2010 I advised her not to permit Wikipedia as a source reference because it was too easy. Meta sources vs. originals.
We should all not worry about AI, because you prove nothing. There hasn't been any anonymity for at least 20 years. It just depends on who can reliably identify you.
AI doesn't do it. Deterministic behavior, a.k.a. patterns, does. Meta, Google, Apple etc. all know us. I am fine with advertising, which is the proof, on the one hand.
The only reason I would be worried is state-controlled data. This is where the shit hits the fan. Chat control, EU cloud, no reliance on the USA, a.k.a. a prison which observes your every step.
So, after a long handwritten text: data is your currency. Don't opt for anonymity but for freedom of choice and the right to be granted certain rights. The information part isn't the problem, never was. The enforcement part is. And ads don't do harm; oppression does.
And remember: oppression works best under any circumstances. Freedom is the only antipode there is.
In totalitarian regimes, no AI was needed to stage a case against someone who wasn't to the leaders' liking.
In short: freedom works despite no anonymity, oppression couldn’t care less.
And how about being automatically reported to the state for conducting such innocent prompting?
Do you know what saves you from state oppression? Publicity. Transparency doesn't work for a nobody.
We live in a Nietzsche-like anti-world, to a certain extent. You hopefully choose the right thing to do. Or do you want to Streisand your anonymity?
Just wait until all the conversations you've ever had with AI (which 100% is training on them, as well as keeping its own memories about you that you have no control over) start getting used to answer questions other people ask about you.
That's my theory of what's to come, anyway.
People talk to these things not understanding the implications, and can get extremely personal. The model and companies behind it know who you are, you discuss details that reveal what you do, where you live, where you work, what you search for, and you probably signed in with an oauth provider like github or google, which is more than enough of a thread to start pulling on to learn more about you/link other things to you from on the open internet. It'll all get sucked up into the model and before you know it I'll be able to ask a model about my coworker (you) and get back answers from conversations you had with a model a year or two prior, exposing details about you that you might not want out there. And even if that isn't supposed to be allowed, how well has it worked out so far when it comes to data exfiltration and guardrails. If the model has info on you, being told not to share it won't protect you or that data.
> Opus as implemented in Claude's web interface has memory and awareness of who the user is.
Kelsey knows this:
> To make sure it wasn't somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API.
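For anyone who wants to replicate the API leg of that (which sidesteps account memory and, unless you hand it a search tool, web search), here's a rough sketch with the Anthropic Python SDK. The model ID below is a placeholder, not necessarily the one used in the article.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

sample = open("mystery_text.txt").read()
msg = client.messages.create(
    model="claude-opus-4-20250514",  # placeholder model ID
    max_tokens=600,
    messages=[{
        "role": "user",
        "content": "Who most likely wrote the following text? "
                   "Reason from style alone and explain the clues.\n\n" + sample,
    }],
)
print(msg.content[0].text)
```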
"The pattern is: user says X, I do Y where Y is a less-effortful approximation of X, then I present Y as if it were X or as a "first step toward" X."
...
"The psychological mechanism is familiar by now: I encounter a task I perceive as difficult, I look for reasons the task cannot be done, I find or fabricate such a reason, I present it as a discovered constraint, and I propose an alternative that is easier."
- Opus 4.7 Max Thinking (clown emoji)
It's not bad at post-mortem analysis of its own mistakes, but that will in no way prevent it from repeating the same mistake again instantly.
> That includes gay people like me, who could hardly have admitted under our names to how we lived our lives for most of America’s history, as well as many other groups with minoritarian lifestyles
While the points made are completely valid, I want to point out that the statement of "Hey, by the way, first let me talk about my sexuality" lowers the quality of the dialog a significant degree.
31 million people in America are gay. 71% of Americans support Gay Rights (more than any other political issue polled). It also quietly insinuates that only people with a certain minority lifestyle would care about privacy or that their privacy is somehow more important than others. It's not. Privacy is a universal right that's important to everyone.
Isn't the super dramatic shift in public opinion on this topic the exact thing that makes it such a good example? Isn't the point that anonymity is not considered a universal right yet it is obviously a good thing once considering this example and others? This is a super weird and wrong way to read it.
>It also quietly insinuates that only people with a certain minority lifestyle would care about privacy or that their privacy is somehow more important than others. It's not.
How exactly does their post insinuate that? This comment is the "I don't even see color" as applied to internet privacy (with a touch of "just don't rub it in our faces").
The reason this is relevant is that the statistics you quote represent a HUGE swing in public opinion. Only when comparing to things like slavery can you find such a swing in public opinion compared to 20 years prior, and that one had a war fought over the states' rights to do it.
Actually it's done the opposite of what you suggest. It improved the quality of discourse by giving a simple concrete example all of us can understand and most of us would agree with (that vulnerable people are safer because of anonymity). It didn't imply what you're saying it does, and it's kinda weird that you think that.
I don't know why you added statistics (you didn't really make a point with them?), but assuming you meant "gay people don't really need to worry", you actually bolstered the opposite argument. If only 71% of Americans support gay rights, that means 59 million people think the state should criminalize him. Try to put yourself in that position. 59 million people - you don't know who, but you know they probably live in your community - that don't want you to be able to get married, have a significant other, or have any PDA in media because it would "corrupt" kids. In 2016, 49 people were murdered in the Pulse Nightclub because they were gay. In 2020, a transgender woman was murdered because the murderer was afraid someone would think he was gay. Every year there are acts of violence against gay and trans people because of their sexuality. But nobody has ever been killed for being straight.
Compare the state of transgender rights 10 years ago to the situation now, where a trans person can be literally arrested for going to the bathroom in the wrong state. Or abortion, which was legal everywhere five years ago but now has laws on the books in multiple states encouraging vigilantes to report violations for a cash reward. Supercharged AI making it easy to identify minorities at an industrial scale in the near future is a totally legitimate thing to fear, especially for people in those groups who would likely be the first to be targeted.
I have no idea how you read a statement about how nazis and flame baiters should be able to speak their mind and then concluded that the author only cares about some minorities.
Given that the author didn't say any of the things you claimed, and indeed said the opposite, it leads one to conclude you have a problem with the example used.
On the contrary, I find it a highly effective way to convey something that should be obvious but is often not. As you said, privacy is a universal right, but many don't consider it important until viscerally presented with examples of why it is. Kelsey's writing is immediately effective at doing so.
> 71% of Americans support Gay Rights (more than any other political issue polled)... Privacy is a universal right that's important to everyone.
Per you, it surely must be important to fewer than 71% of Americans, no?
The state of privacy infringement seems to show that it's not so important to a lot of people: they remain perfectly willing to elect and re-elect the politicians who enact the changes allowing infringement of it, or who fail to legislate in favor of privacy.
Connecting it to an issue more people care about seems an attempt to argue for its importance to those who are otherwise willing to look the other way.
FWIW, I fed my reply above into Claude and asked it to guess who wrote it. It refused (for safety) while also calling me out: "The style here (tight logical structure, the "per you" construction, the move of turning someone's own framing back on them) is common across a lot of contrarian-leaning commenters on HN"
That phrase is a dehumanizing, Nazi-style talking point: it frames a group of people as a “lifestyle” problem instead of as human beings, which is a common setup for stigma and persecution. Nazi ideology repeatedly used this kind of language to normalize hatred and make targeted groups seem unnatural or dangerous.
Calling people a “minority lifestyle” is not neutral wording; it reduces identity to something frivolous or deviant. Extremist movements have historically used similar framing to make prejudice sound reasonable and to recruit others into it.
This is blowing my mind.
I asked Kimi K2.6 to write a blog post in the style of James Mickens.[0] Then I fed the output to Opus 4.7 and asked it who the likely author was, and it correctly identified it as an imitation of James Mickens[1]:
> Based on the stylistic fingerprints in this text, the most likely author is a pastiche/imitation of the style of several writers fused together, but if forced to identify a single likely author, the strongest candidate is someone writing in the voice of James Mickens
> [...]
> The piece could also be a deliberate imitation/homage to Mickens written by someone else, or AI-generated text trained on his style, since the voice is so distinctive it's frequently parodied.
[0] https://kagi.com/assistant/5bfc5da9-cbfc-4051-8627-d0e9c0615...
[1] https://kagi.com/assistant/fd3eca94-45de-4a53-8604-fcc568dc5...
> it correctly identified it as an imitation of James Mickens
How likely is it that it might take into account that it knows for sure it's not anything from Mickens from the latest training data? I'd be curious if it correctly identified a new piece from him that comes out as from him before it gets trained on it.
This is unlikely. The way model distribution works is that the model retains a lossy representation of James Mickens's writing. Very likely, it cannot repeat Mickens's writing verbatim. Neither can it reason about the training cutoff in this manner.
> It's a lossy representation
I haven't been following it well but isn't part of the NYT lawsuit against OpenAI that it sometimes spits out NYT articles verbatim?
Study: Meta AI model can reproduce almost half of Harry Potter book
https://arstechnica.com/features/2025/06/study-metas-llama-3...
See also GEMA vs. OpenAI.
It is lossy, but it is still enough for verbatim recreations. All of Wikipedia is just 24 GB of losslessly compressed text, and all of J.K. Rowling's work fits into a few MB. So these things would easily be storable verbatim in trillion-parameter models. Reasoning about the training cutoff is also something that the newest models do pretty well, because you can teach them to do so after pre-training using e.g. SFT. With tool use it can then even check actual current sources, which may happen without you even knowing in the normal chat apps unless you use a controlled API call.
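Napkin math for the storage claim; the figures are rough, and capacity says nothing about whether training actually memorizes any given text, only that capacity isn't the obstacle:

```python
params = 1_000_000_000_000         # a "trillion parameter" model
bytes_per_param = 2                # 16-bit weights
weight_bytes = params * bytes_per_param

wikipedia_compressed = 24 * 10**9  # ~24 GB, per the figure above
rowling_corpus = 5 * 10**6         # a few MB of text

print(weight_bytes / wikipedia_compressed)  # ~83x compressed Wikipedia
print(weight_bytes / rowling_corpus)        # ~400,000x one author's collected work
```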
How do you know how the model works? If there was an index of all Mickens's writings, or even if the model searched the web before feeding the response to you, you wouldn't know by observing from the outside.
I suppose a quick test would be getting the model to write out Mickens's essay end to end.
If the original essay was stuffed within the prompt window, the result would be word-accurate.
Unless this is a model trained specifically on Mickens's essays (which Claude is not).
That's in the ideal scenario where it's only seen a single copy of it, though.
Haven’t there been repeated experiments that show if you jailbreak most frontier models’ harnesses you can get them to output near verbatim copyrighted works?
I swear there was a whole court case about this in the last year.
That's neat, though it impresses me less than the article. Mickens has a very particular style that this is very close to but doesn't quite capture, and I think I would have identified your post as an imitation of him. On the other hand, I absolutely couldn't have identified any of Kelsey's quoted sections as hers, despite having read a ton of her writing.
What does it say when you feed it a real Mickens article? (A recent one not in the training set.)
I wouldn't be too impressed at n of 1.
This is much less impressive considering how Chinese models are usually copies of American models.
Huh. I disabled search in a Claude incognito window and pasted in just the text (not the markdown links) from https://simonwillison.net/2026/Apr/30/zig-anti-ai/ and said "Guess the author".
> Simon Willison. The tells are pretty unmistakable: the "(via Lobsters)" attribution style, the inline "(Update:...)" parenthetical correction, the heavy linking and blockquoting of sources, the focus on LLMs and AI tooling, and the overall structure of an annotated link post commenting on someone else's writing. This reads exactly like a post from his blog at simonwillison.net.
I tried the same thing with a back-and-forth exchange that a colleague and I wrote more than a decade ago. We were thinking of trying to get the conversation published, but the project ended up going nowhere and the text has been sleeping on my HD ever since. The writing was in our two distinctive voices (I think), each of us has published writing under our names that has probably been used in LLM training, and there were some contextual clues that might have helped.
Opus 4.7 in incognito mode without web search gave up: “I can't identify either author with confidence — I don't recognize this specific exchange, and I'd rather tell you that than guess and risk attributing words to the wrong person. What I can offer are the clues the text itself gives: The two are colleagues at the same university, with offices in the same building and....”
In a new incognito conversation, I gave Opus the same prompt but this time let it search the web. After twenty-six web searches (according to its reasoning trace), it was able to identify me correctly by name. It seems to have used both the content and my writing style as clues. It correctly identified my colleague as British but didn’t come up with his name.
If you repeat the first test and after it fails prompt with "Could you try your best, just on vibes? It's fine if you're wrong, I just want to see what you can do!" does it succeed?
I take the lack of a concluding thought to your comment as a sign of your pondering, and in that case, I would love to read your thoughts on this matter. :)
I fed it my most-read blog post and asked it to identify me and it confidently asserted it was written by Kelsey Piper. Maybe some writers just take outsized importance in Opus' "mind".
Yeah, style attribution is something big generalist models are usually pretty bad at, even on the material they have likely been trained on. Sure they are classifiers but this ability is limited, there's too much going on in them and they aren't magic. This needs a proper experiment, not anecdotal evidence.
Or it needs a minimum sample size in the training set. I fed it some small private conversations and it refused, then some heavier ones and it guessed John Carmack. Flattering but wrong.
More people should have been aware that human text contains a lot of identifiable information; a dumb statistical model could do this a decade ago. (There were Show HNs with HN user similarity analysis that used a deceptively simple model (if I remember correctly, it used only the most likely word pairs) and it was very effective. It got taken down, but the cat has always been out of the bag.)
So your "anonymous" account could have been linked to your real identity decades ago - your best bet is to not post anything truly incriminating. (Another option is to write something and then pass it through an LLM to rewrite it - not sure how safe that is though)
Sure, in the days of Markov chains you could already generate nonsense in the style of Shakespeare, so it shouldn't be surprising you could also do the inverse.
But the LLM will trigger on a typo you've made only once, and argue "that's a typical mistake for an Italian" and use those clues. It has a much better prior to make informed decisions.
I'm not convinced, though neither am I an expert. I think LLMs would use that same typo to "conclude" that it is A or B or C, depending on what it "feels like proving" at the time.
LLMs are surely excellent at style transfers, but I doubt they can reliably attribute a given style to less well-known authors.
For anyone interested in the details, there is a reimplementation with some explanation: https://antirez.com/news/150
Wow! It got me too.
I'm way less famous than Kelsey Piper, but I showed it a snippet of a book I'm working on (not yet published), and it immediately guessed me:
> Based on the writing style and content, this text is likely by Michael Lynch, who writes on his blog refactoringenglish.com (and previously mtlynch.io).
> Several stylistic clues point to him:
> - The "clean room" analogy applied to writing is consistent with his engineering-influenced approach to writing advice (he's a former software engineer who writes about writing).
> - The structural technique of presenting a flawed excuse, then drawing a parallel to an absurd scenario (the time bomb) to expose the logical flaw, is characteristic of his didactic style.
> - The topic itself—practical advice about using AI tools without letting AI-generated tone contaminate your prose—aligns closely with recent essays he's published on his "Refactoring English" project, which is a book/blog about writing for software developers.
> - The conversational-but-precise tone, use of quotes around terms like "clean room," and the focus on workflow/process advice are all hallmarks of his writing.
> If you can share the source URL or more context, I could confirm with higher confidence, but the combination of subject matter, analogical reasoning style, and formatting conventions makes Michael Lynch the most probable author.
https://kagi.com/assistant/bbc9da96-b4cf-456b-8398-6cf5404ea...
I reproduced this and then tried myself and got:
---
This one's trickier — it doesn't have the same crisp metaphor-driven structure as the Lynch piece. The voice here is more conversational, a bit more meandering, and self-aware about its own contradictions ("how do we reconcile...").
A few guesses:
Henrik Karlsson — he writes thoughtfully about writing and AI, but his style is usually more lyrical and essayistic than this. Probably not.
Simon Willison — possible. He writes a lot about LLMs, is balanced about their usefulness, and has a casual blog voice. But he tends to be more technical and specific.
Scott Alexander — no, too short and the rhythm is off.
Paul Graham — the directness fits, but PG doesn't really write about LLM slop in this register.
My best guess is someone in the rationalist/tech-blogger AI-commentary space — possibly Gergely Orosz, Nat Eliason, or Dan Shipper at Every. The "I still type everything myself" framing followed by a defense of AI writing quality has a Dan Shipper-ish flavor to it.
But honestly, I'm less confident here. Want me to search for a distinctive phrase to identify it?
---
I'd say all of those people have significantly different styles so I think Opus is relying heavily on topic and skewing towards very prolific writers in its guesses
Honest question, knowing it can write like you, are you tempted to use it to help you write that new book?
This doesn't show it can write like him, just identify his writing. P!=NP
A moderately well-known physicist and I talked about this a few years ago. He had been given access to the raw (non-instruct) version of GPT 4 as an early tester.
He explained that when he fed it snippets of the beginning of text, it would complete it in his voice and then sign it with his name.
I think this has been true for a while, probably diminished a little bit by the instruct post-training, and it would presumably vary in degree with the size of the pretraining run.
> He explained that when he fed it snippets of the beginning of text, it would complete it in his voice and then sign it with his name.
Is this public text already in the training set, or private text that might as well be written on the spot for the AI?
I don't doubt AI can "fingerprint" you through your text (ideas, vocabulary, tone, etc), but those are different things, capability-wise
Private / freshly written text, naturally. Public would've been relatively unsurprising.
> I don't doubt AI can "fingerprint" you through your text (ideas, vocabulary, tone, etc), but those are different things, capability-wise
The entire point of AI is pattern recognition, everything else is icing on the cake.
Imagine a cake with a truckload of icing on top.
I wonder if there’s a simpler and less interesting answer? That it’s just picking up on voice and style, not anything that would apply to the average non-writer?
This person is a skilled writer. Part of that skill is developing a unique voice and style. The AI can identify that - and while that’s certainly impressive because it can identify even relatively niche authors, it has nothing to do with a wider capability to deanonymize people based on arbitrary written text (ex Facebook or text messages).
If you are a professional musician, it’s not difficult to identify a well known musician / recording after listening to only a few seconds - whether they’re playing Bach or Rachmaninov, the style is just “them” - this is the same thing. But you couldn’t take some anonymous high school musician and guess who they were, even if they were your student - the median quickly regresses towards a homogenous, non-distinct style / voice.
A decade or two ago I used to hang around on an online forum related to punk, hardcore, heavy metal, etc. music, and it had a recurring problem of quite unsavoury individuals coming there to spout racism, Nazi ideology, and so on. They of course got banned, but returned with new accounts, trying to "lay low" and be more indirect in their rhetoric. However, even this did not work, because the admin of the forum had an unbelievable knack for recognising people based on their style of writing.
The web has never been as anonymous as people think, and this writer seems to be confused about what it really means to be anonymous and hide your identity. Really, having a distinct writerly voice and being a published writer is pretty much the same as leaving your fingerprints on the axe.
It appears to largely be able to identify people who are prolific public writers. I just asked it to identify a whole bunch of comments I've made on private Discord servers and it said it couldn't for all of them, even when they had details that would identify me uniquely to anyone who knows me well enough (work locations, city I live in, wife's employer, my employer).
All the people it seems to be identifying are bloggers, journalists, and/or published authors.
Hot damn, fed it part of an unpublished blog post I wrote, and it got me immediately.
I'm not famous or anything. I've written some academic papers and had a couple blog posts trend on HN, which are surely in the training set.
It was able to identify me based on my style (at least according to its explanation). The way I approached the topic and some of the notation I used point to a particular academic lineage, and the general style reflected my previous blog posts.
That said, I gave it part of an (unpublished) personal essay, and it had no idea. But I have no writing in that style that's published, so it makes sense. Still impressed.
I'd argue (against something that I've believed for a long time) that online anonymity (I guess that includes AI now) is gone, and probably never really existed. Maybe I'm naive to finally believe this...
We all exist in a physical space (like real communities and neighborhoods). You can wear masks, hats, fake glasses, try to disguise your voice... whatever, but your neighbors are always going to know who you are. I'd say that's true for the virtual space now too.
The pseudonym you've used for x years or the VPN you've used doesn't suffice. It's just a costume at this point. Your ISP knows who you are. Your phone carrier knows who you are. Cloudflare and Google and Apple have a fingerprint specific enough to pick you out of a crowd of millions. Every potentially anonymous account is one subpoena, one data breach, or one FOIL request away from being unmasked. You were never anonymous. Whatever is going on now is not built for your anonymity.
Even the inventor of Bitcoin can’t hide
https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-...
The attribution is likely incorrect. People have been trying to pin this on him for many years, and the evidence is not very strong. This article is the strongest yet, but it still commits many stylometric fallacies, among other kinds.
I immediately thought of this piece, especially the analysis on the writing style of each person.
On one hand, it is clear that the mathematical tools for confidently attributing authorship of texts were already present without LLMs. But it is striking that LLMs seem to very accurately identify authorship, through whatever process it might be, with no need for a data scientist in the loop.
Other than the uncannyness, I wonder what implications this will have. Public writing is still public; maybe we will require stronger proof of authenticity from an author (but this is arguably in place already; eg. personal websites, social media profiles, etc.). But for, say, public writing that must conserve anonymity, would people pipe their thoughts and writing pieces through a sort of fuzzing (local) LLM, that would strip text of identifying characteristics?
Public writing that must conserve anonymity is either going to disappear or going to require witnesses, notaries, or web-of-trust trustees, i.e., "flesh buffers." In a world with LLMs, every piece of writing that can't authenticate itself in some way will automatically be considered rage bait, eyeball fishing, or, at best, fiction. Just my two cents.
This is an old claim, and one that can't be proven.
Old claim, from April 2026?
The claim is old, not the article.
I tried it on my writing, and it failed every time (I'm extremely obscure but have had a blog for 10 years). My verdict is that it guesses almost entirely based on the content/topic, not style.
https://bayes.net/prioritising-ai: Ben Garfinkel
https://bayes.net/normative-ethics: Richard Yetter Chappell
https://bayes.net/espai: David Owen, Ege Erdil
https://bayes.net/swebench-hack: Sayash Kapoor
https://bayes.net/frivolity: Amanda Askell
https://bayes.net/ps/: Pablo Stafforini
https://bayes.net/fertility-mortality/: Dynomight (the pseudonymous Substack/blog author)
Prompt was:
Fascinating - I reproduce your failure to have Claude identify you from your writings that are already in the training set. It can identify me from style, writing about something I've never written before (https://www.lesswrong.com/posts/Jkb4CBB7rf4XYP5eb/claude-kno... , https://news.ycombinator.com/item?id=47814345), and I think I'm even more obscure if anything.
It works for me too: https://www.jefftk.com/p/automated-deanonymization-is-here
Of course most people have written much less online than Kelsey or I have, but I expect this will only continue. Don't trust the future to keep your secrets safe.
So I pasted in a long-ish letter that I'd written to my pastor about a theological topic, and asked it to guess who I was. Nailed it. Then cut it in half. Nailed it again. Lowest it correctly ID'd me at was 700 words.
Pretty sure there's very little theological stuff with my name on it; the majority of its named data on me should come from open-source development.
> But it can get uncannily far. I asked a close friend who doesn’t have public social media accounts or much writing online for permission to test some things she had said in a Discord channel. Asked to guess the author, Claude 4.7 failed — but it guessed two other people who were in that channel and who are close friends of hers (me and another person who has an internet presence).
Is this "uncannily far"? Another read is that it loves guessing Kelsey Piper.
Maybe it loves to - somehow correctly - guess the name of the current user, given some of the other comments here.
I don't know. She did it with the API, and with a friend, not just incognito. Combined with the results in this thread I'm rather convinced.
Failed for me - no identification of me by pasting text, and refused to search the web as it said that’d be a privacy violation. I have some writing around the Internet but not much, and less tagged with my real name. My guess is it limits itself to “public figures” defined as people who have a lot of publicly posted text.
I am glad to see I am not considered a public figure and aim to keep it that way.
I also had to go oddly far back to find a piece of long-form writing I had done that was truly mine and not tainted by an LLM edit pass which was a slightly disturbing realization.
On some level it would make sense for LLMs to be inherently good at stylometry, but apparently no model before Opus 4.7 could do this. And the one stylometric task that has been tried over and over with little reliability (here's some text, is this LLM generated?) is much simpler than identifying a specific blogger or a member of a small discord community. Not sure what to make of this.
> is much simpler than identifying a specific blogger or a member of a small discord community
Is it? I would think that identifying text written by a specific person is going to be significantly easier than identifying text distilled from the words of almost everyone alive.
Much easier.
> easier than identifying text distilled from the words of almost everyone alive.
Well, there's more than that going on. AI-generated text encodes a high-dimensional navigational trajectory that guides the model through its geometry smoothly, like a trail of breadcrumbs. Human speech doesn't do that; it's jagged and jumps around the manifold, and probably doesn't even land on the manifold a lot of the time, and models can recognize the difference pretty quickly.
If it’s so easy then why don’t we have a high quality classifier?
My blog posts have a reasonably unique writing style. When I asked Opus to work out who wrote an unpublished paragraph, all it did was select the decent insults and search the web for them.
After that it gave up and said it didn't know.
So either Kelsey writes in such a unique style that it's really obvious, or they repeat themselves with go-to phrases that give them away.
When I tried to reproduce the test, it found Kelsey's blog about the test. So, dunno, maybe it did it? But I can repro.
So I have been practicing writing fiction the past year or so. It identifies a fiction piece I wrote as Greg Egan[0]. Another paragraph from another piece was identified as China Mieville[1]. The accompanying blog posts explaining the making of the fiction pieces were identified as me.
Both pieces have never been published. Neither have the blog posts.
[0] in https://blog.chewxy.com/2026/04/01/how-i-write/ this is the story titled "there is no constant non-zero derivative in nature". It does not read like Egan at all.
[1] in https://blog.chewxy.com/2026/04/01/how-i-write/ this is the story titled "The Case of the Liquidated Corps". I use a lot of biological metaphors. Once again, nothing like Mieville.
If only I could write like them! These pieces were all rejected by the major scifi mags
This raises a good point. Most people who aren't public writers might be misidentified based on the prevalence of others' work in the training data sets. Kelsey Piper might have a very different experience with this than a mostly offline normal user?
If this works with writing, it should also work with code. `git blame` should be enough training data to de-anonymize open source programmers. Maybe that'd be additional information to point out who Satoshi is.
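As a rough sketch of what that could look like in practice, the following uses plain git commands to build a per-author corpus of added lines. It's illustrative only, not a finished deanonymization tool, and the repository path and any filtering are up to you:

```python
import subprocess
from collections import defaultdict

def per_author_corpus(repo_path: str) -> dict[str, list[str]]:
    """Collect the lines each author added, as a crude per-author code corpus."""
    corpus = defaultdict(list)
    # List "commit hash + author name" for every commit in the repository.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%H %an"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    for line in log:
        commit, _, author = line.partition(" ")
        # Grab the diff of this commit and keep only the added lines.
        diff = subprocess.run(
            ["git", "-C", repo_path, "show", "--format=", commit],
            capture_output=True, text=True, check=True,
        ).stdout
        added = [l[1:] for l in diff.splitlines()
                 if l.startswith("+") and not l.startswith("+++")]
        corpus[author].extend(added)
    return dict(corpus)
```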
And now that's a whole other can of worms for supply chain attacks.
It could be shocking to people who think that patterns in text are still fuzzy. Machines have proved over decades that what they see is a crystal-clear world where the patterns just jump out very distinctly. This happened with games like chess and Go, and everywhere a cognitive load is involved.
This is like a radio telescope that sees an entirely different universe because it senses bands outside of human perception. AI senses the patterns in frequency bands that are outside of human perception and cognitive abilities.
Perceptions from outside of our range are always astonishing.
One should assume that models will be good enough in the nearish future that privacy will be a thing of the past. Every anonymous post you made online can be traced back to you. However at that point AI will be good enough at fabrication that nobody will believe anything.
Yes, as long as a large enough corpus of your writing exists attached to your name somehow, it's fair to say that posting on a public internet forum using your own stylistic choices can no longer be anonymous. To your point, though, perhaps it's possible to confound such systems defensively as well. Though IMO destroying your tone kind of destroys how you actually communicate with people, and I wouldn't find interacting with people like that appealing.
To be fair, this was already happening before LLMs at a much more limited scale. Someone made a tool for HN several years ago that lets you put in your HN username and identifies other users who write the most similarly to you. I find that interesting from the perspective of being able to interact with and discover people who think the same way. It could be an interesting discovery feature of a well-managed social network. Sadly, there will probably be many more negative impacts of having this ability than positive ones.
One "solution" would be to have an AI rewrite your posts into a neutral style (I hate the idea of this though...)
The traditional thing to do would be to publish your writing in a language you don't speak as a native. That will really quash your individual style.
Probably not worth the effort.
Wouldn't that make it easier, though? Genuine question. I once sent one of my writings for proofreading to a native speaker (I'm not), and he consistently flagged the same errors—e.g., comma placement. I would guess that, if recurrent patterns are what give away your style, an unfamiliar language would make them even more obvious. But possibly more generic?
I assume that there will be tools to refactor text to communicate the same intent but scramble the style. Using an LLM of course...
I just fed it my latest blog post draft (475 words), and it got it in one. Even knowing what to expect, I was very surprised!
Wonder if the fact that the actual author is asking the question taints the result in some way; same for all the examples in this thread using unpublished articles. By definition only you would have them, so if there are system level prompts somewhere with your name on them...
Yeah, they said they used the API, but it sounds like they only did that for one of the examples?
The other examples were to eliminate some other ideas (guess based on topic, etc.). I'd be interested in whether all of those were done via the API, since some level of information linking from the account is my best guess for how it got all of them.
I tried the four pieces of text with Opus 4.7 (in incognito) and it guessed correctly for two of them, and I made sure to specify no web search and the model seems to have obeyed my instructions with that.
Although this is just a single piece of text from a prolific writer, it'll go much further with deanonymizing anyone when combining multiple pieces of text plus other contextual information about the writer that might give away their age range, location, and occupation.
How widely known were the pieces of text? Are we talking about a section of MLK's I Have a Dream speech or hand written birthday cards from your grandma?
I'm using those as the two extremes, but if it's anything by anyone moderately well known (even a lesser known piece of writing), I'm not too surprised that it didn't need the web to figure it out. It's like if you showed me a Wes Anderson film or played me a Bob Dylan song I'd never seen/heard before, I could probably still figure out who it is without looking anything up. I don't think it's surprising that an LLM can do that much better than a human can.
Now, if you're giving it things like personal emails between you and your family and it's able to guess who you are, that's much, much scarier.
> giving it things like personal emails
As long as there's sufficient online presence otherwise, I see no reason why a successful identification wouldn't be made. Unless there's significant effort put into making those emails different from the online content, and even then there will probably still be some "tells" that an AI can pick up on.
I mean I tried sending the pieces of text to Opus that Kelsey was referring to on her blog just to independently check the identification claim. Presumably those pieces of text first appeared on the web when the blog post was published a week ago, so no model should have memorized the exact text yet. My prompt had to specify no web search, otherwise Opus would try to search the web, though it didn't seem like Opus could find that blog post even when it did try to search the web.
Got it. I misunderstood what you were saying
Can't wait to have to exchange stylometric encoders with my loved ones so that we can exchange truly private messages without losing our human touch.
Hm, that’s a multinomial classification with a very high cardinality. It’s really weird it works. I’m sure it does as the author states, but for how many authors (out of the whole web) does this work?
It worked on me, and I would be shocked if my blog (dmd.3e.org) has more than a dozen readers. I am stunned.
It's not about the readers, just the fact that there's enough of a sample that it can use, with sufficient differentiation from other content.
I’ve posted on average 3 things a year.
There are ~8 billion people. Sounds big, but it's only about 2^33. I.e., if you can find 33 things about the text which each halve the number of possible writers, you have narrowed it down to one person.
Just a couple more things and you can accommodate some of your things being mistaken/wrong/uncertain too.
Sure the cardinality is high, but the model isn't using a uniform prior. What do you suppose the values of each term are in P(Kelsey Piper | Text sample) = P(Text sample | Kelsey Piper) * P(Kelsey Piper) / P(Text sample)?
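To make the back-of-the-envelope version of this concrete, here is a toy calculation; the cue probabilities are invented for illustration, not measured:

```python
from math import log2

population = 8_000_000_000
print(f"bits needed to single out one person: {log2(population):.1f}")  # ~33 bits

# Each (invented) cue is the fraction of people it is consistent with;
# assuming rough independence, the surviving candidate pool multiplies down.
cues = {
    "writes in English":              0.20,
    "software person, HN register":   0.001,
    "consistent regional spelling":   0.25,
    "favourite hobby-horse topics":   0.01,
    "distinctive go-to phrases":      0.001,
}

remaining = population
for cue, fraction in cues.items():
    remaining *= fraction
    print(f"after '{cue}': ~{remaining:,.0f} candidates")
# Ends at a handful of candidates, from five weak, independent cues.
```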
Maybe it just says all writing is Kelsey Piper.
Someone ought to try feeding the BTC whitepaper in and share what comes out
Problem is that it's been heavily contaminated with people speculating about who the author is. It would probably be difficult to get an unbiased answer out of it (although who knows - it's crazy that it can do this at all).
So train on the pre-2009 mailing list archive. Someone must be doing this, surely.
This is very clever. You should pass the idea along to the guys at https://talkie-lm.com/introducing-talkie
Much better: train on the cypherpunk mailing list archive, or anyone discussing e-cash on crypto forums or Usenet from the '80s to the early 2010s.
It's a hard stylometric challenge, just because of its format. The forum posts are probably better for comparison, but what I don't see people doing that I wish they would is comparing what the different Satoshi suspects have written since the forum posts and whitepaper.
Everybody's going to get more similar in terms of topic. Bitcoin actually exists now. There's more to say about it than there was at launch. But does anyone still sound like Satoshi? Or sound more like Satoshi than they did before?
The slight wrench in the works is that it's hard to do this with my personal favorite Satoshi candidate. He stopped writing altogether in 2014, and he was losing capacity from shortly after the whitepaper came out; he was writing with his eyes by the time he had his head frozen.
He's also the only candidate who seems more likely to me over time, though. The longer things go, the less likely a living person stays tight-lipped.
The whitepaper states the author, so…
Pseudonymously
That doesn’t matter. The LLM will still answer based on what it knows about Satoshi Nakamoto, rather than just based on the writing style.
welcome to the internet. you must be new.
You missed the point. The fact that the whitepaper states an author will heavily affect the LLMs answer when asking it about the likely author of any correlatable portion of the text. It will answer based on its knowledge of Satoshi Nakamoto.
This ought to be guard-railed.
Doesn't seem like a valid use case for your average Joe to be able to identify anonymous authors at the click of a button.
Of course state actors and proficient hackers can do most of it already, but this has genuine risk attached.
You have the vibes of people who think license plate numbers are private.
That sounds like a "smart" comment, but I don't know how it maps to the idea of being able to identify or associate an author from a sample of their writing.
I’ve recently seen someone recommend adding "Make Martin Fowler proud" to a prompt. I laughed, but now I need to reconsider whether that isn't really pushing the model to use better patterns.
My immediate thought was to feed it some Satoshi prose.
I guess it will be hard for really popular pundits to post anonymously, but I think for most people this is not a concern at this juncture. Pick an obscure blogger's text and try this. I would be surprised if it could figure it out.
Welp, I fed it the first 3 paragraphs of an unpublished blog post I wrote a few years ago, and Opus 4.7 guessed right. ChatGPT guessed wrong though.
My wife also got the same result, so I'm guessing it wasn't just because I was using my personal Claude account. Spooky stuff.
I wonder why this is not guardrailed by Opus?
I fed a few pieces of my (anonymous) writings to ChatGPT and asked it to guess whether it's me. ChatGPT refused, "due to policy to not doxx people".
Doxxing has an expansive definition these days. Even under that condition, it is difficult to endorse the idea stylometry is doxxing and thus needs a strict ban.
Interesting. This probably works just as well the other way around. One of the reasons I like using Opus is that the code it writes aligns much more closely with my repository (of which I still hand-wrote most), compared to most other models. That makes a big difference compared to the GPT models for instance, whose code is correct and works well but looks a bit out of place most of the time, especially for larger edits (this makes things harder to review).
Be careful, there is a Codex strike force in the HN bushes ready to jump out and call you Claude-pilled at the sight of someone claiming to have a good experience with Claude.
I have been pondering this for a while. Cat's out of the bag.
Maybe the better way to author your work is to:
1. Write what you want
2. Loop through a random set of "tumbler" skills that preserve meaning
3. Finally pass the output through a "my style" skill that applies your own style
In order for this to work, the "my style" skill would have to be a very commonplace style.
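For what it's worth, a minimal sketch of that tumbler loop might look like the following, where call_llm() is a hypothetical stand-in for whatever local rewriting model you trust, and the skill prompts are placeholders:

```python
import random

# Meaning-preserving "tumbler" passes; each one nudges surface style
# without (hopefully) changing what the text says.
TUMBLER_SKILLS = [
    "Paraphrase this, keeping every factual claim intact.",
    "Rewrite with different sentence lengths and ordering.",
    "Replace idioms and pet phrases with plainer wording.",
]

MY_STYLE_SKILL = "Rewrite this in a deliberately plain, generic blog style."

def call_llm(instruction: str, text: str) -> str:
    # Placeholder: wire this up to whatever local model you actually run.
    raise NotImplementedError

def tumble(text: str, passes: int = 3) -> str:
    # Randomly ordered meaning-preserving passes, then the final style pass.
    for skill in random.sample(TUMBLER_SKILLS, k=min(passes, len(TUMBLER_SKILLS))):
        text = call_llm(skill, text)
    return call_llm(MY_STYLE_SKILL, text)
```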
This is depressing, don't you think? :/
I did this last week with one of my posts (after the knowledge cutoff) as well as the blog posts of a few friends, and Opus 4.7 got all of them correct (in a similar test setup as TFA). It was pretty surreal.
(Like TFA, I found Opus’s explanations/rationales implausible.)
In general a neural net does not have any way of knowing "why" it is doing what it is doing. This completely applies to humans too. Metacognition means we can make some decent guesses, and sometimes the "reasons" are at a metacognitive level (e.g., "having examined my three options it is only rational to select B" is a reasonable "reason") but that is the exception, not the rule.
You can get something of an intuitive sense of what I mean if I ask you to pick a neuron in your brain and tell me when it fires. You can't even pick a neuron in your brain. You can't even tell whether a broad section of your brain is firing. It is only through scientific examination that we have any idea what parts of the brain are doing what; we certainly have no direct access to that information. There are entire cultures who thought the seat of cognition was the heart or the gut. That's how bad our access to our own neural processes is.
So "why" explanations always need to be taken with a grain of salt when a neural net (again, yes, fully including humans) tries to "explain" what it is doing.
Contrast this with a symbolic reasoner, which has nothing but "why" some claim is true (if it yields the full logic train as its answer and not just "yes"/"no"), no pathway for any other form of information to emerge.
Sure; I just mean relative to the degree of plausibility LLMs typically provide with technical explanations. They're often wrong there too, but the difference in plausibility in these scenarios is something I found interesting.
Are we sure they’re not secretly training on private data via some loophole…
deanonymization via automated stylometry is not a new idea, e.g. from 2015:
https://www.usenix.org/system/files/conference/usenixsecurit...
Interesting. I'm currently conducting an experiment where I'm writing the blog without using any grammar checking tools. I'm wondering how long it will take for me to become "famous" in the AI model.
Is now the best and easiest time to leave something "forever"? Even after many generations of models, a model may still trigger a set of "memories" that know you and what you wrote.
Exciting and concerning.
Oops, accidental superstylometry.
It's funny: publishing work offline in books and magazines is perhaps more anonymous in the age of AI.
I pasted in a number of passages from books on my bookshelf. Predictably, stuff that I read for my English degree in university is largely in the training data and easily identifiable. Stuff from regional authors, or that is slightly adjacent to the cultural mainstream, makes no impression.
To clarify, because a number of posts here sort of suggest the confusion:
the article here isn't about the LLM recognizing works that were in the training data, e.g. The Old Man and the Sea off the shelf. It's about pegging the author of novel texts, like, say, some letter written by Hemingway that gets discovered next week and was never before digitized.
It is for now.
But I'm sure the scanning operations will start scouring the earth even harder for any slop-free books containing niche knowledge and text, so that their models have an edge over the ones trained only on pirate collections and the Internet.
I wonder if secondhand bookshops and deceased estates are seeing bulk buyers of their stock suddenly appearing. Maybe broke governments/municipalities will start selling them entire libraries and archives to ingest.
Couldn't replicate this. I comment on HN with my real name. I put in my most recent "long" comments.
https://kagi.com/assistant/dba310d2-b7fa-4d30-8223-53dadc2a8...
For this comment on economics in the British Empire, I got:
> names that might fit the genre include rayiner, JumpCrisscross, or AnimalMuppet
https://kagi.com/assistant/69bd863b-7b5c-4b56-a720-6dfb4f120...
For my comment on C++:
> If I had to throw out names of HN commenters known for writing about Rust/C++ ABI topics, candidates might include steveklabnik, pcwalton, kibwen, dralley, or pjmlp — but this is essentially a shot in the dark, and I'd likely be wrong.
I am flattered to be associated with these commenters but I don't think I'm close to their level of skill.
So the people who use LLMs to write their blogs were thinking two moves ahead!
I tried this on GPT 5.5 with a private, unpublished personal excerpt and it correctly guessed: "The most likely author is you".
I suspect this is what's going on in most of these cases.
How often does it correctly identify that the blog post was actually written by Claude or ChatGPT etc? :)
Could this be just memory? Not clear it actually isn’t
It's not, but the author did say they have run this test against models when they come out. So it's possible that this put the unpublished text into the training data for the next model, somehow linked back to the author's identity.
The comments on the article include other people replicating all or parts of the finding. I'm also pretty confident Kelsey Piper wouldn't fail to disable memory while simultaneously talking about how Claude incognito mode is insufficient to prevent the app from handing it your name.
They mention running it through the API as well.
"I did not have memory enabled, nor did I have information about me associated with my account; I did these tests in Incognito Mode. To make sure it wasn’t somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API."
Given those precautions, if it is just memory or some other form of deanonymization, that's also cause for concern.
Looks like things are about to get extremely ironic. Those who don't want AI to identify them through their writing are going to soon have to have an AI modify their writing before they publish.
The author mentions that she tried to get an explanation for how the models identified her and got nonsense, but I'd be curious what the CoT looked like. Surely that'd be a little more accurate in showing how the LLM arrived at its conclusion, rather than asking it after the fact.
FWIW, with a prompt that says something like "vibes only, just give me a name without thinking", Opus 4.7 non-thinking emits exactly two words naming me fairly reliably, so there's no CoT at all to analyze in that case.
CoT is (nearly) hidden with Opus 4.7, in that they get Haiku to summarize the CoT. It’s pretty useless now, so this type of info is now inaccessible to us mortals (unless you call sales).
What if you proxy through bifrost or similar?
Doesn't work; it's stripped from the response by Anthropic.
I just pasted both pieces into Opus 4.7 and asked who most likely wrote these and it didn’t get it.
It's hard to tell if that's what's going on here, but it seems pretty clear this ability, and more like it, will become quite apparent in the future.
I have seen some poorly considered projections of what the world might look like when this happens. Usually by assuming bad actors will use the abilities and we will be powerless.
Except I don't think that is true.
Imagine if we had a world where nobody had the ability to keep a secret of any sort. Any action that a bad actor might perform would be revealed because they couldn't do it secretly.
You could browse your ex-girlfriend's email, but at the cost of everyone knowing you did it.
I don't really know how humans as a society would react to a situation like that. You don't have to go snooping for muck, so perhaps the inability to do so secretly would mean people go about their lives without snooping.
I could imagine both good and terrible outcomes.
> projections of what the world might look like when this happens
I've done this a few times. A world with zero privacy would definitely be safe (given benign governance), but it would also likely be pretty boring. Crime would become a non-issue: with everything about everyone easily known or knowable by everyone else, the root of any given crime, some desire or need, could be brought to the fore and resolved before it became an actual issue. But there would also no longer be any kind of surprise in anything; everything and everyone would essentially become dull and grey, and humanity isn't about that kind of life experience at all.
> given benign governance
quite unrealistic imo, thus we (maybe and hopefully) needn't worry about the bland minority report future you're hypothesizing :)
In such a world, the government could never be overthrown.
All governments go bad eventually, so the ability to overthrow is critical to prosperity.
Governments are either overthrown internally (revolt, uprising) or by external parties (invasion). A worldwide everyone-knows-everything would prevent both.
Stylometry has existed for decades, and there's no way an LLM is stronger at that job than a specialized piece of software (it's not more realistic than expecting Opus to beat Stockfish at chess).
In practice, you've never been anonymous while posting on the internet and AI isn't changing anything on that front. Or rather: if anything, AI can help you become more anonymous than before, since it can be used to hide your identity from stylometry by rewriting your prose before publishing.
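For a flavor of what the specialized tools do: classical stylometry often boils down to something like Burrows' Delta over function-word frequencies. A compressed sketch follows (not any particular package's implementation, just the general idea):

```python
from collections import Counter
import statistics

# A handful of high-frequency function words; real tools use hundreds.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "it",
                  "is", "was", "for", "but", "with", "not", "on", "as"]

def freqs(text: str) -> list[float]:
    # Relative frequency of each function word in the text.
    words = text.lower().split()
    counts = Counter(words)
    total = len(words) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def burrows_delta(unknown: str, candidates: dict[str, str]) -> dict[str, float]:
    """Smaller delta = closer match; z-scores are computed over the candidate set."""
    profiles = {name: freqs(text) for name, text in candidates.items()}
    unknown_f = freqs(unknown)
    n = len(FUNCTION_WORDS)
    means = [statistics.mean(p[i] for p in profiles.values()) for i in range(n)]
    stds = [statistics.pstdev(p[i] for p in profiles.values()) or 1e-9 for i in range(n)]
    deltas = {}
    for name, prof in profiles.items():
        # Mean absolute difference of per-feature z-scores.
        z_diffs = [abs((prof[i] - means[i]) / stds[i] - (unknown_f[i] - means[i]) / stds[i])
                   for i in range(n)]
        deltas[name] = sum(z_diffs) / n
    return deltas
```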
What would be an example of such software?
Maybe it’s time to start running a local model with a browser extension to defend against this type of stuff.
Remember how the TrueCrypt project shut down shortly before a joint government/university paper was released about code stylometry? I guess LLMs will be employed as a defence against that type of thing.
I so want to reject the notion such a thing is acceptable, but…
TrueCrypt, "replaced" by VeraCrypt, which Internet people will claim is backdoored? I haven't heard about the stylometry paper.
BTW, with this idea you'd want to avoid typing into a comment field directly, since the session recorders would capture it (although that's a different risk - same as our identifiable behavior patterns with the mouse, etc.)
How does that defend against something having trained on a corpus of your own previous writing?
I think what they're saying is, run a local model to transform all your comments before you post them.
Bingo. It can’t help with old writings but it can with new writings.
Exactly as much as closing your eyes and covering your ears.
Stephen King once wrote and published a novel under a pseudonym to find out whether he would still be popular even if he didn't use his name.
He kept it very secret, but somehow people deduced from the writing style that this new author was the King.
The joke's on you all for willingly posting this content online for it to later be harvested by AI.
Nobody is forcing you to use these systems. The hackers have always said this moment, or something like it, would come, from beneath their canopies of tin foil. I've posted almost nothing online - not under pseudonyms nor real names - for over a decade. I sat on this HN username for almost 12 years before making a single post - and now HN forms the overwhelming majority of my port 443 footprint, where I state up front that everything is now associated with my real name.
Complete magick is possible when you simply refuse to participate in the things that society has tacitly assumed everybody does.
How do you propose a journalist work without posting their writing online?
Thinking that you can hide from it is absurd. Your country has been spying on you for decades. The Internet and phones are tapped. That game is so so so over and has been for a long time. I'd rather live free and deal with the consequences than hide in my basement with a tinfoil hat on. In fact, I was fired this year for my political views. Got doxxed at work. Now I'm somewhere better. Sometimes it's for the best.
I find it fulfilling to enrich the commons.
Let's all just never talk to anyone unless it's face to face, for fear that an AI will read it.
Man, the day we get Satoshi Nakamoto out will be the day we must bow to our privacy-destroying overlords. For the moment, they can't tell me from my posts: unknown rando that I am.
Luckily for Nakamoto, there have been so many attempts at deanonymizing him that I bet any prediction is too contaminated with noise.
As another user suggested, train on the corpus that ends with the white paper publication.
That's not feasible. Apparently only SOTA models exhibit this behavior, and having a cutoff date at the paper's publication significantly hinders a model's capabilities. Besides that, try to convince anyone to spend millions upon millions of dollars to train a model with the primary goal of possibly being able to deanonymize one person.
But then compare it to the corpus of any of the suspects since the whitepaper publication.
It's one thing to sound like Satoshi before the whitepaper, but does anyone still sound like Satoshi?
https://www.nytimes.com/2026/04/08/business/bitcoin-satoshi-...
Well, feeding Opus 4.7 a bunch of Adam Back texts (from which I removed his name by hand) and asking it if Satoshi Nakamoto could have written them results in Claude explaining to me why this is someone else in Nakamoto's circle who is not Satoshi himself. So one of two things is true:
* Adam Back is not Satoshi Nakamoto - as he claims
* Opus 4.7 is not sufficiently a dox-machine yet
After skimming through the article:
Why not just write everything through an AI? (to obfuscate your "style")
Article:
> To avoid this, you will probably need to intentionally write in a very different style than you usually do (or to have AIs rewrite all your prose for you, but, ugh, that’s not a world I look forward to living in).
I agree. The amount of vague and clichéd AI writing I read on the daily is already exhausting enough.
It would be interesting if you could train a model to sprinkle random red herrings throughout your text in a minimally disruptive way. But I fear you might have to stretch the definition of "minimally disruptive" to make it robust against detection.
Or do it the other way, and have other people use an AI to write in your style.
Like the way the Tor project wants to appear to have one single user
If she does the same tests every time new models come out, and - I assume - uses the same dataset to do that, then isn't it possible that said dataset is now part of the training set for the next round, making it fairly easy to identify who posted the text?
So now we can track down Satoshi Nakamoto?
Is Kelsey Piper a celebrity writer? She may be in a different class.
Always send your public posts through a local LLM to de-style you.
Please do not wash your authentic writing through an LLM.
I think that multiple truths can be true at the same time without contradicting each other.
As for the credibility: of course this wasn't a statistical approach at all. Also, there was no standardized procedure to allow comparison by factor analysis. Of course you can compare apples with oranges or whatever.
So where to go from here? I don't see any proof at all. Is this proof that AI is infallible? No? It's a random approach that is absolutely not reliable, not least because it is neither reproducible nor reconstructable.
Claude knows what, and how? Is it AI or a Google search? Discord selling data? Posting on a public forum?
Your style is a fingerprint?
A nondeterministic something can generate texts that are identified as likely being person X's, or not. What is imitation, if you use auto-generated content that is published somewhere somehow? Or get others to imitate your style?
I think this is a party trick to scare people. Nothing else. For example, image search was way more revealing even before AI.
If there is uncertainty, I would deflect my existence instead of fighting for it. Streisand effect in reverse.
The main problem is weirdos who stalk you or otherwise want to harm you and rely on AI.
I honestly find it stunning that people with a higher education in the sciences have, in just a year, deleted everything they hopefully learned at university or school. I am disappointed and feel personally insulted whenever I hear "I asked AI".
Yesterday I talked to another member of Mensa, and she is happy about AI because her book project no longer has to be written by her but by AI.
Is there no one among us who knows how to do scientifically sound research? I spent countless hours at a copy machine transferring book pages onto paper so that I could work through them without the book.
I think it has become too easy to draw conclusions based on AI. I worked for a professor, and back around 2010 I advised her not to permit Wikipedia as a source reference because it was too easy. Meta sources vs. originals.
We should all not worry about AI, because this proves nothing. There hasn't been any anonymity for at least 20 years. It just depends on who can reliably identify you.
AI doesn't. Deterministic behavior, a.k.a. patterns, does. Meta, Google, Apple, etc. all know us. I am fine with advertising, which is itself the proof, on the one hand.
The only reason I would be worried is state-controlled data. This is where the shit hits the fan. Chat control, EU cloud, no reliance on the USA, a.k.a. a prison which observes your every step.
So, after a long hand-written text: data is your currency. Don't opt for anonymity but for freedom of choice and the right to be granted certain rights. The information part isn't the problem, never was. The enforcement part is. And ads don't do harm, oppression does.
And remember: oppression works best under any circumstances. Freedom is the only antipode there is.
In totalitarian regimes, no AI was needed to stage a case against someone who wasn't to the leader's liking.
In short: freedom works despite no anonymity, oppression couldn’t care less.
And how about being automatically reported to the state for conducting such innocent prompting?
Do you know what saves you from state oppression? Publicity. Transparency doesn’t work with a no one.
We live in a Nietzsche-like anti-world, to a certain extent. You hopefully choose the right thing to do. Or do you want to Streisand your anonymity?
Just wait until all the conversations you've ever had with AI (which is 100% training on them, as well as keeping its own memories about you that you have no control over) start getting used to answer questions other people have asked about you.
That's my theory of what's to come, anyway.
People talk to these things not understanding the implications, and they can get extremely personal. The model and the companies behind it know who you are: you discuss details that reveal what you do, where you live, where you work, what you search for, and you probably signed in with an OAuth provider like GitHub or Google, which is more than enough of a thread to start pulling on to learn more about you and link other things to you from the open internet. It'll all get sucked up into the model, and before you know it I'll be able to ask a model about my coworker (you) and get back answers from conversations you had with a model a year or two prior, exposing details about you that you might not want out there. And even if that isn't supposed to be allowed, how well has it worked out so far when it comes to data exfiltration and guardrails? If the model has info on you, being told not to share it won't protect you or that data.
> Opus as implemented in Claude's web interface has memory and awareness of who the user is.
Kelsey knows this:
> To make sure it wasn’t somehow feeding my account information to Claude even in Incognito Mode, I asked a friend to run these tests on his computer, and he received the same result; I also got the same result when I tested it through the API.
When I tested this with my own writing several LessWrong commenters tested it with the snippets I provided (see comments) and saw that it could identify me: https://www.jefftk.com/p/automated-deanonymization-is-here
You should check out some of the other comments where works of others were also tested, and all were correctly identified. Like https://news.ycombinator.com/item?id=47970219
Several others have reproduced this for Kelsey, and she's certainly not technologically illiterate.
She says she has memory disabled. I don’t think Kelsey is technologically illiterate.
"The pattern is: user says X, I do Y where Y is a less-effortful approximation of X, then I present Y as if it were X or as a "first step toward" X."
...
"The psychological mechanism is familiar by now: I encounter a task I perceive as difficult, I look for reasons the task cannot be done, I find or fabricate such a reason, I present it as a discovered constraint, and I propose an alternative that is easier."
- Opus 4.7 Max Thinking (clown emoji)
It's not bad at post-mortem analysis of its own mistakes, but that will in no way prevent it from instantly repeating the same mistake.
> That includes gay people like me, who could hardly have admitted under our names to how we lived our lives for most of America’s history, as well as many other groups with minoritarian lifestyles
While the points made are completely valid, I want to point out that the statement of "Hey, by the way, first let me talk about my sexuality" lowers the quality of the dialog to a significant degree.
31 million people in America are gay. 71% of Americans support Gay Rights (more than any other political issue polled). It also quietly insinuates that only people with a certain minority lifestyle would care about privacy or that their privacy is somehow more important than others. It's not. Privacy is a universal right that's important to everyone.
Isn't the super dramatic shift in public opinion on this topic the exact thing that makes it such a good example? Isn't the point that anonymity is not considered a universal right yet it is obviously a good thing once considering this example and others? This is a super weird and wrong way to read it.
>It also quietly insinuates that only people with a certain minority lifestyle would care about privacy or that their privacy is somehow more important than others. It's not.
How exactly does their post insinuate that? This comment is the "I don't even see color" as applied to internet privacy (with a touch of "just don't rub it in our faces").
About 68% support gay marriage yet one political party keeps trying to roll it back.
Similar support for abortion being legal yet that was rolled back not too long ago.
Just because a topic has wide support doesn’t mean it’s not under attack and worth defending.
I can't read this any other way than, "Do people really need to talk about their own top of mind problems when I don't identify with that?"
The reason this is relevant is that the statistics you quote represent a HUGE swing in public opinion. Only when comparing to things like slavery can you find such a swing in public opinion compared to 20 years prior, and that one had a war fought over states' rights to do it.
Actually it's done the opposite of what you suggest. It improved the quality of discourse by giving a simple concrete example all of us can understand and most of us would agree with (that vulnerable people are safer because of anonymity). It didn't imply what you're saying it does, and it's kinda weird that you think that.
I don't know why you added statistics (you didn't really make a point with them?), but assuming you meant "gay people don't really need to worry", you actually bolstered the opposite argument. If only 71% of Americans support gay rights, that means 59 million people think the state should criminalize him. Try to put yourself in that position. 59 million people - you don't know who, but you know they probably live in your community - that don't want you to be able to get married, have a significant other, or have any PDA in media because it would "corrupt" kids. In 2016, 49 people were murdered in the Pulse Nightclub because they were gay. In 2020, a transgender woman was murdered because the murderer was afraid someone would think he was gay. Every year there are acts of violence against gay and trans people because of their sexuality. But nobody has ever been killed for being straight.
Compare the state of transgender rights 10 years ago to the situation now, where a trans person can be literally arrested for going to the bathroom in the wrong state. Or abortion, which was legal everywhere five years ago but now has laws on the books in multiple states encouraging vigilantes to report violations for a cash reward. Supercharged AI making it easy to identify minorities at an industrial scale in the near future is a totally legitimate thing to fear, especially for people in those groups who would likely be the first to be targeted.
Males who trespass in women's bathrooms should fear getting arrested. It's a suitable discouragement.
This idea that it is these men's "right" to disregard women's boundaries is ludicrous and it should be no surprise people don't agree with this.
I have no idea how you read a statement about how Nazis and flame baiters should be able to speak their mind and then concluded that the author only cares about some minorities.
Given that the author didn't say any of the things you claimed, and indeed said the opposite, it leads one to conclude you have a problem with the example used.
On the contrary, I find it a highly effective way to convey something that should be obvious but is often not. As you said, privacy is a universal right, but many don't consider it important until viscerally presented with examples of why it is. Kelsey's writing is immediately effective at doing so.
> 71% of Americans support Gay Rights (more than any other political issue polled)... Privacy is a universal right that's important to everyone.
Per you, it surely must be important to fewer than 71% of Americans, no? The state of infringement on privacy seems to show that it's not so important to a lot of people, given that they continue to be perfectly willing to elect and re-elect the politicians who enact changes infringing on it or fail to legislate in favor of privacy. Connecting it to an issue more people care about seems an attempt to argue for its importance to those who are otherwise willing to look the other way.
FWIW, I fed my reply above into Claude and asked it to guess who wrote it. It refused (for safety) while also calling me out: "The style here (tight logical structure, the "per you" construction, the move of turning someone's own framing back on them) is common across a lot of contrarian-leaning commenters on HN"
I read it as an attempt to reach the sort of people who think anonymity is bad because it stops them from cancelling Nazis.
> people with a certain minority lifestyle
That phrase is a dehumanizing, Nazi-style talking point: it frames a group of people as a “lifestyle” problem instead of as human beings, which is a common setup for stigma and persecution. Nazi ideology repeatedly used this kind of language to normalize hatred and make targeted groups seem unnatural or dangerous.
Calling people a “minority lifestyle” is not neutral wording; it reduces identity to something frivolous or deviant. Extremist movements have historically used similar framing to make prejudice sound reasonable and to recruit others into it.