I found this earlier today while looking for research and ended up reporting it for citing fake sources. Please correct me if I'm wrong, but I couldn't find "[9] Jongsuk Jung, Jaekyeom Kim, and Hyunwoo J. Choi. Rethinking attention as belief propagation. In International Conference on Machine Learning (ICML), 2022." anywhere else on the internet.
Yep, nothing by even a subset of those authors. The closest paper from that conference:
"Rethinking Attention-Model Explainability through Faithfulness Violation Test" by Yibing Liu, Haoliang Li, Yangyang Guo, Chenqi Kong, Jing Li, and Shiqi Wang
https://proceedings.mlr.press/v162/liu22i.html
https://icml.cc/virtual/2022/spotlight/18082
It's "vibe" research. Most of it is basically pure nonsense.
Care to elaborate?
The headline theorem, "every sigmoid transformer is a Bayesian network," is proved by `rfl` [1]. For non-Lean people: `rfl` means "both sides are the same expression." He defines a transformer forward pass, then defines a BP forward pass with the same operations, wraps the weights in a struct called `implicitGraph`, and Lean confirms they match. They match because he wrote them to match.
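To make the `rfl` point concrete, here's a minimal Lean 4 sketch (hypothetical toy definitions, not the repo's actual code):

```lean
-- Two definitions deliberately written as the identical expression.
def transformerStep (w x : Nat) : Nat := w * x + 1
def bpStep (w x : Nat) : Nat := w * x + 1

-- `rfl` closes the goal because both sides unfold to the same term.
-- It certifies that the two definitions coincide, not that either
-- one "is" belief propagation in any independent sense.
theorem transformer_is_bp (w x : Nat) : transformerStep w x = bpStep w x :=
  rfl
```

The proof is real, but all of the content is in how the definitions were chosen.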
The repo with a real transformer model (transformer-bp-lean) has 22 axioms and 7 theorems. In Lean, an axiom is something you state without proving. The system takes your word for it. Here the axioms aren't background math, they're the paper's claims:
- "The FFN computes the Bayesian update" [2]. Axiom.
- "Attention routes neighbors correctly" [3]. Axiom.
- "BP converges" [4]. Axiom, with a comment saying it's "not provable in general."
- The no-hallucination corollary [5]. Axiom.
The paper says "formally verified against standard mathematical axioms" about all of these. They are not verified. They are assumed.
The README suggests running `grep -r "sorry"` and finding nothing as proof the code is complete. In Lean, `sorry` means "I haven't proved this" and throws a compiler warning. `axiom` also means "I haven't proved this" but doesn't warn. So the grep returns clean while 22 claims sit unproved. Meanwhile the godel repo has 4 actual sorries [6] anyway, including "logit and sigmoid are inverses," which the paper treats as proven. That same fact appears as an axiom in the other repo [7]. Same hole, two repos, two different ways to leave it open.
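Here's a minimal sketch of why the grep check is misleading (illustrative names, trivial stand-in definitions):

```lean
def ffn (x : Float) : Float := x          -- stand-in; the repos' versions differ
def bayesUpdate (x : Float) : Float := x  -- stand-in

-- `sorry` compiles with a warning and is caught by `grep -r "sorry"`:
theorem ffn_is_update_sorried : ∀ x, ffn x = bayesUpdate x := by sorry

-- `axiom` compiles silently and the grep never sees it,
-- but the claim is exactly as unproved:
axiom ffn_is_update_assumed : ∀ x, ffn x = bayesUpdate x
```

The honest check in Lean is `#print axioms <theorem>` on the headline results: it lists every assumption a proof ultimately depends on, including `sorryAx` for sorried lemmas and any user-declared axioms.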
Final count across all five repos: 65 axioms, 5 sorries, 149 theorems.
Claude (credited on page 1) turned it into "Not an approximation of it. Not an analogy to it. The computation is belief propagation." Building to a 2-variable toy experiment on 5K parameters presented as the fulfillment of Leibniz's 350-year-old dream. Ending signed by "Calculemus."
[1] https://github.com/gregorycoppola/sigmoid-transformer-lean/b...
[2] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...
[3] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...
[4] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...
[5] https://github.com/gregorycoppola/transformer-bp-lean/blob/7...
[6] https://github.com/gregorycoppola/godel/blob/bc1d138/Godel/O...
[7] https://github.com/gregorycoppola/sigmoid-transformer-lean/b...
I suspect it means it's LLM-generated without being checked.
> Transformers are the dominant architecture in AI, yet why they work remains poorly understood. This paper offers a precise answer: a transformer is a Bayesian network.
Why would being a Bayesian network explain why transformers work? Bayesian networks existed long before transformers and never achieved their performance.
A Bayesian network is a really general concept. It applies to any multidimensional probability distribution. It's a graph that encodes independence between variables. Ish.
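For reference, the textbook definition (standard material, not from the paper): a Bayesian network over a DAG $G$ asserts that the joint distribution factorizes as

$$p(x_1, \ldots, x_n) = \prod_{i=1}^{n} p\bigl(x_i \mid \mathrm{pa}_G(x_i)\bigr).$$

By the chain rule, every joint distribution factorizes this way over a fully connected DAG, so "X is a Bayesian network" only carries information when the graph is sparse.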
I have not taken the time to review the paper, but if the claim stands, it means we might have another tool in our toolbox for better understanding transformers.
> Hallucination is not a bug that scaling can fix. It is the structural consequence of operating without concepts.
NNs are as close to continuous as we can get with discrete computing. They’re flexible and adaptable and can contain many “concepts.” But their chief strength is also their chief weakness: these “concepts” are implicit. I wonder if we can get a hybrid architecture that has the flexibility of NNs while retaining discrete concepts like a knowledge base does.
> NNs are as close to continuous as we can get with discrete computing.
This is incorrect. For example, fuzzy logic[0] can model analog ("continuous") truth beyond discrete digital representations, such as 1/0, true/false, etc.
0 - https://en.wikipedia.org/wiki/Fuzzy_logic
There is nothing continuous on the computer, it's all bit strings & boolean arithmetic. The semantics imposed on the bit strings does not exist anywhere in the arithmetic operations, i.e. there is no arithmetic operation corresponding to something as simple as the color red.
It sounds like you're saying that if a computer had infinite precision then hallucinations would not occur?
The way neural networks work is that the base neural network is embedded in a sampling loop, i.e. a query is fed into the network & the driver samples output tokens to append to the query so that it can be re-fed back into the network (q → nn → [a, b, c, ...] → q + sample([a, b, c, ...])). There is no way to avoid hallucinations b/c hallucinations are how the entire network works at the implementation level. The precision makes no difference b/c the arithmetic operations are semantically void & only become meaningful after they are interpreted by someone who knows to associate 1 w/ red, 2 w/ blue, 3 w/ clouds, & so on & so forth. The mapping between the numbers & concepts does not exist in the arithmetic.
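The loop being described, as a minimal sketch (hypothetical types; `nn` and `sample` stand in for the trained network and the decoding strategy):

```lean
abbrev Token := Nat

-- q → nn → [(a, pₐ), (b, p_b), …] → q ++ [sample …], repeated:
def decode (nn : List Token → List (Token × Float))
    (sample : List (Token × Float) → Token) :
    List Token → Nat → List Token
  | q, 0 => q
  | q, n + 1 => decode nn sample (q ++ [sample (nn q)]) n
```

Nothing in the loop distinguishes a "true" continuation from a "false" one; that distinction exists only in how the tokens are interpreted outside it.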
Oh, I thought that the embedding space of the residual stream was precisely that.
The arithmetic is meaningless; it doesn't matter what you call it b/c on the computer it's all bit strings & boolean arithmetic. You can call some sequences of operations residuals & others embeddings, but that is all imposed top-down. There is nothing in the arithmetic that indicates it is somehow special & corresponds to embeddings or residuals.
Ah ok, so if we had such a mapping then models wouldn't hallucinate?
Maybe it's better if you define the terms, b/c what I mean by hallucination is that the arithmetic operations + sampling mean that it's all hallucinations. The output is a trajectory of a probabilistic computation over some set of symbols (0s & 1s). Those symbols are meaningless; the only reason they have meaning is b/c everyone has agreed that the number 97 is the ASCII code for "a" & every conformant text processor w/ a conformant video adapter will convert 97 (0b1100001) into the display pattern for the letter "a".
So kind of like if you flip a coin, the sampling means the heads or tails you get isn't real?
> The semantics imposed on the bit strings does not exist anywhere in the arithmetic operations,
Correct, the semantics is actually in the network of relations between the nodes. That has been one of the major lessons of LLMs, and it validates the systems reply to Searle's Chinese Room.
https://news.ycombinator.com/item?id=45256179 :
> Which statistical models disclaim that their output is insignificant if used with non-independent features? Naive Bayes [...]
Ironic then, because if transformers are Bayesian networks then we're using Bayesian networks for non-independent features.
From "Quantum Bayes' rule and Petz transpose map from the minimum change principle" (2025) https://news.ycombinator.com/item?id=45074143 :
> Petz recovery map: https://en.wikipedia.org/wiki/Petz_recovery_map :
> In quantum information theory, a mix of quantum mechanics and information theory, the Petz recovery map can be thought of as a quantum analog of Bayes' theorem
But there aren't yet enough qubits for quantum LLMs: https://news.ycombinator.com/item?id=47203219#47250262
"Transformer is a holographic associative memory" (2025) https://news.ycombinator.com/item?id=43028710#43029899