15 comments

  • thomasahle 5 hours ago

    There's a graveyard of 100s of papers with "approximate near linear time attention."

    They always hope the speed increase makes up for the lower quality, but it never does. The quadratic time seems inherent to the problem.

    Indeed, there are lower bounds showing that sub n^2 algorithms can't work: https://arxiv.org/pdf/2302.13214

    • jcarreiro 4 hours ago

      The paper says that:

      > In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution, acceptable for many AI applications.

      i.e., the claim is that this method reproduces the results of conventional attention, up to float16 numerical precision.

      • kristjansson 2 hours ago

        > approximately the same magnitude

        and they really do mean that, their results show +/- 1 on log10 plots.

        • fheinsen 4 hours ago

          The method is more general. The github repository's first example is with eight Taylor terms (P = 8).

          • energy123 3 hours ago

            It converges on conventional attention as P goes up

          • antirez 32 minutes ago

            I agree with the fundamental idea that attention must be O(N^2), with the exception of the recent DeepSeek sparse attention approach (DSA), which does not escape N^2 but tries to lower the constant factor so much that N^2 becomes acceptable, by adding a much faster layer that predicts high-scoring tokens.

            • kristjansson 5 hours ago

              > self-attention is efficiently computable to arbitrary precision with constant cost per token

              This paper at least aspires to reproduce 'true' attention, which distinguishes it from many of the others. TBD if it's successful at that.

              • logicchains 4 hours ago

                It can't be successful at that any more than 1+1 can equal 3. Fundamentally, if every token wants to be able to look at every previous token without loss of information, it must be O(n^2); N tokens looking at N tokens is quadratic. Any sub-quadratic attention must hence necessarily lose some information and be unable to support perfect recall on longer sequences.

                • orlp 2 hours ago

                  > N tokens looking at N tokens is quadratic

                  Convolving two arrays can be done perfectly accurately in O(n log n), despite every element being combined with every other element.

                  Or consider the even more basic sum of products a[i] * b[j] for all possible i, j:

                      total = 0
                      for i in range(len(a)):
                          for j in range(len(b)):
                              total += a[i] * b[j]
                  
                  This can be computed in linear time as sum(a) * sum(b).

                  Your logic that 'the result contains terms of all pairs, therefore the algorithm must be quadratic' simply doesn't hold.
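
                  For concreteness, here's a throwaway sanity check of that factorization (a minimal sketch with random lists, nothing to do with the paper):

                      import math, random

                      a = [random.random() for _ in range(100)]
                      b = [random.random() for _ in range(100)]
                      # quadratic: touch every (i, j) pair explicitly
                      total = sum(a[i] * b[j] for i in range(len(a)) for j in range(len(b)))
                      # linear: factor the double sum as (sum a) * (sum b)
                      total_fast = sum(a) * sum(b)
                      assert math.isclose(total, total_fast)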

                  • anvuong 1 hour ago

                    This brings me back to DSP class; man, learning about the FFT was eye-opening.

                  • hellohello2 3 hours ago

                    I'm not saying whether the paper is correct or not (since I can't tell), but I don't think your argument really holds. Consider applying it to multiplication:

                    Fundamentally, multiplication needs to look at every pair of digits from the two input numbers. It must be O(n^2); N digits looking at N other digits is quadratic. Any sub-quadratic multiplication must hence necessarily lose some information.

                    • actionfromafar 3 hours ago

                      Doesn't that have to do with how many bits you allow in the actual calculation in physical reality?

                      • hellohello2 2 hours ago

                        Well, for multiplication, complexity is defined in terms of the number of digits/bits directly. For attention, complexity is defined in terms of the number of input vectors, which are all at fixed precision. I don't understand what happens to the method proposed in the paper at higher precision (since I don't understand the paper), but in practice it doesn't matter, since there is no value in anything over float16 for machine learning.

                    • oasisaimlessly 3 hours ago

                      That argument could also be used to say that the FFT's time complexity of O(n log n) should be impossible.

                      • naasking 2 hours ago

                        Your argument just assumes there is no latent structure that can be exploited. That's a big assumption.

                      • energy123 4 hours ago

                        It's like claims of room-temperature superconductors or Millennium Prize solutions. Earth-shattering if true. It'd be such a black swan. Terrible for Nvidia.

                        • SeanAnderson 4 hours ago

                          Well, we solved one of the Millennium Prize problems (honestly kinda quickly) so maybe there's hope :)

                      • fheinsen 5 hours ago

                        As the error via linear approximation approaches similar magnitude as numerical error via quadratic computation, don’t the two start becoming comparable in practice?

                        I ask because in practice, for inference, attention is typically computed with low-precision (4-bit, 8-bit, 16-bit) floats.

                        Numerical error, in fact, may be a key factor as to why quadratic attention, in practice, exhibits context rot as context gets longer, analogous to an RNN:

                        https://www.anthropic.com/engineering/effective-context-engi...

                        • WhitneyLand 3 hours ago

                          The 2023 paper, even if true, doesn't preclude the 2026 paper from being true; it just sets constraints on how a faster attention solution would have to work.

                          • cobolexpert 5 hours ago

                            Dumb question: is the quadratic time complexity for training, inference, or both?

                            • dave_universetf 3 hours ago

                              Both, with caveats. The attention computation is fundamentally quadratic: for every token in the sequence, you're doing a computation over every other token in the sequence. So it's O(N) per token, O(N^2) for the whole sequence.

                              The big mitigation for this is that in causal transformers (i.e. all the chatbot type applications, where each token is only allowed to see tokens before it), you're running inference repeatedly on the same prefix in order to grow it by one token at a time. So if you cache the computations for tokens 0..N-1, on each inference pass you only have to compute O(N) for the newly added token at the end of the sequence.

                              That's why caching (and caching charges) appear so prominently everywhere in the pricing of inference.

                              In practice, caching is most beneficial at inference time, because you typically have relatively long conversations that start with the same cacheable prefix (the system prompt). At training time the same optimization can apply, but you're typically not pushing the same prefixes through the model repeatedly so you end up paying the quadratic cost more often.

                              The quadratic cost of attention is the fundamental compute bottleneck for transformer architectures, which is why there's research like this trying to find shortcuts in computing attention, as well as research into completely new primitives to replace attention (e.g. SSM, which is O(N) on a cold cache and O(1) on a warm cache).
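
                              A minimal sketch of that caching picture (plain numpy, single head, ignoring scaling, masking and everything else in the block; all names and shapes are made up for illustration):

                                  import numpy as np

                                  def attend(q, K, V):
                                      # one new query against all cached keys/values: O(N) work for a cache of N tokens
                                      scores = K @ q
                                      w = np.exp(scores - scores.max())
                                      w /= w.sum()
                                      return V.T @ w

                                  d, rng = 8, np.random.default_rng(0)
                                  K_cache, V_cache, outputs = [], [], []
                                  for t in range(16):                        # decode 16 tokens one at a time
                                      q, k, v = rng.standard_normal((3, d))  # projections for the newest token only
                                      K_cache.append(k); V_cache.append(v)   # the KV cache grows by one entry per step
                                      outputs.append(attend(q, np.stack(K_cache), np.stack(V_cache)))
                                  # each step is O(N) thanks to the cache; summed over the sequence it's still O(N^2)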

                              • omneity 5 hours ago

                                Attention is calculated during the forward pass of the model, which happens in both inference (forward only) and training (forward & backward).

                                • SubiculumCode 4 hours ago

                                  Dumb question: Can inference be done in a reverse pass? Outputs predicting inputs?

                                  • dave_universetf 2 hours ago

                                    Strictly speaking: no. The "forward pass" terminology does not imply that there exists a "reverse pass" that does the same kind of computation. Rather, it's describing two different kinds of computation, and the direction they occur in.

                                     The forward pass is propagating from inputs to outputs, computing the thing the model was trained for. The reverse/backwards pass is propagating from outputs back to inputs, but it's calculating the gradients of parameters for training (roughly: how much changing each parameter in isolation affects the output, and whether it makes the output closer to the desired training output). The result of the "reverse pass" isn't a set of inputs, but a set of annotations on the model's parameters that guide their adjustment.

                                     The computations of the forward pass are not trivially reversible (e.g. they include additions, which destroy information about the operand values). As a sibling thread points out, you can still probabilistically explore what inputs _could_ produce a given output, and get some information back that way, but it's a lossy process.

                                    And of course, you could train a "reverse" model, one that predicts the prefix of a sequence given a suffix (trivially: it's the same suffix prediction problem, but you train it on reversed sequences). But that would be a separate model trained from scratch on that task, and in that model the prefix prediction would be its forward pass.

                                    • gpm 4 hours ago

                                      Not as trivially as the forward direction; unsurprisingly, information is lost, but it works better than you might expect. See for example https://arxiv.org/pdf/2405.15012

                                      • root_axis 4 hours ago

                                        Sounds like a great premise for a sci-fi short story.

                                        • anu7df 4 hours ago

                                          Sci-fi? You mean historical fiction!

                                  • naasking 5 hours ago

                                    I think any kind of innovation here will have to take advantage of some structure inherent to the problem, like eliminating attention in favour of geometric structures like Grassmann flows [1].

                                    [1] Attention Is Not What You Need, https://arxiv.org/abs/2512.19428

                                    • findalex 4 hours ago

                                      Right - e.g., if you're modeling a physical system it makes sense to bake in some physics - like symmetry.

                                      • naasking 3 hours ago

                                        Indeed, and I think natural language and reasoning will have some kind of geometric properties as well. Attention is just a sledgehammer that lets us brute force our way around not understanding that structure well. I think the next step change in AI/LLM abilities will be exploiting this geometry somehow [1,2].

                                        [1] GrokAlign: Geometric Characterisation and Acceleration of Grokking, https://arxiv.org/abs/2510.09782

                                        [2] The Geometry of Reasoning: Flowing Logics in Representation Space, https://arxiv.org/abs/2506.12284

                                    • cubefox 5 hours ago

                                      I think DeepSeek V3.2 is sub n^2, but it clearly performs quite well, refuting the alleged lower bounds in the paper.

                                      • andy12_ 5 hours ago

                                        It really isn't sub N^2. The main attention is only O(Nk), but only thanks to a lightning indexer that still has complexity O(N^2). So overall it still has the same complexity; just with a smaller constant factor [1]

                                        > DSA reduces the core attention complexity of the main model from O(L^2) to O(Lk), where k (<< L) is the number of selected tokens. Although the lightning indexer still has a complexity of O(L^2), it requires much less computation compared with MLA in DeepSeek-V3.1-Terminus

                                        [1] https://arxiv.org/pdf/2512.02556

                                        • cubefox 3 hours ago

                                          Okay, then let's see whether we are going to see real linear architectures, like Gated DeltaNet or Mamba-3, in some larger models. I don't believe there is a "lower bound" which states that those can never get to (or exceed) the real-world performance of quadratic attention. (Perfect recall in unrealistic needle-in-haystack tests doesn't count.)

                                    • amluto 4 hours ago

                                      I skimmed the paper, and I think I completely lost the plot.

                                      Sections 2.1 through 2.4 talk about decomposing the per-token-pair attention (key vector from the ith token with query vector from the jth token, where, in inference, the jth token is the one being sampled) into an approximation that is only mildly outrageously exponential in size compared to the original exponential-of-a-dot-product. And they get something that's a polynomial (in the mathematical sense -- you're literally evaluating a polynomial) and has a size that's manageable at 4th order.

                                      Okay, great, they took something simple and made it bigger and nastier but less transcendental without losing too much precision. (As far as I know, there is really nothing special about the exp in attention in the first place, so trying to approximate it well seems mostly useful insofar as it will keep existing models working.)

                                      But the reason that attention is quadratic is that each token gets evaluated with respect to each other token. They haven't changed this at all. Section 2.5 seems like it's deferring this to an appendix. Section 2.6 gives the hidden state size per token, which, on first read, is strictly larger than the hidden state in normal attention (in normal attention it's d_v * d_k -- I'm not sure where their +1 comes from).

                                      So what did the paper gain? Is there some detail that I missed or that the paper completely glossed over that explains why there is any gain of efficiency at all?

                                      For what it's worth, the paper's overall claim is, in some sense, impossible. You can think of attention as being a sort of vector database, and this gets more accurate the sharper you make the exponential. If you replace softmax with actual max, a query locates the key that is the closest match to the query and returns the associated value. This operation is a plain linear search; it's possible (in principle anyway) to do lots of queries and recover the entire contents of the database, and I think that any paper claiming to do it faster than linear time should explain how it's compressing the data and where the loss is.

                                      In language model terms, imagine a prompt like so:

                                          1: [string 1]
                                          2: [string 2]
                                          3: [string 3]
                                          ...
                                          n: [string n]
                                          
                                          Tell me the string associated with the number k.
                                      
                                      As long as there's enough precision and enough query/key space to fit some embedding of the number k that will match the right thing (and there is a lot of room in high-dimensional spaces), one might expect a transformer to be able to answer this question. But this obviously requires memory with size linear in the prompt length. If you try to get rid of that, you necessarily lose something. (This is not to say that nice attention scaling is impossible -- one could imagine schemes where it takes the model multiple tokens to answer the question, and the number of tokens needed could scale, say, logarithmically with prompt size. But you still need that linear memory.)
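
                                      A toy numpy version of that retrieval argument (my own sketch, not anything from the paper; key 123 stands in for the embedding of "the number k"):

                                          import numpy as np

                                          rng = np.random.default_rng(0)
                                          n, d = 1000, 64
                                          K = rng.standard_normal((n, d))             # one key per stored string
                                          V = rng.standard_normal((n, d))             # the associated values
                                          q = K[123] + 0.01 * rng.standard_normal(d)  # query: a noisy embedding of "number 123"
                                          scores = K @ q                              # the matching key dominates all others
                                          w = np.exp(scores - scores.max()); w /= w.sum()
                                          out = V.T @ w
                                          assert np.argmax(w) == 123 and np.allclose(out, V[123], atol=1e-3)
                                          # retrieving an arbitrary entry this way needs all n keys/values kept around,
                                          # i.e. memory linear in the prompt length
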
                                      • csense 3 hours ago

                                        This paper combines two different insights; the second one is buried in the appendix.

                                        Let's say you consider the 3 most-recent tokens. The first insight is that you can use a Taylor approximation: At token position 3 you compute A_3 = ((q1, q2, q3) . (k1, k2, k3))^1, B_3 = ((q1, q2, q3) . (k1, k2, k3))^2, C_3 = ((q1, q2, q3) . (k1, k2, k3))^3, etc. [1] [2]

                                        The second insight is that you can compute e.g. B_{i+1} incrementally from B_i, with much fewer FLOPS than computing B_{i+1} from scratch. [3]

                                        [1] I'd buy that it's empirically "good enough" that you don't need to go beyond D_3 (fourth degree polynomial).

                                        [2] I'd also buy that it's empirically "good enough" to assume the inputs aren't extreme enough for E_3, F_3 etc. to matter. I agree with other posters that radius of convergence worries aren't addressed. I find it plausible that these issues don't sink the paper. I'd not be surprised to learn that either it doesn't matter in practice, or workarounds can be implemented without much performance impact.

                                        [3] The author's choice to bury this insight in an appendix rather than putting it front and center is a baffling pedagogical choice but it's a small issue in the grand scheme of things. Perhaps that second insight is prior work (possibly by others) that experts in the latest LLM linear algebra could reasonably be expected to be familiar with, but is included as an appendix because it's not universally known in e.g. HN comment sections?
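
                                        To make the second insight concrete, here's a generic sketch of a single degree-p term maintained incrementally (my own illustration using a naive d^p feature map and no softmax normalizer, not the paper's symmetry-exploiting construction):

                                            import numpy as np
                                            from functools import reduce

                                            def phi(x, p):
                                                # p-fold tensor (outer) product of x with itself, flattened:
                                                # <phi(q, p), phi(k, p)> == (q . k) ** p
                                                return reduce(np.multiply.outer, [x] * p).ravel() if p else np.ones(1)

                                            d, p, rng = 4, 2, np.random.default_rng(0)
                                            S = np.zeros((d ** p, d))        # running state for this Taylor term
                                            ks, vs = [], []
                                            for i in range(8):
                                                q, k, v = rng.standard_normal((3, d))
                                                ks.append(k); vs.append(v)
                                                S += np.outer(phi(k, p), v)  # incremental update: cost independent of i
                                                fast = phi(q, p) @ S         # degree-p contribution for this query
                                                slow = sum((q @ kk) ** p * vv for kk, vv in zip(ks, vs))  # from scratch
                                                assert np.allclose(fast, slow)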

                                      • fheinsen 3 hours ago

                                        This is a form of linear attention (https://arxiv.org/abs/2006.16236) that approximates standard scaled dot-product attention to arbitrary precision, by adding Taylor terms in an efficient manner. Each additional Taylor term improves the approximation. Efficiency is achieved by exploiting certain mathematical symmetries that become evident only after decomposing the standard formulation of attention into an expression over chains of tensor products. The github repository's README walks through examples. The first example is with 8 Taylor terms.

                                        • yorwba 3 hours ago

                                          > But the reason that attention is quadratic is that each token gets evaluated with respect to each other token. They haven't changed this at all. Section 2.5 seems like it's deferring this to an appendix.

                                          They defer it to the appendix because it's a standard construction (Q'K)V = Q'(KV), where Q'K is an n×n matrix and requires O(n²) to compute, but KV has a constant size and can be computed in O(n) time, and the multiplication with Q' can also be done in O(n) time.

                                          > Section 2.6 gives the hidden state size per token, which, on first read, is strictly larger than the hidden state in normal attention (in normal attention it's d_v * d_k -- I'm not sure where their +1 comes from).

                                          Actually, their hidden state has a (large) constant size, so strike the words "per token" from section 2.6. In normal attention, the total state is n(d_v + d_k), but their state is basically (d_v + 1)D_k, where D_k is much larger than d_k, but independent of n. The +1 is because they also need to compute the normalization factor for the softmax.

                                          It's true that a constant state size implies that you cannot use it to losslessly store arbitrarily large databases, but LLMs in practice cannot do this either, so there's no loss of capability in that sense. (In fact, if you use enough terms in the Taylor expansion to get the same result as standard attention to within machine precision, the resulting constant state size should give you an upper bound for the amount of data the LLM can effectively retrieve from its context.)
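
                                          For anyone who wants the associativity argument in code, a small numpy sketch with made-up sizes (Q' here is already the feature-mapped query):

                                              import numpy as np

                                              n, D_k, d_v = 1000, 64, 32
                                              rng = np.random.default_rng(0)
                                              Qp = rng.standard_normal((n, D_k))   # feature-mapped queries ("Q'")
                                              K  = rng.standard_normal((D_k, n))   # keys, one column per token
                                              V  = rng.standard_normal((n, d_v))

                                              quadratic = (Qp @ K) @ V   # builds an n x n matrix: O(n^2) time and memory
                                              linear    = Qp @ (K @ V)   # K @ V is D_k x d_v, independent of n: O(n) in sequence length
                                              assert np.allclose(quadratic, linear)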

                                          • jsenn 3 hours ago

                                            > Section 2.6 gives the hidden state size per token, which, on first read, is strictly larger than the hidden state in normal attention

                                            This is where you’ve gone off track. The “hidden state” for their model is a fixed size thing, like in an RNN, not per token. For a transformer, the “hidden state” is called the KV cache, and it grows with sequence length. This is why their method is linear not quadratic.

                                            The Taylor Series they derive isn’t just for softmax (after all, real implementations of softmax will likely already use the Taylor series!), it’s for the entire tensor-level softmax(QK) computation.

                                          • riemannzeta 4 hours ago

                                            Neat result. The symmetry exploitation here reminds me of recent work connecting neural network training dynamics to renormalization group theory. Charles Martin's SETOL paper https://arxiv.org/abs/2507.17912 shows that well-trained layers converge to something like an RG fixed point—the eigenvalue spectrum of the weight matrix develops power-law tails with exponent α ≈ 2, which is the signature of scale invariance. At this fixed point, the "effective correlation space" is low-dimensional: you can truncate the SVD aggressively and recover nearly identical test accuracy.

                                            I wonder if there's a connection to your Taylor truncation order. In RG terms, higher-order polynomial interactions are "irrelevant operators"—they get suppressed as you flow toward the fixed point. If trained attention heads are sitting near this fixed point, that might explain why modest truncation orders work: the network has already learned to concentrate its computation in the lower-order terms. A testable prediction: layers with α closer to 2 (measurable via weightwatcher https://github.com/CalculatedContent/WeightWatcher) might need fewer Taylor terms for accurate approximation than layers with α far from 2. If true, you could potentially use the spectral statistics to adaptively choose truncation order per-head.

                                            • fheinsen 2 hours ago

                                              Yes, there must be a connection. While adaptive truncation may prove impractical, it should be possible to measure spectral statistics on sample data, and specify a different fixed truncation order per layer, per head, etc. The github repository lists many other possible improvements: https://github.com/glassroom/sata_attention#proof-of-concept

                                            • bluecoconut 5 hours ago

                                              I almost feel like this goes opposite to what attention is good at. This would be good at approximating all the places where attention is low / not sharp. Where attention/the exponential is key is when it selects out / needle-in-haystack / winner-takes-all focus (the word "attention" itself sorta implies this), and this is where the taylor expression would fail to represent the values well. This just... softens attention's ability to attend?

                                              (I'm imagining that if in the context there's ~4-8 "similar" attention-targets that should be sharp, and regular attention learns to select the correct one, this taylor approximation version would wash out any difference and they'd all loosely be attended to, and it'd fail to isolate the correct signal)

                                              Really wish this had some downstream tests -- apply it to a pretrained model and see how performance degrades, train a fresh one, etc. The tests are worth doing, but I somehow don't feel that hopeful this is the unlock required for sub-quadratic attention. It's possible that a freshly trained model with this learns to attend without the sharp attention signals, but that seems a bit dubious to me.

                                              But also, maybe this combined with some other selective (sparse attention) trick, means that the hybrid model gets the "fuzzy long tail" of attention well represented as well as the sharpness well represented, and all together it could actually be a part of the larger solution.

                                              • energy123 5 hours ago

                                                > this is where the taylor expression would fail to represent the values well.

                                                "In practice, we find that four Taylor terms (P = 4) suffice for recovering conventional attention with elementwise errors of approximately the same magnitude as Float16 resolution"

                                                • seanhunter 5 hours ago

                                                  I read that too, but I wondered whether elementwise error is the right metric. Surely the actual error metric should be to evaluate model performance for a conventional transformer model and then the same model with the attention mechanism replaced by this 4th order Taylor approximation?

                                                  • vlovich123 4 hours ago

                                                    Bounded error on the weights is, by definition, a stricter evaluation criterion than "performance" metrics from running the model.

                                                    • ehsanu1 1 hour ago

                                                      To spell it out for myself and others: approaching equivalent calculations for each individual attention block means we also approach equivalent performance for the combination of them. And with an error bar approaching floating point accuracy, the performance should be practically identical to regular attention. Elementwise errors of this magnitude can't lead to any noteworthy changes in the overall result, especially given how robust LLM networks seem to be to small deviations.

                                                • mapontosevenths 5 hours ago

                                                  > This just... softens attentions ability to attend?

                                                  I think this does soften, but not linearly. That is to say the fixed state size limitation means that it softens more as it gets further into the past.

                                                  • tehsauce 5 hours ago

                                                    Right, and when they compare to floating point accuracy they seem to be using the number of decimals supported by the mantissa, but the exponent is important no?

                                                    • seanhunter 5 hours ago

                                                      When someone says the error is of a certain magnitude they mean the absolute value of the difference between the two things, so what they're saying is that the values they produced with their approximation are about as wrong as the difference between the actual values and those values cast to float16. The exponent is most definitely important and would be included in that.
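
                                                      A quick numpy check of how big that float16 casting error actually is (just an illustration, not from the paper):

                                                          import numpy as np

                                                          x = np.random.default_rng(0).standard_normal(100_000)
                                                          err = np.abs(x - x.astype(np.float16).astype(np.float64))
                                                          print(err.max())  # ~1e-3 for values of magnitude ~1, i.e. float16 resolution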

                                                  • Kubuxu 1 hour ago

                                                    A paper on the same topic: On the Expressiveness of Softmax Attention: A Recurrent Neural Network Perspective, Gabriel Mongaras, Eric C. Larson, https://arxiv.org/abs/2507.23632

                                                    Video presentation if someone prefers it: https://www.youtube.com/watch?v=PN3nYBowSvM

                                                    Linear attention is a first-degree approximation of Softmax attention, and model performance gets better as you increase the degree of the Taylor approximation.

                                                    I'm thinking about adapting an existing model to Taylor-approximated attention. I think it should be possible with some model surgery and rehabilitation training.

                                                    • abeppu 5 hours ago

                                                      I haven't tried to follow the math closely but should there not be some concern about the region of convergence? It looks like they don't specifically discuss it. Or is there some reason this isn't a problem in this context?

                                                      • reactordev 5 hours ago

                                                        I fear they have completely overlooked it.

                                                        • measurablefunc 2 hours ago

                                                          The Taylor series for the exponential is convergent everywhere so what radius of convergence are you talking about? All the functions they're approximating are convergent everywhere & you can easily prove that compositions of functions that are convergent everywhere are still convergent everywhere.

                                                        • alyxya 5 hours ago

                                                          The best and proven linear attention is the Gated DeltaNet or variations of it, used by Kimi and Qwen. Anyone who thinks linear attention can't work is forgetting that models are a fixed size, so attention should always be compressible to be linear. Another way to think of the feasibility of linear attention is that the standard attention mechanism can be made linear simply by removing the softmax, so the KV cache stores the KV product as a constant-size matrix instead. Softmax just normalizes attention, but it's not theoretically required.
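
                                                          A bare-bones sketch of that softmax-free formulation (my own illustration; shapes made up, no normalization):

                                                              import numpy as np

                                                              d_k, d_v = 16, 16
                                                              rng = np.random.default_rng(0)
                                                              S = np.zeros((d_k, d_v))  # constant-size state in place of a growing KV cache
                                                              for t in range(100):
                                                                  q, k = rng.standard_normal((2, d_k))
                                                                  v = rng.standard_normal(d_v)
                                                                  S += np.outer(k, v)   # fold the new token in: O(d_k * d_v), independent of t
                                                                  out = q @ S           # == sum_i (q . k_i) * v_i, i.e. attention without softmax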

                                                          • mapontosevenths 5 hours ago

                                                            This uses the Taylor approximation to approximate softmax, but that IS only an approximation. I wonder exactly how much that trade-off costs in terms of accuracy vs performance? I note that they say it's close to float16 with four Taylor terms.

                                                            My other concern would be that Taylor itself is fairly complex. I wonder how well GPUs handle this in comparison to good old-fashioned softmax? The last time I used Taylor with a custom Triton kernel it was still very slow. That could just have been my own jank vibe-coded implementation though.

                                                            • slashdave 2 hours ago

                                                              If the model learns by using the approximate softmax, then why does it matter? We only need the behavior of softmax, not an exact numerical solution.

                                                              • I guess what I'm saying is I'd love to see an LLM actually have its attention mechanism replaced with this and get benchmarked on real-world tasks in comparison to quadratic attention. They don't seem to have done that here. They claim that it's close to being the same, but my experience tells me that it needs to do better than get "pretty close."

                                                                They also haven't tried to write a high-performance kernel for Triton yet. If it goes the way my last experiment with Taylor did, they're in for some bad news.

                                                                I'm just a hobbyist though, it's certainly possible that people with more time/resources could outperform me without much effort. I just want to see it tested on something familiar and benchmark-able.

                                                              • observationist 5 hours ago

                                                                This could turbocharge ByT5 and other tokenless architectures, whose big downside was the increase in compute over longer sequences. It's easy to imagine a bunch of strategies with variable levels of "focus" and so on with a fixed compute budget assigned on the fly with learned optimizers informing the distribution.

                                                                • physicsguy 2 hours ago

                                                                  With this, they've not provided an upper bound on the error of the kernel expanded with N terms, which I think is a big missing piece.

                                                                  • andes314 5 hours ago

                                                                    Linear-time attention doesn't work, as a matter of principle. It's a dead-end pursuit. There's plenty of great research on more efficient quadratic-time inference.

                                                                    • smokel 4 hours ago

                                                                      What about n log n?

                                                                    • yanosh_kunsh 5 hours ago

                                                                      So does that mean that LLM inference could go down significantly in price and/or context length would dramatically increase?

                                                                        • rvz 5 hours ago

                                                                          > Our work enables unbounded token generation at modest fixed cost, substantially reducing the infrastructure and energy demands of large-scale Transformer models. The mathematical techniques we introduce are of independent interest.

                                                                          Now this is a very interesting paper, which hopefully addresses AI's chronic lack of efficient methods for reducing its significant computational and energy demands, which are off the charts.

                                                                          > These factors penalize performance relative to what a fused, hardware-optimized implementation could achieve, and the reported runtime results should therefore be interpreted conservatively.

                                                                          It's still early and there are several limitations, but the need for wasting billions on GPUs will soon stop making any sense.