> "Retrieval based on reasoning — say goodbye to approximate semantic search ('vibe retrieval')"
How is this not precisely "vibe retrieval" and much more approximate, where approximate in this case is uncertainty over the precise reasoning?
Similarity with conversion to high-dimensional vectors and then something like kNN seems significantly less approximate, less "vibe" based, than this.
This also appears to be completely predicated on pre-enrichment of the documents by adding structure through API calls to, in the example, OpenAI.
It doesn't at all seem accurate to:
1: Toss out mathematical similarity calculations
2: Add structure with LLMs
3: Use LLMs to traverse the structure
4: Label this as less vibe-ish
Also for any sufficiently large set of documents, or granularity on smaller sets of documents, scaling will become problematic as the doc structure approaches the context limit of the LLM doing the retrieval.
Embeddings are great at basic conceptual similarity, but in quality maximalist fields and use cases they fall apart very quickly.
For example:
"I want you to find inconsistencies across N documents." There is no concept of an inconsistency in an embedding. However, a textual summary or stuffing entire documents into context can help with this.
"What was John's opinion on the European economy in 2025?" It will find a similarity to things involving the European economy, including lots of docs from 2024, 2023, etc. And because of chunking strategies with embeddings and embeddings being heavily compressed representations of data, you will absolutely get chunks from various documents that are not limited to 2025.
"Where are Sarah or John directly quoted in this folder full of legal documents?" Sarah and John might be referenced across many documents, but finding where they are directly quoted is nearly impossible even in a high dimensional vector.
Embeddings are awesome, and great for some things like product catalog lookups and other fun stuff, but for many industries the mathematical cosine similarity approach is just not effective.
> Embeddings are great at basic conceptual similarity, but in quality maximalist fields and use cases they fall apart very quickly.
This makes a lot of sense if you think about it. You want something as conceptually similar to the correct answer as possible. But with vector search, you are looking for something conceptually similar to some formulation of the question, which has some loose correlation, but is very much not the same thing.
There are ways you can prepare data to get a closer approximation (e.g. you can have an LLM formulate, for each indexed block, the questions it could answer, and index those; then you'll be searching for material that answers a question similar to the one being asked, which is a bit closer to what you want), but it's still an approximation.
But if you know ahead of time, from experience, the salient features of the dataset that are useful for the particular application, and can index those directly, it just makes sense that while this will be more labor-intensive than generalized vector search and may generalize less well outside that particular use case, it will also be more useful in the intended use case in many places.
It is just as "vibe-ish" as vector search and notably does require chunking (document chunks are fed to the indexer to build the table of contents). That said, I don't find vector search any less "vibey". While "mathematical similarity" is a structured operation, the "conversion to high-dimensional vectors" part is predicated on the encoder, which can be trained towards any objective.
> scaling will become problematic as the doc structure approaches the context limit of the LLM doing the retrieval
IIUC, retrieval is based on traversing a tree structure, so only the root nodes have to fit in the context window. I find that kinda cool about this approach.
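A minimal sketch of that traversal idea, assuming a made-up node schema and stubbing the LLM's relevance judgment with keyword overlap (PageIndex's real node format and prompts may differ):

```python
# Sketch of hierarchical retrieval: at each step, only one level of the
# tree (its node titles/summaries) needs to fit in the LLM's context.
# select_nodes() is an illustrative stand-in for an actual LLM call.

def select_nodes(nodes, query):
    """Stand-in for an LLM judging relevance from summaries.
    Here: naive keyword overlap, just to make the sketch runnable."""
    terms = set(query.lower().split())
    return [n for n in nodes if terms & set(n["summary"].lower().split())]

def tree_search(node, query, results):
    children = node.get("children", [])
    if not children:                        # leaf: candidate section
        results.append(node["title"])
        return
    for child in select_nodes(children, query):
        tree_search(child, query, results)  # descend one level at a time

doc_tree = {
    "title": "Annual Report", "summary": "full report",
    "children": [
        {"title": "Finance", "summary": "revenue costs economy figures",
         "children": [
             {"title": "Europe", "summary": "european economy outlook 2025"},
             {"title": "Asia", "summary": "asian markets growth"},
         ]},
        {"title": "HR", "summary": "hiring policies benefits"},
    ],
}

hits = []
tree_search(doc_tree, "european economy 2025", hits)
print(hits)  # → ['Europe']
```

Only the summaries of the current level are ever in the prompt, which is why the whole tree never has to fit in context at once.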
> This also appears to be completely predicated on pre-enrichment of the documents by adding structure through API calls to, in the example, openAI.
That was my immediate take. [Look at the summary and answer based on where you expect the data to be found] maybe works well for reliably structured data.
So, this has already been done plenty, Serena MCP and Codanna MCP both do this with AST source graphs, Codanna even gives hints in the MCP response to guide the agent to walk up/down the graph. There might be some small efficiency gain in having a separate agent walk the graph in terms of context savings, but you also lose solution fidelity, so I'm not sure it's a win. Also, it's not a replacement for RAG, it's just another piece in the pipeline that you merge over (rerank+cut or llm distillate).
Yeah, I agree it’s not something new, since humans also do this kind of retrieval. It’s just a way to generate a table of contents for an LLM. I’m wondering, when LLMs become stronger, will we still need vector-based retrieval? Or will we need a retrieval method that’s more like how humans do it?
I think for most use cases, it doesn't make much sense to use vector DBs. When I started to design my AI Search feature, I researched chunking a lot, and the general consensus was that you can lose context if you don't chunk in the right way, and there wasn't really a right way to chunk. This was why I decided to take the approach I am using today, which I talk about in another comment.
With input costs for very good models at $0.30/1M tokens for Gemini 2.5 Flash (bulk rates would be $0.15/1M), feeding the LLM thousands of documents to generate summaries would probably cost five dollars or less at bulk-rate pricing. And with most SOTA LLMs able to handle 50k tokens of context with no apparent loss in reasoning, I really don't see the reason for vector DBs anymore, especially if it means potentially less accurate results.
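The claim above is easy to sanity-check with back-of-the-envelope arithmetic. Only the $0.15/1M bulk rate comes from the comment; the document count and average document length are assumptions:

```python
# Back-of-the-envelope check of the summarization cost claim above.
docs = 2000                # "thousands of documents" (assumed)
tokens_per_doc = 15_000    # assumed average document length
rate_per_million = 0.15    # quoted bulk input price, $/1M tokens

total_tokens = docs * tokens_per_doc                  # 30M tokens
cost = total_tokens / 1_000_000 * rate_per_million
print(f"${cost:.2f}")  # → $4.50
```

So even at double the document count or length, the one-time indexing cost stays in the tens of dollars, not hundreds.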
Actually, chunking isn't such a bad problem with code, it chunks itself, and code embeddings produce better results. The problem is that RAG is fiddly, and people try to just copy a basic template or use a batteries included lib that's tuned to QA, which isn't gonna produce good results.
> Actually, chunking isn't such a bad problem with code, it chunks itself, and code embeddings produce better results.
I can't remember what post I read this in (but it was on Hacker News), but when designing Claude Code, they (Anthropic) tried a RAG approach and it didn't work very well compared to loading in the full file. If my understanding of how Claude Code works is correct (based on comments from others), it "greps like an intern/junior developer". So what Claude Code does (assuming grep is the key) is ask Sonnet for keywords to grep for based on the user's query, then continuously revise those keywords until it is satisfied with the files it has found.
As ridiculous as this sounds, this approach is not horrible, albeit very inefficient. For my approach, I focus on capturing intent, which is what grep can't match. And for RAG, if the code is not chunked correctly and/or is just badly organized, you may miss the true intent of the code.
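The grep-and-revise loop described above can be sketched roughly like this; the corpus, the keyword schedule, and `propose_keywords()` are all made-up stand-ins for the actual Sonnet call:

```python
# Sketch of the grep-and-revise loop: a model proposes keywords, we
# grep, and the model refines until satisfied. The model is stubbed
# with a fixed keyword schedule so the sketch is runnable.
import re

corpus = {
    "auth.py":  "def login(user): check_password(user)",
    "store.py": "def save_conversation(msgs): db.write(msgs)",
    "ui.py":    "def render(): draw_chat_window()",
}

def grep(pattern, files):
    rx = re.compile(pattern)
    return [name for name, text in files.items() if rx.search(text)]

def propose_keywords(query, round_no):
    """Stand-in for asking the model for grep terms; a real system
    would feed back the previous round's (empty) hits to refine."""
    schedule = [r"persist", r"conversation|save"]
    return schedule[round_no]

query = "where are conversations stored?"
hits = []
for round_no in range(2):      # revise until satisfied (fixed rounds here)
    hits = grep(propose_keywords(query, round_no), corpus)
    if hits:
        break
print(hits)  # → ['store.py']
```

The first round's term finds nothing, so the loop widens the pattern, which is exactly the "revise until satisfied" behavior described.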
Oh yeah, loading in full files when possible is great. I use Gemini pro to look at bundles of my whole codebase, the level of comprehension it gets from that is pretty shocking.
This is why I think Vector DBs are probably not going to be used for a lot of applications in the future. It served a very valid purpose when context windows were a lot smaller and LLMs were not as good, but moving forward, I personally think it makes less and less sense.
Vector DBs will still be around to do a first pass before feeding data in to a long context reasoner like Gemini in most cases. The thing that's going to go away is rerankers.
So if I understand this correctly it goes over every possible document with an LLM each time someone performs a search?
I might have misunderstood of course.
If so, then the use cases for this would be fairly limited since you'd have to deal with lots of latency and costs. In some cases (legal documents, medical records, etc) it might be worth it though.
An interesting alternative I've been meaning to try out is inverting this flow. Instead of using an LLM at time of searching to find relevant pieces to the query, you flip it around: at time of ingesting you let an LLM note all of the possible questions that you can answer with a given text and store those in an index. You could then use some traditional full-text search or other algorithms (BM25?) to search for relevant documents and pieces of text. You could even go for a hybrid approach with vectors on top of or next to this. Maybe vectors first and then more ranking with something more traditional.
What appeals to me with that setup is low latency and good debug-ability of the results.
But as I said, maybe I've misunderstood the linked approach.
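A rough sketch of that inverted flow, with hand-written stand-ins for the LLM-generated questions and a simple TF-IDF-style score standing in for a real BM25 implementation:

```python
# Ingest-time question generation + lexical retrieval sketch.
# A simple TF-IDF score stands in for BM25; the per-passage questions
# below are hand-written stand-ins for what an LLM would generate.
import math
from collections import Counter

# Ingest: passage id -> questions the passage can answer.
index = {
    "p1": "what was revenue in 2025? how did europe perform?",
    "p2": "who founded the company? when was it started?",
    "p3": "what is the refund policy? how do I return an item?",
}

def tokenize(text):
    return text.lower().replace("?", "").split()

docs = {pid: Counter(tokenize(q)) for pid, q in index.items()}
N = len(docs)
df = Counter(t for c in docs.values() for t in c)   # document frequency

def score(query, doc):
    # term frequency weighted by a smoothed inverse document frequency
    return sum(doc[t] * math.log(1 + N / df[t])
               for t in tokenize(query) if t in doc)

query = "how did europe perform in 2025"
best = max(docs, key=lambda pid: score(query, docs[pid]))
print(best)  # → p1
```

Because the index stores questions rather than raw text, the query and the indexed material live in the same "question space", which is the low-latency, debuggable property the comment is after.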
>An interesting alternative I've been meaning to try out is inverting this flow. Instead of using an LLM at time of searching to find relevant pieces to the query, you flip it around: at time of ingesting you let an LLM note all of the possible questions that you can answer with a given text and store those in an index.
You may already know of this one, but consider giving Google LangExtract a look. A lot of companies are doing what you described in production, too!
This is just a variation of index time HyDE (Hypothetical Document Embedding). I used a similar strategy when building the index and search engine for findsight.ai
In a nutshell, it generates a very short summary of every document along with keywords. The basic idea is to use BM25 ranking to identify the most relevant documents for the AI to review. For example, my use case is to understand how Aider, Claude Code, etc., store their conversations so that I can make them readable in my chat app. To answer this, I would ask 'How does Aider store conversations?' and the LLM would construct a deterministic keyword search using terms that would most likely identify how conversations are stored.
Once I have the list of files, the LLM is asked again to review the summaries of all matches and suggest which documents should be loaded in full for further review. I've found this approach to be inconsistent, however. What I've found to work much better is just loading the "Tiny Overview" summaries into context and chatting with the LLM. For example, I would ask the same question: "Which files do you think can tell me how Aider stores conversations? Identify up to 20 files and create a context bundle for them so I can load them into context." For a thousand files, you can easily fit three-sentence summaries for each of them without overwhelming the LLM. Once I have my answer, I just need a few clicks to load the files into context, and then the LLM will have full access to the file content and can better answer my question.
I’ve been working on RAG systems a lot this year and I think one thing people miss is that often for internal RAG efficiency/latency is not the main concern. You want predictable, linear pricing of course, but sometimes you want to simply be able to get a predictably better response by throwing a bit more money/compute time at it.
It’s really hard to get to such a place with standard vector-based systems, even GraphRag. Because it relies on summaries of topic clusters that are pre-computed, if one of those summaries is inaccurate or none of the summaries deal with your exact question, that will never change during query processing. Moreover, GraphRag preprocessing is insanely expensive and precisely does not scale linearly with your dataset.
TLDR all the trade-offs in RAG system design are still being explored, but in practice I’ve found the main desired property to be “predictably better answer with predictably scaling cost” and I can see how similar concerns got OP to this design.
> Moreover, GraphRag preprocessing is insanely expensive and precisely does not scale linearly with your dataset.
Sounds interesting. What exactly is the expensive computation?
On a separate note: I have a feeling RAG could benefit from a kind of ”simultaneous vector search” across several different embedding spaces, sort of like AND in an SQL database. Do you agree?
Yes, in our use case it's diagnosis of issues, and it draws on documents for that. The latency doesn't matter because it's all done before the diagnosis is raised to the customer.
> You want predictable, linear pricing of course, but sometimes you want to simply be able to get a predictably better response by throwing a bit more money/compute time at it.
Through more thorough ANN vector search / higher recall, or would it also require different preprocessing?
I didn't look at the implementation, but it sounds similar to something I did two years ago: recursively summarize the documentation based on structure (domain/page/section) and then ask the model to walk the hierarchy based on the summaries.
My motivation back then was that I had an 8k context length to work with, so I had to be very conservative about what I included. I still used vectors to narrow down the entry points and then used the LLM to drill down or pick the most relevant ones. The search threads were separate: each would summarize its response based on the tree path it took, and then the main thread would combine them.
You can use document search strategies (like SQL metadata search, semantic search, LLM-generated document description search, etc.) to narrow down the document candidates first.
The thing is — for very long documents, it's actually pretty hard for humans to find things, even with a hierarchical structure. This is why we made indexes — the original indexes! — on paper. What you're saying makes pretty hard assumptions about document content, and of course doesn't start to touch multiple documents.
My feeling is that what you're getting at is actually the fact that it's hard to get semantic chunks and when embedding them, it's hard to have those chunks retain context/meaning, and then when retrieving, the cosine similarity of query/document is too vibes-y and not strictly logical.
These are all extremely real problems with the current paradigm of vector search. However, my belief is that one can fix each of these problems vs abandoning the fundamental technology. I think that we've only seen the first generation of vector search technology and there is a lot more to be built.
At Vectorsmith, we have some novel takes on both the computation and storage architecture for vector search. We have been working on this for the last 6 months and have seen some very promising results.
Fundamentally my belief is that the system is smarter when it mostly stays latent. All the steps of discretization that are implied in a search system like the above lose information in a way that likely hampers retrieval.
interesting, so you think the issue with the above approach is the graph structure being too rigid / lossy (in terms of losing semantics)? And embeddings are also too lossy (in terms of losing context and structure)? But you guys are working on something less lossy for both semantics and context?
> interesting, so you think the issue with the above approach is the graph structure being too rigid / lossy (in terms of losing semantics)?
Yeah, exactly.
>And embeddings are also too lossy (in terms of losing context and structure)
Interestingly, it appears that the problem is not embeddings but rather retrieval. It appears that embeddings can contain a lot more information than we're currently able to pull out. Like, obviously they are lossy, but... less than maybe I thought before I started this project? Or at least can be made to be that way?
> But you guys are working on something less lossy for both semantics and context?
Yes! :) We're getting there! It's currently at the good-but-not-great like GPT-2ish kind of stage. It's a model-toddler - it can't get a job yet, but it's already doing pretty interesting stuff (i.e. it does much better than SOTA on some complex tasks). I feel pretty optimistic that we're going to be able to get it to work at a usable commercial level for at least some verticals — maybe at an alpha/design partner level — before the end of the year. We'll definitely launch the semantic part before the context part, so this probably means things like people search etc. first — and then the contextual chunking for big docs for legal etc... ideally sometime next year?
Not sure if I fully understand it, but this seems highly inefficient?
Instead of using embeddings, which are easy to make and cheap to compare, you use summarized sections of documents and process them with an LLM? LLMs are slower and more expensive to run.
The idea this person is trying for is an LLM that explores the codebase using the source graph the way a human might: by ctrl+clicking in IDEA/VSCode to go to a definition, searching for usages of a function, etc. It actually does work, and other systems use it as well, though they have the main agent perform the codebase walk rather than delegate to a "codebase walker" agent.
My concern would be that a function called setup() might mask some really important thing; likewise, a “preface” chapter might get missed by an LLM when you ask some especially deep question.
Either way, the structure of your input data could produce bad summaries that cause the LLM to miss things.
I think it only needs to generate the tree once before retrieval, and it doesn’t require any external model at query time. The indexing may take some time upfront, but retrieval is then very fast and cost-free.
If this is used as an important tool call for an AI agent that performs many other calls, then it's likely that the added cost and latency would be negligible compared to the benefit of significantly improved retrieval. As an analogy, for a small task you're often ok with just going over the first few search results, but to prepare for a large project, you might want to spend an afternoon researching.
In specific domains, accuracy matters more than speed. Document structure and reasoning bring better retrieval than semantic search, which retrieves "similar" but not necessarily "relevant" results.
I did something like this myself. Take a large PDF and summarize each page. Make sure to include the titles of the previous 3 pages; it helps with consistency and with detecting transitions from one part to another. Then you take all the page summaries in a list and do another call to generate the table of contents. When you want to use it, you add the TOC to the prompt and use a tool to retrieve sections on demand. This works better than embeddings, which are blind to relations and larger context.
It was for a complex scenario of QA on long documents, like 200 page earning reports.
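The pipeline described above might look something like this; `summarize_page()` is a stand-in for the LLM call, which in a real run would receive the previous titles in its prompt:

```python
# Sketch of the page-summary -> TOC pipeline: summarize each page while
# carrying the previous three page titles for continuity, then assemble
# a table of contents. summarize_page() stands in for an LLM call.

def summarize_page(text, prev_titles):
    """Stand-in for an LLM summarizer; a real call would include
    prev_titles in the prompt to detect section transitions."""
    title = text.split(".")[0][:40]          # crude: first sentence
    return {"title": title, "context": list(prev_titles)}

pages = [
    "Revenue grew 12% in Q4. Details follow.",
    "Costs were flat year over year. See notes.",
    "Outlook: we expect growth in 2026.",
]

summaries, recent = [], []
for i, page in enumerate(pages):
    s = summarize_page(page, recent[-3:])    # last 3 titles as context
    s["page"] = i + 1
    summaries.append(s)
    recent.append(s["title"])

toc = [f'{s["page"]}. {s["title"]}' for s in summaries]
print("\n".join(toc))
```

At answer time the TOC goes into the prompt, and a retrieval tool fetches full pages by number on demand.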
> It moves RAG away from approximate "semantic vibes" and toward explicit reasoning about where information lives. That clarity can help teams trust outputs and debug workflows more effectively.
Wasn't this a feature of RAGs, though? That they could match semantics instead of structure, while us mere balls of flesh need to rely on indexes. I'd be interested in benchmarks of this versus traditional vector-based RAGs, is something to that effect planned?
In their GitHub repo’s readme, they show a benchmark on FinanceBench and find that PageIndex-based retrieval significantly outperforms vector-based methods. I’ve noticed that in domain-specific documents, where all the text has similar “semantic vibes,” non-vector methods like PageIndex can be more useful. In contrast, for use cases like recommendation systems, you might actually need a semantic-vibe search.
RAG is just finding the right context for your question.
Embedding based RAG is fast and conceptually accurate, but very poor for high complexity tasks. Agentic RAG is higher quality, but much higher compute and latency cost. But often worth it for complex situations.
One common way is to mix approaches. e.g. take a large top-K from ANN on embeddings as a preliminary shortlist, then run a tuned LLM or cross encoder to evaluate relevance.
At the end of the day you just want a way to shortlist and focus information that's cheaper, computationally, and more reliable, than dumping your entire corpus into a very large context window.
So what we're doing is fitting the technique to the situation. Price of RAM; GPU price; size of dataset; etc. The "ideal" setup will evolve as the cost structure and model quality evolves, and will always depend on your activity.
But for sure, ANN-on-embedding as your RAG pipeline is a very blunt instrument and if you can afford to do better you can usually think of a way.
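The two-stage mix described above, sketched with toy 3-d vectors and a token-overlap stand-in for the cross-encoder (real systems would use an ANN index and a trained reranker):

```python
# Two-stage retrieval sketch: cheap vector shortlist, expensive rerank.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: id -> (embedding, text).
docs = {
    "d1": ([0.9, 0.1, 0.0], "European economy outlook for 2025"),
    "d2": ([0.8, 0.2, 0.1], "European economy review of 2023"),
    "d3": ([0.1, 0.9, 0.2], "Product catalog for winter"),
}
query_vec = [1.0, 0.0, 0.0]

# Stage 1: cheap ANN-style shortlist (here: exact cosine, top 2).
shortlist = sorted(docs, key=lambda d: cosine(query_vec, docs[d][0]),
                   reverse=True)[:2]

# Stage 2: expensive scorer on the shortlist only.
def rerank(query, text):
    """Stand-in for a cross-encoder or LLM relevance call;
    here it simply rewards exact token matches like the year."""
    return sum(tok in text.lower() for tok in query.lower().split())

query = "european economy 2025"
best = max(shortlist, key=lambda d: rerank(query, docs[d][1]))
print(best)  # → d1
```

The shortlist keeps the expensive scorer off the full corpus, while the reranker recovers distinctions (like the 2023-vs-2025 case earlier in the thread) that cosine similarity alone blurs.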
The "redundancy" experiment is very interesting! Strongly agree that we need something better than "dumping your entire corpus into a very large context window"; maybe this table-of-contents method would be very useful?
The folks who are using RAG: what's the SOTA for extracting text from PDF documents? I have been following discussions on HN and have seen a few promising solutions that involve converting PDF to PNG and then doing extraction. However, for my application this looks a bit risky because my PDFs have tons of tables and I can't afford to get back incorrect or made-up numbers.
The original documents are in HTML format, and although I don't have access to them, I can obtain them if I want. Is it better to just use these HTML documents instead? Previously I tried converting HTML to markdown and then using those for RAG. I wasn't too happy with the result, although I fear I might be doing something wrong.
Extracting structure and elements from HTML should be trivial and probably has multiple libraries in your programming language of choice. Be happy you have machine-readable semantic documents, that's best-case scenario in NLP. I used to convert the chunks to Markdown as it was more token-efficient and LLMs are often heavily preference trained on Markdown, but not sure with current input pricing and LLM performance gains that matters anymore.
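For illustration, a minimal HTML-to-Markdown pass over semantic HTML using only the stdlib parser (real pipelines would use a dedicated converter; this just shows why machine-readable HTML is the easy case):

```python
# Minimal HTML -> Markdown-ish extraction using only the stdlib parser.
from html.parser import HTMLParser

class ToMarkdown(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []          # extracted lines
        self._prefix = ""      # pending markdown heading prefix

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            # map <hN> to N '#' characters, markdown style
            self._prefix = "#" * int(tag[1]) + " "

    def handle_data(self, data):
        if data.strip():
            self.out.append(self._prefix + data.strip())
            self._prefix = ""

p = ToMarkdown()
p.feed("<h1>Report</h1><p>Revenue grew.</p><h2>Notes</h2><p>See table 3.</p>")
print("\n".join(p.out))
```

The heading tags carry the document structure for free, which is exactly what OCR pipelines struggle to reconstruct from PDFs.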
If you have scanned documents, last I checked Gemini Flash was very good cost/performance wise for document extraction. Mistral OCR claims better performance in their benchmarks but people I know used it and other benchmarks beg to differ. Personally I use Azure Document Intelligence a lot for the bounding boxes feature, but Gemini Flash apparently has this covered too.
Sidenote: What you want for RAG is not OCR as-in extracting text. The task for RAG preprocessing is typically called Document Layout Analysis or End-to-End Document Parsing/Extraction.
Good RAG is multimodal and semantic document structure and layout-aware so your pipeline needs to extract and recognize text sections, footers/headers, images, and tables. When working with PDFs you want accurate bounding boxes in your metadata for referring your users to retrieved sources etc.
Yeah, thanks for pointing out the OCR! We also found that for complex PDFs, you first need to use OCR to convert them into Markdown and then run PageIndex. However, most OCR tools process each page independently, which causes them to lose the overall document structure. For example, existing OCR tools often generate incorrect heading levels, which is a big problem if you want to build a tree structure from them. You could check out PageIndex-OCR, the first long-context OCR model that can produce Markdown with more accurate heading-level recognition.
I am always on the lookout for new document extraction tools, but can't seem to find any benchmarks for PageIndex-OCR. There are several like OmniDocBench and readoc. So... Got benchmark?
> Sidenote: What you want for RAG is not OCR as-in extracting text. The task for RAG preprocessing is typically called Document Layout Analysis or End-to-End Document Parsing/Extraction.
Got it. Indeed, I need to do End-to-End Document Parsing/Extraction.
How about using something like Apache Tika for extracting text from multiple documents? It's a subproject of Lucene and consists of a proxy parser + delegates for a number of document formats. If a document, e.g. PDF, comes from a scanner, Tika can optionally shell-out a Tesseract invocation and perform OCR for you.
If accuracy is a major concern, then going with the HTML documents is almost certainly better. Otherwise, I've heard from a few co-workers that Docling is pretty good.
You could also try PageIndex OCR, the first long-context OCR model. Most current OCR tools process each page independently, which causes them to lose the document’s structure and produce markdown with incorrect heading levels. PageIndex OCR generates markdown with more accurate heading levels to better capture the document’s structure.
Oh, I totally think markdown is better than converting to png and then doing OCR. Maybe I did not use a good HTML to markdown converter. The HTML documents are really long and the markdown converter broke down a few times. But as I mentioned, this is probably on me as I did not do a good job of finding a better HTML to markdown converter.
My approach in "LLM-only RAG for small corpora" [0] was to mechanically make an outline version of all the documents _without_ an LLM, feed that to an LLM with the prompt to tell which docs are likely relevant, and then feed the entirety of those relevant docs to a second LLM call to answer the prompt. It only works with markdown and asciidoc files, but it's surprisingly solid for, for example, searching a local copy of the jj or helix docs. And if the corpus is small enough and your model is on the cheap side (like Gemini 2.5 Flash), you can of course skip the retrieval step and just send the entire thing every time.
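The mechanical (non-LLM) outline step can be as simple as a regex over markdown headings; this is a sketch of the idea, not the linked tool's actual implementation:

```python
# Mechanically build an outline from markdown headings (no LLM needed).
# An LLM can then pick likely-relevant docs from these outlines alone.
import re

def outline(markdown):
    heads = re.findall(r"^(#{1,6})\s+(.+)$", markdown, flags=re.M)
    # indent two spaces per heading level below the top
    return [("  " * (len(h) - 1)) + title for h, title in heads]

doc = """# jj docs
## Branching
Some text about branches.
## Conflicts
### Resolving
More text.
"""
print("\n".join(outline(doc)))
```

The full text of only the selected documents then goes into the second LLM call, keeping the first call's input tiny.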
A suspicious lack of any performance metrics on the many standard RAG/QA benchmarks out there, except for their highly fine-tuned and dataset-specific MAFIN2.5 system. I would love to see this approach vs. a similarly well-tuned structured hybrid retriever (vector similarity + text matching), which is the common way of building domain-specific RAG. The FinanceBench GPT4o+Search system never mentions what the retrieval approach is [1,2], so I will have to assume it is the dumbest retriever possible to oversell the improvement.
PageIndex does not state to what degree the semantic structuring is rule-based (document structure) or also inferred by an ML model, in any case structuring chunks using semantic document structure is nothing new and pretty common, as is adding generated titles and summaries to the chunk nodes. But I find it dubious that prompt-based retrieval on structured chunk metadata works robustly, and if it does perform well it is because of the extra work in prompt-engineering done on chunk metadata generation and retrieval. This introduces two LLM-based components that can lead to highly variable output versus a traditional vector chunker and retriever. There are many more knobs to tune in a text prompt and an LLM-based chunker than in a sentence/paragraph chunker and a vector+text similarity hybrid retriever.
You will have to test retrieval and generation performance for your application regardless, but with so many LLM-based components this will lead to increased iteration time and cost vs. embeddings. The advantage of PageIndex is that you can probably make it really domain-specific. Claims of improved retrieval time are dubious; vector databases (even with hybrid search) are highly efficient, definitely more efficient than prompting an LLM to select relevant nodes.
Very cool. These days I’m building RAG over a large website, and when I look at the results being fed into the LLM, most of them are so silly it’s surprising the LLM even manages to extract something meaningful. Always makes me wonder if it’s just using prior knowledge even though it’s instructed not to do so (which is hacky).
I like your approach because it seems like a very natural search process, like a human would navigate a website to find information. I imagine the tradeoff is performance of both indexing and search, but for some use cases (like mine) it’s a good sacrifice to make.
I wonder if it’s useful to merge to two approaches. Like you could vectorize the nodes in the tree to give you a heuristic that guides the search. Could be useful in cases where information is hidden deep in a subtree, in a way that the document’s structure doesn’t give it away.
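That merged idea could look like best-first search over the tree, where a node "embedding" scores which subtree to expand next. Everything here (the toy 2-d vectors, the node schema) is illustrative, not the project's actual design:

```python
# Best-first tree search guided by a vector heuristic: expand whichever
# subtree looks most similar to the query, so deeply buried sections can
# surface even when intermediate headings don't give them away.
import heapq, math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy tree: each node carries a 2-d "embedding" of its summary.
tree = {
    "title": "root", "vec": [0.5, 0.5],
    "children": [
        {"title": "pricing", "vec": [0.1, 0.9], "children": []},
        {"title": "api", "vec": [0.9, 0.1], "children": [
            {"title": "auth endpoints", "vec": [0.95, 0.05], "children": []},
        ]},
    ],
}

def best_first(root, query_vec, k=2):
    # heap entries: (-similarity, tiebreak counter, node)
    heap = [(-cosine(query_vec, root["vec"]), 0, root)]
    found, counter = [], 1
    while heap and len(found) < k:
        _, _, node = heapq.heappop(heap)
        if not node["children"]:
            found.append(node["title"])      # leaf: candidate section
        for child in node["children"]:
            heapq.heappush(heap, (-cosine(query_vec, child["vec"]),
                                  counter, child))
            counter += 1
    return found

print(best_first(tree, [1.0, 0.0]))  # → ['auth endpoints', 'pricing']
```

The heuristic only orders exploration; an LLM could still make the final relevance call on the leaves it reaches, combining both approaches as suggested.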
Strongly agree! It is basically the Monte Carlo tree search method used in AlphaGo! This is also mentioned in one of their tutorials: PageIndex/blob/main/tutorials/doc-search/semantics.md. I believe it will make the method more scalable for large documents.
A good thing about tree representation compared to a 'list' representation is that you can search hierarchically, layer by layer, in a large tree. For example, AlphaGo performs search in a large tree. Since the scale of retrieval is smaller than that of the Go game, I guess this framework can scale very well.
I just realized that the whole Hacker News discussion is formalized as a tree, and I am using my eyes to tree search through the tree to retrieve ideas from the insightful comments.
Yeah, they share a similar intuition. I found that the difference is that PageIndex is more of a learning-free approach, more like how a human would do retrieval?
This will work well when you have a single document, or a small set of documents, and want your questions answered.
When you have a question and you don't know which of the million documents in your dataspace contains the answer, I'm not sure how this approach will perform. In that case we are looking at either feeding an enormously large tree as context to the LLM or looping through potentially thousands of iterations between the tree and the LLM.
That said, this really is a good idea for a small search space (like a single document).
>Instead of relying on vector databases or artificial chunking, it builds a hierarchical tree structure from documents and uses reasoning-based tree search to locate the most relevant sections.
So are we creating a tree for each document on the fly? Even if it's a batch process, don't you think we are pointing back to something that is a graph (an approximation-vs-latency sort of framework)?
Looks like you are talking more along the lines of an LLM-driven outcome where the "semantic" part is replaced with LLM intelligence.
I tried similar approaches a few months back, but those often resulted in poor scalability, predictability, and quality.
An effective "vectorless RAG" is to have an LLM write search queries against the documents, e.g. if you store your documents in Postgres, allow the LLM to construct a regex string that will find relevant matches. If you were searching for “Martin Luther King Jr.”, it might write something like:
SELECT id, body
FROM docs
WHERE body ~* E'(?x)  # x = expanded mode: whitespace ignored, # starts a comment
  (?:\\m(?:dr|rev(?:erend)?)\\.?\\M[\\s.]+)?  # optional title: Dr., Rev., Reverend
  (  # name forms
    (?:\\mmartin\\M[\\s.]+(?:\\mluther\\M[\\s.]+)?\\mking\\M)  # "Martin (Luther)? King"
  | (?:\\mm\\.?\\M[\\s.]+(?:\\ml\\.?\\M[\\s.]+)?\\mking\\M)  # "M. (L.)? King" / "M L King"
  | (?:\\mmlk\\M)  # "MLK"
  )
  (?:[\\s.,-]*\\m(?:jr|junior)\\M\\.?)*  # optional suffix(es): Jr, Jr., Junior
';
> A good thing about tree representation compared to a 'list' representation is that you can search hierarchically, layer by layer, in a large tree. For example, AlphaGo performs search in a large tree. Since the scale of retrieval is smaller than that of the Go game, I guess this framework can scale very well.
Yeah, I strongly agree. I also found in AI coding tools, tree search has replaced vector search. I’m wondering if in generic RAG systems, tree search will replace vector databases?
Tree search hasn't replaced vector search, you can use them synergistically, it's just that vector search is "fiddly" as you have to set up a bunch of stuff to index your repos, manage embeddings, etc and it can use a lot of disk space if you don't use graph representations for your embeddings like LEANN.
This seems really interesting but I can't quite figure out if this is like a SaaS product or an OSS library? The code sample seems to indicate that it uses some sort of "client" to send the document somewhere and then wait to retrieve it later.
But the home page doesn't indicate any sort of sign up or pricing.
So I'm a little confused.
Edit: OK, I found a sign-up flow, but the verification email never came :(
This is good for applications where queue-based background RAG is acceptable. You upload a file, set the expectation with the user that processing will take a few hours, and then after X hours you deliver the results. Great for manuals, documentation, and larger content.
But for on-demand, near instant RAG (like say in a chat application), this won't work. Speed vs accuracy vs cost. Cost will be a really big one.
In this approach, the documents need to be pre-processed once to generate a tree structure, which is slower than the current vector-based method. However, during retrieval, this approach only requires conditioning on the context for the LLM and does not require an embedding model to convert the query into vectors. As a result, it can be efficient when the tree is small. When the tree is large, however, this approach may be slower than the vector-based method since it prioritizes accuracy. If you prioritize speed over accuracy, then I guess you should use Vector DB.
The approach used here (breaking down large documents into summarized chunks that can more easily be reasoned about) is how a lot of AI systems deal with documents that surpass effective context limits in general. But in my experience this approach only works up to a certain point; after that, the summaries start to hide enough detail that you do need semantic search or another RAG approach like GraphRAG. I think the efficacy of this approach will really fall apart after a certain number of documents.
Would've loved to see the author run experiments on how this compares to other RAG approaches, or what the limitations of this one are.
Thanks, that’s a great point! That’s why we use the tree structure, which can search layer by layer without putting the whole tree into the context (to compromise the summary quality). We’ll update with more examples and experiments on this. Thanks for the suggestion!
"Human-like Retrieval: Simulates how human experts navigate and extract knowledge from complex documents." - pretty sure I use control-f when I look for stuff
Context and prompt engineering are the most important part of AI, hands down.
There are plenty of lightweight retrieval options that don't require a separate vector database (I'm the author of txtai [https://github.com/neuml/txtai], which is one of them).
It can be as simple as this in Python: you pass an index operation a data generator and save the index to a local folder. Then use that for RAG.
Context and prompt engineering are super automatable. DSPy can automate prompt generation that massively outperforms human prompts, and instead of hand packing context, you can use IR/ML algorithms to intelligently select the optimal context bundle to produce the desired output.
Context and prompt engineering are going to be replaced by algorithms, 100%.
>"Retrieval based on reasoning — say goodbye to approximate semantic search ("vibe retrieval"
How is this not precisely "vibe retrieval" and much more approximate, where approximate in this case is uncertainty over the precise reasoning?
Similarity with conversion to high-dimensional vectors and then something like kNN seems significantly less approximate, less "vibe" based, than this.
This also appears to be completely predicated on pre-enrichment of the documents by adding structure through API calls to, in the example, openAI.
It doesn't at all seem accurate to:
1: Toss out mathematical similarity calculations
2: Add structure with LLMs
3: Use LLMs to traverse the structure
4: Label this as less vibe-ish
Also for any sufficiently large set of documents, or granularity on smaller sets of documents, scaling will become problematic as the doc structure approaches the context limit of the LLM doing the retrieval.
I work in this field, so I can answer.
Embeddings are great at basic conceptual similarity, but in quality maximalist fields and use cases they fall apart very quickly.
For example:
"I want you to find inconsistencies across N documents." There is no concept of an inconsistency in an embedding. However, a textual summary or context stuffing entire documents can help with this.
"What was John's opinion on the European economy in 2025?" It will find a similarity to things involving the European economy, including lots of docs from 2024, 2023, etc. And because of chunking strategies with embeddings and embeddings being heavily compressed representations of data, you will absolutely get chunks from various documents that are not limited to 2025.
"Where are Sarah or John directly quoted in this folder full of legal documents?" Sarah and John might be referenced across many documents, but finding where they are directly quoted is nearly impossible even in a high dimensional vector.
Embeddings are awesome, and great for some things like product catalog lookups and other fun stuff, but for many industries the mathematical cosine-similarity approach is just not effective.
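The "directly quoted" example above is exactly the kind of query where a structured pattern beats a vector lookup. As a minimal sketch (the function name, attribution verbs, and quote conventions are all illustrative assumptions, not an exhaustive pattern):

```python
import re

def find_direct_quotes(text, names):
    # Hypothetical helper: matches 'Name said, "..."' and '"..." Name said'
    # with a small, illustrative set of attribution verbs.
    name_alt = "|".join(re.escape(n) for n in names)
    verbs = r"(?:said|says|stated|testified|wrote|added)"
    patterns = [
        rf'\b(?:{name_alt})\b\s+{verbs},?\s+"[^"]+"',  # Sarah said, "..."
        rf'"[^"]+"\s+(?:{name_alt})\b\s+{verbs}\b',    # "..." John said
    ]
    return [m.group(0) for pat in patterns for m in re.finditer(pat, text)]
```

A mere mention of a name scores no hit; only an actual attribution pattern does, which is the distinction an embedding cannot express.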
> Embeddings are great at basic conceptual similarity, but in quality maximalist fields and use cases they fall apart very quickly.
This makes a lot of sense if you think about it. You want something as conceptually similar to the correct answer as possible. But with vector search, you are looking for something conceptually similar to some formulation of the question, which has some loose correlation, but is very much not the same thing.
There are ways you can prepare data to get a closer approximation (e.g. you can have an LLM formulate, for each indexed block, questions it could answer, and index those; then you'll be searching for material that answers a question similar to the question being asked, which is a bit closer to what you want). But it's still an approximation.
But if you know ahead of time, from experience, which salient features of the dataset are useful for the particular application, and can index those directly, it just makes sense that while this will be more labor intensive than generalized vector search, and may generalize less well outside that particular use case, it will also be more useful in the intended use case in many places.
It is just as "vibe-ish" as vector search and notably does require chunking (document chunks are fed to the indexer to build the table of contents). That said, I don't find vector search any less "vibey". While "mathematical similarity" is a structured operation, the "conversion to high-dimensional vectors" part is predicated on the encoder, which can be trained towards any objective.
IIUC, retrieval is based on traversing a tree structure, so only the root nodes have to fit in the context window. I find that kinda cool about this approach. But yes, it's still "vibe retrieval".
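That layer-by-layer property can be sketched in a few lines. This is not the project's actual implementation; `select` is a stand-in for an LLM call that sees only one node's child summaries at a time, never the whole tree:

```python
def retrieve(node, query, select, max_depth=10):
    """Walk a summary tree layer by layer.

    `select(query, children)` stands in for an LLM call that, given only
    the child summaries (not the whole tree), returns the children worth
    descending into. Leaves contribute their full text to the result.
    """
    frontier, results = [node], []
    for _ in range(max_depth):
        next_frontier = []
        for n in frontier:
            if not n.get("children"):      # leaf: actual content
                results.append(n["text"])
            else:                          # internal node: ask the "LLM"
                next_frontier += select(query, n["children"])
        if not next_frontier:
            break
        frontier = next_frontier
    return results
```

The context needed per step is bounded by the branching factor, not the corpus size, which is the point being made about only the roots needing to fit.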
> This also appears to be completely predicated on pre-enrichment of the documents by adding structure through API calls to, in the example, openAI.
That was my immediate take. [Look at the summary and answer based on where you expect the data to be found] maybe works well for reliably structured data.
So, this has already been done plenty, Serena MCP and Codanna MCP both do this with AST source graphs, Codanna even gives hints in the MCP response to guide the agent to walk up/down the graph. There might be some small efficiency gain in having a separate agent walk the graph in terms of context savings, but you also lose solution fidelity, so I'm not sure it's a win. Also, it's not a replacement for RAG, it's just another piece in the pipeline that you merge over (rerank+cut or llm distillate).
Yeah, I agree it’s not something new, since humans also do this kind of retrieval. It’s just a way to generate a table of contents for an LLM. I’m wondering, when LLMs become stronger, will we still need vector-based retrieval? Or will we need a retrieval method that’s more like how humans do it?
> will we still need vector-based retrieval
I think for most use cases, it doesn't make much sense to use vector DBs. When I started to design my AI Search feature, I researched chunking a lot, and the general consensus was that you can lose context if you don't chunk the right way, and there wasn't really a right way to chunk. This is why I decided to take the approach I'm using today, which I talk about in another comment.
With input costs for very good models like Gemini 2.5 Flash at $0.30/1M tokens ($0.15/1M at bulk rates), feeding the LLM thousands of documents to generate summaries would probably cost five dollars or less with bulk-rate pricing. And with most SOTA LLMs able to handle 50k tokens in the context window with no apparent loss in reasoning, I really don't see the reason for vector DBs anymore, especially if it means potentially less accurate results.
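A quick back-of-envelope check of that claim, under assumed numbers (5,000 documents averaging ~5,000 tokens each; the per-token price is the bulk rate quoted above):

```python
# Assumed corpus: 5,000 docs averaging ~5,000 tokens each,
# at $0.15 per 1M input tokens (bulk rate).
docs = 5_000
tokens_per_doc = 5_000
price_per_million = 0.15

total_tokens = docs * tokens_per_doc               # 25,000,000 input tokens
cost = total_tokens / 1_000_000 * price_per_million
print(f"${cost:.2f}")                              # $3.75
```

So the "five dollars or less" figure holds for a mid-sized corpus, though output-token and re-summarization costs are not counted here.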
Actually, chunking isn't such a bad problem with code, it chunks itself, and code embeddings produce better results. The problem is that RAG is fiddly, and people try to just copy a basic template or use a batteries included lib that's tuned to QA, which isn't gonna produce good results.
> Actually, chunking isn't such a bad problem with code, it chunks itself, and code embeddings produce better results.
I can't remember what post I read this in (it was on Hacker News), but when designing Claude Code, they (Anthropic) tried a RAG approach and it didn't work very well compared to loading in the full file. If my understanding of how Claude Code works is correct (based on comments from others), it "greps like an intern/junior developer". So what Claude Code does (assuming grep is the key) is ask Sonnet for keywords to grep for based on the user's query, then continuously revise the grep keywords until it is satisfied with the files it found.
As ridiculous as this sounds, this approach is not horrible, albeit very inefficient. For my approach, I focus on capturing intent which is what grep can't match. And for RAG, if the code is not chunked correctly and/or if the code is just badly organized, you may miss the true intent for the code.
Oh yeah, loading in full files when possible is great. I use Gemini pro to look at bundles of my whole codebase, the level of comprehension it gets from that is pretty shocking.
This is why I think Vector DBs are probably not going to be used for a lot of applications in the future. It served a very valid purpose when context windows were a lot smaller and LLMs were not as good, but moving forward, I personally think it makes less and less sense.
Vector DBs will still be around to do a first pass before feeding data in to a long context reasoner like Gemini in most cases. The thing that's going to go away is rerankers.
So if I understand this correctly it goes over every possible document with an LLM each time someone performs a search?
I might have misunderstood of course.
If so, then the use cases for this would be fairly limited since you'd have to deal with lots of latency and costs. In some cases (legal documents, medical records, etc) it might be worth it though.
An interesting alternative I've been meaning to try out is inverting this flow. Instead of using an LLM at time of searching to find relevant pieces to the query, you flip it around: at time of ingesting you let an LLM note all of the possible questions that you can answer with a given text and store those in an index. You could then use some traditional full-text search or other algorithms (BM25?) to search for relevant documents and pieces of text. You could even go for a hybrid approach with vectors on top of or next to this. Maybe vectors first and then more ranking with something more traditional.
What appeals to me with that setup is low latency and good debug-ability of the results.
But as I said, maybe I've misunderstood the linked approach.
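The BM25 half of that inverted flow needs no special infrastructure. A from-scratch sketch (the `generated_questions` list is hard-coded here as a stand-in for what an LLM would emit at ingest time; a real system would use a tuned library implementation):

```python
import math
from collections import Counter

def bm25_rank(query, docs, k1=1.5, b=0.75):
    """Rank docs (lists of tokens) against a tokenized query with BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter()                       # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)                  # term frequency in this doc
        s = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            s += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return sorted(range(N), key=lambda i: -scores[i])

# At ingest time an LLM would generate these questions; stubbed here.
generated_questions = [
    "how do I reset my password".split(),
    "what is the refund policy for annual plans".split(),
]
ranking = bm25_rank("refund policy".split(), generated_questions)
```

Because the scoring is deterministic term matching, every retrieval decision can be inspected after the fact, which is the debug-ability advantage mentioned above.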
>An interesting alternative I've been meaning to try out is inverting this flow. Instead of using an LLM at time of searching to find relevant pieces to the query, you flip it around: at time of ingesting you let an LLM note all of the possible questions that you can answer with a given text and store those in an index.
You may already know of this one, but consider giving Google LangExtract a look. A lot of companies are doing what you described in production, too!
This is just a variation of index time HyDE (Hypothetical Document Embedding). I used a similar strategy when building the index and search engine for findsight.ai
> An interesting alternative I've been meaning to try out is inverting this flow.
This is what I am doing with my AI Search Assistant feature, which I discuss in more detail via the link below:
https://github.com/gitsense/chat/blob/main/packages/chat/wid...
By default, I provide what I call a "Tiny Overview Analyzer". You can read the prompt for the Analyzer with the link below:
https://github.com/gitsense/chat/blob/main/packages/chat/wid...
In a nutshell, it generates a very short summary of every document along with keywords. The basic idea is to use BM25 ranking to identify the most relevant documents for the AI to review. For example, my use case is to understand how Aider, Claude Code, etc., store their conversations so that I can make them readable in my chat app. To answer this, I would ask 'How does Aider store conversations?' and the LLM would construct a deterministic keyword search using terms that would most likely identify how conversations are stored.
Once I have the list of files, the LLM is asked again to review the summaries of all matches and suggest which documents should be loaded in full for further review. I've found this approach to be inconsistent, however. What I've found to work much better is just loading the "Tiny Overview" summaries into context and chatting with the LLM. For example, I would ask the same question: "Which files do you think can tell me how Aider stores conversations? Identify up to 20 files and create a context bundle for them so I can load them into context." For a thousand files, you can easily fit three-sentence summaries for each of them without overwhelming the LLM. Once I have my answer, I just need a few clicks to load the files into context, and then the LLM will have full access to the file content and can better answer my question.
I’ve been working on RAG systems a lot this year and I think one thing people miss is that often for internal RAG efficiency/latency is not the main concern. You want predictable, linear pricing of course, but sometimes you want to simply be able to get a predictably better response by throwing a bit more money/compute time at it.
It’s really hard to get to such a place with standard vector-based systems, even GraphRag. Because it relies on summaries of topic clusters that are pre-computed, if one of those summaries is inaccurate or none of the summaries deal with your exact question, that will never change during query processing. Moreover, GraphRag preprocessing is insanely expensive and precisely does not scale linearly with your dataset.
TLDR all the trade-offs in RAG system design are still being explored, but in practice I’ve found the main desired property to be “predictably better answer with predictably scaling cost” and I can see how similar concerns got OP to this design.
> Moreover, GraphRag preprocessing is insanely expensive and precisely does not scale linearly with your dataset.
Sounds interesting. What exactly is the expensive computation?
On a separate note: I have a feeling RAG could benefit from a kind of ”simultaneous vector search” across several different embedding spaces, sort of like AND in an SQL database. Do you agree?
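The "AND across embedding spaces" idea can be sketched as intersecting per-space top-k results. A toy version in pure Python (cosine over small dense vectors; a real system would use an ANN index per space, and the function names here are made up for illustration):

```python
def topk(qvec, vecs, k):
    """Indices of the k vectors most cosine-similar to qvec (all nonzero)."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(x * x for x in b) ** 0.5
        return dot / (na * nb)
    return set(sorted(range(len(vecs)), key=lambda i: -cos(qvec, vecs[i]))[:k])

def and_search(queries_and_spaces, k):
    """Intersect top-k hits across several embedding spaces,
    like an AND over per-space similarity predicates."""
    hit_sets = [topk(q, vecs, k) for q, vecs in queries_and_spaces]
    return set.intersection(*hit_sets)
```

One caveat with this formulation: a document just outside the top-k in any one space is dropped entirely, so k effectively becomes a per-space recall knob.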
Yes, in the use case we're doing it's been diagnosis of issues, and draws on documents in that. the latency doesn't matter because it's all done before the diagnosis is raised to the customer.
> You want predictable, linear pricing of course, but sometimes you want to simply be able to get a predictably better response by throwing a bit more money/compute time at it.
Through more thorough ANN vector search / higher recall, or would it also require different preprocessing?
I didn't look at the implementation, but this sounds similar to something I did two years ago: recursively summarize the documentation based on structure (domain/page/section) and then ask the model to walk the hierarchy based on the summaries.
My motivation back then was that I had an 8k context length to work with, so I had to be very conservative about what I included. I still used vectors to narrow down the entry points, then used the LLM to drill down or pick the most relevant ones. The search threads were separate; each would summarize its response based on the tree path it took, and then the main thread would combine them.
> let an LLM note all of the possible questions that you can answer
What does this even mean? At what point do you know you have all of them?
Humans are quite ingenious coming up with new, unique questions in my observation, whereas LLMs have a hard time replicating those efficiently.
Cantors diagonalization is trivial to show for questions. There are uncountably many.
You can use document search strategies (like SQL metadata search, semantic search, LLM-generated doc-description search, etc.) to narrow down the candidate documents first.
The thing is — for very long documents, it's actually pretty hard for humans to find things, even with a hierarchical structure. This is why we made indexes — the original indexes! — on paper. What you're saying makes pretty hard assumptions about document content, and of course doesn't start to touch multiple documents.
My feeling is that what you're getting at is actually the fact that it's hard to get semantic chunks and when embedding them, it's hard to have those chunks retain context/meaning, and then when retrieving, the cosine similarity of query/document is too vibes-y and not strictly logical.
These are all extremely real problems with the current paradigm of vector search. However, my belief is that one can fix each of these problems vs abandoning the fundamental technology. I think that we've only seen the first generation of vector search technology and there is a lot more to be built.
At Vectorsmith, we have some novel takes on both the computation and storage architecture for vector search. We have been working on this for the last 6 months and have seen some very promising results.
Fundamentally my belief is that the system is smarter when it mostly stays latent. All the steps of discretization that are implied in a search system like the above lose information in a way that likely hampers retrieval.
interesting, so you think the issue with the above approach is the graph structure being too rigid / lossy (in terms of losing semantics)? And embeddings are also too lossy (in terms of losing context and structure)? But you guys are working on something less lossy for both semantics and context?
> interesting, so you think the issue with the above approach is the graph structure being too rigid / lossy (in terms of losing semantics)?
Yeah, exactly.
>And embeddings are also too lossy (in terms of losing context and structure)
Interestingly, it appears that the problem is not embeddings but rather retrieval. It appears that embeddings can contain a lot more information than we're currently able to pull out. Like, obviously they are lossy, but... less than maybe I thought before I started this project? Or at least can be made to be that way?
> But you guys are working on something less lossy for both semantics and context?
Yes! :) We're getting there! It's currently at the good-but-not-great like GPT-2ish kind of stage. It's a model-toddler - it can't get a job yet, but it's already doing pretty interesting stuff (i.e. it does much better than SOTA on some complex tasks). I feel pretty optimistic that we're going to be able to get it to work at a usable commercial level for at least some verticals — maybe at an alpha/design partner level — before the end of the year. We'll definitely launch the semantic part before the context part, so this probably means things like people search etc. first — and then the contextual chunking for big docs for legal etc... ideally sometime next year?
Not sure if I fully understand it, but this seems highly inefficient?
Instead of using embeddings, which are easy to make and cheap to compare, you use summarized sections of documents and process them with an LLM? LLMs are slower and more expensive to run.
The idea this person is trying for is a LLM that explores the codebase using the source graph in the way a human might, by control+clicking in idea/vscode to go to definition, searching for usages of a function, etc. It actually does work, other systems use it as well, though they have the main agent performing the codebase walk rather than delegate to a "codebase walker" agent.
My concern would be that a function called setup() might mask some really important thing, likewise a “preface” chapter might get missed by an LLM when you ask some specially deep question.
Either way, your input data's structure could produce bad summaries that cause the LLM to miss things.
I think it only needs to generate the tree once before retrieval, and it doesn’t require any external model at query time. The indexing may take some time upfront, but retrieval is then very fast and cost-free.
If this is used as an important tool call for an AI agent that performs many other calls, then it's likely that the added cost and latency would be negligible compared to the benefit of significantly improved retrieval. As an analogy, for a small task you're often OK with just going over the first few search results, but to prepare for a large project, you might want to spend an afternoon researching.
In specific domains, accuracy matters more than speed. Document structure and reasoning bring better retrieval than semantic search, which retrieves "similar" but not "relevant" results.
I did something like this myself. Take a large PDF, summarize each page. Make sure to have the titles of previous 3 pages, it helps with consistency and detecting transitions from one part to another. Then you take all page summaries in a list, and do another call to generate the table of contents. When you want to use it you add the TOC in the prompt and use a tool to retrieve sections on demand. This works better than embeddings which are blind to relations and larger context.
It was for a complex scenario of QA on long documents, like 200 page earning reports.
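The page-summary pipeline described above can be sketched roughly like this. `summarize` is a stand-in for the LLM call; the sliding window of the previous three page titles is the consistency trick mentioned, and all names here are illustrative:

```python
def build_toc(pages, summarize):
    """Summarize each page with the previous 3 page titles as context,
    then collect the per-page summaries into a table of contents.

    `summarize(page_text, recent_titles)` stands in for an LLM call
    returning a (title, summary) pair for the page.
    """
    titles, entries = [], []
    for i, text in enumerate(pages):
        title, summary = summarize(text, titles[-3:])  # 3-title window
        titles.append(title)
        entries.append(f"p.{i + 1} {title}: {summary}")
    return "\n".join(entries)
```

A second LLM pass over the joined entries would then produce the final table of contents used in the prompt.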
> It moves RAG away from approximate "semantic vibes" and toward explicit reasoning about where information lives. That clarity can help teams trust outputs and debug workflows more effectively.
Wasn't this a feature of RAGs, though? That they could match semantics instead of structure, while us mere balls of flesh need to rely on indexes. I'd be interested in benchmarks of this versus traditional vector-based RAGs, is something to that effect planned?
In their gitHub repo’s readme, they show a benchmark on FinanceBench and found that PageIndex-based retrieval significantly outperforms vector-based methods. I’ve noticed that in domain-specific documents, where all the text has similar “semantic vibes,” non-vector methods like PageIndex can be more useful. In contrast, for use cases like recommendation systems, you might actually need a semantic-vibe search.
RAG is just finding the right context for your question.
Embedding based RAG is fast and conceptually accurate, but very poor for high complexity tasks. Agentic RAG is higher quality, but much higher compute and latency cost. But often worth it for complex situations.
There's good reasons to do this. Embedding similarity is _not_ a reliable method of determining relevance.
I did some measurements and found you can't even really tell if two documents are "similar" or not. Here: https://joecooper.me/blog/redundancy/
One common way is to mix approaches. e.g. take a large top-K from ANN on embeddings as a preliminary shortlist, then run a tuned LLM or cross encoder to evaluate relevance.
I'll link here these guys' paper which you might find fun: https://arxiv.org/pdf/2310.08319
At the end of the day you just want a way to shortlist and focus information that's cheaper, computationally, and more reliable, than dumping your entire corpus into a very large context window.
So what we're doing is fitting the technique to the situation. Price of RAM; GPU price; size of dataset; etc. The "ideal" setup will evolve as the cost structure and model quality evolves, and will always depend on your activity.
But for sure, ANN-on-embedding as your RAG pipeline is a very blunt instrument and if you can afford to do better you can usually think of a way.
The "redundancy" experiment is very interesting! Strongly agree that we need to do something better than "dumping your entire corpus into a very large context window"; maybe this table-of-contents method would be very useful?
For the folks who are using RAG: what's the SOTA for extracting text from PDF documents? I have been following discussions on HN and have seen a few promising solutions that involve converting the PDF to PNG and then doing extraction. For my application, however, this looks a bit risky, because my PDFs have tons of tables and I can't afford to get back incorrect or made-up numbers.
The original documents are in HTML format and although I don't have access to them I can obtain them if I want. Is it better to just use these HTML documents instead? Previously I tried converting HTML to markdown and then use these for RAG. I wasn't too happy with the result although I fear I might be doing something wrong.
Extracting structure and elements from HTML should be trivial and probably has multiple libraries in your programming language of choice. Be happy you have machine-readable semantic documents, that's best-case scenario in NLP. I used to convert the chunks to Markdown as it was more token-efficient and LLMs are often heavily preference trained on Markdown, but not sure with current input pricing and LLM performance gains that matters anymore.
If you have scanned documents, last I checked Gemini Flash was very good cost/performance wise for document extraction. Mistral OCR claims better performance in their benchmarks but people I know used it and other benchmarks beg to differ. Personally I use Azure Document Intelligence a lot for the bounding boxes feature, but Gemini Flash apparently has this covered too.
https://getomni.ai/blog/ocr-benchmark
Sidenote: What you want for RAG is not OCR as-in extracting text. The task for RAG preprocessing is typically called Document Layout Analysis or End-to-End Document Parsing/Extraction.
Good RAG is multimodal and semantic document structure and layout-aware so your pipeline needs to extract and recognize text sections, footers/headers, images, and tables. When working with PDFs you want accurate bounding boxes in your metadata for referring your users to retrieved sources etc.
Yeah, thanks for pointing out the OCR! We also found that for complex PDFs, you first need to use OCR to convert them into Markdown and then run PageIndex. However, most OCR tools process each page independently, which causes them to lose the overall document structure. For example, existing OCR tools often generate incorrect heading levels, which is a big problem if you want to build a tree structure from them. You could check out PageIndex-OCR, the first long-context OCR model that can produce Markdown with more accurate heading-level recognition.
I am always on the lookout for new document extraction tools, but can't seem to find any benchmarks for PageIndex-OCR. There are several like OmniDocBench and readoc. So... Got benchmark?
> Sidenote: What you want for RAG is not OCR as-in extracting text. The task for RAG preprocessing is typically called Document Layout Analysis or End-to-End Document Parsing/Extraction.
Got it. Indeed, I need to do End-to-End Document Parsing/Extraction.
Our PageIndex for HTML will be open-sourced next week, we are actually working on that!
In our benchmarks, https://github.com/datalab-to/marker is the best if you need to deploy it on your own hardware.
Thanks! I will check this out.
How about using something like Apache Tika for extracting text from multiple documents? It's a subproject of Lucene and consists of a proxy parser + delegates for a number of document formats. If a document, e.g. PDF, comes from a scanner, Tika can optionally shell-out a Tesseract invocation and perform OCR for you.
Tika's documentation is abysmal. Maybe it's a great product, but we had to scrap it because of this.
If accuracy is a major concern, then it's probably guaranteed better to go with the HTML documents. Otherwise, I've heard Docling is pretty good from a few co-workers.
So you suggest working directly with HTML or going HTML -> Markdown first?
I've used nv-ingest and Nvidia's nemoretriever-parse model.
extractous is worth a look if it's real text
If it's an image / you need to OCR it, Gemini Flash is so good and so cheap that I've had good luck using it as a "meta OCR" tool
I will try it out. Is this the correct library? - https://github.com/yobix-ai/extractous
I have used Gemini for OCR and it was indeed good. I also used GPT 3.5 and liked that too.
You could also try PageIndex OCR, the first long-context OCR model. Most current OCR tools process each page independently, which causes them to lose the document’s structure and produce markdown with incorrect heading levels. PageIndex OCR generates markdown with more accurate heading levels to better capture the document’s structure.
Ok, thanks for sharing. I will take a look.
Can you explain why to png? why not to markdown?
Oh, I totally think markdown is better than converting to png and then doing OCR. Maybe I did not use a good HTML to markdown converter. The HTML documents are really long and the markdown converter broke down a few times. But as I mentioned, this is probably on me as I did not do a good job of finding a better HTML to markdown converter.
My approach in "LLM-only RAG for small corpora" [0] was to mechanically make an outline version of all the documents _without_ an LLM, feed that to an LLM with the prompt to tell which docs are likely relevant, and then feed the entirety of those relevant docs to a second LLM call to answer the prompt. It only works with markdown and asciidoc files, but it's surprisingly solid for, for example, searching a local copy of the jj or helix docs. And if the corpus is small enough and your model is on the cheap side (like Gemini 2.5 Flash), you can of course skip the retrieval step and just send the entire thing every time.
[0]: https://crespo.business/posts/llm-only-rag/
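The mechanical, no-LLM outline step for markdown files can be as simple as a heading scan. A sketch under the assumption of ATX-style headings (asciidoc would need an analogous rule; the function name is made up):

```python
import re

def outline(doc_name, markdown_text):
    """Extract ATX headings (#, ##, ...) into an indented outline,
    with no LLM involved. One call per document in the corpus."""
    lines = [doc_name]
    for m in re.finditer(r"^(#{1,6})\s+(.+)$", markdown_text, re.MULTILINE):
        depth = len(m.group(1))                  # number of leading '#'
        lines.append("  " * depth + m.group(2).strip())
    return "\n".join(lines)
```

Concatenating these outlines for the whole corpus gives the cheap "which docs are relevant?" prompt input described above.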
LLM-only RAG for small corpora looks super interesting!
this rocks! will definitely check
A suspicious lack of any performance metrics on the many standard RAG/QA benchmarks out there, except for their highly fine-tuned and dataset-specific MAFIN2.5 system. I would love to see this approach vs. a similarly well-tuned structured hybrid retriever (vector similarity + text matching), which is the common way of building domain-specific RAG. The FinanceBench GPT4o+Search system never mentions what the retrieval approach is [1,2], so I will have to assume it is the dumbest retriever possible to oversell the improvement.
PageIndex does not state to what degree the semantic structuring is rule-based (document structure) or also inferred by an ML model, in any case structuring chunks using semantic document structure is nothing new and pretty common, as is adding generated titles and summaries to the chunk nodes. But I find it dubious that prompt-based retrieval on structured chunk metadata works robustly, and if it does perform well it is because of the extra work in prompt-engineering done on chunk metadata generation and retrieval. This introduces two LLM-based components that can lead to highly variable output versus a traditional vector chunker and retriever. There are many more knobs to tune in a text prompt and an LLM-based chunker than in a sentence/paragraph chunker and a vector+text similarity hybrid retriever.
You will have to test retrieval and generation performance for your application regardless, but with so many LLM-based components this will lead to increased iteration time and cost vs. embeddings. An advantage of PageIndex is that you can probably make it really domain-specific. Claims of improved retrieval time are dubious: vector databases (even with hybrid search) are highly efficient, definitely more efficient than prompting an LLM to select relevant nodes.
1. https://pageindex.ai/blog/Mafin2.5 2. https://github.com/VectifyAI/Mafin2.5-FinanceBench
Very cool. These days I’m building RAG over a large website, and when I look at the results being fed into the LLM, most of them are so silly it’s surprising the LLM even manages to extract something meaningful. Always makes me wonder if it’s just using prior knowledge even though it’s instructed not to do so (which is hacky).
I like your approach because it seems like a very natural search process, like a human would navigate a website to find information. I imagine the tradeoff is performance of both indexing and search, but for some use cases (like mine) it’s a good sacrifice to make.
I wonder if it’s useful to merge the two approaches. Like, you could vectorize the nodes in the tree to give you a heuristic that guides the search. Could be useful in cases where information is hidden deep in a subtree, in a way that the document’s structure doesn’t give it away.
Strongly agree! It is basically the Monte Carlo tree search method used in AlphaGo! This is also mentioned in one of their tutorials: PageIndex/blob/main/tutorials/doc-search/semantics.md. I believe it will make the method more scalable for large documents.
I have a RAG system built on a 10,000+ doc knowledge base. On a vector store, of course (Qdrant, hybrid search). It works smoothly and is quite reliable.
I wonder how this "vectorless" engine would deal with that. I simply can't see this tech scaling.
A good thing about tree representation compared to a 'list' representation is that you can search hierarchically, layer by layer, in a large tree. For example, AlphaGo performs search in a large tree. Since the scale of retrieval is smaller than that of the Go game, I guess this framework can scale very well.
A proof/real-world example would be needed to validate your claim(s).
I think the technology is promising but I don't believe in all those "advantages" that they advertise on the website.
I just realized that the whole Hacker News discussion is formalized as a tree, and I am using my eyes to tree search through the tree to retrieve ideas from the insightful comments.
this is fundamentally organized by popularity, though
Sounds a bit like generative retrieval (e.g. this Google paper here: https://arxiv.org/abs/2202.06991)
Yeah, they share a similar intuition. I found that the difference is that PageIndex is more of a learning-free approach, more like how a human would do retrieval?
I love it
This will scale when you have a single document or a small set of documents and want your questions answered.
When you have a question and you don't know which of the million documents in your dataspace contains the answer, I'm not sure how this approach will perform. In that case we are looking at either feeding an enormously large tree as context to the LLM, or looping through potentially thousands of iterations between the tree and the LLM.
That said, this really is a good idea for a small search space (like a single document).
>Instead of relying on vector databases or artificial chunking, it builds a hierarchical tree structure from documents and uses reasoning-based tree search to locate the most relevant sections.
So are we creating a tree for each document on the fly? Even if it's a batch process, don't you think we are pointing back to something which is a graph (an approximation-vs-latency sort of framework)?
Looks like you are talking more along the lines of an LLM-driven outcome, where the "semantic" part is replaced with LLM intelligence.
I tried similar approaches a few months back, but they often resulted in poor scalability, predictability, and quality.
An effective "vectorless RAG" is to have an LLM write search queries against the documents. E.g., if you store your documents in Postgres, allow the LLM to construct a regex string that will find relevant matches. If you were searching for “Martin Luther King Jr.”, it might write something like:
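The example seems to have been cut off; a plausible regex of the kind described, tolerant of middle-initial, comma, and suffix variations (hypothetical — an LLM might generate any number of variants):

```python
import re

# A regex an LLM might write for "Martin Luther King Jr.",
# allowing "Luther" or "L." in the middle and an optional ", Jr." suffix.
pattern = re.compile(
    r"Martin\s+(Luther\s+|L\.?\s+)?King(\s*,?\s*Jr\.?)?",
    re.IGNORECASE,
)
```

In Postgres that string would go into a case-insensitive regex match, e.g. `WHERE body ~* 'Martin\s+(Luther\s+|L\.?\s+)?King(\s*,?\s*Jr\.?)?'`.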
Won't that be slower than vector DBs by an order of magnitude or more?
I guess the major focus in certain use cases is not speed but accuracy and retrieval quality.
Faster is not always better. In certain situations, we may choose to sacrifice speed for increased accuracy.
I don't see this scaling: https://deepwiki.com/search/how-is-the-tree-formed-and-tra_9...
I'd do some large scale benchmarks before doubling down on this approach.
This is like semantic version of B+ trees.
Yeah, I strongly agree. I also found in AI coding tools, tree search has replaced vector search. I’m wondering if in generic RAG systems, tree search will replace vector databases?
Tree search hasn't replaced vector search, you can use them synergistically, it's just that vector search is "fiddly" as you have to set up a bunch of stuff to index your repos, manage embeddings, etc and it can use a lot of disk space if you don't use graph representations for your embeddings like LEANN.
This seems really interesting but I can't quite figure out if this is like a SaaS product or an OSS library? The code sample seems to indicate that it uses some sort of "client" to send the document somewhere and then wait to retrieve it later.
But the home page doesn't indicate any sort of sign up or pricing.
So I'm a little confused.
Edit: OK, I found a sign-up flow, but the verification email never came :(
vectorless rag? I think I have one of those in my kitchen
I have page index in my book :)
Lol, you also need one in your computer.
https://en.wikipedia.org/wiki/Retrieval-augmented_generation
This is good for applications where queue-based background RAG is acceptable. You upload a file, set the expectation with the user that processing will take a few hours, and then after X hours you deliver the results. Great for manuals, documentation, and larger content.
But for on-demand, near instant RAG (like say in a chat application), this won't work. Speed vs accuracy vs cost. Cost will be a really big one.
If you have a lot of time, cost on a local machine may be low.
Looks like this should scale spectacularly poorly.
Might be useful for a few hundred documents max though.
This design isn't new, Codanna MCP uses it, and it definitely works (at least when run by the main agent, a dumb subagent might biff it).
It scales as log(N), right? So if you can tolerate it for a few hundred docs you can probably tolerate it for a lot more.
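Back-of-the-envelope, assuming a branching factor b and one LLM call per tree level (a sketch of the scaling claim, not the project's actual cost model):

```python
def llm_calls_per_query(num_docs, branching=10):
    """Levels needed in a b-ary tree over num_docs leaves: one LLM call each.
    Integer arithmetic avoids floating-point log pitfalls."""
    calls, capacity = 1, branching
    while capacity < num_docs:
        capacity *= branching
        calls += 1
    return calls

# A few hundred docs vs. a million: the call count grows slowly.
print(llm_calls_per_query(300))        # 3
print(llm_calls_per_query(1_000_000))  # 6
```

The catch is that each of those calls is an LLM round-trip measured in seconds, whereas a vector lookup over all N docs is one query measured in milliseconds.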
What about latency?
yeah vectors are way more efficient for this
In this approach, the documents need to be pre-processed once to generate a tree structure, which is slower than the current vector-based method. However, during retrieval, this approach only requires conditioning on the context for the LLM and does not require an embedding model to convert the query into vectors. As a result, it can be efficient when the tree is small. When the tree is large, however, this approach may be slower than the vector-based method, since it prioritizes accuracy. If you prioritize speed over accuracy, then I guess you should use a vector DB.
The approach used here for breaking down large documents into summarized chunks that can more easily be reasoned about is how a lot of AI systems deal with large documents that surpass effective context limits in-general, but in my experience this approach will only work up to a certain point and then the summaries will start to hide enough detail that you do need semantic search or another RAG approach like GraphRAG. I think the efficacy of this approach will really fall apart after a certain number of documents.
Would've loved to see the author run experiments on how this compares to other RAG approaches, or on what the limitations of this one are.
Thanks, that’s a great point! That’s why we use the tree structure, which can search layer by layer without putting the whole tree into the context (which would compromise the summary quality). We’ll update with more examples and experiments on this. Thanks for the suggestion!
Can you elaborate on this please?
To put it in terms of data structures, a vector DB is more like a Map, this is more like a Tree
For the C++ programmers among us I think that means it's more like `unordered_map` than `map`
Lol, you mean a vector db is more like `hash_map`; `map` is tree-based.
"Human-like Retrieval: Simulates how human experts navigate and extract knowledge from complex documents." - pretty sure I use control-f when I look for stuff
But different people may have different ways. For example, I use command+F on my MacBook.
LOL, ctrl-F feels like BM25 keyword search
I think it's about how you decide where to press Ctrl+F next.
this is how I as a human retrieve on a computer :)
I let a bot do a free-text search over an indexed database. Works OK. I've also tried keyword-based retrieval and vector search.
I've found all leave something to be desired, sadly.
Unrelated: why is chat search in Claude so bad?
Maybe lost in the context? I guess a tree method can be used to improve that?
Second attempt to get away from vectors and embeddings I’ve seen here recently. Are people really struggling that much with their RAG systems?
Curious about the other attempt you've seen.
Context and prompt engineering are the most important parts of AI, hands down.
There are plenty of lightweight retrieval options that don't require a separate vector database (I'm the author of txtai [https://github.com/neuml/txtai], which is one of them).
It can be as simple as this in Python: you pass an index operation a data generator and save the index to a local folder. Then use that for RAG.
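This isn't txtai's actual API — just the shape of the idea in stdlib-only Python, as a sketch: index a generator of texts into a local folder, then search it with no separate database process running.

```python
import json, os, re
from collections import Counter

def index(data_generator, folder):
    """Build a tiny term-frequency index from a generator of texts
    and save it to a local folder."""
    os.makedirs(folder, exist_ok=True)
    docs = list(data_generator)
    terms = [Counter(re.findall(r"\w+", d.lower())) for d in docs]
    with open(os.path.join(folder, "index.json"), "w") as f:
        json.dump({"docs": docs, "terms": [dict(t) for t in terms]}, f)

def search(query, folder, limit=3):
    """Score docs by query-term frequency; return the top matches,
    ready to be stuffed into an LLM prompt for RAG."""
    with open(os.path.join(folder, "index.json")) as f:
        idx = json.load(f)
    qwords = re.findall(r"\w+", query.lower())
    scored = sorted(
        zip(idx["docs"], idx["terms"]),
        key=lambda dt: sum(dt[1].get(w, 0) for w in qwords),
        reverse=True,
    )
    return [doc for doc, _ in scored[:limit]]
```

txtai itself does the same index-to-a-folder dance but with real embeddings and ANN search under the hood.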
Context and prompt engineering are super automatable. DSPy can automate prompt generation that massively outperforms human prompts, and instead of hand-packing context, you can use IR/ML algorithms to intelligently select the optimal context bundle to produce the desired output.
Context and prompt engineering are going to be replaced by algorithms, 100%.
Yep, context, however you build it.
Strongly agree, I also found txtai is super interesting! Thank you for your open-source effort!
You got it!
Wondering if this is related to llms.txt?