So if I understand this correctly, this works on a single large document whose size exceeds what you can or want to put into a single context window for answering a question? It first "indexes" the document by feeding successive "proto-chunks" to an LLM along with an accumulator, which is like a running table of contents into the document, with "sections" that the indexer LLM decides on and summarizes, until the table of contents is complete. (What we're calling "sections" here are still "chunks"; they're just not a fixed size and are decided on by the indexer at build time?)
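
Something like this toy sketch is how I'm picturing the indexing pass (all names and prompts here are my own invention, not the actual API; "llm" is just any prompt-in, text-out callable):

    import json

    def split_into_proto_chunks(text, chunk_chars=8000):
        # Naive fixed-size proto-chunks just for illustration; presumably the real
        # splitter respects token budgets and structural boundaries.
        return [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]

    def build_toc(document_text, llm):
        toc = []  # the accumulator: a running table of contents
        for proto_chunk in split_into_proto_chunks(document_text):
            prompt = (
                "Table of contents so far (JSON list of sections):\n"
                f"{json.dumps(toc, indent=2)}\n\n"
                f"Next part of the document:\n{proto_chunk}\n\n"
                "Return the updated table of contents as a JSON list; each section "
                "needs: title, summary, start_offset, end_offset."
            )
            # The indexer LLM decides where sections begin and end and summarizes them.
            toc = json.loads(llm(prompt))
        return toc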

Then for the retrieval stage, it presents the table of contents to a "retriever" LLM, which decides which sections are relevant to the question based on the summaries the indexer LLM created. Then for the answer generation stage, it just presents those relevant sections along with the question.
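
And then the retrieval and answer-generation stages, continuing the same made-up sketch:

    import json

    def answer_question(question, toc, document_text, llm):
        # Retrieval: the retriever LLM only sees section titles and summaries.
        toc_view = "\n".join(
            f"[{i}] {s['title']}: {s['summary']}" for i, s in enumerate(toc)
        )
        picked = json.loads(llm(
            f"Question: {question}\n\nTable of contents:\n{toc_view}\n\n"
            "Return a JSON list of the indices of the relevant sections."
        ))
        # Answer generation: only the chosen sections' text goes into the prompt.
        context = "\n\n".join(
            document_text[toc[i]["start_offset"]:toc[i]["end_offset"]] for i in picked
        )
        return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")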

That's pretty clever - does it work with a corpus of documents as well, or just a single large document? Does the "indexer" know the question ahead of time, or is the creation of sections and section summarization supposed to be question-agnostic? What if your table of contents gets too big? Seems like then it just becomes normal RAG, where you have to store the summaries and document-chunk pointers in some vector or lexical database?


Exactly — thanks for the insightful comments! The goal is to generate an “LLM-friendly table of contents” for retrieval, rather than relying on vector-based semantic search. We think it’s closer to how humans approach information retrieval. The table of contents also naturally produces semantically coherent sections instead of arbitrary fixed-size chunks.

- Corpus of documents: Yes, this approach generalizes. For multiple documents, you can first filter by metadata or document-level summaries, and then build an index per document. The key is that the metadata (or doc-level summaries) helps distinguish and route queries across documents (there’s a rough routing sketch after this list). We have some examples here: https://docs.pageindex.ai/doc-search

- Question-agnostic indexing: The indexer does not know the question in advance. It builds the tree index once, and that structure can then be stored in a standard SQL database and reused at query time. In practice, we store the tree structure as JSON and keep (node_id, node_text) in a separate table; when the LLM returns a node_id, we look up the corresponding node_text to form the context (see the storage sketch after this list). There is no need for a vector DB.

- Handling large tables of contents: If the TOC gets too large, you can traverse the tree hierarchically, starting from the top level and drilling down only into relevant branches (see the traversal sketch after this list). That’s why we use a tree structure rather than just a flat list of sections. This is what makes it different from traditional RAG with flat chunking. In spirit, it’s closer to a search-over-tree approach, somewhat like how AlphaGo handled large search spaces.
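
Very roughly, the document-level routing mentioned above looks something like this (a toy sketch, not our actual code):

    import json

    def route_to_documents(question, doc_summaries, llm):
        # doc_summaries: {doc_id: metadata or a short document-level summary}
        listing = "\n".join(
            f"[{doc_id}] {summary}" for doc_id, summary in doc_summaries.items()
        )
        # The LLM narrows the corpus to a few documents; each document's own
        # tree index is then searched as in the single-document case.
        return json.loads(llm(
            f"Question: {question}\n\nDocuments:\n{listing}\n\n"
            "Return a JSON list of the ids of documents likely to contain the answer."
        ))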
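
To make the storage point concrete, here is a toy version with SQLite (illustrative table and column names, not our actual schema):

    import json
    import sqlite3

    conn = sqlite3.connect("pageindex_demo.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS trees (doc_id TEXT PRIMARY KEY, tree_json TEXT)"
    )
    conn.execute(
        "CREATE TABLE IF NOT EXISTS nodes ("
        "doc_id TEXT, node_id TEXT, node_text TEXT, PRIMARY KEY (doc_id, node_id))"
    )

    def save_index(doc_id, tree, node_texts):
        # tree: nested dict of {node_id, title, summary, children: [...]}
        # node_texts: {node_id: full text of that section}
        conn.execute(
            "INSERT OR REPLACE INTO trees VALUES (?, ?)", (doc_id, json.dumps(tree))
        )
        conn.executemany(
            "INSERT OR REPLACE INTO nodes VALUES (?, ?, ?)",
            [(doc_id, nid, text) for nid, text in node_texts.items()],
        )
        conn.commit()

    def get_node_text(doc_id, node_id):
        # Map a node_id chosen by the LLM back to the text used as context.
        row = conn.execute(
            "SELECT node_text FROM nodes WHERE doc_id = ? AND node_id = ?",
            (doc_id, node_id),
        ).fetchone()
        return row[0] if row else None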
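
And a toy sketch of the hierarchical traversal: at each level, the LLM only sees the titles and summaries of the immediate children and picks which branches to expand.

    import json

    def select_nodes(question, root, llm, max_nodes=8):
        # root: {"node_id": ..., "title": ..., "summary": ..., "children": [...]}
        selected, frontier = [], [root]
        while frontier and len(selected) < max_nodes:
            node = frontier.pop(0)
            children = node.get("children") or []
            if not children:
                selected.append(node["node_id"])  # leaf: candidate context node
                continue
            listing = "\n".join(
                f"[{i}] {c['title']}: {c['summary']}" for i, c in enumerate(children)
            )
            picked = json.loads(llm(
                f"Question: {question}\n\nSubsections:\n{listing}\n\n"
                "Return a JSON list of the indices of subsections worth expanding."
            ))
            frontier.extend(children[i] for i in picked)
        return selected  # node_ids to look up and stitch into the answer context

The point is that the prompt at each step only contains one level of the tree, so it stays small even when the full TOC would not fit.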

Really appreciate the thoughtful questions again! We’re actually preparing some upcoming notebooks that will address them in more detail. Stay tuned!

> That’s why we use a tree structure rather than just a flat list of sections. This is what makes it different from traditional RAG

Ah ok, that’s a key piece I was missing. That’s really cool, thanks!
