- Corpus of documents: Yes, this approach generalizes. For multiple documents, you can first filter by metadata or document-level summaries, and then build an index per document. The key is that the metadata (or doc-level summaries) lets you distinguish documents and route queries across them; one way to wire this up is shown in the traversal sketch after this list. We have some examples here: https://docs.pageindex.ai/doc-search
- Question-agnostic indexing: The indexer does not see the question in advance. It builds the tree index once, and that structure can be stored in a standard SQL database and reused at query time. In practice, we store the tree structure as JSON and keep (node_id, node_text) pairs in a separate table; when the LLM returns a node_id, we look up the corresponding node_text to form the context (a minimal sketch follows this list). No vector DB is needed.
- Handling large tables of contents: If the TOC gets too large, you can traverse the tree hierarchically: start at the top level and drill down only into relevant branches (see the traversal sketch after this list). That's why we use a tree structure rather than a flat list of sections, and it is what makes this different from traditional RAG with flat chunking. In spirit, it is closer to a search-over-tree approach, somewhat like how AlphaGo handled large search spaces.
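
To make the storage point concrete, here is a minimal sketch of the layout described above, assuming SQLite and made-up table names (this is not PageIndex's actual schema): the tree lives as plain JSON, node texts sit in an ordinary SQL table, and query-time retrieval is a simple key lookup.

```python
import json
import sqlite3

# Illustrative storage layout: the tree index is plain JSON, node texts
# live in an ordinary SQL table, and retrieval is a key lookup.
# Table and column names here are made up for the example.
conn = sqlite3.connect("index.db")
conn.execute("CREATE TABLE IF NOT EXISTS trees (doc_id TEXT PRIMARY KEY, tree_json TEXT)")
conn.execute("CREATE TABLE IF NOT EXISTS nodes (node_id TEXT PRIMARY KEY, node_text TEXT)")

def store_index(doc_id: str, tree: dict, nodes: list[tuple[str, str]]) -> None:
    """Persist the tree structure once, at indexing time."""
    conn.execute("INSERT OR REPLACE INTO trees VALUES (?, ?)", (doc_id, json.dumps(tree)))
    conn.executemany("INSERT OR REPLACE INTO nodes VALUES (?, ?)", nodes)
    conn.commit()

def fetch_context(node_ids: list[str]) -> str:
    """Map the node_ids the LLM selected back to raw text for the context."""
    placeholders = ",".join("?" * len(node_ids))
    rows = conn.execute(
        f"SELECT node_text FROM nodes WHERE node_id IN ({placeholders})", node_ids
    ).fetchall()
    return "\n\n".join(text for (text,) in rows)
```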
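
And a sketch of the hierarchical traversal: the LLM sees only one level of the tree at a time and picks which branches to expand. The tree shape and the llm_select helper are assumptions for illustration, not PageIndex's actual API; for a multi-document corpus, the root's children could be the documents themselves, so cross-document routing becomes the first level of the same traversal.

```python
# llm_select is a placeholder for whatever LLM call you use: it is shown
# (node_id, title, summary) triples for one level of the tree and returns
# the node_ids it judges relevant to the question.
def llm_select(question: str, entries: list[dict]) -> list[str]:
    raise NotImplementedError  # wire up your LLM client here

def traverse(question: str, node: dict) -> list[str]:
    """Expand only relevant branches; return the leaf node_ids to retrieve.

    For a multi-document corpus, the root's children can be the documents
    themselves (with doc-level summaries), so routing across documents is
    just the first level of this traversal.
    """
    children = node.get("children", [])
    if not children:
        return [node["node_id"]]  # a leaf is a retrievable section
    chosen = set(llm_select(question, [
        {"node_id": c["node_id"], "title": c["title"], "summary": c["summary"]}
        for c in children
    ]))
    return [
        nid
        for c in children
        if c["node_id"] in chosen
        for nid in traverse(question, c)
    ]
```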
Really appreciate the thoughtful questions again! We’re actually preparing some upcoming notebooks that will address them in more detail, so stay tuned!
Ah ok, that’s a key piece I was missing. That’s really cool, thanks!
Then for the retrieval stage, it presents the table of contents to a "retriever" LLM, which uses the summaries the indexer LLM created to decide which sections are relevant to the question. For the answer generation stage, it just presents those relevant sections along with the question.
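
Roughly like this, as a sketch (the prompt wording, the TOC entry shape, and the call_llm helper are my guesses for illustration, not the actual implementation):

```python
import json

# call_llm is a placeholder for your LLM client of choice.
def call_llm(prompt: str) -> str:
    raise NotImplementedError

def retrieve(question: str, toc: list[dict]) -> list[str]:
    """Show the retriever LLM the TOC (titles + summaries) and ask it to
    pick the relevant sections by node_id."""
    toc_text = "\n".join(
        f"[{e['node_id']}] {e['title']}: {e['summary']}" for e in toc
    )
    prompt = (
        "Given this table of contents, return a JSON array of the node_ids "
        "of sections relevant to the question.\n\n"
        f"Table of contents:\n{toc_text}\n\nQuestion: {question}"
    )
    return json.loads(call_llm(prompt))
```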
That's pretty clever - does it work with a corpus of documents as well, or just a single large document? Does the "indexer" know the question ahead of time, or is the creation of sections and section summaries supposed to be question-agnostic? And what if your table of contents gets too big? It seems like it then just becomes normal RAG, where you have to store the summaries and document-chunk pointers in some vector or lexical database?