- Ultimately, there’s some intersection of accuracy x cost x speed that’s ideal, which can be different per use case. We’ll surface all of those metrics shortly so that you can pick the best model for the job along those axes.
- School transcripts are surprisingly one of the hardest documents to parse. The thing that makes them tricky is (1) the multi-column tabular layouts and (2) the data ambiguity.
Transcript data is usually found in some sort of table, but they’re some of the hardest tables for OCR or LLMs to interpret. There are all kinds of edge cases with tables split across pages, nested cells, side-by-side columns, etc. The tabular layout breaks every off-the-shelf OCR engine we’ve run across (and we’ve benchmarked all of them). To make it worse, there’s no consistency at all (every school in the country basically has its own format).
A couple of things we’ve seen help in these cases:
1. VLM-based review and correction of OCR errors for tables. OCR is still critical for determinism, but VLMs really excel at visually interpreting the long tail.
2. Using both HTML and Markdown as LLM input formats. For some of the edge cases, Markdown cannot represent certain structures (e.g. a table cell nested within a table cell). HTML is a much better representation for this, and models are trained on a lot of HTML data (see the sketch below).
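To make the nested-cell point concrete, here’s a minimal illustration (the transcript data is made up): the inner table is trivial to express in HTML, while standard Markdown table syntax can only flatten it.

```python
# A table cell that itself contains a table: easy in HTML, impossible in
# standard Markdown table syntax. (Illustrative transcript data only.)
nested_cell_html = """
<table>
  <tr><th>Term</th><th>Courses</th></tr>
  <tr>
    <td>Fall 2021</td>
    <td>
      <table>
        <tr><td>MATH 101</td><td>A</td></tr>
        <tr><td>ENG 102</td><td>B+</td></tr>
      </table>
    </td>
  </tr>
</table>
"""

# The best Markdown can do is flatten the inner table into a single cell,
# which throws away the row/column structure the LLM needs to reason over.
flattened_markdown = (
    "| Term | Courses |\n"
    "|---|---|\n"
    "| Fall 2021 | MATH 101: A; ENG 102: B+ |"
)
```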
The data ambiguity is a whole set of problems on its own (e.g. how do you normalize what a "semester" is across all the different ways it can be written). Eval sets + automated prompt engineering can get you pretty far though.
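As a rough illustration of the deterministic half of that normalization (the patterns and canonical names here are made up; the long tail still needs eval-driven prompting or an LLM fallback):

```python
import re

# Hypothetical mapping from the many ways terms appear on transcripts
# to a canonical season name.
TERM_PATTERNS = {
    r"\b(fall|fa|autumn)\b": "FALL",
    r"\b(spring|sp|spr)\b": "SPRING",
    r"\b(summer|su|sum)\b": "SUMMER",
    r"\b(winter|wi|win)\b": "WINTER",
}

def normalize_term(raw: str) -> str | None:
    """Map strings like 'FA 2021' or 'Autumn Semester 2021' to 'FALL 2021'."""
    text = raw.lower()
    year = re.search(r"(19|20)\d{2}", text)
    for pattern, season in TERM_PATTERNS.items():
        if re.search(pattern, text):
            return f"{season} {year.group(0)}" if year else season
    return None  # unrecognized: route to an LLM or human review

print(normalize_term("Autumn Semester 2021"))  # FALL 2021
```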
Disclaimer: I started a LLM doc processing company to help companies solve problems in this space (https://extend.ai/).
- It's very dependent on the use case. That's why we offer a native evals experience in the product, so you can directly measure the % accuracy diffs between the two modes for your exact docs.
As a rule of thumb, light processing mode is great for (1) most classification tasks, (2) splitting on smaller docs, (3) extraction on simpler documents, or (4) latency sensitive use cases.
- Exactly correct! We've had users migrate over from other providers because our granular pricing enabled new use cases that weren't feasible to do before.
One interesting thing we’ve learned is that most production pipelines end up using a combination of the two (e.g. cheap classification and splitting, paired with performance extraction).
- Feedback heard. Pricing is hard, and we've iterated on this multiple times so far.
Our goal is to provide customers with as much transparency & flexibility as possible. Our pricing has 2 axes:
- the complexity of the task
- performance processing vs cost-optimized processing
Complexity matters because e.g. classification is much easier than extraction, and as such it should be cheaper. That unlocks a wide range of use cases, such as tagging and filtering pipelines.
Toggles for performance are also important because not all use cases are created equal. Just as it’s valuable to have a choice between cheaper foundation models and the best ones, the same applies to document tasks.
For certain use cases, you might be willing to take a slight hit to accuracy in exchange for better costs and latency. To support this, we offer a "light" processing mode (with significantly lower prices) that uses smaller models, fewer VLMs, and more heuristics under the hood.
For other use cases, you simply want the highest accuracy possible. Our "performance" processing mode is a great fit for that, which enables layout models, signature detection, handwriting VLMs, and the most performant foundation models.
In fact, most pipelines we see in production end up combining the two (cheap classification and splitting, paired with performance extraction).
Without this level of granularity, we'd either be overcharging certain customers or undercharging others. I definitely understand how this is confusing though, we'll work on making our docs better!
- good question!
Our goal is to provide customers with as much flexibility as possible. For certain use cases, you might be willing to take a slight hit to accuracy in exchange for better costs and latency. To support this, we offer a "light" processing mode (with significantly lower prices) that uses smaller models, fewer VLMs, and more heuristics under the hood.
For other use cases, you simply want the highest accuracy possible. Our "performance" processing mode is a great fit for that, which enables layout models, signature detection, handwriting VLMs, and the most performant foundation models.
We back this up with a native evals experience in the product, so you can directly measure the % accuracy difference between the two modes for your exact use case.
- thanks!
A lot of customers choose us for our handwriting, checkbox, and table performance. To handle complex handwriting, we’ve built an agentic OCR correction layer that uses a VLM to review and correct low-confidence OCR output.
Tables are a tricky beast, and the long tail of edge cases here is immense. A few things we've found to be really impactful are (1) semantic chunking that detects table boundaries (so a table that spans multiple pages doesn't get chopped in half) and (2) table-to-HTML conversion (in addition to markdown). Markdown is great at representing most simple tables, but can't represent cases where you have e.g. nested cells.
You can see examples of both in our demo! https://dashboard.extend.ai/demo
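For anyone curious what that kind of correction loop looks like in general, here’s a simplified sketch (not our exact implementation; `call_vlm` is a placeholder for whatever vision-language model you use):

```python
from dataclasses import dataclass, replace

@dataclass
class OcrSpan:
    text: str
    confidence: float
    bbox: tuple[float, float, float, float]  # x0, y0, x1, y1 in page coordinates

CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff

def review_low_confidence_spans(spans, page_image, call_vlm):
    """Have a VLM re-read only the regions the OCR engine was unsure about."""
    reviewed = []
    for span in spans:
        if span.confidence >= CONFIDENCE_THRESHOLD:
            reviewed.append(span)
            continue
        corrected_text = call_vlm(
            image=page_image,
            region=span.bbox,
            prompt=(
                f"The OCR engine read this region as '{span.text}' with low "
                "confidence. Return only the corrected text."
            ),
        )
        reviewed.append(replace(span, text=corrected_text))
    return reviewed
```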
Accuracy and data verification are challenging. We have a set of internal benchmarks we use, which gets us pretty far, but that’s not always representative of specific customer situations. That’s why one of the earliest things we built was an evaluation product, so that customers can easily measure performance on their exact docs and use cases. We recently added support for LLM-as-a-judge and semantic similarity checks, which have been really impactful for measuring accuracy before going live.
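In its simplest form, an LLM-as-a-judge check is just a per-field comparison with a semantic fallback (a sketch; `llm_judge` is a placeholder callable that answers yes/no):

```python
def score_extraction(predicted: dict, expected: dict, llm_judge) -> dict:
    """Compare a predicted extraction to a labeled example, field by field.

    Exact match first; otherwise fall back to an LLM judge that decides
    whether the two values are semantically equivalent.
    """
    per_field = {}
    for field, expected_value in expected.items():
        predicted_value = predicted.get(field)
        if predicted_value == expected_value:
            per_field[field] = True
        else:
            per_field[field] = llm_judge(
                f"Field: {field}\n"
                f"Expected: {expected_value}\n"
                f"Predicted: {predicted_value}\n"
                "Do these refer to the same value? Answer yes or no."
            )
    accuracy = sum(per_field.values()) / len(per_field) if per_field else 0.0
    return {"per_field": per_field, "accuracy": accuracy}
```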
- There are certainly a lot of tools that focus on individual parts of the problem (e.g. the OCR layer, or workflows on top), but very few that solve the problem end-to-end with enough flexibility for AI teams that want a lot of control over the experience.
For example, we expose options for AI teams to control how chunking works, whether to enable a bounding box citation model, and whether a VLM should correct handwriting errors.
For most customers we speak with, the evaluation is actually between Extend and building it in-house (and we have a pretty good win rate here).
- thanks! Yup that's correct, we offer a set of APIs for handling documents: parsing, classification, splitting, and extraction.
We've seen customers integrate these in a few interesting ways so far:
1. Agents (exposing these APIs as tools in certain cases, or feeding the outputs into a vector DB for RAG; see the sketch after this list)
2. Real-time experiences in their product (e.g. we power all of Brex's user-facing document upload flows)
3. Embedded in internal tooling for back-office automation
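For the agent pattern, a tool definition along these lines is usually all it takes (the parameter names and processor label here are illustrative, not our actual API surface):

```python
# Hypothetical tool schema an agent framework could call.
extract_document_tool = {
    "name": "extract_document",
    "description": (
        "Run structured extraction on a document and return the fields "
        "defined by the given processor."
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "file_url": {"type": "string", "description": "URL of the document"},
            "processor": {"type": "string", "description": "e.g. 'invoice_v2'"},
        },
        "required": ["file_url", "processor"],
    },
}

def handle_extract_document(file_url: str, processor: str) -> dict:
    # In practice this calls the document-processing API and returns the
    # extracted JSON, which the agent can reason over or index for RAG.
    raise NotImplementedError("wire this to your extraction endpoint")
```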
Our customers are already requesting new APIs and capabilities for all the other problems they run into with documents (e.g. fintech customers want fraud detection, healthcare users need form filling). Some of these we'll be rolling out soon!
- There's definitely no shortage of options. OCR has been around for decades at this point, and legacy IDP solutions really proliferated in the last ~10 years.
The world today is quite different though. In the last 24 months, the "TAM" for document processing has expanded by multiple orders of magnitude. In the next 10 years, trillions of pages of documents will be ingested across all verticals.
Previous generations of tools were always limited to the same set of structured/semi-structured documents (e.g. tax forms). Today, engineering teams are ingesting the true wild west of documents, from 500-page mortgage packages to extremely messy healthcare forms. All of those legacy providers fall apart when tackling these genuinely unstructured docs.
We work with hundreds of customers now, and I'd estimate 90% of the use cases we tackle weren't technically solvable until ~12 months ago. So it's nearly all greenfield work, and very rarely replacing an existing vendor or solution already in place.
All that to say, the market is absolutely huge. I do suspect we'll see a plateau in new entrants though (and probably some consolidation of current ones). With how fast the AI space moves, it's nearly impossible to compete if you enter a market just a few months too late.
- Extend | Senior Software Engineer, ML Engineer, AI Engineer | NYC | Full-time | $250k-$350k + equity
Extend is building an LLM-native document processing platform (a massive market where existing solutions have low NPS) (https://extend.ai/)
Apply here: https://jobs.ashbyhq.com/extend
--------
Why you should consider joining:
- High comp (both cash & equity), culture of high impact and ownership, in-person in NYC
- We blew past 7-figures in ARR with a team of 6, and have grown 5x YoY
- Our customers include companies like Zillow, Chime, Flatiron Health, Brex, Mercury, Checkr, and many more
- We're supporting customer and revenue metrics with 1/2 the team size of other startups, so everyone joining at this stage will have outsized impact
- Backed by YC, Homebrew, investors from OpenAI, and more
- Thanks for the reply. Not sure what you're referring to, but I don't believe we've ever copied or taken inspo from you guys on anything — but please do let me know if you feel otherwise.
It’s not a big deal at the end of the day, and I’m excited to see what we can both deliver for customers. Congrats on the launch!
- Founder of Extend (https://www.extend.ai/) here, it's a great question and thanks for the tag. There definitely are a lot of document processing companies, but it's a large market and more competition is always better for users.
In this case, the Reducto team seems to have cloned us down to the small details [1][2], which is a bit disappointing to see. But imitation is the best form of flattery I suppose! We thought deeply about how to build an ergonomic configuration experience for recursive type definitions (which is deceptively complex), and concluded that a recursive spreadsheet-like experience would be the best form factor (which we shipped over a year ago).
> "How do you see the space evolving as LLMs commoditize PDF extraction?"
Having worked with a ton of startups & F500s, we've seen that there's still a large gap for businesses in going from raw OCR outputs -> document pipelines deployed in prod for mission-critical use cases. LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise.
The prompt engineering / schema definition is only the start. You still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it takes time and effort — and that's where we come in. Our goal is to give AI teams all of that tooling on day 1, so they hit accuracy quickly and focus on the complex downstream post-processing of that data.
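The orchestration itself is conceptually simple; the hard part is everything behind each step. A minimal sketch of the shape (classify, split, extract, and needs_review are placeholders for your own models or API calls):

```python
def process_document(pages, classify, split, extract, needs_review):
    """Sketch of a typical pipeline: classify -> split -> extract -> review."""
    results = []
    doc_type = classify(pages)                # e.g. "mortgage_package"
    for sub_doc in split(pages, doc_type):    # e.g. W-2, bank statement, ...
        extraction = extract(sub_doc)         # structured fields + confidences
        if needs_review(extraction):          # low confidence or failed validation
            extraction["status"] = "pending_human_review"
        else:
            extraction["status"] = "auto_approved"
        results.append(extraction)
    return results
```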
- "PDF to Text" is a bit simplified IMO. There's actually a few class of problems within this category:
1. reliable OCR from documents (to index for search, feed into a vector DB, etc)
2. structured data extraction (pull out targeted values)
3. end-to-end document pipelines (e.g. automate mortgage applications)
Marginalia needs to solve problem #1 (OCR), which is luckily getting commoditized by the day thanks to models like Gemini Flash. I've now seen multiple companies replace their OCR pipelines with Flash for a fraction of the cost of previous solutions; it's really quite remarkable.
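For reference, the whole OCR step can be as small as this with Google's google-generativeai SDK (treat it as a sketch: model names and SDK details change quickly, and the filename is just an example):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Upload a PDF via the File API and ask for a plain-text transcription.
doc = genai.upload_file("statement.pdf")
response = model.generate_content(
    [doc, "Transcribe this document to plain text, preserving reading order."]
)
print(response.text)
```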
Problems #2 and #3 are much more tricky. There's still a large gap for businesses in going from raw OCR outputs -> document pipelines deployed in prod for mission-critical use cases. LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise.
You still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it's going to take time and effort. The future is definitely moving in this direction though.
Disclaimer: I started a LLM doc processing company to help companies solve problems in this space (https://extend.ai)
- yeah that's a fun challenge. What we've seen work well is a system that forces the LLM to generate citations for all extracted data, maps those back to the original OCR content, and then generates bounding boxes from there. There are tons of edge cases for sure (we've built a suite of heuristics for them over time), but overall it works really well.
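The core of the mapping step is roughly: ask the model for a verbatim citation alongside each value, find that run of words in the OCR output, and union their boxes. A sketch under an assumed OCR word format (real documents need fuzzy matching on top):

```python
def bbox_for_citation(citation: str, ocr_words: list[dict]) -> dict | None:
    """Find the first contiguous run of OCR words matching the citation and
    return the union of their bounding boxes.

    Assumes each OCR word looks like:
    {"text": "Total", "page": 1, "x0": ..., "y0": ..., "x1": ..., "y1": ...}
    """
    tokens = [t.lower() for t in citation.split()]
    words = [w["text"].lower() for w in ocr_words]
    for i in range(len(words) - len(tokens) + 1):
        if words[i:i + len(tokens)] == tokens:
            run = ocr_words[i:i + len(tokens)]
            return {
                "page": run[0]["page"],
                "x0": min(w["x0"] for w in run),
                "y0": min(w["y0"] for w in run),
                "x1": max(w["x1"] for w in run),
                "y1": max(w["y1"] for w in run),
            }
    return None  # no exact match: fall back to fuzzy-matching heuristics
```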
- Yup definitely, and this is exactly why I built my startup. I've heard this a bunch across startups & large enterprises that we work with. 100% automation is an impossible target, because even humans are not 100% perfect. So how can we expect LLMs to be?
But that doesn't mean you have to abandon the effort. You can still definitely achieve production-grade accuracy! It just requires having the right tooling in place, which reduces the upfront tuning cost. We typically see folks get there on the order of days or 1-2 weeks (it doesn't necessarily need to take months).
- re: real world implications, LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise (especially in domains like medical or legal).
IMO there's still a large gap for businesses in going from raw OCR outputs -> document processing deployed in prod for mission-critical use cases.
e.g. you still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it's going to take time and effort.
But for RAG and other use cases where the error tolerance is higher, I do think these OCR models will get good enough to just solve that part of the problem.
Disclaimer: I started a LLM doc processing company to help companies solve problems in this space (https://extend.ai/)