A ton of document parsing solutions have come out lately, each claiming SOTA with little evidence. Many of them turn out to be thin LLM or LVM wrappers that hallucinate frequently on complex tables.
We just released RD-TableBench, an open benchmark to help teams evaluate extraction performance for complex tables. The benchmark covers a variety of challenging scenarios, including scanned tables, handwriting, language detection, merged cells, and more.
We employed an independent team of PhD-level human labelers who manually annotated 1000 complex table images from a diverse set of publicly available documents.
Alongside this, we're also releasing a new bioinformatics-inspired algorithm for grading table similarity. Would love to hear any feedback!
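For the curious, here's a minimal sketch of the general idea: treat the cells of a table as a sequence and run a Needleman-Wunsch-style global alignment over predicted vs. ground-truth cells, the way you'd align two DNA sequences. The scoring scheme and normalization below are illustrative assumptions, not the exact algorithm we ship:

    # Hypothetical sketch: Needleman-Wunsch-style global alignment applied
    # to table cells instead of nucleotides. Scores, penalties, and the
    # normalization are illustrative assumptions, not RD-TableBench's
    # actual grading algorithm.

    def align_score(a: list[str], b: list[str],
                    match=1.0, mismatch=-1.0, gap=-0.5) -> float:
        """Global alignment score between two sequences of cell strings."""
        n, m = len(a), len(b)
        # dp[i][j] = best score aligning a[:i] with b[:j]
        dp = [[0.0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            dp[i][0] = dp[i - 1][0] + gap
        for j in range(1, m + 1):
            dp[0][j] = dp[0][j - 1] + gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = match if a[i - 1].strip() == b[j - 1].strip() else mismatch
                dp[i][j] = max(
                    dp[i - 1][j - 1] + sub,  # cell match/mismatch
                    dp[i - 1][j] + gap,      # gap: cell missing from b
                    dp[i][j - 1] + gap,      # gap: spurious cell in b
                )
        return dp[n][m]

    def table_similarity(pred: list[list[str]],
                         truth: list[list[str]]) -> float:
        """Align flattened cells; normalize to [0, 1] by the best
        achievable score (every ground-truth cell matched exactly)."""
        pred_cells = [c for row in pred for c in row]
        truth_cells = [c for row in truth for c in row]
        if not truth_cells:
            return 1.0 if not pred_cells else 0.0
        raw = align_score(pred_cells, truth_cells)
        return max(0.0, raw / len(truth_cells))

    if __name__ == "__main__":
        truth = [["Name", "Qty"], ["Widget", "3"]]
        pred = [["Name", "Qty"], ["Widget", "8"]]  # one hallucinated value
        print(f"similarity: {table_similarity(pred, truth):.2f}")  # 0.75-ish scale

The nice property of alignment over naive cell-by-cell comparison is that gap penalties let a single inserted or dropped cell shift the rest of the table without zeroing out every downstream match.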
-Raunak
I'd encourage you to take a look at some of our data points to compare for yourself! Link: huggingface.co/spaces/reducto/rd_table_bench
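If you'd rather pull the examples down locally than browse the Space UI, here's a quick sketch using huggingface_hub. The repo id comes straight from the link above; the internal file layout is an assumption, so walk the snapshot to see what's actually there:

    # Sketch: download the RD-TableBench Space contents locally to browse
    # the example tables. Repo id taken from the link above; the file
    # layout inside is an assumption -- inspect what gets fetched.
    import os

    from huggingface_hub import snapshot_download

    local_dir = snapshot_download(
        repo_id="reducto/rd_table_bench",
        repo_type="space",  # the link points at a HF Space, not a dataset repo
    )

    # Print everything in the snapshot so you can locate images/annotations.
    for root, _dirs, files in os.walk(local_dir):
        for name in files:
            print(os.path.join(root, name))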
In terms of the overall importance of table extraction, we've found it to be a key bottleneck for teams doing document parsing. It's up there among the hardest problems in the space, alongside complex form region parsing. I don't have exact statistics handy, but I'd estimate that ~25% of the pages we parse contain at least one hairy table!
We constantly see alternatives show off one ideal table to claim they're accurate. Being able to parse some tables isn't hard.
What happens when a table has merged cells, dense text, rotation, or no gridlines? Will your outputs be the same when a user uploads the same document twice?
Our team is relentlessly focused on handling the full range of scenarios so our customers don't have to. Excited to share more about our next-gen models soon.
(To summarize, the core challenge appears to be recognizing nested columnar layouts combined with odd line wrapping within those columns.)
Is there anyone I can submit a few example pages to for consideration in the benchmark?