Comment on Why extracting data from PDFs is still a nightmare for data experts
adespoton@lemmy.ca 3 weeks ago
This is silly.
PDF is a Portable Document Format. It replaced Encapsulated PostScript as a document storage medium. It’s not all that different from a more highly structured zip archive, designed to structure how the layout and metadata are stored as well as the content.
It has a spec, and that spec includes accessibility features.
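For a sense of what that structure looks like, here’s a minimal sketch using the Python pypdf library (the filename `example.pdf` is just a placeholder): the metadata, the page tree, and any real text content are all addressable objects, not a flat image.

```python
from pypdf import PdfReader

reader = PdfReader("example.pdf")

# Document information dictionary: title, author, producer, dates, ...
print(reader.metadata)

# The page tree is an ordered, structured collection of page objects.
print(f"{len(reader.pages)} pages")

# If the PDF actually contains text objects (rather than just scanned
# images), the text can be pulled straight out of the content streams.
print(reader.pages[0].extract_text())
```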
The problem is how many people use it: they take a bunch of images of varying quality and place them on virtual letter-sized pages, sometimes interspersed with form fields and scripts.
A properly formatted accessible PDF is possible these days with tools available on any computer; these are compact, human and machine readable, and rich in searchable metadata.
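And it’s easy to check whether a given file is actually tagged for accessibility. A rough sketch, again with pypdf and a placeholder filename: per the PDF spec, a tagged document carries a /MarkInfo entry and a structure tree (/StructTreeRoot) in its catalog.

```python
from pypdf import PdfReader

reader = PdfReader("example.pdf")      # placeholder filename
root = reader.trailer["/Root"]         # the document catalog

# A tagged PDF sets /MarkInfo {/Marked: true} and provides a structure
# tree that screen readers and text extractors can walk in logical order.
print("MarkInfo:", root.get("/MarkInfo"))
print("Structure tree present:", "/StructTreeRoot" in root)
```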
Complaining about inaccessible PDFs is sort of like complaining about those people who use Excel as a word processor.
So, with that out of the way… on to the sales pitch: “use AI to free the data!”
Well, I’m sorry, but most PDF distillers since the ’90s have come with OCR software that can extract text from the images and store it in a way that preserves the layout AND the meaning. All that the modern AI is doing is accomplishing old tasks in new ways with the latest buzzwords.
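That workflow is trivial to reproduce today without any “AI” branding. A minimal sketch using Tesseract through the pytesseract wrapper (the scanned page `scan.png` is a placeholder): it produces a searchable PDF in which the recognized text sits as an invisible layer over the original image, so the layout is preserved.

```python
import pytesseract

# OCR the scanned page and wrap the result as a searchable PDF: the image
# stays as-is and the recognized words become an invisible text layer on top.
pdf_bytes = pytesseract.image_to_pdf_or_hocr("scan.png", extension="pdf")
with open("scan_searchable.pdf", "wb") as f:
    f.write(pdf_bytes)

# extension="hocr" instead returns hOCR/XML with word-level bounding boxes,
# which keeps both position and reading order for downstream processing.
```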
Remember when PDFs moved “to the cloud?” Or to mobile devices? Remember when someone figured out how to embed blockchain data in them? Use them as NFTs? When they became “web enabled?” “Flash enabled?”
PDF, as a container file, has ridden all the tech trends and kept going as the convenient place to stuff data of different formats that had to display the same way everywhere.
It will likely still be around long after the AI hype is long gone.
cmnybo@discuss.tchncs.de 3 weeks ago
OCR makes a lot of mistakes, so unless someone bothered to go through and correct them, it’s only really useful for searching for keywords.
GenderNeutralBro@lemmy.sdf.org 3 weeks ago
The accuracy rate of even the best OCR software is far, far too low for a wide array of potential use cases.
Let’s say I have an archive of a few thousand scientific papers. These are neatly formatted digital documents, not even scanned images (though “scanned images” would be within scope of this task and should not be ignored). Even for that, there’s nothing out there that can produce reliably accurate results. Everything requires painstaking validation and correction if you really care about accuracy.
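To illustrate the validation burden, here’s a rough sketch (assuming pypdf and a hypothetical `papers/` folder) that merely flags pages whose extracted text looks broken, empty output or encoding junk, so a human can review them; the thresholds are arbitrary.

```python
from pathlib import Path
from pypdf import PdfReader

def suspect_pages(path: Path) -> list[int]:
    """Flag pages whose extracted text is empty or full of junk characters."""
    flagged = []
    for i, page in enumerate(PdfReader(str(path)).pages):
        text = page.extract_text() or ""
        junk = text.count("\ufffd")                # replacement chars from broken font maps
        if len(text.strip()) < 50 or junk > 5:     # arbitrary thresholds
            flagged.append(i + 1)
    return flagged

for pdf in Path("papers").glob("*.pdf"):
    if pages := suspect_pages(pdf):
        print(f"{pdf.name}: review pages {pages}")
```

And that only catches the obvious failures; subtle errors in equations, tables, and references still need eyes on them.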
Even ArXiv can’t do a perfect job of this. They launched their “beta” HTML converter a couple of years ago. Improving accuracy and reliability is an ongoing challenge. And that’s with the help of the LaTeX source material! It would naturally be much, much harder if they had to rely solely on the PDFs generated from that LaTeX. See: info.arxiv.org/about/accessible_HTML.html
As for solving this problem with “AI”…uh…well, it’s not like “OCR” and “AI” are mutually exclusive terms. OCR tools have been using neural networks for a very long time already; it just wasn’t a buzzword back then, so nobody called it “AI”. However, in the current landscape of “AI” in 2025, “accuracy” is usually just a happy accident. It doesn’t need to be that way, and I’m sure the folks behind commercial and open-source OCR tools are hard at work implementing new technology in a way that Doesn’t Suck.
I’ve played around with various VL (vision-language) models, and they still seem to be in the “proof of concept” phase.