Canadian Underwriter
Feature

Handwritten data


July 8, 2019   by Greg Meckbach, Associate Editor



Do your brokerage staff spend countless hours reading forms completed by clients and re-typing all that data into the computer?

Some of that work could be automated through artificial intelligence (AI) and machine learning. But machines won’t take over that process completely if your client’s chicken-scratch looks like a doctor’s notes on a prescription.

“With handwriting, everyone has a different ‘E.’ Everyone has a different ‘R,’” says Eyal Barsky, president and CEO of OCR Solutions Inc.


Optical character recognition (OCR) allows computers to read paper documents and enter the text into software. While it may be easy for the computer to recognize typed letters on paper, the same cannot be said for handwriting.

“That is where you have to apply machine learning and more intelligence to a system to [recognize the letter],” says Barsky, who runs a St. Petersburg, Fla.-based scanner reseller and systems integrator.

When software reads handwriting, it’s called intelligent character recognition (ICR). But machine learning cannot recognize handwritten letters even half the time unless they are printed in block letters, one letter per box, suggests Barsky.

Once AI can quickly, easily and accurately figure out what a letter is, insurance professionals can then start to focus on higher-value tasks such as figuring out whether the form has enough information to approve something, says Christopher Wynder, director of product marketing for capture and original equipment manufacturer products at Open Text Corp.

“No one wants to spend their day figuring out if that was an ‘H’ or just somebody’s lazy ‘N,’” Wynder says. “The one thing we know about handwriting is, there is a lot of variability.”

Before he joined the Waterloo, Ont. software giant Open Text, Wynder was a tech consultant. One of his clients was a small brokerage office that received and processed a lot of paper documents. Initially, the brokerage did not use OCR. “They ended up hand-typing everything,” Wynder explains.

But re-typing data from forms is exactly what many brokers and carriers would rather not do.

“The goal is to focus on the high-value things that define whether or not that document provides the necessary information, and whether or not there is enough information to do the approvals,” says Wynder. “These kinds of things are still very much a human task.”

This is where intelligent character recognition and optical mark recognition (OMR) can come into play.

With OMR, the computer will detect whether a certain percentage of pixels within a check box on a form are filled in, says Barsky. This way, the computer can detect whether the client answered yes or no to a question.
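The checkbox logic Barsky describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s implementation: it assumes the scan has already been binarized (1 = dark pixel, 0 = light) and the checkbox’s position on the form is known; the threshold value is an assumption.

```python
# Hypothetical sketch of optical mark recognition (OMR) for one checkbox:
# count the dark pixels inside the box's region of a binarized scan and
# compare the filled fraction against a threshold.

def checkbox_filled(image, top, left, height, width, threshold=0.25):
    """Return True if at least `threshold` of the pixels in the
    checkbox region are dark (i.e., the client marked the box)."""
    dark = sum(
        image[r][c]
        for r in range(top, top + height)
        for c in range(left, left + width)
    )
    return dark / (height * width) >= threshold

# Example: a 4x4 checkbox region with 6 of 16 pixels marked (37.5%).
scan = [
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
print(checkbox_filled(scan, 0, 0, 4, 4))  # True: 0.375 >= 0.25
```

A percentage threshold, rather than requiring every pixel to be dark, is what makes this tolerant of partial checkmarks and stray specks.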

An ICR system has a better chance of correctly reading hand-written forms if people are asked to print in block letters, with each block letter going into a box. It’s even better if the computer knows what data (for example a first or last name) is supposed to go into that box.

With AI, a computer can look at a string of characters and figure out whether it is a word, and whether that word makes sense in context, says Wynder. “That’s where you get operational efficiencies. It doesn’t come from the optical character recognition itself. It comes from the ability to combine optical character recognition with some key word character recognition.”
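The post-processing step Wynder describes can be sketched as snapping a noisy character string to the nearest word the field is allowed to contain. This is a simplified, hypothetical example; the field name and vocabulary are made up, and it uses Python’s standard-library fuzzy matching rather than a production language model.

```python
# Hypothetical sketch of combining raw OCR output with word-level context:
# compare the recognized string against a field-specific vocabulary and
# accept the closest match, or keep the raw text if nothing is close.

from difflib import get_close_matches

# Made-up vocabulary for one form field.
FIELD_VOCAB = {
    "coverage_type": ["collision", "comprehensive", "liability"],
}

def correct_field(raw_text, field):
    """Snap a noisy OCR string to the nearest known word for the field,
    or return it unchanged if no vocabulary word is similar enough."""
    matches = get_close_matches(
        raw_text.lower(), FIELD_VOCAB[field], n=1, cutoff=0.6
    )
    return matches[0] if matches else raw_text

# An OCR engine misreads the second "l" as the digit "1";
# the vocabulary check still recovers the intended word.
print(correct_field("col1ision", "coverage_type"))  # collision
```

Knowing what kind of data belongs in a box, as the article notes, is what makes this correction step possible: a per-field vocabulary turns an ambiguous character into an unambiguous word.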