Image Processing With Deep Learning: A Quick Start Guide

Infrrd
5 min read · Apr 15, 2021

Imagine how much more valuable your data would be to your business if your document-intake solution could extract data from images as seamlessly as it does from text.

Thanks to deep learning, intelligent document processing (IDP) can combine various AI technologies to not only classify photos automatically, but also identify the elements in a picture and describe each segment in short, grammatically correct sentences.

IDP leverages a type of deep learning network known as a convolutional neural network (CNN) to learn the patterns that naturally occur in photos. IDP can then adapt as new data is processed, drawing on ImageNet, one of the largest databases of labeled images, which has been instrumental in advancing computer vision.
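To make that concrete, here is a minimal sketch of classifying a single photo with an ImageNet-pretrained CNN using PyTorch/torchvision. The file name and model choice are illustrative only, not Infrrd's production pipeline.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, center-crop, normalize with ImageNet statistics
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# ResNet-50 pre-trained on ImageNet (torchvision >= 0.13 weights API)
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

img = Image.open("photo.jpg").convert("RGB")   # placeholder file name
batch = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = torch.topk(probs, k=5)
print(top5.indices.tolist(), top5.values.tolist())  # top-5 ImageNet class ids and scores
```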

One area where this technology has real impact is the document-heavy insurance industry, where claims processing starts with a small army of people manually entering data from forms.

In a typical use case, a claim includes a set of documents such as claim forms, police reports, pictures of the accident scene and vehicle damage, the vehicle operator’s driver’s license, a copy of the insurance policy, bills, invoices, and receipts. These documents are not standardized, and the business systems that automate most of the claims process can’t function without the data they contain.

To turn those documents into data, convolutional neural networks are trained using GPU-accelerated deep learning frameworks such as Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PaddlePaddle, PyTorch, and TensorFlow, along with inference optimizers such as TensorRT.
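As a rough illustration, here is a minimal PyTorch sketch of GPU-accelerated training. The number of classes is illustrative, `train_loader` is assumed to yield labeled image batches, and nothing here reflects the actual production training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

num_classes = 2                                   # e.g. claim form vs. damage photo (illustrative)
model = models.resnet18(weights=None)             # train from scratch on document images
model.fc = nn.Linear(model.fc.in_features, num_classes)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# `train_loader` is assumed to yield (images, labels) batches of document images
for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```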

Deep learning, built on artificial neural networks, is a subset of machine learning that uses a model of computing inspired by the structure of the brain. Deep neural networks were first applied to speech recognition around 2009 and were put into production by Google in 2012.

“Deep learning is already working in Google search and in image search; it allows you to image-search a term like ‘hug.’ It’s used to get you Smart Replies in your Gmail. It’s in speech and vision. It will soon be used in machine translation, I believe,” said Geoffrey Hinton, widely considered the godfather of neural networks.

Deep learning models, with their multi-level structures, are very helpful in extracting complicated information from input images. Convolutional neural networks can also drastically reduce computation time by taking advantage of GPUs, which many other approaches fail to utilize.

Let’s take a deeper dive into IDP’s image data preparation using deep learning. Preparing images for further analysis is necessary for better local and global feature detection, which is how IDP enables straight-through processing and drives ROI for your business. Here are the steps:

IMAGE CLASSIFICATION:

For accuracy, image classification with a CNN is the most effective approach. First and foremost, your IDP solution will need a set of images; in this case, images of beauty and pharmacy products are used as the initial training data set. The most common image data input parameters are the number of images, the image dimensions, the number of channels, and the number of levels per pixel.
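Purely as an illustration of those input parameters, here is a small loading sketch in PyTorch/torchvision; the `data/train` directory layout and batch size are hypothetical.

```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # image dimensions
    transforms.ToTensor(),           # 3 channels; 256 levels per pixel scaled to [0.0, 1.0]
])

# Hypothetical layout: data/train/beauty/*.jpg and data/train/pharmacy/*.jpg
train_set = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

print(len(train_set))       # number of images
print(train_set.classes)    # ['beauty', 'pharmacy']
images, labels = next(iter(train_loader))
print(images.shape)         # torch.Size([32, 3, 224, 224]) -> (batch, channels, height, width)
```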

With classification, you can categorize images (in this case, as beauty or pharmacy). Each category, in turn, has different classes of objects, as shown in the picture below:

DATA LABELING:

It’s better to manually label the input data so that the deep learning algorithm can eventually learn to make predictions on its own; several off-the-shelf manual data labeling tools are available for this. The objective at this point is mainly to identify the actual object or text in a particular image, to mark whether the word or object is improperly oriented, and to identify whether any script present is in English or another language.
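As a hypothetical illustration of what one manually created label record might contain (this is not the schema of any particular labeling tool):

```python
import json

# One possible record produced by manual labeling (illustrative structure only)
annotation = {
    "image": "pharmacy_0001.jpg",         # hypothetical file name
    "objects": [
        {
            "label": "product",
            "bbox": [34, 120, 410, 560],  # x_min, y_min, x_max, y_max in pixels
            "rotated": False,             # is the object or text improperly oriented?
            "script": "english",          # language of any visible text
        }
    ],
}

print(json.dumps(annotation, indent=2))
```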

To automate the tagging and annotation of images, NLP pipelines can be applied. ReLU (rectified linear unit) is then used as the non-linear activation function, as it performs well and decreases training time.
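For reference, a typical convolutional block with ReLU as the activation might look like this minimal PyTorch sketch:

```python
import torch.nn as nn

# A typical convolutional block: convolution -> ReLU activation -> pooling
conv_block = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),        # non-linear activation; cheap to compute, speeds up training
    nn.MaxPool2d(kernel_size=2),  # downsample the feature maps
)
```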

To enlarge the training dataset, we can also use data augmentation: copying the existing images and transforming them. We could shrink the available images, enlarge them, crop elements, and so on.
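One illustrative way of doing this is with torchvision transforms; the specific augmentations and parameters below are only examples.

```python
from torchvision import transforms

# Random crops, rescaling, flips, and color changes applied on the fly during training
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),  # crop elements and rescale
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```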

USING R-CNN:

With a Region-based Convolutional Neural Network (R-CNN), the locations of objects in an image can be detected with ease. Within just three years, R-CNN evolved through Fast R-CNN and Faster R-CNN to Mask R-CNN, making tremendous progress toward human-level understanding of images. Below is an example of the final output of an image recognition model trained with a deep CNN to identify categories and products in images.

(Figures: example outputs for category detection and product detection.)
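For readers who want to experiment, here is a minimal sketch of object detection with a pre-trained Faster R-CNN from torchvision. It detects generic COCO classes rather than beauty or pharmacy products, which would require fine-tuning on labeled product images as described above, and the input file name is hypothetical.

```python
import torch
from torchvision import transforms
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)
from PIL import Image

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

img = Image.open("shelf.jpg").convert("RGB")    # hypothetical input image
tensor = transforms.ToTensor()(img)

with torch.no_grad():
    prediction = model([tensor])[0]             # boxes, labels, scores for one image

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:                             # keep only confident detections
        print(weights.meta["categories"][int(label)], box.tolist(), float(score))
```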

If you are new to deep learning methods and don’t want to train your own model, you could have a look at Google Cloud Vision. It works pretty well for general cases. If you are looking for a specific IDP solution or customization, our ML experts will ensure your time and resources are well spent in partnering with us.
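As a minimal sketch of trying Google Cloud Vision’s label detection with its Python client (assuming GCP credentials are already configured in the environment and `claim_photo.jpg` is a placeholder file name):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("claim_photo.jpg", "rb") as f:        # hypothetical input image
    content = f.read()

# Ask the API for labels describing the image content
response = client.label_detection(image=vision.Image(content=content))
for label in response.label_annotations:
    print(label.description, round(label.score, 2))
```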

Chat with us at www.infrrd.ai or schedule a demo to learn more about how IDP can drive business value from your data.

Originally published at https://www.infrrd.ai.


Infrrd

Infrrd has been offering AI as a Service since its inception. Its focus is on developing a faster enterprise AI platform using AI, ML & NLP: https://infrrd.ai