Python + OpenCV: OCR Image Segmentation


I am trying to OCR a toy example of receipts, using Python 2.7 and OpenCV 3.1.

[image: scanned receipt]

My pipeline so far: grayscale + blur + external edge detection + segmentation of each area of the receipt (for example, the "Category" field, see the marked image later; in this case, "Cash").

What I find complicated is, when the image is "skewed", being able to transform it and then "automatically" segment each section of the receipt.

example:

[image: skewed receipt]

Any suggestions?

The code below is an example up to edge detection, for when the receipt looks like the first image. My issue is not the image-to-text step, but the pre-processing of the image.

Any help is much appreciated! :)

```python
import os
os.chdir()  # put your own directory here

import cv2
import numpy as np

image = cv2.imread("rent-receipt.jpg", cv2.IMREAD_GRAYSCALE)

blurred = cv2.GaussianBlur(image, (5, 5), 0)
# blurred = cv2.bilateralFilter(gray, 9, 75, 75)

# apply Canny edge detection
edged = cv2.Canny(blurred, 0, 20)

# find the external contours
(_, contours, _) = cv2.findContours(edged, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
```

A great tutorial on the first step you described is available at pyimagesearch (and they have great tutorials in general).

In short, as described there, you have to use cv2.CHAIN_APPROX_SIMPLE. A more robust method is to use cv2.RETR_LIST instead of cv2.RETR_EXTERNAL and then sort the contours by area; that should work decently on white backgrounds, when the page inscribes a bigger shape in the background, etc.

Coming to the second part of your question, a good way to segment the characters is to use the Maximally Stable Extremal Region (MSER) extractor available in OpenCV. A complete implementation in C++ is available here, in a project I was helping out with recently. A Python implementation would go along the following lines (the code below works for OpenCV 3.0+; for the OpenCV 2.x syntax, check online):

```python
import cv2

img = cv2.imread('test.jpg')
mser = cv2.MSER_create()

# resize the image so MSER can work better
img = cv2.resize(img, (img.shape[1] * 2, img.shape[0] * 2))

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
vis = img.copy()

regions = mser.detectRegions(gray)
hulls = [cv2.convexHull(p.reshape(-1, 1, 2)) for p in regions[0]]
cv2.polylines(vis, hulls, 1, (0, 255, 0))

cv2.namedWindow('img', 0)
cv2.imshow('img', vis)
while cv2.waitKey() != ord('q'):
    continue
cv2.destroyAllWindows()
```

This gives an output like:

[image: MSER regions drawn on the receipt]

Now, to eliminate false positives, you can cycle through the points in the hulls and calculate the perimeter (the sum of the distances between adjacent points in hulls[i], where hulls[i] is the list of points of one convex hull). If the perimeter is too large, classify it as not a character.

The diagonal lines across the image appear because the border of the image is black. They can be removed by adding the following line right after the image is read:

```python
img = img[5:-5, 5:-5, :]
```

which gives the output:

[image: MSER regions after cropping the border]

