How to remove black part from the image?

Submitted by 你说的曾经没有我的故事 on 2019-11-28 19:42:33

mevatron's answer is one way, where the amount of black region is minimized while the full image is retained.

Another option is to remove the black region completely. You also lose some part of the image, but the result is a neat-looking rectangular image. Below is the Python code.

Here, you find the three main corners of the image, as below:

I have marked those values: (1,x2), (x1,1), (x3,y3). This is based on the assumption that your image starts from (1,1).

Code:

The first steps are the same as mevatron's: blur the image to remove noise, threshold it, then find contours.

import cv2
import numpy as np

img = cv2.imread('office.jpg')
img = cv2.resize(img,(800,400))

gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray,3)

ret,thresh = cv2.threshold(gray,1,255,cv2.THRESH_BINARY)
# note: in OpenCV 3.x, findContours returns (image, contours, hierarchy);
# in 2.x and 4.x it returns (contours, hierarchy) as used here
contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

Now find the biggest contour, which is your image. This is to avoid noise, in case there is any (most probably there won't be). Or you can use mevatron's method.

max_area = -1
best_cnt = None

for cnt in contours:
    area = cv2.contourArea(cnt)
    if area > max_area:
        max_area = area
        best_cnt = cnt

Now approximate the contour to remove unnecessary points from it, while preserving all the corner values.

approx = cv2.approxPolyDP(best_cnt,0.01*cv2.arcLength(best_cnt,True),True)

Now we find the corners.

First, we find (x3,y3): it is the farthest point, so x3*y3 will be very large. So we take the product of each point's coordinates and select the point with the maximum product.

far = approx[np.prod(approx,2).argmax()][0]

Next, (1,x2): it is the point whose first element is 1 and whose second element is the maximum.

ymax = approx[approx[:,:,0]==1].max()

Next, (x1,1): it is the point whose second element is 1 and whose first element is the maximum.

xmax = approx[approx[:,:,1]==1].max()

Now we take the minimum of (far.x, xmax) and of (far.y, ymax):

x = min(far[0],xmax)
y = min(far[1],ymax)
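To see what the indexing above actually does, the corner selection can be run on a small hypothetical contour; the coordinates below are invented for illustration, but the layout is the same `(N, 1, 2)` shape that `findContours` and `approxPolyDP` produce:

```python
import numpy as np

# hypothetical approximated contour in findContours layout: shape (N, 1, 2),
# each entry an (x, y) point -- the coordinates are made up for illustration
approx = np.array([[[1, 1]], [[1, 350]], [[700, 1]], [[650, 300]]])

far = approx[np.prod(approx, 2).argmax()][0]   # max x*y -> the far corner (x3, y3)
ymax = approx[approx[:, :, 0] == 1].max()      # largest value among points with x == 1
xmax = approx[approx[:, :, 1] == 1].max()      # largest value among points with y == 1

x = min(far[0], xmax)
y = min(far[1], ymax)
print(x, y)   # -> 650 300, the crop corner opposite (1, 1)
```

The boolean masks work because for points with x == 1 the maximum of both coordinates is the y value, and vice versa.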

If you draw a rectangle from (1,1) to (x,y), you get the result below:

So crop the image to that rectangular area:

img2 = img[:y,:x].copy()

Below is the result:

See, the problem is that you lose some parts of the stitched image.

You can do this with threshold, findContours, and boundingRect.

So here is a quick script doing this with the Python interface.

import cv2

stitched = cv2.imread('stitched.jpg', 0)
(_, mask) = cv2.threshold(stitched, 1.0, 255.0, cv2.THRESH_BINARY)

# findContours destroys its input
temp = mask.copy()
(contours, _) = cv2.findContours(temp, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# sort contours largest first (if there is more than one)
contours = sorted(contours, key=cv2.contourArea, reverse=True)
roi = cv2.boundingRect(contours[0])

# use the roi to crop the original 'stitched' image;
# note that boundingRect returns (x, y, w, h)
cropped = stitched[roi[1]:roi[1] + roi[3], roi[0]:roi[0] + roi[2]]

Ends up looking like this:

NOTE: Sorting may not be necessary with raw imagery, but using the compressed image caused some compression artifacts to show up at a low threshold, which is why I post-processed with sorting.

Hope that helps!
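As a side note, if OpenCV is not at hand, the same bounding rectangle of the non-black pixels can be computed with plain NumPy; a minimal sketch on a made-up mask (the array and sizes are illustrative only):

```python
import numpy as np

# toy mask standing in for the thresholded stitched image: a white
# rectangle on black; the sizes are made up for illustration
mask = np.zeros((50, 80), dtype=np.uint8)
mask[10:40, 5:60] = 255

ys, xs = np.nonzero(mask)                  # coordinates of all non-black pixels
x, y = xs.min(), ys.min()
w, h = xs.max() - x + 1, ys.max() - y + 1  # same (x, y, w, h) layout as cv2.boundingRect
cropped = mask[y:y + h, x:x + w]
print(x, y, w, h)   # -> 5 10 55 30
```

Note the crop slices are y:y+h and x:x+w, since the rectangle is given as a corner plus a width and height, not as two corners.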

You can use active contours (balloons/snakes) to select the black region accurately. A demonstration can be found here. Active contours are available in OpenCV; check cvSnakeImage.
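As a caveat, cvSnakeImage belongs to the legacy C API and is gone from recent OpenCV releases. To illustrate the idea only, here is a toy greedy active-contour step in plain NumPy; this is not OpenCV's implementation, and the edge map and parameters in the sketch are invented:

```python
import numpy as np

def greedy_snake_step(points, edge_map, alpha=0.1):
    """One greedy iteration of a toy snake: each point moves to the 3x3
    neighbour minimising alpha * spacing-to-previous-point + (1 - edge strength)."""
    h, w = edge_map.shape
    new_pts = points.copy()
    for i in range(len(points)):
        prev = new_pts[i - 1]                 # wraps to the last point for i == 0
        best, best_e = tuple(points[i]), None
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                y, x = points[i][0] + dy, points[i][1] + dx
                if not (0 <= y < h and 0 <= x < w):
                    continue
                cont = np.hypot(y - prev[0], x - prev[1])   # continuity term
                e = alpha * cont + (1.0 - edge_map[y, x])   # edge attraction term
                if best_e is None or e < best_e:
                    best, best_e = (y, x), e
        new_pts[i] = best
    return new_pts
```

A real snake also needs a curvature term and careful parameter tuning; skimage.segmentation.active_contour in scikit-image is a maintained alternative if that library is available.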
