Python: How to cut out an area with specific color from image (OpenCV, Numpy)

Submitted on 2020-12-30 09:46:36

Question


So I've been trying to write a Python script that takes an image as input and cuts out a rectangle with a specific background color. The problem for my coding skills is that the rectangle is not at a fixed position in every image (the position is essentially random).

I don't really understand how to use the NumPy functions for this. I also read a bit about OpenCV, but I'm totally new to it. So far I have just cropped the images with the ".crop" function, but then I would have to use fixed values.
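
For reference, a fixed-value crop with PIL looks something like this (the box coordinates are just placeholder numbers, not values from my actual images):

from PIL import Image

im = Image.open('image.png')
# crop takes a (left, upper, right, lower) box - these numbers are hard-coded placeholders
box = (100, 220, 352, 329)
im.crop(box).save('cropped.png')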

This is how the input image could look; I would like to detect the position of the yellow rectangle and then crop the image to its size.

Help is appreciated, thanks in advance.

Edit: @MarkSetchell's way works pretty well, but I found an issue with a different test picture. The problem with that picture is that there are two small pixels of the same color at the top and the bottom of the image, which cause errors or a bad crop.


Answer 1:


Updated Answer

I have updated my answer to cope with specks of noisy outlier pixels of the same colour as the yellow box. This works by running a 3x3 median filter over the image first to remove the spots:

#!/usr/bin/env python3

import numpy as np
from PIL import Image, ImageFilter

# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)
orig = na.copy()    # Save original

# Median filter to remove outliers
im = im.filter(ImageFilter.MedianFilter(3))
na = np.array(im)   # Refresh the array so the search runs on the filtered image

# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2))

top, bottom = yellowY[0], yellowY[-1]    # first and last row containing yellow
left, right = yellowX[0], yellowX[-1]    # first and last column containing yellow
print(top,bottom,left,right)

# Extract Region of Interest from unblurred original
ROI = orig[top:bottom, left:right]

Image.fromarray(ROI).save('result.png')

Original Answer

Ok, your yellow colour is rgb(247,213,83), so we want to find the X,Y coordinates of all yellow pixels:

#!/usr/bin/env python3

from PIL import Image
import numpy as np

# Open image and make into Numpy array
im = Image.open('image.png').convert('RGB')
na = np.array(im)

# Find X,Y coordinates of all yellow pixels
yellowY, yellowX = np.where(np.all(na==[247,213,83],axis=2))

# Find first and last row containing yellow pixels
top, bottom = yellowY[0], yellowY[-1]
# Find first and last column containing yellow pixels
left, right = yellowX[0], yellowX[-1]

# Extract Region of Interest
ROI = na[top:bottom, left:right]

Image.fromarray(ROI).save('result.png')


You can do the exact same thing in Terminal with ImageMagick:

# Get trim box of yellow pixels
trim=$(magick image.png -fill black +opaque "rgb(247,213,83)" -format %@ info:)

# Check how it looks
echo $trim
251x109+101+220

# Crop image to trim box and save as "ROI.png"
magick image.png -crop "$trim" ROI.png

If you are still using ImageMagick v6 rather than v7, replace magick with convert.




Answer 2:


What I see is dark and light gray areas on the sides and top, a white area, and a yellow rectangle with gray triangles inside the white area.

The first stage I suggest is converting the image from RGB color space to HSV color space.
The S channel in HSV space is the "color saturation channel":
all colorless pixels (gray/black/white) are zero, and yellow pixels are above zero in the S channel.

Next stages:

  • Apply a threshold on the S channel (convert it to a binary image).
    The yellow pixels go to 255, and the other pixels go to zero.
  • Find contours in thresh (find only the outer contour - only the rectangle).
  • Invert the polarity of the pixels inside the rectangle.
    The gray triangles become 255, and the other pixels become zero.
  • Find contours in thresh again - this time the gray triangles.

Here is the code:

import numpy as np
import cv2

# Read input image
img = cv2.imread('img.png')

# Convert from BGR to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]

# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)

# Find contours in thresh (find only the outer contour - only the rectangle).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] indexing takes return value before last (due to OpenCV compatibility issues).

# Mark rectangle with green line
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)

# Assume there is only one contour, get the bounding rectangle of the contour.
x, y, w, h = cv2.boundingRect(contours[0])

# Invert polarity of the pixels inside the rectangle (on thresh image).
thresh[y:y+h, x:x+w] = 255 - thresh[y:y+h, x:x+w]

# Find contours in thresh (find the triangles).
contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)[-2]  # [-2] indexing takes return value before last (due to OpenCV compatibility issues).

# Iterate triangle contours
for c in contours:
    if cv2.contourArea(c) > 4:  #  Ignore very small contours
        # Mark triangle with blue line
        cv2.drawContours(img, [c], -1, (255, 0, 0), 2)

# Show result (for testing).
cv2.imshow('img', img)
cv2.waitKey(0)
cv2.destroyAllWindows()

S color channel in HSV color space:

thresh - S after threshold:

thresh after inverting polarity of the rectangle:

Result (rectangle and triangles are marked):


Update:

In case there are some colored dots on the background, you can crop the largest colored contour:

import cv2
import imutils  # https://pypi.org/project/imutils/

# Read input image
img = cv2.imread('img2.png')

# Convert from BGR to HSV color space
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Get the saturation plane - all black/white/gray pixels are zero, and colored pixels are above zero.
s = hsv[:, :, 1]

cv2.imwrite('s.png', s)

# Apply threshold on s - use automatic threshold algorithm (use THRESH_OTSU).
ret, thresh = cv2.threshold(s, 0, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)

# Find contours
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnts = imutils.grab_contours(cnts) 

# Find the contour with the maximum area.
c = max(cnts, key=cv2.contourArea)

# Get bounding rectangle
x, y, w, h = cv2.boundingRect(c)

# Crop the bounding rectangle out of img
out = img[y:y+h, x:x+w, :].copy()

Result:




Answer 3:


In OpenCV you can use inRange. This basically makes whatever color is in the range you specified white, and the rest black. That way all the yellow will be white.

Here is the documentation: https://docs.opencv.org/3.4/da/d97/tutorial_threshold_inRange.html
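
A minimal sketch of that approach (assuming the same yellow as the other answers, rgb(247,213,83), which is (83,213,247) in OpenCV's BGR order; the tolerance is an arbitrary choice):

import cv2
import numpy as np

# Read the input image (OpenCV loads it in BGR order)
img = cv2.imread('image.png')

# rgb(247,213,83) expressed as BGR, with a small tolerance on each side
lower = np.array([73, 203, 237])
upper = np.array([93, 223, 255])

# inRange: pixels inside the range become 255 (white), everything else 0 (black)
mask = cv2.inRange(img, lower, upper)

# Bounding box of all white (formerly yellow) pixels, then crop
ys, xs = np.where(mask == 255)
top, bottom = ys.min(), ys.max()
left, right = xs.min(), xs.max()
ROI = img[top:bottom+1, left:right+1]
cv2.imwrite('result.png', ROI)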



Source: https://stackoverflow.com/questions/60780831/python-how-to-cut-out-an-area-with-specific-color-from-image-opencv-numpy
