Question
This is the result I get from my code (a screenshot of the mask drawn over my face).
I've made this mask over my face using a contour, as shown in the code below.
The final goal of the project is to delete the face and show the background behind it, which I have not defined yet.
My question is: is there any way to build a mask from this contour so that I could use something like cv2.imshow('My Image', cmb(foreground, background, mask)) to show only the foreground under the mask on top of the background? (The problem is that this approach requires the mask as a video in that form, but I want it to work in real time.)
Or, the other way around, could I somehow delete the pixels of the frame inside (or under) my contour?
This is my code:
from imutils.video import VideoStream
from imutils import face_utils
import datetime
import argparse
import imutils
import time
import dlib
import cv2
import numpy as np
# parse the command line arguments (path to the facial landmark
# predictor and an optional Raspberry Pi camera flag)
ap = argparse.ArgumentParser()
ap.add_argument("-p", "--shape-predictor", required=True)
ap.add_argument("-r", "--picamera", type=int, default=-1)
args = vars(ap.parse_args())
print("[INFO] loading facial landmark predictor...")
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])
# grab the indexes of the facial landmarks
(lebStart, lebEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eyebrow"]
(rebStart, rebEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eyebrow"]
(jawStart, jawEnd) = face_utils.FACIAL_LANDMARKS_IDXS["jaw"]
# initialize the video stream and allow the camera sensor to warm up
print("[INFO] camera sensor warming up...")
vs = VideoStream(usePiCamera=args["picamera"] > 0).start()
time.sleep(2.0)
# loop over the frames from the video stream
while True:
    # grab the frame from the threaded video stream, resize it to have a
    # maximum width of 400 pixels, and convert it to grayscale
    frame = vs.read()
    frame = imutils.resize(frame, width=400)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detect faces in the grayscale frame
    rects = detector(gray, 0)
    # loop over the face detections
    for rect in rects:
        shape = predictor(gray, rect)
        shape = face_utils.shape_to_np(shape)
        # extract the jaw and eyebrow coordinates that outline the face
        faceline = shape[jawStart:lebEnd]
        # compute the convex hull of the face outline and visualize it
        facelineHull = cv2.convexHull(faceline)
        mask = np.zeros(frame.shape, dtype='uint8')
        cv2.drawContours(frame, [facelineHull], -1, (0, 0, 0), thickness=cv2.FILLED)
        cv2.drawContours(frame, [facelineHull], -1, (0, 255, 0))
    # show the frame
    cv2.imshow("Frame", frame)
    # cv2.imshow("Frame", mask)
    key = cv2.waitKey(1) & 0xFF
    # if the `q` key was pressed, break from the loop
    if key == ord("q"):
        break
# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()
Answer 1:
Assuming your mask is a binary mask, you could do the following:
def cmb(foreground, background, mask):
    result = background.copy()
    result[mask] = foreground[mask]
    return result
I did not test this code, but I hope the idea comes across: you invert the mask for the background and leave it unchanged for the foreground. Apply this to every frame and voilà, you have your masked images.
Edit: adjusted the code according to the comment. That solution is of course much cleaner than what I originally wrote; the functionality stays the same, though.
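For reference, here is an untested sketch of how such a boolean mask could be built from the convex hull in the question and plugged into cmb(). The background image is an assumption (the question says it is not defined yet) and would have to be resized to frame.shape beforehand:
# untested sketch: build a boolean face mask from the convex hull and
# composite the live frame over an arbitrary `background` image that has
# already been resized to frame.shape
hull_mask = np.zeros(frame.shape[:2], dtype='uint8')
cv2.fillConvexPoly(hull_mask, facelineHull, 255)   # white inside the face hull
face_mask = hull_mask.astype(bool)                 # True where the face is
# keep the live frame everywhere except the face, so the background
# shows through where the face was deleted
combined = cmb(frame, background, ~face_mask)
cv2.imshow('My Image', combined)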
Answer 2:
Here is my solution for deleting the face from the frame (it is faster, but thanks again to @meetaig for the help):
# draw the filled convex hull on a black canvas, then invert it so the
# face region becomes black and everything else white
mask = np.zeros(frame.shape, dtype='uint8')
mask = cv2.drawContours(mask, [facelineHull], -1, (255, 255, 255), thickness=cv2.FILLED)
mask = cv2.bitwise_not(mask)
# convert to a single-channel binary mask and keep only the pixels
# outside the face
img2gray = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
ret, mask = cv2.threshold(img2gray, 10, 255, cv2.THRESH_BINARY)
result = cv2.bitwise_and(frame, frame, mask=mask)
If I now show result, it works:
cv2.imshow("Frame", result)
Source: https://stackoverflow.com/questions/52043621/create-a-mask-and-delete-inside-contour-in-opencv-python