Question
I have a set of images, and would like to recursively predict where a bunch of pixels will be in the next image. I am using Python and OpenCV, and believe Kalman filtering may be the way forward, but am struggling with the implementation. For simplicity, the code below opens an image and extracts just one colour channel, in this case the red one.
So far, I am using optical flow to determine the motion between images in X and Y for each pixel. After each iteration, I would like to use the X/Y motions from the last N iterations to estimate each pixel's velocity and predict where it will end up in the next frame. The specific group of pixels I will track is not relevant for this example; it would just be a NumPy array of (x, y) values.
Any help would be greatly appreciated. Simplified code snippet below:
import numpy as np
import cv2
from PIL import Image

imageNames = ["image1.jpg", "image2.jpg", "image3.jpg", "image4.jpg", "image5.jpg"]

for i in range(len(imageNames) - 1):   # stop one short so imageNames[i + 1] stays in range
    # Load consecutive images and extract just one colour channel (e.g., red)
    image1 = Image.open(imageNames[i])
    image2 = Image.open(imageNames[i + 1])
    image1R = np.asarray(image1)[:, :, 0].astype(np.uint8)   # convert to array before slicing
    image2R = np.asarray(image2)[:, :, 0].astype(np.uint8)

    # Get dense optical flow (OpenCV 3+ takes an extra `None` flow argument here)
    flow = cv2.calcOpticalFlowFarneback(image1R, image2R, None, 0.5, 1, 5, 15, 10, 5, 1)
    change_in_x = flow[:, :, 0]
    change_in_y = flow[:, :, 1]

    # Use previous flows to obtain velocity in x and y
    # For a subset of the image, predict where points will be in the next image
    # Use Kalman filtering?
    # Repeat recursively
Answer 1:
I am not sure if I can explain this fully here, but I will give it a shot. A Kalman filter is nothing but a prediction-measurement (correction) loop.
You have your initial state (position and velocity) after two images:
X0 = [x0 v0]
where v0 is the flow between image1 and image2, and x0 is the position at image2.
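In code, building this initial state for one tracked pixel from the Farneback flow in the question's snippet might look like the sketch below (the example coordinates and the 2-D state layout [x, y, vx, vy] are my own assumptions, not from the question):

import numpy as np

px, py = 120, 80                           # example pixel tracked in image1 (assumed coordinates)
v0 = flow[py, px]                          # (dx, dy) displacement from image1 to image2
x0 = np.array([px, py], dtype=float) + v0  # the pixel's position in image2
X0 = np.concatenate([x0, v0])              # state vector [x, y, vx, vy]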
Make an assumption (for example, a constant velocity model). Under the constant velocity assumption, you predict that the object will move to X1 = A * X0, where A is found from the constant velocity model equations:
x1 = x0 + v0*T
v1 = v0
=> X1 = [x1 v1]
= [1 T ; 0 1] * [x0 v0]
= [1 T ; 0 1] * X0
T is your sampling time (for cameras this is generally the time between frames, i.e. the inverse of the frame rate). You need to know the time difference between your images here.
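With a 2-D state [x, y, vx, vy] as above, this prediction is a single matrix multiply. A minimal sketch (T = 1 frame is an assumption; substitute your actual time step):

T = 1.0                                    # time between frames; assumed to be one "frame" here
A = np.array([[1, 0, T, 0],
              [0, 1, 0, T],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # constant velocity transition matrix

X1_pred = A @ X0                           # predicted [x, y, vx, vy] at the time of image3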
Later, you are going to correct this assumption with the next measurement (load image3 here and obtain v1' from the flow of image2 and image3; also take x1' from image3).
X1' = [x1' v1']
For a simpler version of the KF, take the average of the two as the estimate, i.e.
~X1 = (X1 + X1')/2.
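Continuing the sketch above, the measurement comes from the image2 -> image3 flow, and the averaging update is just (variable names such as flow23 are my assumptions):

iy, ix = int(round(x0[1])), int(round(x0[0]))  # pixel position in image2, rounded for indexing
v1_meas = flow23[iy, ix]                       # flow between image2 and image3 at that position
x1_meas = x0 + v1_meas                         # measured position in image3
X1_meas = np.concatenate([x1_meas, v1_meas])   # measured state X1'

X1_est = (X1_pred + X1_meas) / 2.0             # the simple "average" estimate ~X1
X0 = X1_est                                    # feed it back in and repeat for the next frame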
If you want to use the exact filter, with Kalman gain and covariance calculations, I'd say you need to check out the algorithm itself (page 4). Take R small if your images are accurate enough (R is the sensor noise).
The ~X1 you find takes you back to the start: replace the initial state with ~X1 and go through the same procedure again.
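For reference, the explicit predict/correct equations with a Kalman gain look roughly like this in NumPy (Q, R, and the initial P are tuning values I am assuming; the measurement here is the full state from the flow, so H is the identity):

H = np.eye(4)                 # we measure position and velocity directly from the flow
Q = np.eye(4) * 1e-3          # process noise (assumed)
R = np.eye(4) * 1e-1          # measurement (sensor) noise; keep small for accurate images
P = np.eye(4)                 # initial state covariance (assumed)

X_pred = A @ X0               # predict the state
P_pred = A @ P @ A.T + Q      # predict the covariance

K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)   # Kalman gain
X0 = X_pred + K @ (X1_meas - H @ X_pred)                  # corrected state ~X1
P = (np.eye(4) - K @ H) @ P_pred                          # updated covariance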
If you check the OpenCV docs, the algorithm is already implemented for you as cv2.KalmanFilter.
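A minimal sketch of that built-in filter for one tracked point, assuming a constant-velocity model and measuring only the pixel position from the flow (the noise scales are values to tune, not prescriptions):

kf = cv2.KalmanFilter(4, 2)                        # state [x, y, vx, vy], measurement [x, y]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

prediction = kf.predict()                          # predicted [x, y, vx, vy] for the next image
measurement = np.array([[x1_meas[0]], [x1_meas[1]]], np.float32)
estimate = kf.correct(measurement)                 # corrected estimate after seeing the flow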
If you are not going to use a camera and OpenCV methods, I would suggest MATLAB, simply because it is easier to manipulate matrices there.
Source: https://stackoverflow.com/questions/17520711/2d-motion-estimation-using-python-opencv-kalman-filtering