Question
I'm trying to stabilize a video, using a Kalman filter for smoothing, but I have some problems.
Each time I have two frames: the current one and the previous one. Here is my workflow:
- Compute goodFeaturesToTrack()
- Compute Optical Flow using calcOpticalFlowPyrLK()
- Keep only good points
- Estimate a rigid transformation
- Smoothing using Kalman filter
- Warping of the picture.
But I think there is something wrong with the Kalman filter, because at the end my video is still not stabilized and it's not smooth at all; it's even worse than the original...
Here is my Kalman code:
void StabilizationTestSimple2::init_kalman(double x, double y)
{
    KF.statePre.at<float>(0) = x;
    KF.statePre.at<float>(1) = y;
    KF.statePre.at<float>(2) = 0;
    KF.statePre.at<float>(3) = 0;

    KF.transitionMatrix = *(Mat_<float>(4,4) << 1,0,1,0,  0,1,0,1,  0,0,1,0,  0,0,0,1);
    KF.processNoiseCov  = *(Mat_<float>(4,4) << 0.2,0,0.2,0,  0,0.2,0,0.2,  0,0,0.3,0,  0,0,0,0.3);

    setIdentity(KF.measurementMatrix);
    setIdentity(KF.processNoiseCov, Scalar::all(1e-6));
    setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
    setIdentity(KF.errorCovPost, Scalar::all(.1));
}
And here is how I use it (I've kept only the interesting part). All this code runs inside a for loop; cornersPrev2 and cornersCurr2 contain the feature points detected just before (with calcOpticalFlowPyrLK()):
/// Transformation
Mat transformMatrix = estimateRigidTransform(cornersPrev2, cornersCurr2, false);

// In rare cases no transform is found. We'll just use the last known good transform.
if (transformMatrix.data == NULL) {
    last_transformationmatrix.copyTo(transformMatrix);
}
transformMatrix.copyTo(last_transformationmatrix);

// Decompose T
double dx = transformMatrix.at<double>(0,2);
double dy = transformMatrix.at<double>(1,2);
double da = atan2(transformMatrix.at<double>(1,0), transformMatrix.at<double>(0,0));

// Accumulated frame-to-frame transform
x += dx;
y += dy;
a += da;
std::cout << "accumulated x,y: (" << x << "," << y << ")" << endl;

if (compteur == 0) {
    init_kalman(x, y);
}
else {
    vector<Point2f> smooth_feature_point;
    Point2f smooth_feature = kalman_predict_correct(x, y);
    smooth_feature_point.push_back(smooth_feature);
    std::cout << "smooth x,y: (" << smooth_feature.x << "," << smooth_feature.y << ")" << endl;

    // target - current
    double diff_x = smooth_feature.x - x;
    double diff_y = smooth_feature.y - y;
    dx = dx + diff_x;
    dy = dy + diff_y;

    transformMatrix.at<double>(0,0) =  cos(da);
    transformMatrix.at<double>(0,1) = -sin(da);
    transformMatrix.at<double>(1,0) =  sin(da);
    transformMatrix.at<double>(1,1) =  cos(da);
    transformMatrix.at<double>(0,2) = dx;
    transformMatrix.at<double>(1,2) = dy;

    warpAffine(currFrame, outImg, transformMatrix, prevFrame.size(), INTER_NEAREST|WARP_INVERSE_MAP, BORDER_CONSTANT);

    namedWindow("stabilized");
    imshow("stabilized", outImg);
    namedWindow("Original");
    imshow("Original", originalFrame);
}
Does anyone have an idea why it's not working?
Thanks
Answer 1:
KF.transitionMatrix = *(Mat_<float>(4,4) << 1,0,1,0,  0,1,0,1,  0,0,1,0,  0,0,0,1);
KF.processNoiseCov  = *(Mat_<float>(4,4) << 0.2,0,0.2,0,  0,0.2,0,0.2,  0,0,0.3,0,  0,0,0,0.3);
setIdentity(KF.measurementMatrix);
setIdentity(KF.processNoiseCov, Scalar::all(1e-6));
setIdentity(KF.measurementNoiseCov, Scalar::all(1e-1));
setIdentity(KF.errorCovPost, Scalar::all(.1));
Your processNoiseCov is not symmetric, and I doubt you have the right off-diagonal terms. I would stick with a diagonal matrix there until you understand the KF better.
On the other hand, you appear to immediately overwrite it with that setIdentity call with tiny values, which is probably a mistake. Maybe you added that after having problems with the invalid matrix above?
If you describe the framerate and the units of your state (meters? pixels?), we can talk about how to pick good values for processNoiseCov and measurementNoiseCov.
EDIT:
Ok, given that your state is [ x_pixels, y_pixels, dx_pixels, dy_pixels ], here are some tips:

- Your measurement matrix is I, so you are saying that you are measuring the exact same things as are in your state. This is somewhat uncommon: often you'd only measure some subset of your state, e.g. you'd have no estimate of velocity in your measurements.
- Given that your measurement matrix is I, the meanings of processNoiseCov and measurementNoiseCov are similar, so I will discuss them together. Your processNoiseCov should be a diagonal matrix where each term is the variance (the square of the standard deviation) of how those values might change between frames. For example, if there is a 68% chance (see the normal distribution) that your pixel offsets are within 100 pixels each frame, the diagonal entry for position should be 100 * 100 = 10000 (remember, variance is squared stddev). You need to make a similar estimate for how velocity will change. (Advanced: there should be co-varying (off-diagonal) terms because change in velocity is related to change in position, but you can get by without this until that makes sense to you. I've explained it in other answers.)
- Your additive covariance in processNoiseCov is added every frame, so keep in mind that the changes represented are over 1/25th of a second.
- Your measurementNoiseCov has the same sort of units (again, because the measurement matrix is the identity) but should reflect how accurate your measurements are compared to the unknowable actual true values.
- Often you can measure actual values for your process and measurements and compute their actual covariance (in Excel or Python or Matlab or whatever).
- Your errorCovPost is your initial uncertainty, expressed just like your per-frame additive covariance, but it is a description of how certain you are about your initial state being correct.
- Getting those covariance matrices right is the most important thing when using a KF. You will always spend more time getting those right than doing anything else when designing a KF.
Source: https://stackoverflow.com/questions/27296749/c-opencv-kalman-filter-for-video-stabilization