How can I optimize Multiple image stitching?

Submitted by 我是研究僧i on 2019-12-06 05:07:48

Question


I'm working on multiple image stitching in Visual Studio 2012, in C++. I've modified stitching_detailed.cpp to my requirements and it gives good-quality results. The problem is that it takes too long to execute: around 110 seconds for 10 images.

Here's where it takes most of the time:

1) Pairwise matching - Takes 55 seconds for 10 images! I'm using ORB to find feature points. Here's the code:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);
matcher(features, pairwise_matches);
matcher.collectGarbage();

I tried using this code, as I already know the sequence of images:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);

Mat matchMask(features.size(), features.size(), CV_8U, Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
    matchMask.at<uchar>(i, i + 1) = 1;   // mask is CV_8U, so index with uchar
matcher(features, pairwise_matches, matchMask);

matcher.collectGarbage();

It definitely reduces the time (18 seconds), but does not produce the required results. Only 6 images get stitched; the last 4 are left out because the feature points of images 6 and 7 somehow don't match, so the chain breaks there.

2) Compositing - Takes 38 seconds for 10 images! Here's the code:

for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
    printf("Compositing image #%d\n",indices[img_idx]+1);

    // Read image and resize it if necessary
    full_img = imread(img_names[img_idx]);

    Mat K;
    cameras[img_idx].K().convertTo(K, CV_32F);

    // Warp the current image
    warper->warp(full_img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

    // Warp the current image mask
    mask.create(full_img.size(), CV_8U);
    mask.setTo(Scalar::all(255));
    warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

    // Compensate exposure
    compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

    img_warped.convertTo(img_warped_s, CV_16S);
    img_warped.release();
    full_img.release();
    mask.release();

    dilate(masks_warped[img_idx], dilated_mask, Mat());
    resize(dilated_mask, seam_mask, mask_warped.size());
    mask_warped = seam_mask & mask_warped;

    // Blend the current image
    blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}

Mat result, result_mask;
blender->blend(result, result_mask);

The original image resolution is 4160×3120. I'm not using compression in the compositing step because it reduces quality; I've used compressed images in the rest of the code.

As you can see I've modified the code and reduced time. But I still want to reduce time as much as possible.

3) Finding feature points - with ORB. Takes 10 seconds for 10 images, finding at most 1530 feature points per image.

55 + 38 + 10 = 103 seconds, plus 7 for the rest of the code = 110.

When I run this code on Android, it consumes almost all of the smartphone's RAM. How can I reduce both time and memory consumption on an Android device? (The device I used has 2 GB of RAM.)

I've already optimized the rest of the code. Any help is much appreciated!

EDIT 1: I used image compression in the compositing step and the time got reduced from 38 seconds to 16 seconds. I also managed to reduce time in the rest of the code.

So now, from 110 -> 85 seconds. Help me reduce the time for pairwise matching; I have no clue how to reduce it!

EDIT 2: I found the pairwise matching code in matchers.cpp and wrote my own function in the main code to optimize the time. For the compositing step, I used compression up to the point where the final image doesn't lose clarity. For feature finding, I used image scaling to detect features at a reduced scale. Now I am able to stitch up to 50 images easily.


Answer 1:


Since 55 to 18 seconds is already a good improvement, maybe you can control the matching process a little more. What I would suggest first, if you haven't already, is to learn to debug the process every step of the way, to understand what goes wrong when an image isn't stitched. That way you can control, for example, the number of ORB features you detect. Maybe there are cases where you can limit them and still get results, speeding things up (this should speed up not only feature finding but also matching).

Hopefully that will let you detect the situation when, as you put it, the loop breaks, and react accordingly. You would still match the sequence in a loop, saving time, but force the program to continue (or change the parameters and try to match the pair again) when you detect a problem with a particular pair.

I don't think there is much room for improvement in the compositing process, since you don't want to lose quality. What I would research, if I were you, is whether threading and parallel computing could help.

This is an interesting and widespread issue - if you're able to speed it up without giving up quality, you should call LG or Google, since on my Nexus the algorithm is really poor quality :) It's both slow and inaccurate.



Source: https://stackoverflow.com/questions/42548868/how-can-i-optimize-multiple-image-stitching
