How can I speed up my Android OpenCV application?

没有蜡笔的小新 2021-01-30 03:48

I have implemented an OpenCV application where I use the SURF descriptor. It is working fine; the code looks like this:

I reduce the input video stream size to

2 Answers
  • 2021-01-30 04:19

But the calculation is still way too slow, so I need to find another method to reduce the input video stream quality.

    The real answer to this question is much closer to "there isn't much you can do!" than to anything else. We have to acknowledge that mobile phones do not yet have processing power comparable to a desktop. The majority of Android phones in the world still run older versions of the system and, most importantly, they are single-core devices, clocked below 1 GHz, with limited memory, and so on.

    Nevertheless, there is always something you can do to improve speed with relatively small changes.

    Now, I am also computing OpenCV SURF on the Galaxy S and I get a frame rate of 1.5 fps for 200 features with the hessian threshold at 1500 on a 320x240 image. I admit it is crappy performance, but in my case I only have to compute features every once in a while, since I am measuring optical flow for tracking purposes. However, it is very weird that you get only 1 frame every 4-5 seconds.

    1) First, it seems to me that you are using VideoCapture to obtain the camera frames. Well, I am not. I am using the Android camera implementation. I did not check how VideoCapture is implemented in the Java port of OpenCV, but it appears to be slower than using the implementation in some of the tutorials. However, I can't be 100% sure about this, since I haven't tested it. Did you?

    2) Reduce native calls to the minimum possible. Java OpenCV native calls are time-expensive. Also, follow all the guidelines specified in the Android-OpenCV best practices page. If you have multiple native calls, join them all in a single JNI call.

    3) You should also reduce the image size and increase the SURF hessian threshold. This will reduce the number of detected features, but they will be stronger and more robust for the purpose of recognition and matching. You are right when you say that SURF is the most robust detector (it is also the slowest, and it is patented). But, if this is not a deal-breaker for you, I would recommend trying the new ORB detector, a BRIEF variant that performs better with respect to rotation. ORB has disadvantages though, such as a limited number of detected keypoints and poor scale invariance. This is a very interesting feature detector comparison report. It also suggests the SURF detector is slower in the new OpenCV 2.3.1 version, probably due to changes in the algorithm for increased robustness.
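    As a rough sketch of why shrinking the input helps so much: detector cost grows at least linearly with the number of pixels, so halving both image dimensions cuts the per-frame workload by about a factor of four (the exact numbers below are illustrative arithmetic, not measurements):

    ```java
    public class ResizeBudget {
        // Pixel-count arithmetic only: feature detection cost scales at
        // least linearly with the number of pixels processed.
        static long pixels(int w, int h) {
            return (long) w * h;
        }

        public static void main(String[] args) {
            long full = pixels(640, 480); // 307200 pixels
            long half = pixels(320, 240); //  76800 pixels
            // Halving each dimension quarters the work.
            System.out.println("approx. speedup factor: " + (full / half));
        }
    }
    ```

    The same reasoning stacks with a higher hessian threshold: fewer pixels to scan, and fewer candidate keypoints surviving per pixel.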

    4) Now the fun bits. The ARM processor architecture (on which most Android phones are based) has been widely reported as slow at handling floating-point calculations, on which feature detector algorithms rely heavily. There have been very interesting discussions about this issue, and many say you should use fixed-point calculations whenever possible. The newer ARMv7 architecture with NEON provides faster floating-point calculations, but not all devices support it. To check whether your device does, run adb shell cat /proc/cpuinfo. You can also compile your native code with NEON directives (LOCAL_ARM_NEON := true), but I doubt this will do much good, since apparently few OpenCV routines are NEON-optimized. So the only way to increase speed here is to rewrite the code with NEON intrinsics (this is completely unexplored ground for me, but you might find it worth looking into). In the android.opencv group it was suggested that future OpenCV releases will have more NEON-optimized libraries. This could be interesting; however, I am not sure whether it is worth working on now or better to wait for faster CPUs and systems optimized with GPU computing. Note that Android systems < 3.0 do not use built-in hardware acceleration.
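    To make the fixed-point suggestion concrete, here is a minimal sketch of Q16.16 fixed-point arithmetic (16 integer bits, 16 fractional bits), the kind of integer-only math people recommend on ARM cores without an FPU. The format and helper names are my own illustration, not anything OpenCV uses internally:

    ```java
    public class FixedPoint {
        // Q16.16: values are stored as (real value * 2^16) in an int.
        static final int FRAC_BITS = 16;
        static final int ONE = 1 << FRAC_BITS;

        static int toFixed(double v) {
            return (int) Math.round(v * ONE);
        }

        static double toDouble(int f) {
            return (double) f / ONE;
        }

        // Multiply via a 64-bit intermediate, then shift back to Q16.16.
        static int mul(int a, int b) {
            return (int) (((long) a * b) >> FRAC_BITS);
        }

        public static void main(String[] args) {
            int a = toFixed(1.5);
            int b = toFixed(2.25);
            // 1.5 * 2.25 = 3.375, computed with integer ops only.
            System.out.println(toDouble(mul(a, b)));
        }
    }
    ```

    Addition and subtraction work directly on the raw ints; only multiplication and division need the shift. The trade-off is limited range and precision, which is why this only pays off on hardware where float math is genuinely slow.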

    5) If you are doing this for academic purposes, convince your university to buy you a better device ^^. This might ultimately be the best option for faster SURF feature detection. Another option is to rewrite the algorithms. I am aware some people at the Intel labs did this with some success, but obviously they won't share it. Honestly, after investigating this issue for a few weeks, I realised that for my specific needs (and since I am no computer science engineer nor an algorithms expert) there is more value in waiting a few months for better devices than in banging my head against the wall dissecting the algorithms and developing near-assembly code.

    Best regards and good luck!

  • 2021-01-30 04:19

    Do you need to use the SURF feature/descriptor for your application? SURF is attractive as it matches very nicely, but as you've found out it is somewhat slow. If you're just tracking points through a video you could make the assumption that points will not vary much frame-to-frame and so you could detect and match Harris/FAST corners and then filter matches to be valid only if they're within an x-pixel radius of the original point?
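    The radius-gating idea can be sketched as a plain filter over matched point pairs; the data layout and method names here are hypothetical, just to show the check itself (compare squared distances so no square root is needed):

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class RadiusGate {
        // Keep match i only if curr[matches[i]] lies within `radius` pixels
        // of prev[i], on the assumption that points move little from one
        // frame to the next. Points are {x, y} float pairs.
        static List<Integer> filterMatches(float[][] prev, float[][] curr,
                                           int[] matches, float radius) {
            List<Integer> kept = new ArrayList<>();
            float r2 = radius * radius; // squared threshold, avoids sqrt
            for (int i = 0; i < matches.length; i++) {
                float dx = curr[matches[i]][0] - prev[i][0];
                float dy = curr[matches[i]][1] - prev[i][1];
                if (dx * dx + dy * dy <= r2) {
                    kept.add(i);
                }
            }
            return kept;
        }

        public static void main(String[] args) {
            float[][] prev = {{10, 10}, {50, 50}};
            float[][] curr = {{12, 11}, {90, 90}};
            int[] matches = {0, 1}; // prev[i] is matched to curr[matches[i]]
            // Match 0 moved ~2px (kept); match 1 jumped ~57px (rejected).
            System.out.println(filterMatches(prev, curr, matches, 5f));
        }
    }
    ```

    With FAST or Harris corners feeding this filter, the expensive part of SURF (descriptor computation and matching) drops out entirely for the common frame-to-frame case.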

    OpenCV has an (albeit somewhat limited) selection of feature detectors, descriptor extractors, and descriptor matchers; it would be worth investigating the options if you haven't already.
