Multithreaded image processing in C++

既然无缘 2020-12-29 12:16

I am working on a program which manipulates images of different sizes. Many of these manipulations read pixel data from an input and write to a separate output (e.g. blur).

16 Answers
  • 2020-12-29 12:43

    I would recommend boost::thread and boost::gil (generic image library). Because there are quite a few templates involved, I'm not sure whether the code size will still be acceptable for you. But it's part of Boost, so it is probably worth a look.

  • 2020-12-29 12:45

    If your compiler supports OpenMP (I know VC++ 8.0 and 9.0 do, as does gcc), it can make things like this much easier to do.

    You don't just want to make a lot of threads - there's a point of diminishing returns where adding new threads slows things down as you start getting more and more context switches. At some point, using too many threads can actually make the parallel version slower than just using a linear algorithm. The optimal number of threads is a function of the number of CPUs/cores available and the percentage of time each thread spends blocked on things like I/O. Take a look at this article by Herb Sutter for some discussion on parallel performance gains.

    OpenMP lets you easily adapt the number of threads created to the number of CPUs available. Using it (especially in data-processing cases) often involves simply putting a few #pragma omp directives into existing code and letting the compiler handle thread creation and synchronization.

    In general - as long as data isn't changing, you won't have to lock read-only data. If you can be sure that each pixel slot will only be written once and you can guarantee that all the writing has been completed before you start reading from the result, you won't have to lock that either.

    For OpenMP, there's no need to do anything special as far as functors / function objects. Write it whichever way makes the most sense to you. Here's an image-processing example from Intel (it converts RGB to grayscale):

    #pragma omp parallel for
    for (int i = 0; i < numPixels; i++)
    {
       // standard luma weights for the RGB-to-grayscale conversion
       pGrayScaleBitmap[i] = (unsigned char)
           (pRGBBitmap[i].red * 0.299 +
            pRGBBitmap[i].green * 0.587 +
            pRGBBitmap[i].blue * 0.114);
    }
    

    This automatically splits up into as many threads as you have CPUs, and assigns a section of the array to each thread.
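
    If you want explicit control, OpenMP also has a small runtime API. A minimal sketch (the thread count of 4 is an arbitrary example, not a recommendation):

    #include <omp.h>
    #include <cstdio>

    static unsigned char gray[1000000];

    int main()
    {
        omp_set_num_threads(4);  // optional cap; the default is typically one thread per core

        #pragma omp parallel for schedule(static)  // static: one contiguous block per thread
        for (int i = 0; i < 1000000; i++)
            gray[i] = (unsigned char)(i & 0xFF);   // placeholder per-pixel work

        printf("threads available: %d\n", omp_get_max_threads());
        return 0;
    }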

  • 2020-12-29 12:46

    Don't embark on threading lightly! The race conditions can be a major pain in the arse to figure out. Especially if you don't have a lot of experience with threads! (You've been warned: Here be dragons! Big hairy non-deterministic impossible-to-reliably-reproduce dragons!)

    Do you know what deadlock is? How about Livelock?

    That said...


    As ckarmann and others have already suggested: Use a work-queue model. One thread per CPU core. Break the work up into N chunks. Make the chunks reasonably large, like many rows. As each thread becomes free, it snags the next work chunk off the queue.

    In the simplest IDEAL version, you have N cores, N threads, and N subparts of the problem with each thread knowing from the start exactly what it's going to do.

    But that doesn't usually happen in practice due to the overhead of starting/stopping threads. You really want the threads to already be spawned and waiting for action. (E.g. Through a semaphore.)

    The work-queue model itself is quite powerful. It lets you parallelize things like quick-sort, which normally doesn't parallelize across N threads/cores gracefully.
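
    To make the model concrete, here is a minimal sketch of a row-chunk work queue. (It uses C++11 std::thread for brevity; the same structure maps directly onto the pthreads + semaphore calls discussed below. processRows is a hypothetical stand-in for the actual manipulation.)

    #include <algorithm>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <utility>
    #include <vector>

    std::queue<std::pair<int,int> > chunks;  // [beginRow, endRow) work chunks
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Hypothetical stand-in for the real per-row pixel manipulation.
    void processRows(int begin, int end)
    {
        printf("processing rows %d..%d\n", begin, end);
    }

    void worker()
    {
        for (;;)
        {
            std::pair<int,int> chunk;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, []{ return done || !chunks.empty(); });
                if (chunks.empty())
                    return;              // queue drained, and no more work is coming
                chunk = chunks.front();
                chunks.pop();
            }
            processRows(chunk.first, chunk.second);  // work happens OUTSIDE the lock
        }
    }

    int main()
    {
        const int height = 1000, chunkRows = 64;  // many rows per chunk, as suggested
        for (int r = 0; r < height; r += chunkRows)
            chunks.push(std::make_pair(r, std::min(r + chunkRows, height)));

        unsigned n = std::thread::hardware_concurrency();
        if (n == 0) n = 4;               // fallback if the core count is unknown
        std::vector<std::thread> pool;   // one thread per core
        for (unsigned i = 0; i < n; i++)
            pool.push_back(std::thread(worker));

        { std::lock_guard<std::mutex> lock(m); done = true; }
        cv.notify_all();
        for (unsigned i = 0; i < pool.size(); i++)
            pool[i].join();
        return 0;
    }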


    More threads than cores? You're just wasting overhead. Each thread has overhead. Even at #threads=#cores, you will never achieve a perfect Nx speedup factor.

    One thread per row would be very inefficient! One thread per pixel? I don't even want to think about it. (That per-pixel approach makes a lot more sense when playing with vectorized processor units like they had on the old Crays. But not with threads!)


    Libraries? What's your platform? Under Unix/Linux/g++ I'd suggest pthreads & semaphores. (Pthreads is also available under Windows with a Microsoft compatibility layer. But, ugh, I don't really trust it! Cygwin might be a better choice there.)

    Under Unix/Linux, man:

    * pthread_create, pthread_detach.
    * pthread_mutexattr_init, pthread_mutexattr_settype, pthread_mutex_init,
    * pthread_mutexattr_destroy, pthread_mutex_destroy, pthread_mutex_lock,
    * pthread_mutex_trylock, pthread_mutex_unlock, pthread_mutex_timedlock.
    * sem_init, sem_destroy, sem_post, sem_wait, sem_trywait, sem_timedwait.
    

    Some folks like pthreads' condition variables, but I've always preferred POSIX 1003.1b semaphores. They handle somewhat better the situation where you want to signal another thread BEFORE it starts waiting, or where another thread is signaled multiple times.

    Oh, and do yourself a favor: Wrap your thread/mutex/semaphore pthread calls into a couple of C++ classes. That will simplify matters a lot!
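
    For instance, a minimal sketch of such wrappers (error handling omitted; a Mutex class like this is also what the addendum further down assumes):

    #include <pthread.h>

    // Thin RAII wrappers around the pthread calls listed above.
    class Mutex
    {
        pthread_mutex_t m_;
        Mutex(const Mutex&);             // non-copyable (pre-C++11 style)
        Mutex& operator=(const Mutex&);
    public:
        Mutex()  { pthread_mutex_init(&m_, NULL); }
        ~Mutex() { pthread_mutex_destroy(&m_); }
        void lock()   { pthread_mutex_lock(&m_); }
        void unlock() { pthread_mutex_unlock(&m_); }
    };

    // Locks in the constructor, unlocks in the destructor -- so the unlock
    // can never be forgotten, even if the critical section throws.
    class ScopedLock
    {
        Mutex& m_;
        ScopedLock(const ScopedLock&);
        ScopedLock& operator=(const ScopedLock&);
    public:
        explicit ScopedLock(Mutex& m) : m_(m) { m_.lock(); }
        ~ScopedLock() { m_.unlock(); }
    };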


    Would I need to lock my read-only and write-only arrays?

    It depends on your precise hardware & software. Usually read-only arrays can be freely shared between threads. But there are cases where that is not so.

    Writing is much the same. Usually, as long as only one thread is writing to each particular memory spot, you are ok. But there are cases where that is not so!

    Writing is more troublesome than reading, as you can get into weird word-tearing situations. Memory is often written as words, not bytes. When one thread writes part of a word and another thread writes a different part, the result depends on the exact timing of which thread does what when (i.e. it's nondeterministic), and you can get some very unpredictable results!

    I'd play it safe: Give each thread its own copy of the read and write areas. After they are done, copy the data back. All under mutex, of course.

    Unless you are talking about gigabytes of data, memory blits are very fast. That couple of microseconds of performance time just isn't worth the debugging nightmare.

    If you were to share one common data area between threads using mutexes, the collision/waiting mutex inefficiencies would pile up and devastate your efficiency!


    Look, clean data boundaries are the essence of good multi-threaded code. When your boundaries aren't clear, that's when you get into trouble.

    Similarly, it's essential to keep everything on the boundary mutexed! And to keep the mutexed areas short!

    Try to avoid locking more than one mutex at the same time. If you do lock more than one mutex, always lock them in the same order!

    Where possible use ERROR-CHECKING or RECURSIVE mutexes. FAST mutexes are just asking for trouble, with very little actual (measured) speed gain.
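
    Selecting the mutex type is done with the pthread_mutexattr_* calls from the man-page list above; a sketch:

    #include <pthread.h>

    // Initialize an error-checking mutex: relocking it from the thread that
    // already holds it returns EDEADLK instead of silently deadlocking.
    // (Use PTHREAD_MUTEX_RECURSIVE instead for a recursive mutex.)
    void init_errorcheck_mutex(pthread_mutex_t* m)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
        pthread_mutex_init(m, &attr);
        pthread_mutexattr_destroy(&attr);
    }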

    If you get into a deadlock situation, run it in gdb, hit ctrl-c, visit each thread and backtrace. You can find the problem quite quickly that way. (Livelock is much harder!)
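
    The session looks roughly like this (all standard gdb commands; "myprogram" is just a placeholder name):

    $ gdb ./myprogram
    (gdb) run
    ^C                          # hit ctrl-c once it hangs
    (gdb) info threads          # list every thread
    (gdb) thread 2              # switch to one of them
    (gdb) bt                    # backtrace it; repeat for each thread
    (gdb) thread apply all bt   # or backtrace every thread in one shot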


    One final suggestion: Build it single-threaded, then start optimizing. On a single-core system, you may find yourself gaining more speed from things like foo[i++]=bar ==> *(foo++)=bar than from threading.


    Addendum: What I said about keeping mutexed areas short up above? Consider two threads: (Given a global shared mutex object of a Mutex class.)

    /*ThreadA:*/ while(1){  mutex.lock();  printf("a\n");  usleep(100000); mutex.unlock(); }
    /*ThreadB:*/ while(1){  mutex.lock();  printf("b\n");  usleep(100000); mutex.unlock(); }
    

    What will happen?

    Under my version of Linux, one thread will run continuously and the other will starve. Very very rarely they will change places when a context swap occurs between mutex.unlock() and mutex.lock().


    Addendum: In your case, this is unlikely to be an issue. But with other problems one may not know in advance how long a particular work-chunk will take to complete. Breaking a problem down into 100 parts (instead of 4 parts) and using a work-queue to split it up across 4 cores smooths out such discrepancies.

    If one work-chunk takes 5 times longer to complete than another, well, it all evens out in the end. Though with too many chunks, the overhead of acquiring new work-chunks creates noticeable delays. It's a problem-specific balancing act.

  • 2020-12-29 12:50

    There's another option: using assembly for optimization. Now, one exciting project for dynamic code generation is SoftWire (which dates back a while - here is the original project's site). It was developed by Nicolas Capens and grew into the now commercially available SwiftShader. But the spin-off of the original SoftWire is still available on gna.org.

    This could serve as an introduction to his solution.

    Personally, I don't believe you can gain significant performance by utilizing multiple threads for your problem.

  • 2020-12-29 12:51

    One thread per pixel row is insane; best to have around n-1 to 2n threads (for n CPUs), and make each one loop, fetching one job unit (which may be one row, or some other kind of partition).

    On Unix-like systems, use pthreads; it's simple and lightweight.

  • 2020-12-29 12:52

    I think a map/reduce framework would be the ideal thing to use in this situation. You can use Hadoop Streaming to reuse your existing C++ application.

    Just implement the map and reduce jobs.

    As you said, you can use the row-level manipulations as a map task and combine the row-level manipulations into the final image in the reduce task.
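
    With Hadoop Streaming, a map task is just an executable that reads lines from stdin and writes key/value pairs to stdout. A hypothetical mapper skeleton (the imageId/row line format here is an assumption for illustration, not part of Hadoop):

    #include <iostream>
    #include <string>

    // Hypothetical streaming map task: each input line is assumed to be
    // "imageId<TAB>rowIndex<TAB>rowData". Emitting the same imageId as the
    // key lets the reducer collect all processed rows of one image and
    // reassemble the final result.
    int main()
    {
        std::string line;
        while (std::getline(std::cin, line))
        {
            // ... parse the line and apply the per-row manipulation here ...
            std::cout << line << '\n';   // placeholder: pass the row through
        }
        return 0;
    }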

    Hope this is useful.
