OpenMP/__gnu_parallel for an unordered_map


You could split a loop over ranges of bucket indices, then use an intra-bucket iterator to handle the elements. unordered_map provides .bucket_count() and the bucket-specific overloads begin(bucket_number) and end(bucket_number) that make this possible. Assuming you haven't modified the default max_load_factor() from 1.0 and have a reasonable hash function, you'll average about one element per bucket and shouldn't waste much time on empty buckets.
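
For illustration, a minimal sketch of that bucket-range splitting might look like the following (the map type, the range arithmetic and the placeholder per-element update are illustrative assumptions, not a definitive implementation):

#include <cstddef>
#include <omp.h>
#include <unordered_map>

void process(std::unordered_map<int, double>& m){
  #pragma omp parallel
  {
    const std::size_t nbuckets = m.bucket_count();
    const int nthreads = omp_get_num_threads();
    const int tid      = omp_get_thread_num();
    //Give each thread a contiguous range of bucket indices [lo, hi)
    const std::size_t lo = nbuckets *  tid      / nthreads;
    const std::size_t hi = nbuckets * (tid + 1) / nthreads;
    for(std::size_t b = lo; b < hi; ++b)
      for(auto it = m.begin(b); it != m.end(b); ++it)
        it->second += 1.0;  //stand-in for real per-element work
  }
}

Only the mapped values are touched, never the keys or the container structure, so the threads cannot step on each other.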

The canonical approach with containers that do not support random access iterators is to use explicit OpenMP tasks:

std::unordered_map<size_t, double> hashTable;

#pragma omp parallel
{
   #pragma omp single
   {
      for(auto it = hashTable.begin(); it != hashTable.end(); ++it) {
         #pragma omp task
         {
            // do something with *it; the iterator is firstprivate to the
            // task by default, so each task sees its own element
         }
      }
   }
}

This creates a separate task for each element, which adds scheduling overhead, so it only pays off when //do something actually represents a significant amount of work.
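
If the per-element work is light, one way to amortize that per-task overhead is to hand each task a small batch of consecutive elements instead of a single one. A rough sketch, assuming a batch size of 64 and a placeholder update as the work:

#include <cmath>
#include <cstddef>
#include <unordered_map>

void process_batched(std::unordered_map<std::size_t, double>& hashTable){
  const std::size_t batch = 64;  //elements per task, a tuning knob
  #pragma omp parallel
  {
    #pragma omp single
    {
      auto it = hashTable.begin();
      while(it != hashTable.end()){
        //Remember where this batch starts and how many elements it holds
        auto first = it;
        std::size_t n = 0;
        for(; n < batch && it != hashTable.end(); ++n) ++it;
        #pragma omp task firstprivate(first, n)
        {
          auto cur = first;
          for(std::size_t k = 0; k < n; ++k, ++cur)
            cur->second += std::sqrt(cur->second);  //placeholder work
        }
      }
    }
  }
}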

You can do this by iterating over the buckets of the unordered_map, like so:

#include <cmath>
#include <iostream>
#include <unordered_map>

int main(){
  const int N = 10000000;
  std::unordered_map<int, double> mymap(1.5*N);

  //Load up a hash table
  for(int i=0;i<N;i++)
    mymap[i] = i+1;

  //Walk the buckets in parallel; each thread only modifies mapped values,
  //never the keys or the container structure, so there are no data races.
  #pragma omp parallel for default(none) shared(mymap)
  for(size_t b=0;b<mymap.bucket_count();b++){
    for(auto bi=mymap.begin(b);bi!=mymap.end(b);++bi){
      //Some artificial per-element work
      for(int i=0;i<20;i++)
        bi->second += std::sqrt(std::log(bi->second) + 1);
    }
  }

  std::cout<<mymap.begin()->first<<" "<<mymap.begin()->second<<std::endl;

  return 0;
}
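
Compile with OpenMP enabled (e.g. g++ -O2 -fopenmp) for the bucket loop to actually run in parallel. Note also that constructing the table as mymap(1.5*N) requests at least about 1.5*N buckets up front, which keeps the load factor below the default max_load_factor() of 1.0 and avoids rehashing while the table is filled.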