I have recently been learning deep learning, and a friend recommended Caffe to me. After installing it with OpenBLAS, I followed the MNIST tutorial in the docs. But I later found that training was very slow and only one CPU core was working.
I found that this method works:
When you build Caffe, pass -j8 to make so it compiles with 8 parallel jobs:
make all -j8
and
make pycaffe -j8
Also, make sure the environment variable
OPENBLAS_NUM_THREADS=8
is set before you run Caffe.
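For example, a minimal sketch of the runtime step (the solver path is the one from the MNIST tutorial; adjust it to your setup):

# export before launching training so OpenBLAS spawns 8 threads
export OPENBLAS_NUM_THREADS=8
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt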
This question includes a full script covering the same steps.
@Karthik: that works for me too. One interesting thing I noticed is that using 4 threads cuts the forward/backward pass time in the caffe timing test roughly in half. However, increasing the thread count to 8 or even 24 gives slower f/b times than OPENBLAS_NUM_THREADS=4. Here are the times for a few thread counts (tested on the NetworkInNetwork model).
[# threads]   [f/b time in ms]
 1            223
 2            150
 4            113
 8            125
12            144
For comparison, on a Titan X GPU the f/b pass took 1.87 ms.
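For anyone reproducing this, the timing run looks roughly like the following (the model path is an assumption; point it at your own NIN prototxt):

# measure average forward/backward time on CPU with 4 OpenBLAS threads
OPENBLAS_NUM_THREADS=4 ./build/tools/caffe time --model=models/nin/train_val.prototxt --iterations=50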
While building OpenBLAS, you have to set the flag USE_OPENMP=1 to enable OpenMP support. Next, set Caffe to use OpenBLAS in Makefile.config. At runtime, export OMP_NUM_THREADS=n, where n is the number of threads you want to use.
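Roughly, the steps look like this (a sketch; directory names and the thread count are assumptions, adjust for your machine):

# build OpenBLAS with OpenMP support
cd OpenBLAS
make USE_OPENMP=1
sudo make install

# in Caffe's Makefile.config, select OpenBLAS:
# BLAS := open

# at runtime, pick the thread count via OpenMP
export OMP_NUM_THREADS=4
./build/tools/caffe train --solver=examples/mnist/lenet_solver.prototxt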