How to compile Tensorflow with SSE4.2 and AVX instructions?

南笙 2020-11-22 04:14

This is the message received from running a script to check whether TensorFlow is working:

I tensorflow/stream_executor/dso_loader.cc:125] successfully opened CUD
12 Answers
  • 2020-11-22 04:29

    Let me answer your 3rd question first:

    If you want to run a self-compiled version within a conda env, you can. Here are the general steps I follow to build and install TensorFlow on my system with the extra instruction sets enabled. Note: this build was for an AMD A10-7850 (check your CPU for which instructions are supported; they may differ) running Ubuntu 16.04 LTS. I use Python 3.5 within my conda env. Credit goes to the tensorflow source install page and the answers provided above.
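
    Before you start, it's worth confirming which instruction sets your CPU actually reports. A quick Linux-only sketch (it reads /proc/cpuinfo, so it won't work on macOS):

    # List the relevant SIMD flags your CPU reports (Linux only)
    grep -woE 'sse4_1|sse4_2|avx|avx2|fma' /proc/cpuinfo | sort -u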

    git clone https://github.com/tensorflow/tensorflow 
    # Install Bazel
    # https://bazel.build/versions/master/docs/install.html
    sudo apt-get install python3-numpy python3-dev python3-pip python3-wheel
    # Create your virtual env with conda.
    source activate YOUR_ENV
    pip install six numpy wheel packaging appdirs
    # Follow the configure instructions at:
    # https://www.tensorflow.org/install/install_sources
    # Run the build like below. Note: check which instructions your CPU
    # supports. Also, if resources are limited, consider adding the
    # flag --local_resources 2048,.5,1.0 . This will limit how much RAM
    # and CPU the build uses, but will increase compile time.
    bazel build -c opt --copt=-mavx --copt=-msse4.1 --copt=-msse4.2  -k //tensorflow/tools/pip_package:build_pip_package
    # Create the wheel like so:
    bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
    # Inside your conda env:
    pip install /tmp/tensorflow_pkg/NAME_OF_WHEEL.whl
    # Then install the rest of your stack
    pip install keras jupyter  # etc.
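
    Once the wheel is installed, a quick smoke test shows whether the optimized build now loads without the SSE/AVX warnings (a minimal sketch using the TF 1.x session API of this era; the message text is arbitrary):

    # Verify the self-built wheel imports and runs cleanly
    python -c "import tensorflow as tf; print(tf.Session().run(tf.constant('optimized build OK')))"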
    

    As to your 2nd question:

    A self-compiled version with optimizations is well worth the effort in my opinion. On my particular setup, calculations that used to take 560-600 seconds now take only about 300 seconds! Although the exact numbers will vary, I think you can expect roughly a 35-50% speed increase in general on your particular setup.
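
    If you want to quantify the gain on your own machine, a rough before/after timing works well. Here is a minimal sketch using the TF 1.x API (the matrix size and iteration count are arbitrary; run it once on the stock build and once on the self-compiled one):

    # Crude matmul benchmark: compare wall time before and after recompiling
    python - <<'EOF'
    import time
    import tensorflow as tf

    a = tf.random_normal([2000, 2000])
    b = tf.random_normal([2000, 2000])
    product = tf.matmul(a, b)

    with tf.Session() as sess:
        sess.run(product)                  # warm-up run
        start = time.time()
        for _ in range(10):
            sess.run(product)
        print('10 matmuls took %.2f s' % (time.time() - start))
    EOF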

    Lastly, your 1st question:

    A lot of the answers have been provided above already. To summarize: AVX, SSE4.1, SSE4.2, and FMA are different kinds of extended instruction sets on x86 CPUs. Many contain optimized instructions for processing matrix or vector operations.

    I will highlight my own misconception to hopefully save you some time: It's not that SSE4.2 is a newer version of instructions superseding SSE4.1. SSE4 = SSE4.1 (a set of 47 instructions) + SSE4.2 (a set of 7 instructions).

    In the context of tensorflow compilation, if your computer supports AVX2 and AVX, and SSE4.1 and SSE4.2, you should put those optimizing flags in for all of them. Don't do what I did and just go with SSE4.2, thinking it's newer and should supersede SSE4.1. That's clearly WRONG! I had to recompile because of that, which cost me a good 40 minutes.

  • 2020-11-22 04:34

    When building TensorFlow from source, you'll run the configure script. One of the questions that the configure script asks is as follows:

    Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -march=native]
    

    The configure script will attach the flag(s) you specify to the bazel command that builds the TensorFlow pip package. Broadly speaking, you can respond to this prompt in one of two ways:

    • If you are building TensorFlow on the same CPU type as the one on which you'll run TensorFlow, then you should accept the default (-march=native). This option will optimize the generated code for your machine's CPU type.
    • If you are building TensorFlow on one CPU type but will run TensorFlow on a different CPU type, then consider supplying a more specific optimization flag as described in the gcc documentation.

    After configuring TensorFlow as described in the preceding bulleted list, you should be able to build TensorFlow fully optimized for the target CPU just by adding the --config=opt flag to any bazel command you are running.
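
    For example, the typical sequence looks like this (a sketch; run from the root of the TensorFlow source checkout):

    ./configure    # accept the default -march=native at the optimization-flags prompt
    bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package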

  • 2020-11-22 04:35

    This is the simplest method. Only one step.

    It has a significant impact on speed. In my case, the time taken for a training step almost halved.

    Refer to the custom builds of tensorflow.
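
    Installing such a pre-built, CPU-optimized wheel is a single pip command. The filename below is hypothetical; pick the wheel matching your Python version and CPU:

    # Install a community-built optimized wheel (hypothetical filename)
    pip install --ignore-installed --upgrade tensorflow-1.6.0-cp35-cp35m-linux_x86_64.whl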

  • 2020-11-22 04:37

    I just ran into this same problem. It seems Yaroslav Bulatov's suggestion doesn't cover SSE4.2 support; adding --copt=-msse4.2 suffices to fix that. In the end, I successfully built with

    bazel build -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mfpmath=both --copt=-msse4.2 --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
    

    without getting any warnings or errors.

    Probably the best choice for any system is:

    bazel build -c opt --copt=-march=native --copt=-mfpmath=both --config=cuda -k //tensorflow/tools/pip_package:build_pip_package
    

    (Update: the build scripts may be eating -march=native, possibly because it contains an =.)

    -mfpmath=both only works with gcc, not clang. -mfpmath=sse is probably just as good, if not better, and is the default for x86-64. 32-bit builds default to -mfpmath=387, so changing that will help for 32-bit. (But if you want high performance for number crunching, you should build 64-bit binaries.)

    I'm not sure whether TensorFlow's default optimization level is -O2 or -O3. gcc -O3 enables full optimization, including auto-vectorization, but that can sometimes make code slower.


    What this does: --copt for bazel build passes an option directly to gcc for compiling C and C++ files (but not linking, so you need a different option for cross-file link-time optimization).

    x86-64 gcc defaults to using only SSE2 or older SIMD instructions, so you can run the binaries on any x86-64 system. (See https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html). That's not what you want. You want to make a binary that takes advantage of all the instructions your CPU can run, because you're only running this binary on the system where you built it.

    -march=native enables all the options your CPU supports, so it makes -mavx512f -mavx2 -mavx -mfma -msse4.2 redundant. (Also, -mavx2 already enables -mavx and -msse4.2, so Yaroslav's command should have been fine.) And if you're using a CPU that doesn't support one of these options (like FMA), using -mfma would make a binary that faults with illegal instructions.
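
    If you're curious what -march=native actually enables on your machine, you can ask gcc directly (gcc-specific; the grep isolates the internal compiler invocation, which lists all the -m flags):

    # Print the target options gcc turns on under -march=native
    gcc -march=native -E -v - </dev/null 2>&1 | grep cc1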

    TensorFlow's ./configure defaults to enabling -march=native, so using that should avoid needing to specify compiler options manually.

    -march=native enables -mtune=native, so it optimizes for your CPU for things like which sequence of AVX instructions is best for unaligned loads.

    This all applies to gcc, clang, or ICC. (For ICC, you can use -xHOST instead of -march=native.)

  • 2020-11-22 04:40

    These are SIMD vector processing instruction sets.

    Using vector instructions is faster for many tasks; machine learning is such a task.

    Quoting the tensorflow installation docs:

    To be compatible with as wide a range of machines as possible, TensorFlow defaults to only using SSE4.1 SIMD instructions on x86 machines. Most modern PCs and Macs support more advanced instructions, so if you're building a binary that you'll only be running on your own machine, you can enable these by using --copt=-march=native in your bazel build command.
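
    Concretely, that recommendation turns into a build command like this (a sketch matching the docs):

    bazel build -c opt --copt=-march=native //tensorflow/tools/pip_package:build_pip_package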

  • 2020-11-22 04:41

    To hide those warnings, you can set this environment variable before your actual code:

    import os
    # '2' filters out INFO and WARNING messages; must be set before importing tensorflow
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
    import tensorflow as tf
    