Can I run a Keras model on GPU?

不知归路 2020-12-02 03:42

I'm running a Keras model with a submission deadline of 36 hours. If I train my model on the CPU it will take approximately 50 hours. Is there a way to run Keras on a GPU?

5 Answers
  • 2020-12-02 04:15

    Yes, you can run Keras models on a GPU. A few things to check first:

    1. Your system has an NVIDIA GPU (AMD doesn't work yet).
    2. You have installed the GPU version of TensorFlow.
    3. You have installed CUDA (see the CUDA installation instructions).
    4. Verify that TensorFlow is running with the GPU (check that the GPU is detected):

    import tensorflow as tf
    sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

    OR

    from tensorflow.python.client import device_lib
    print(device_lib.list_local_devices())
    

    The output will be something like this:

    [
      name: "/cpu:0" device_type: "CPU",
      name: "/gpu:0" device_type: "GPU"
    ]
    

    Once all this is done, your model will run on the GPU.

    To check whether Keras (>=2.1.1) is using the GPU:

    from keras import backend as K
    K.tensorflow_backend._get_available_gpus()
    

    All the best.

  • 2020-12-02 04:19

    Sure. I suppose that you have already installed TensorFlow for GPU.

    You need to add the following block after importing Keras. I am working on a machine that has a 56-core CPU and a GPU.

    import keras
    import tensorflow as tf
    
    
    config = tf.ConfigProto(device_count={'GPU': 1, 'CPU': 56})
    sess = tf.Session(config=config)
    keras.backend.set_session(sess)
    

    Of course, this usage enforces my machine's maximum limits. You can decrease the CPU and GPU counts.
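    If you want to limit GPU consumption rather than device counts, a common alternative (a sketch assuming TensorFlow 1.x, the same API family as the answer above) is to configure GPU memory usage directly:

```python
import tensorflow as tf
import keras

# Assumed TF 1.x session config: let the GPU allocator grow on demand
# instead of grabbing all memory up front, and cap it at 50% of VRAM.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.5
sess = tf.Session(config=config)
keras.backend.set_session(sess)
```

    This is useful when several processes share one GPU, since TensorFlow otherwise reserves nearly all GPU memory at startup.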

  • 2020-12-02 04:32

    See whether your script is running on the GPU in Task Manager. If not, check that your CUDA version is the right one for the TensorFlow version you are using, as the other answers have already suggested.

    Additionally, a proper cuDNN library for your CUDA version is required to run the GPU with TensorFlow. Download/extract it from here and put the DLL (e.g., cudnn64_7.dll) into the CUDA bin folder (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin).
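    Before hunting for DLLs, it can help to confirm that your TensorFlow binary was compiled against CUDA at all. A minimal check (note that `tf.test.is_gpu_available` is the older TF 1.x-era API and is deprecated in newer releases):

```python
import tensorflow as tf

# Was this TensorFlow binary compiled against CUDA/cuDNN?
print("Built with CUDA:", tf.test.is_built_with_cuda())

# Can TensorFlow actually see a usable GPU at runtime? This returns
# False if the cuDNN DLL is missing or the CUDA version doesn't match.
print("GPU available:", tf.test.is_gpu_available())
```

    If the first check is False, you installed a CPU-only build and no amount of DLL copying will help.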

  • 2020-12-02 04:35

    2.0 Compatible Answer: While the above-mentioned answers explain in detail how to use a GPU with a Keras model, I want to explain how it can be done for TensorFlow 2.0.

    To know how many GPUs are available, we can use the code below:

    import tensorflow as tf

    print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
    

    To find out which devices your operations and tensors are assigned to, put tf.debugging.set_log_device_placement(True) as the first statement of your program.

    Enabling device placement logging causes any tensor allocations or operations to be printed. For example, running the code below:

    import tensorflow as tf

    tf.debugging.set_log_device_placement(True)
    
    # Create some tensors
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)
    
    print(c)
    

    gives the Output shown below:

    Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
    tf.Tensor(
    [[22. 28.]
     [49. 64.]], shape=(2, 2), dtype=float32)
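    Device placement can also be forced explicitly with a `tf.device` context. A minimal sketch of the same computation that falls back to the CPU when no GPU is present (the fallback guard is my addition, not part of the original answer):

```python
import tensorflow as tf

# Pick the first GPU if one is visible, otherwise fall back to the CPU.
device = '/GPU:0' if tf.config.experimental.list_physical_devices('GPU') else '/CPU:0'

with tf.device(device):  # pin these ops to the chosen device
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)

print(c)  # same [[22. 28.] [49. 64.]] result, on whichever device was chosen
```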

    For more information, refer to this link.

  • 2020-12-02 04:39

    Of course. If you are running on the TensorFlow or CNTK backend, your code will run on your GPU devices by default. But with the Theano backend, you can use the following Theano flags:

    "THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py"
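    Instead of prefixing every command, the same flags can be set from Python before Theano (or Keras with the Theano backend) is imported; a small sketch:

```python
import os

# Must run before 'import theano' / 'import keras' -- Theano reads
# THEANO_FLAGS only once, at import time; setting it later has no effect.
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32"

# import keras  # now picks up the GPU-enabled Theano configuration
```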
