TensorFlow: InternalError: Blas SGEMM launch failed

Backend · Open · 16 answers · 2261 views
清酒与你 2020-12-04 15:13

When I run sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys}) I get InternalError: Blas SGEMM launch failed. Here is the full error and stack trace:

16 Answers
  • 2020-12-04 15:36

    Restarting my Jupyter processes wasn't enough; I had to reboot my computer.

  • 2020-12-04 15:39

TF 2.0 compatible answer: here is the 2.0 version of erko's answer, for the benefit of the community.

import tensorflow as tf

    # Close any stale session left over from a previous run first ...
    if 'session' in locals() and session is not None:
        print('Close interactive session')
        session.close()

    # ... then create a fresh one.
    session = tf.compat.v1.Session()
    
  • 2020-12-04 15:42

In my case, the network filesystem on which libcublas.so lived had simply died. The node was rebooted and everything was fine. Just adding another data point.
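
    If you suspect the same failure mode, a quick sanity check is to try loading cuBLAS directly. This is a minimal sketch, assuming a Linux system; the actual .so name may carry a version suffix such as libcublas.so.10:

    import ctypes

    # Try to load cuBLAS directly; an OSError here points at a missing or
    # unreachable library file rather than at TensorFlow itself.
    try:
        ctypes.CDLL('libcublas.so')
        print('libcublas loaded OK')
    except OSError as err:
        print('libcublas could not be loaded:', err)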

  • 2020-12-04 15:46

My environment is Python 3.5, TensorFlow 0.12 and Windows 10 (no Docker). I train neural networks on both CPU and GPU, and I ran into the same error, InternalError: Blas SGEMM launch failed, whenever training on the GPU.

    I could not find the reason this error happens, but I managed to run my code on the GPU by avoiding the TensorFlow function tensorflow.contrib.slim.one_hot_encoding(). Instead, I do the one-hot encoding in NumPy (for both the input and output variables).

    The following code reproduces the error and the fix. It is a minimal setup to learn the y = x ** 2 function using gradient descent.

    import numpy as np
    import tensorflow as tf
    import tensorflow.contrib.slim as slim
    
    def test_one_hot_encoding_using_tf():
    
        # This function raises the "InternalError: Blas SGEMM launch failed" when run in the GPU
    
        # Initialize
        tf.reset_default_graph()
        input_size = 10
        output_size = 100
        input_holder = tf.placeholder(shape=[1], dtype=tf.int32, name='input')
        output_holder = tf.placeholder(shape=[1], dtype=tf.int32, name='output')
    
        # Define network
        input_oh = slim.one_hot_encoding(input_holder, input_size)
        output_oh = slim.one_hot_encoding(output_holder, output_size)
        W1 = tf.Variable(tf.random_uniform([input_size, output_size], 0, 0.01))
        output_v = tf.matmul(input_oh, W1)
        output_v = tf.reshape(output_v, [-1])
    
        # Define updates
        loss = tf.reduce_sum(tf.square(output_oh - output_v))
        trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
        update_model = trainer.minimize(loss)
    
        # Optimize
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
        steps = 1000
    
        # Force CPU/GPU
        config = tf.ConfigProto(
            # device_count={'GPU': 0}  # uncomment this line to force CPU
        )
    
        # Launch the tensorflow graph
        with tf.Session(config=config) as sess:
            sess.run(init)
    
            for step_i in range(steps):
    
                # Get sample
                x = np.random.randint(0, 10)
                y = np.power(x, 2).astype('int32')
    
                # Update
                _, l = sess.run([update_model, loss], feed_dict={input_holder: [x], output_holder: [y]})
    
            # Check model
            print('Final loss: %f' % l)
    
    def test_one_hot_encoding_no_tf():
    
        # This function does not raise the "InternalError: Blas SGEMM launch failed" when run in the GPU
    
        def oh_encoding(label, num_classes):
            return np.identity(num_classes)[label:label + 1].astype('int32')
    
        # Initialize
        tf.reset_default_graph()
        input_size = 10
        output_size = 100
        input_holder = tf.placeholder(shape=[1, input_size], dtype=tf.float32, name='input')
        output_holder = tf.placeholder(shape=[1, output_size], dtype=tf.float32, name='output')
    
        # Define network
        W1 = tf.Variable(tf.random_uniform([input_size, output_size], 0, 0.01))
        output_v = tf.matmul(input_holder, W1)
        output_v = tf.reshape(output_v, [-1])
    
        # Define updates
        loss = tf.reduce_sum(tf.square(output_holder - output_v))
        trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
        update_model = trainer.minimize(loss)
    
        # Optimize
init = tf.global_variables_initializer()  # initialize_all_variables() is deprecated
        steps = 1000
    
        # Force CPU/GPU
        config = tf.ConfigProto(
            # device_count={'GPU': 0}  # uncomment this line to force CPU
        )
    
        # Launch the tensorflow graph
        with tf.Session(config=config) as sess:
            sess.run(init)
    
            for step_i in range(steps):
    
                # Get sample
                x = np.random.randint(0, 10)
                y = np.power(x, 2).astype('int32')
    
                # One hot encoding
                x = oh_encoding(x, 10)
                y = oh_encoding(y, 100)
    
                # Update
                _, l = sess.run([update_model, loss], feed_dict={input_holder: x, output_holder: y})
    
            # Check model
            print('Final loss: %f' % l)
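
    # Hypothetical entry point (not part of the original answer) so the
    # script can be run as-is; the slim variant is left commented out
    # because it triggers the error on the GPU.
    if __name__ == '__main__':
        # test_one_hot_encoding_using_tf()  # raises InternalError on the GPU
        test_one_hot_encoding_no_tf()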
    
  • 2020-12-04 15:47

    In my case,

First, I ran

    conda clean --all

    to clean up tarballs and unused packages.

    Then I restarted my IDE (PyCharm in this case) and everything worked fine. Environment: Anaconda Python 3.6, Windows 10 64-bit. I installed tensorflow-gpu with the command provided on the Anaconda website.

  • 2020-12-04 15:52

I closed all the other Jupyter sessions that were running and this solved the problem. I think it was a GPU memory issue.
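
    A related workaround, not part of the original answer: with the TF 1.x API you can tell TensorFlow to allocate GPU memory on demand instead of reserving it all at session creation, which makes it easier for several notebooks to share one GPU. A minimal sketch:

    import tensorflow as tf

    # Grow the GPU memory allocation as needed rather than grabbing it all
    # up front, so concurrent processes are less likely to starve each other.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    sess = tf.Session(config=config)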
