Tensorflow LSTM Error (ValueError: Shapes must be equal rank, but are 2 and 1 )

流过昼夜 submitted on 2020-01-24 12:43:26

Question


I know this question has been asked many times, but I am fairly new to TensorFlow and none of the previous threads solved my issue. I am trying to implement an LSTM that classifies series of sensor data. I want the data classified as 0 or 1, so it is a binary classifier. I have 2539 samples in total; each sample has 555 time steps, and each time step carries 9 features, so my input has shape (2539, 555, 9). For each sample I have a label of 0 or 1, held in a label array of shape (2539, 1). I have prepared the code below, but I get an error about the dimensionality of my logits and labels, and no matter how I reshape them I still get errors.

Can you please help me understand the problem?

 X_train,X_test,y_train,y_test = train_test_split(final_training_set, labels, test_size=0.2, shuffle=False, random_state=42)


epochs = 10
time_steps = 555
n_classes = 2
n_units = 128
n_features = 9
batch_size = 8

x = tf.placeholder('float32', [batch_size, time_steps, n_features])
y = tf.placeholder('float32', [None, n_classes])

###########################################
out_weights=tf.Variable(tf.random_normal([n_units,n_classes]))
out_bias=tf.Variable(tf.random_normal([n_classes]))
###########################################

lstm_layer=tf.nn.rnn_cell.LSTMCell(n_units,state_is_tuple=True)
initial_state = lstm_layer.zero_state(batch_size, dtype=tf.float32)
outputs,states = tf.nn.dynamic_rnn(lstm_layer, x,
                                   initial_state=initial_state,
                                   dtype=tf.float32)


###########################################
output=tf.matmul(outputs[-1],out_weights)+out_bias
print(np.shape(output))

logit = output
logit = (logit, [-1])

cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logit, labels=labels))
optimizer = tf.train.AdamOptimizer().minimize(cost)
with tf.Session() as sess:

        tf.global_variables_initializer().run()
        tf.local_variables_initializer().run()

        for epoch in range(epochs):
            epoch_loss = 0

            i = 0
            for i in range(int(len(X_train) / batch_size)):

                start = i
                end = i + batch_size

                batch_x = np.array(X_train[start:end])
                batch_y = np.array(y_train[start:end])

                _, c = sess.run([optimizer, cost], feed_dict={x: batch_x, y: batch_y})

                epoch_loss += c

                i += batch_size

            print('Epoch', epoch, 'completed out of', epochs, 'loss:', epoch_loss)

        pred = tf.round(tf.nn.sigmoid(logit)).eval({x: np.array(X_test), y: np.array(y_test)})

        f1 = f1_score(np.array(y_test), pred, average='macro')

        accuracy=accuracy_score(np.array(y_test), pred)


        print("F1 Score:", f1)
        print("Accuracy Score:",accuracy)

This is the error:

ValueError: Shapes must be equal rank, but are 2 and 1
From merging shape 0 with other shapes. for 'logistic_loss/logits' (op: 'Pack') with input shapes: [555,2], [1].
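
For context on where the 'Pack' op in the traceback comes from: the line `logit = (logit, [-1])` builds a Python tuple rather than reshaping anything (it was presumably meant to be `tf.reshape(logit, [-1])`). When that tuple reaches `sigmoid_cross_entropy_with_logits`, TensorFlow auto-converts it to a single tensor by stacking its elements (the Pack op), which requires all elements to have the same rank — here a rank-2 tensor and a rank-1 list, hence "Shapes must be equal rank, but are 2 and 1". The `[555, 2]` shape also hints that `outputs[-1]` is selecting the last *batch element* (shape `(time_steps, n_units)`) rather than the last time step (`outputs[:, -1, :]`). A rough NumPy analogue of the failing stack:

```python
import numpy as np

logits = np.zeros((555, 2))  # rank 2, like the [555,2] in the error

try:
    # mimics TF auto-packing the tuple (logit, [-1]) into one tensor
    np.stack((logits, [-1]))
except ValueError as e:
    print("stack failed:", e)  # shapes/ranks differ, so stacking is impossible
```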


Answer 1:


Just an update: the problem was with the shape of the labels. After one-hot encoding the labels, making them two-dimensional, the problem was solved.
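
A minimal NumPy sketch of that fix (the helper name `one_hot` is my own; in the question's TF 1.x graph you could equally use `tf.one_hot`): it turns the `(N, 1)` array of 0/1 labels into an `(N, 2)` array so each row matches the `(batch_size, n_classes)` logits.

```python
import numpy as np

def one_hot(labels, n_classes=2):
    """Convert an (N, 1) or (N,) array of class indices to (N, n_classes)."""
    labels = np.asarray(labels).reshape(-1)          # flatten to (N,)
    out = np.zeros((labels.shape[0], n_classes), dtype=np.float32)
    out[np.arange(labels.shape[0]), labels] = 1.0    # set the hot column per row
    return out

y = np.array([[0], [1], [1], [0]])
print(one_hot(y).shape)  # (4, 2)
```

With two-dimensional one-hot labels, the `labels` argument of the loss has the same `(batch, 2)` shape as the logits; for mutually exclusive classes like this, `tf.nn.softmax_cross_entropy_with_logits` is the usual companion to one-hot labels.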



Source: https://stackoverflow.com/questions/54806450/tensorflow-lstm-error-valueerror-shapes-must-be-equal-rank-but-are-2-and-1
