Tensorflow: Feeding every LSTM timestep into the same logit layer (generally, feeding a dynamic number of tensors into one layer)

Submitted by 南笙酒味 on 2019-12-12 04:48:20

Question


I stumbled upon this issue while trying to build an LSTM classifier. Using tf.nn.dynamic_rnn to auto-unfold over time, I get an output lstm_output of size [batch_size, time_steps, number_cells] from the LSTM cell (ignoring the state, which is also returned). This output should, for every timestep, be fed into the same fully connected layer (I planned to use tf.contrib.layers.fully_connected(lstm_output_oneTimestep, numClasses)) to reduce the size from number_cells to number_classes (for use with softmax). If I knew the number of timesteps, I could of course just write time_steps separate nodes into my graph, all sharing the same weights, but that is not only unwieldy, it is also impossible with a dynamic number of timesteps in the LSTM. My question is twofold: First, I remember there was a way to build a dynamic number of similar nodes in TensorFlow quite easily, but I wasn't able to find it despite extensive googling and searching Stack Overflow (I'm aware this sounds ridiculous; I suppose I'm just trying the wrong keywords).
Second, there must be a smarter way? I assume I could just reshape the output of the LSTM to shape [batch_size * max_time, number_cells], feed it into a layer with weights of shape [number_cells, number_classes], and obtain something of shape [batch_size * max_time, number_classes]? If this works, is there a smart way to write this kind of reshaping in TensorFlow?
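Here is a minimal sketch of the reshape-based approach described above, written against the TensorFlow 1.x API the question uses (tf.nn.dynamic_rnn, tf.contrib.layers.fully_connected); the placeholder sizes and variable names are illustrative assumptions, not from the original post:

    import tensorflow as tf

    number_cells = 128    # LSTM hidden size (illustrative)
    number_classes = 10   # number of output classes (illustrative)
    input_dim = 50        # per-timestep feature size (illustrative)

    # inputs: [batch_size, time_steps, input_dim]; batch and time are dynamic.
    inputs = tf.placeholder(tf.float32, [None, None, input_dim])

    cell = tf.nn.rnn_cell.LSTMCell(number_cells)
    lstm_output, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    # lstm_output: [batch_size, time_steps, number_cells]

    # Fold the time axis into the batch axis so a single weight matrix of
    # shape [number_cells, number_classes] is shared across all timesteps.
    flat = tf.reshape(lstm_output, [-1, number_cells])               # [batch*time, cells]
    logits_flat = tf.contrib.layers.fully_connected(
        flat, number_classes, activation_fn=None)                    # [batch*time, classes]

    # Restore the [batch_size, time_steps, number_classes] layout.
    dyn_shape = tf.shape(lstm_output)
    logits = tf.reshape(logits_flat, [dyn_shape[0], dyn_shape[1], number_classes])

Note that in TF 1.x, tf.contrib.layers.fully_connected (and tf.layers.dense) can also be applied to the rank-3 tensor directly, with the weights acting on the last axis; the explicit reshape above simply makes the weight sharing across timesteps visible.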


Answer 1:


(Despite trying for an hour before posting, you find answers right after asking.) If the proposal from subquestion two works, the reshaping can of course be done with tf.convert_to_tensor(tf.unstack(lstm_output)). Mea culpa. Still, most of the question stands.
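For comparison, a hedged sketch of a per-timestep variant (not the one-liner from the answer) that continues the names from the sketch above. It only works when the number of timesteps is statically known, since tf.unstack needs the axis length at graph-construction time (e.g. inputs declared as [None, 20, input_dim]):

    # Unstack along the time axis and apply one shared fully connected layer
    # to each timestep; reuse=(i > 0) makes every step use the same weights.
    steps = tf.unstack(lstm_output, axis=1)           # list of [batch, number_cells]
    per_step_logits = [
        tf.contrib.layers.fully_connected(
            step, number_classes, activation_fn=None,
            scope="shared_logits", reuse=(i > 0))
        for i, step in enumerate(steps)
    ]
    logits = tf.stack(per_step_logits, axis=1)        # [batch, time_steps, number_classes]

The reshape-based version remains preferable when the number of timesteps is dynamic.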



Source: https://stackoverflow.com/questions/46433494/tensorflow-feeding-every-lstm-timestep-into-the-same-logit-layer-generaly-feed
