lstm

What is the default activation function of CudnnLSTM in TensorFlow?

给你一囗甜甜゛ submitted on 2020-05-14 07:39:25
Question: What is the default activation function of CudnnLSTM in TensorFlow? How can I set an activation function such as relu? Or is it just a linear model? I read the documentation but did not find it. For example, the code is below: lstmcell = tf.contrib.cudnn_rnn.CudnnLSTM(1, encoder_size, direction="bidirectional") hq, _ = lstmcell(query) And I read the TensorFlow documentation from this link. The function is below: __init__( num_layers, num_units, input_mode=CUDNN_INPUT_LINEAR_MODE, direction=CUDNN_RNN
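For context on why CudnnLSTM exposes no activation argument: as far as I can tell, the fused cuDNN kernel hard-wires the canonical LSTM formulation, with sigmoid on the three gates and tanh on the cell candidate and output. A minimal pure-Python sketch of one scalar LSTM cell step (toy weights, no TensorFlow) shows where those fixed activations sit:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w=0.5, u=0.3, b=0.0):
    """One scalar LSTM step with shared toy weights; real cells use matrices."""
    i = sigmoid(w * x + u * h_prev + b)    # input gate   (sigmoid, fixed)
    f = sigmoid(w * x + u * h_prev + b)    # forget gate  (sigmoid, fixed)
    o = sigmoid(w * x + u * h_prev + b)    # output gate  (sigmoid, fixed)
    g = math.tanh(w * x + u * h_prev + b)  # cell candidate (tanh, fixed)
    c = f * c_prev + i * g                 # new cell state
    h = o * math.tanh(c)                   # tanh again on the cell state
    return h, c

h, c = lstm_step(1.0, 0.0, 0.0)
```

Because of the tanh on the output path, the hidden state is always bounded in (-1, 1); swapping in relu here is exactly what the fused kernel does not allow.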

LSTM - predicting on a sliding window data

生来就可爱ヽ(ⅴ<●) submitted on 2020-05-12 07:18:21
Question: My training data is an overlapping sliding window of users' daily data. Its shape is (1470, 3, 256, 18): 1470 batches of 3 days of data, where each day has 256 samples of 18 features each. My target's shape is (1470,): a label value for each batch. I want to train an LSTM to predict [3-day batch] -> [one target]. Days that were missing samples are padded with -10 up to the 256 samples. I've written the following code to build the model: from tensorflow.keras.models import Sequential from
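The windowing and padding described above can be sketched without any framework. This is a hypothetical miniature (DAY_LEN and N_FEATURES are shrunk from the question's 256 and 18 for readability; PAD_VALUE mirrors the -10 padding):

```python
PAD_VALUE = -10.0
DAY_LEN = 4      # 256 in the question
N_FEATURES = 2   # 18 in the question

def pad_day(day):
    """Pad a day's sample list with PAD_VALUE rows up to DAY_LEN rows."""
    missing = DAY_LEN - len(day)
    return day + [[PAD_VALUE] * N_FEATURES] * missing

def sliding_windows(days, window=3):
    """Overlapping windows of `window` consecutive padded days."""
    padded = [pad_day(d) for d in days]
    return [padded[i:i + window] for i in range(len(padded) - window + 1)]

# Four toy days holding 1..4 samples each; short days get padded.
days = [[[float(d)] * N_FEATURES] * (d + 1) for d in range(4)]
windows = sliding_windows(days)
```

Each window then has shape (3, DAY_LEN, N_FEATURES), matching the per-batch layout (3, 256, 18) in the question.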

How to handle LSTMs with many features in python?

醉酒当歌 submitted on 2020-04-16 04:23:19
Question: I have a binary classification problem. I use the following Keras model to do my classification. input1 = Input(shape=(25,6)) x1 = LSTM(200)(input1) input2 = Input(shape=(24,6)) x2 = LSTM(200)(input2) input3 = Input(shape=(21,6)) x3 = LSTM(200)(input3) input4 = Input(shape=(20,6)) x4 = LSTM(200)(input4) x = concatenate([x1,x2,x3,x4]) x = Dropout(0.2)(x) x = Dense(200)(x) x = Dropout(0.2)(x) output = Dense(1, activation='sigmoid')(x) However, the results I get are extremely bad. I thought the
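One thing worth checking in a model like this is the width of the merged representation: concatenate joins the four 200-unit branch outputs end to end into one 800-dimensional vector per sample, which Dense(200) then has to compress. A framework-free sketch of that merge (toy constants standing in for the LSTM activations):

```python
def concatenate(vectors):
    """Join per-branch feature vectors end to end, like Keras concatenate on axis -1."""
    merged = []
    for v in vectors:
        merged.extend(v)
    return merged

# Four branches of 200 units each, as in the question's model.
branches = [[0.1] * 200, [0.2] * 200, [0.3] * 200, [0.4] * 200]
merged = concatenate(branches)  # 4 x 200 -> 800 values per sample
```

With four LSTMs of 200 units feeding an 800-wide merge, the model has a lot of capacity relative to most datasets, so poor results may be as much an overfitting/data question as an architecture one.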

LSTM to classify long time series sequences

允我心安 submitted on 2020-04-11 07:30:31
Question: I am trying to train an LSTM model to classify long time series sequences with 700 time steps. Sample time series belonging to class 1. Sample time series belonging to class 2. Model: model = Sequential() model.add(LSTM(100, input_shape=(700,1), return_sequences=False)) model.add(Dense(1, activation='sigmoid')) sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True) model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy']) history = model.fit(xtrain, y, batch_size = 10,
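Sequences as long as 700 steps are often easier to train when split into shorter subsequences (for example, for a TimeDistributed or CNN-LSTM front end). A hypothetical splitter, assuming the length divides evenly into the chosen chunk size:

```python
def split_sequence(series, chunk_len):
    """Split one (timesteps, 1) series into timesteps // chunk_len chunks."""
    if len(series) % chunk_len != 0:
        raise ValueError("series length must be a multiple of chunk_len")
    return [series[i:i + chunk_len] for i in range(0, len(series), chunk_len)]

series = [[float(t)] for t in range(700)]  # one 700-step, 1-feature series
chunks = split_sequence(series, 100)       # -> 7 chunks of 100 steps each
```

Reshaping (700, 1) into (7, 100, 1) like this keeps every sample but lets the recurrent part operate over 100-step spans instead of 700.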

LSTM having a systematic offset between predictions and ground truth

∥☆過路亽.° submitted on 2020-04-08 18:28:23
Question: Currently I think I'm experiencing a systematic offset in an LSTM model between the predictions and the ground truth values. What is the best approach to continue from here? The model architecture, along with the predictions and ground truth values, is shown below. This is a regression problem where the historical data of the target plus 5 other correlated features X are used to predict the target y. Currently the input sequence n_input is of length 256, where the output sequence n
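A systematic offset like this often means the model has learned a near-copy of a lagged input (the "persistence" solution: predict yesterday's value). One quick diagnostic, sketched here without any library, is to find the shift that minimizes the error between predictions and ground truth; a best shift of 1 or more suggests the model is mostly echoing a previous value rather than forecasting:

```python
def best_shift(y_true, y_pred, max_shift=10):
    """Return the shift s (in steps) that minimizes MSE between
    y_pred delayed-aligned by s and y_true."""
    best_s, best_mse = 0, float("inf")
    n = len(y_true)
    for s in range(max_shift + 1):
        pairs = list(zip(y_true[: n - s], y_pred[s:]))
        mse = sum((t - p) ** 2 for t, p in pairs) / len(pairs)
        if mse < best_mse:
            best_s, best_mse = s, mse
    return best_s

truth = [float(t % 50) for t in range(200)]
lagged_pred = [0.0] + truth[:-1]  # predictions that just echo the value at t-1
```

Here best_shift(truth, lagged_pred) detects the one-step echo, while an unlagged prediction series scores a best shift of 0.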

Keras Handbook (Part 3): Getting started with the Keras functional API

蓝咒 submitted on 2020-04-08 08:30:45
Thanks to the original author for sharing - http://bjbsair.com/2020-04-07/tech-info/30658.html The Keras functional API is the way to define complex models, such as multi-output models, directed acyclic graphs, or models with shared layers. This section of the documentation assumes you are already familiar with the Sequential model. Let's start with a few simple examples. Example 1: a fully connected network. A Sequential model is probably a better choice for implementing such a network, but this example helps illustrate a few basics. A layer instance is callable: it takes a tensor as an argument and returns a tensor. Input tensors and output tensors can both be used to define a Model, and such a model can be trained just like a Keras Sequential model. from keras.layers import Input, Dense from keras.models import Model # this returns a tensor inputs = Input(shape=(784,)) # a layer instance is callable on a tensor, and returns a tensor x = Dense(64, activation='relu')(inputs) x = Dense(64, activation='relu')(x) predictions = Dense(10, activation='softmax')(x) #
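The "a layer instance is callable" idea translates outside Keras too. Here is a hypothetical pure-Python miniature of the same pattern: a callable layer object that maps one vector to another, chained exactly like the Dense layers above (weights are fixed toy values, not trained, and this Dense class is a stand-in, not the Keras one):

```python
import math

class Dense:
    """Toy dense layer: a callable object mapping a vector to `units` outputs."""
    def __init__(self, units, activation=None):
        self.units = units
        self.activation = activation

    def __call__(self, inputs):
        # Fixed toy weights: each output unit offsets the mean of the inputs.
        z = [sum(inputs) / len(inputs) + 0.01 * u for u in range(self.units)]
        if self.activation == "relu":
            return [max(0.0, v) for v in z]
        if self.activation == "softmax":
            exps = [math.exp(v) for v in z]
            total = sum(exps)
            return [e / total for e in exps]
        return z

inputs = [0.5] * 784                       # stands in for Input(shape=(784,))
x = Dense(64, activation="relu")(inputs)   # layer instances are callable
x = Dense(64, activation="relu")(x)        # ...and chainable
predictions = Dense(10, activation="softmax")(x)
```

The chaining mirrors the functional-API flow in the excerpt: each call consumes the previous layer's output and returns a new "tensor" (here, a plain list).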

ValueError: Input 0 is incompatible with layer lstm_13: expected ndim=3, found ndim=4

本小妞迷上赌 submitted on 2020-04-08 00:52:30
Question: I am trying multi-class classification, and here are the details of my training input and output: train_input.shape = (1, 95000, 360) (a 95000-length input array, each element an array of length 360) train_output.shape = (1, 95000, 22) (there are 22 classes) model = Sequential() model.add(LSTM(22, input_shape=(1, 95000,360))) model.add(Dense(22, activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) print(model.summary()) model
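The error in the title comes from the extra leading dimension: a Keras LSTM wants 3-D input of shape (batch, timesteps, features), and input_shape should name only (timesteps, features), so passing (1, 95000, 360) makes Keras see 4 dimensions. A small framework-free sketch of the usual fix, treating each of the 95000 rows as a one-timestep sample (tiny stand-in shapes: 5 samples of 3 features instead of 95000 of 360):

```python
def reshape_for_lstm(data):
    """(1, samples, features) -> (samples, 1, features): one timestep per sample."""
    assert len(data) == 1, "sketch assumes a single leading dummy dimension"
    return [[row] for row in data[0]]

flat = [[float(f) for f in range(3)] for _ in range(5)]  # 5 samples x 3 features
wrapped = [flat]                                          # shape (1, 5, 3), as in the question
lstm_ready = reshape_for_lstm(wrapped)                    # shape (5, 1, 3)
```

After the equivalent NumPy reshape, input_shape=(1, 360) (timesteps, features) matches what the LSTM layer expects.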

How to efficiently connect features to LSTM model?

▼魔方 西西 submitted on 2020-04-07 08:30:49
Question: I have the following LSTM model, where I feed my time series into the LSTM layer. The other input (the dense layer) contains the 10 features I manually extracted from the time series. input1 = Input(shape=(26,6)) x1 = LSTM(100)(input1) input2 = Input(shape=(10,1)) x2 = Dense(50)(input2) x = concatenate([x1,x2]) x = Dense(200)(x) output = Dense(1, activation='sigmoid')(x) model = Model(inputs=[input1,input2], outputs=output) I thought that the performance of my model would hugely
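One subtlety worth checking in this architecture is the second input's shape: with Input(shape=(10,1)), Dense is applied along the last axis and x2 stays 3-D, so concatenating it with the 2-D LSTM output is likely to fail or require a Flatten; declaring the features as a flat (10,) vector gives a clean 2-D Dense(50) branch. A framework-free sketch of the intended merge (toy constants standing in for learned activations):

```python
lstm_out = [0.1] * 100       # x1: LSTM(100) output for one sample, shape (100,)
feature_out = [0.2] * 50     # x2: Dense(50) applied to a flat 10-feature vector

merged = lstm_out + feature_out  # concatenate -> one 150-value vector per sample
```

With both branches 2-D (per-sample vectors), concatenate produces the 150-wide representation that the Dense(200) head presumably expects.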