mini-batch

How to generate custom mini-batches using Tensorflow 2.0, such as those in the paper “In defense of the triplet loss”?

给你一囗甜甜゛ Submitted on 2021-01-28 07:03:55
Question: I want to implement a custom mini-batch generator in Tensorflow 2.0 using the tf.data.Dataset API. Concretely, I have image data: 100 classes with ~200 examples each. For each mini-batch, I want to randomly sample P classes, and K images from each class, for a total of P*K examples per mini-batch (as described in the paper In Defense of the Triplet Loss for Person Re-Identification). I've been searching through the documentation for tf.data.Dataset, but can't seem to find the right method. …
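One possible recipe in TF 2.0 is to keep a shuffled, repeating per-class dataset for every class and route between them with tf.data.experimental.choose_from_datasets, driven by a choice dataset that emits each randomly drawn class index K times in a row, so that every batch of P*K elements contains K examples from each of P classes. A minimal sketch, assuming a hypothetical paths_by_class dict (class id -> list of JPEG paths) and arbitrary values for P, K and the image size:

import tensorflow as tf

P, K = 16, 4  # classes per batch, images per class (hypothetical values)

def load_image(path, label):
    # Decode and resize a single image; assumes JPEG files on disk.
    img = tf.io.decode_jpeg(tf.io.read_file(path), channels=3)
    return tf.image.resize(img, [128, 64]), label

# One endlessly repeating, shuffled dataset of (path, label) pairs per class.
per_class_datasets = [
    tf.data.Dataset.from_tensor_slices((paths, [label] * len(paths)))
        .shuffle(len(paths))
        .repeat()
    for label, paths in paths_by_class.items()
]
num_classes = len(per_class_datasets)

# Choice dataset: a random permutation of class indices, each index repeated
# K times, so K consecutive elements always come from the same class.
choice = (tf.data.Dataset.range(num_classes)
          .shuffle(num_classes)
          .repeat()
          .flat_map(lambda c: tf.data.Dataset.from_tensors(c).repeat(K)))

dataset = (tf.data.experimental.choose_from_datasets(per_class_datasets, choice)
           .map(load_image, num_parallel_calls=tf.data.experimental.AUTOTUNE)
           .batch(P * K)
           .prefetch(1))

One caveat: when a batch straddles the boundary between two permutations of the class indices, a class can occasionally appear twice in the same batch; if strict uniqueness of the P classes matters, the choice dataset needs a little extra bookkeeping.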

Zero predictions despite masking support for zero-padded mini batch LSTM training in keras

雨燕双飞 Submitted on 2019-12-24 03:46:10
Question: Problem statement: I'm training a many-to-many LSTM in keras with tensorflow backend (tf version 1.13.1) on tagged text sequences to predict the tag of each element in the sequence, using pretrained GloVe embeddings. My training regime involves mini-batch stochastic gradient descent, with each mini-batch matrix zero-padded column-wise to ensure equal-length input to the network. Crucially, because of custom constraints on my mini batches due to the nature of the task and the data, I am not using …
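For reference, the standard Keras pattern for this setting (when its constraints allow it) is a Masking layer in front of the LSTM, so that zero-padded timesteps are skipped by the recurrence and excluded from the loss. A minimal sketch with hypothetical values for the padded length, the GloVe dimension and the tag set size, and random arrays standing in for a real zero-padded mini batch:

import numpy as np
from tensorflow.keras import layers, models

max_len, embed_dim, num_tags = 50, 100, 17  # hypothetical sizes

model = models.Sequential([
    # Timesteps whose embedding vector is exactly all zeros are masked:
    # the LSTM skips them and they do not contribute to the loss.
    layers.Masking(mask_value=0.0, input_shape=(max_len, embed_dim)),
    layers.LSTM(128, return_sequences=True),
    layers.TimeDistributed(layers.Dense(num_tags, activation='softmax')),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# A fake zero-padded mini batch of pre-looked-up GloVe vectors and one-hot tags.
batch_size = 32
lengths = np.random.randint(5, max_len + 1, size=batch_size)
x_batch = np.zeros((batch_size, max_len, embed_dim), dtype=np.float32)
y_batch = np.zeros((batch_size, max_len, num_tags), dtype=np.float32)
for i, L in enumerate(lengths):
    x_batch[i, :L] = np.random.randn(L, embed_dim)
    y_batch[i, np.arange(L), np.random.randint(num_tags, size=L)] = 1.0

model.train_on_batch(x_batch, y_batch)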

Tensorflow: Convolutions with different filter for each sample in the mini-batch

夙愿已清 Submitted on 2019-12-04 06:11:55
I would like to have a 2d convolution with a filter which depends on the sample in the mini-batch in tensorflow. Any ideas how one could do that, especially if the number of samples per mini-batch is not known? Concretely, I have input data inp of the form MB x H x W x Channels, and I have filters F of the form MB x fh x fw x Channels x OutChannels. It is assumed that inp = tf.placeholder('float', [None, H, W, channels_img], name='img_input'). I would like to do tf.nn.conv2d(inp, F, strides=[1,1,1,1]), but this is not allowed because F cannot have a mini-batch dimension. Any idea how to …
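One way around the restriction that tf.nn.conv2d takes a single filter for the whole batch is to map an ordinary convolution over the leading dimension with tf.map_fn, which also works when the mini-batch size is unknown at graph-construction time. A minimal sketch in the TF 1.x placeholder style of the question, with hypothetical spatial and channel sizes:

import tensorflow as tf

H, W, channels_img, fh, fw, out_channels = 32, 32, 3, 5, 5, 8  # hypothetical sizes

inp = tf.placeholder('float', [None, H, W, channels_img], name='img_input')
F = tf.placeholder('float', [None, fh, fw, channels_img, out_channels], name='filters')

def conv_one_sample(args):
    x, f = args                     # x: [H, W, C], f: [fh, fw, C, OutC]
    x = tf.expand_dims(x, 0)        # add a batch dimension of 1
    y = tf.nn.conv2d(x, f, strides=[1, 1, 1, 1], padding='SAME')
    return tf.squeeze(y, axis=0)    # back to [H, W, OutC]

# map_fn iterates over the leading (mini-batch) dimension of both tensors,
# so the batch size can stay None at graph-construction time.
out = tf.map_fn(conv_one_sample, (inp, F), dtype=tf.float32)

This convolves the samples one at a time, so it trades throughput for flexibility; a known faster alternative is to fold the mini-batch into the channel dimension and use a depthwise convolution, at the cost of extra reshaping.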

Tensorflow: create minibatch from numpy array > 2 GB

喜你入骨 Submitted on 2019-12-04 04:34:16
Question: I am trying to feed minibatches of numpy arrays to my model, but I'm stuck with batching. Using 'tf.train.shuffle_batch' raises an error because the 'images' array is larger than 2 GB. I tried to work around it and create placeholders, but when I try to feed the arrays they are still represented by tf.Tensor objects. My main concern is that I defined the operations under the model class and the objects don't get called before running the session. Does anyone have an idea how to handle this?
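The usual way to keep a large numpy array out of the GraphDef (and thus under the 2 GB limit) is to build a tf.data.Dataset from placeholders and feed the array once when initializing the iterator, instead of embedding it as a constant. A minimal TF 1.x sketch, with hypothetical small images/labels arrays standing in for the real (larger than 2 GB) data:

import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for the real arrays, which in the question exceed 2 GB.
images = np.random.rand(1000, 64, 64, 3).astype(np.float32)
labels = np.random.randint(0, 10, size=1000).astype(np.int64)

images_ph = tf.placeholder(images.dtype, images.shape)
labels_ph = tf.placeholder(labels.dtype, labels.shape)

dataset = (tf.data.Dataset.from_tensor_slices((images_ph, labels_ph))
           .shuffle(1000)
           .batch(32)
           .repeat())
iterator = dataset.make_initializable_iterator()
next_images, next_labels = iterator.get_next()

with tf.Session() as sess:
    # Feeding through placeholders keeps the arrays out of the graph,
    # avoiding the 2 GB GraphDef limit.
    sess.run(iterator.initializer,
             feed_dict={images_ph: images, labels_ph: labels})
    batch_imgs, batch_lbls = sess.run([next_images, next_labels])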