Question
I have the following situation:
- I want to deploy a face detector model using TensorFlow Serving: https://www.tensorflow.org/serving/.
- In TensorFlow Serving, there is a command line option called --enable_batching. This causes the model server to automatically batch requests together to maximize throughput. I want this to be enabled.
- My model takes in a set of images (called images), which is a tensor of shape (batch_size, 640, 480, 3).
- The model has two outputs: (number_of_faces, 4) and (number_of_faces,). The first output will be called faces. The last output, which we can call partitions, is the index in the original batch for the corresponding face. For example, if I pass in a batch of 4 images and get 7 faces, then I might have this tensor as [0, 0, 1, 2, 2, 2, 3]. The first two faces correspond to the first image, the third face to the second image, the third image has three faces, and so on.
My issue is this:
- In order for the --enable_batching flag to work, the output from my model needs to have the same 0th dimension as the input. That is, I need a tensor of shape (batch_size, ...). I suppose this is so that the model server can know which gRPC connection to send each output in the batch towards.
- What I want to do is convert the output tensor from the face detector from shape (number_of_faces, 4) to shape (batch_size, None, 4). That is, an array of batches, where each batch can have a variable number of faces (e.g. one image in the batch may have no faces, and another might have 3). See the plain-Python sketch after this list.
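To make the goal concrete, here is the regrouping in plain Python (not TensorFlow code; the box values are made up):

# Pure-Python illustration of the desired regrouping (values made up)
faces = [[0.1, 0.2, 0.3, 0.4]] * 7   # 7 face boxes
partitions = [0, 0, 1, 2, 2, 2, 3]   # image index for each box
batch_size = 4

grouped = [[face for face, p in zip(faces, partitions) if p == i]
           for i in range(batch_size)]
# Per-image face counts: [2, 1, 3, 1] -- a ragged structure, which a
# single dense tensor cannot represent without some form of padding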
What I tried: tf.dynamic_partition. On the surface, this function looks perfect. However, I ran into difficulties after realizing that the num_partitions parameter cannot be a tensor, only an integer:

tensorflow_serving_output = tf.dynamic_partition(faces, partitions, batch_size)

If the tf.dynamic_partition function were to accept tensor values for num_partitions, then it seems that my problem would be solved. However, I am back to square one since this is not the case.
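For reference, this is what tf.dynamic_partition gives you when num_partitions is a static Python integer (a minimal sketch with made-up values; it is exactly the behavior I want, except that I would need to pass batch_size as a tensor):

import tensorflow as tf

faces = tf.constant([[0.1, 0.2, 0.3, 0.4]] * 7)  # 7 face boxes (values made up)
partitions = tf.constant([0, 0, 1, 2, 2, 2, 3])  # image index for each box

# Works only because num_partitions is the literal 4, not a tensor
split = tf.dynamic_partition(faces, partitions, 4)
# split is a Python list of 4 tensors with shapes (2, 4), (1, 4), (3, 4), (1, 4)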
Thank you all for your help! Let me know if anything is unclear.
P.S. The original post included a visual representation of the intended process (image not reproduced here).
Answer 1:
I ended up finding a solution to this using TensorArray and tf.while_loop:
import tensorflow as tf

def batch_reconstructor(tensor, partitions, batch_size):
    """
    Take a tensor of shape (number_of_faces, 4) and a 1-D partitions tensor, as well as the scalar batch_size,
    and reconstruct a TensorArray that preserves the original batching.

    From the partitions, we can get the maximum number of faces within any one image. This will inform the padding we need to use.

    Params:
    - tensor: The tensor to convert to a batch
    - partitions: A list of batch indices. The tensor at position i corresponds to batch # partitions[i]
    - batch_size: The number of images in the original batch
    """
    # The accumulator dtype must match the face boxes (float32), since that is
    # what gets written into it below
    tfarr = tf.TensorArray(tf.float32, size=batch_size, infer_shape=False)

    _, _, count = tf.unique_with_counts(partitions)
    maximum_tensor_size = tf.cast(tf.reduce_max(count), tf.int32)

    # Index of the padding row: one past the last real row of `tensor`
    padding_tensor_index = tf.cast(tf.gather(tf.shape(tensor), 0), tf.int32)

    padding_tensor = tf.expand_dims(tf.cast(tf.fill([4], -1), tf.float32), axis=0)  # fill with [-1, -1, -1, -1]
    tensor = tf.concat([tensor, padding_tensor], axis=0)

    def cond(i, acc):
        return tf.less(i, batch_size)

    def body(i, acc):
        # Rows of `tensor` belonging to image i
        partition_indices = tf.reshape(tf.cast(tf.where(tf.equal(partitions, i)), tf.int32), [-1])
        partition_size = tf.gather(tf.shape(partition_indices), 0)

        # Concat the partition_indices with padding_size copies of padding_tensor_index
        padding_size = tf.subtract(maximum_tensor_size, partition_size)
        padding_indices = tf.reshape(tf.fill([padding_size], padding_tensor_index), [-1])

        partition_indices = tf.concat([partition_indices, padding_indices], axis=0)

        return (tf.add(i, 1), acc.write(i, tf.gather(tensor, partition_indices)))

    _, reconstructed = tf.while_loop(
        cond,
        body,
        (tf.constant(0), tfarr),
        name='batch_reconstructor'
    )

    reconstructed = reconstructed.stack()
    return reconstructed
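For example, calling it on a tiny batch looks like this (a minimal sketch in TF1 graph mode; the boxes are made up). Images with fewer faces than the per-batch maximum come back padded with [-1, -1, -1, -1] rows, which the client can filter out afterwards:

faces = tf.constant([[1., 2., 3., 4.],
                     [5., 6., 7., 8.],
                     [9., 10., 11., 12.]])  # 3 detected boxes (values made up)
partitions = tf.constant([0, 0, 2])  # image 0 has 2 faces, image 1 none, image 2 one

batched = batch_reconstructor(faces, partitions, batch_size=3)

with tf.Session() as sess:
    print(sess.run(batched).shape)  # (3, 2, 4): every image padded to 2 faces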
Source: https://stackoverflow.com/questions/46267278/organizing-tensor-into-batches-of-dynamically-shaped-tensors