If you don't specify padding_values, then padded_batch will automatically pad with 0. However, if you want a different value such as -1, you can't rely on the default; you should set padding_values yourself:
dataset = dataset.padded_batch(2, padded_shapes=([None], [None]),
                               padding_values=(tf.constant(-1, dtype=tf.int64),
                                               tf.constant(-1, dtype=tf.int64)))
i, data = dataset.make_one_shot_iterator().get_next()  # assuming a one-shot iterator over the dataset
with tf.Session() as sess:
    print(sess.run([i, data]))
    print(sess.run([i, data]))
[array([[0],
[2]]), array([[ 0, 1, 2, 3],
[ 2, 3, 4, -1]])]
[array([[3],
[9]]), array([[ 3, 6, 5, 4, 3],
[ 3, 9, -1, -1, -1]])]
Explanation
Each entry in padding_values is the padding value to use for the corresponding component of the dataset, so the length of padding_values should equal the length of padded_shapes. In the output above, the second component is padded with -1 up to the longest length in the batch, while the elements of the first component already have equal length and therefore receive no padding. To see that each component gets its own value, pad the second component with -2 instead:
dataset = dataset.padded_batch(2, padded_shapes=([None], [None]),
                               padding_values=(tf.constant(-1, dtype=tf.int64),
                                               tf.constant(-2, dtype=tf.int64)))
i, data = dataset.make_one_shot_iterator().get_next()  # assuming a one-shot iterator over the dataset
with tf.Session() as sess:
    print(sess.run([i, data]))
    print(sess.run([i, data]))
[array([[0],
[2]]), array([[ 0, 1, 2, 3],
[ 2, 3, 4, -2]])]
[array([[3],
[9]]), array([[ 3, 6, 5, 4, 3],
[ 3, 9, -2, -2, -2]])]
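The per-component behaviour above can be sketched in plain Python, without TensorFlow. The padded_batch function below is a hypothetical stand-in for illustration, not the tf.data API: for each component it finds the longest element in the batch and pads shorter elements with that component's padding value.

```python
def padded_batch(elements, padding_values):
    """elements: list of tuples of lists, one tuple per dataset element.
    padding_values: one padding value per component, as in tf.data."""
    batched = []
    for c, pad in enumerate(padding_values):
        component = [list(e[c]) for e in elements]
        max_len = max(len(x) for x in component)
        # pad every element of this component to the batch maximum
        batched.append([x + [pad] * (max_len - len(x)) for x in component])
    return batched

batch = padded_batch(
    [([0], [0, 1, 2, 3]), ([2], [2, 3, 4])],
    padding_values=(-1, -2),
)
# batch[0] == [[0], [2]]                   (equal lengths, no padding)
# batch[1] == [[0, 1, 2, 3], [2, 3, 4, -2]]  (padded with the second value)
```

As in the TensorFlow output above, the first component is untouched because its elements are already the same length, and only the second component's padding value is ever used.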