Given the two examples below, is there a performance improvement when autographing the tf.data.Dataset, i.e. iterating over it inside a tf.function?
Dataset not in tf.function
Adding @tf.function does add a significant speedup. Take a look at this:
import tensorflow as tf

data = tf.random.normal((1000, 10, 10, 1))
dataset = tf.data.Dataset.from_tensors(data).batch(10)

def iterate_1(dataset):
    # Plain eager iteration over the dataset; the loop body is a no-op,
    # so we only measure the cost of iterating.
    for x in dataset:
        x = x

@tf.function
def iterate_2(dataset):
    # Same loop, but traced by AutoGraph and executed as a graph.
    for x in dataset:
        x = x

%timeit -n 1000 iterate_1(dataset)  # 1.46 ms ± 8.2 µs per loop
%timeit -n 1000 iterate_2(dataset)  # 239 µs ± 10.2 µs per loop
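If you are not in an IPython/Jupyter session, the %timeit magic is not available. A minimal sketch of the same comparison using Python's standard timeit module, reusing the iterate_1 / iterate_2 and dataset definitions above (the warm-up call is an assumption I add so that tracing of iterate_2 is not counted in the measurement):

import timeit

# Warm up: the first call to iterate_2 traces the function into a graph,
# so it is excluded from the timed runs.
iterate_2(dataset)

eager_s = timeit.timeit(lambda: iterate_1(dataset), number=1000)
graph_s = timeit.timeit(lambda: iterate_2(dataset), number=1000)

print(f"eager:       {eager_s / 1000 * 1e3:.2f} ms per call")
print(f"tf.function: {graph_s / 1000 * 1e3:.2f} ms per call")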
As you can see, iterating with @tf.function is more than six times faster.
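One detail worth keeping in mind when benchmarking this: the first call to a tf.function traces the Python body into a graph, and later calls reuse that graph. A small sketch to observe this (the iterate_3 name and the print message are just for illustration):

@tf.function
def iterate_3(dataset):
    # Python-level side effects such as print() only run while the
    # function is being traced, not on later graph executions.
    print("tracing iterate_3")
    for x in dataset:
        x = x

iterate_3(dataset)  # prints "tracing iterate_3" (first call: trace + run)
iterate_3(dataset)  # no print: the cached graph is executed directly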