How to improve data input pipeline performance?

2021-02-04 01:28

I am trying to optimize my data input pipeline. The dataset is a set of 450 TFRecord files of ~70 MB each, hosted on GCS. The job is executed on GCP ML Engine. There is no GPU.

2 Answers
  • 2021-02-04 02:07

    I have a further suggestion to add:

    According to the documentation of interleave(), you can pass a mapping function as its first argument.

    This means one can write:

     AUTOTUNE = tf.data.experimental.AUTOTUNE

     dataset = tf.data.Dataset.list_files(file_pattern)
     dataset = dataset.interleave(
         lambda x: tf.data.TFRecordDataset(x).map(parse_fn, num_parallel_calls=AUTOTUNE),
         cycle_length=AUTOTUNE,
         num_parallel_calls=AUTOTUNE
     )
    

    As I understand it, this maps a parsing function to each shard and then interleaves the results, which eliminates the need for a separate dataset.map(...) call later on.
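
    To check whether the fused interleave/map version actually helps, one can time the iteration of the resulting dataset. Below is a minimal timing sketch; the benchmark helper is illustrative and not part of the original answer:

     import time
     import tensorflow as tf

     def benchmark(dataset, num_steps=1000):
         """Iterate over num_steps elements and report the elapsed wall time."""
         start = time.perf_counter()
         for _ in dataset.take(num_steps):
             pass
         print(f"{num_steps} steps in {time.perf_counter() - start:.2f}s")

     benchmark(dataset)  # compare against the map-after-interleave variant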

  • 2021-02-04 02:32

    Summarizing the solution and the important observations of @AlexisBRENON in the answer section, for the benefit of the community.

    Below are the important observations:

    1. According to this GitHub issue, the TFRecordDataset interleaving is a legacy one, so the interleave() function is better.
    2. batch before map is a good habit: it vectorizes your function and reduces the number of times the mapped function is called.
    3. No need for repeat anymore. Since TF 2.0, the Keras model API supports the dataset API and can use cache (see the SO post).
    4. Switch from a VarLenFeature to a FixedLenSequenceFeature, removing a useless call to tf.sparse.to_dense (see the sketch after this list).
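
    To illustrate point 4: a VarLenFeature parses into a SparseTensor that must be densified by hand, while a FixedLenSequenceFeature with allow_missing=True parses straight into a dense tensor. A minimal sketch, assuming examples is a batch of serialized tf.train.Example protos and the feature name is illustrative:

    # Before: VarLenFeature yields a SparseTensor, requiring an extra conversion.
    sparse_spec = {"features": tf.io.VarLenFeature(tf.float32)}
    parsed = tf.io.parse_example(examples, sparse_spec)
    dense = tf.sparse.to_dense(parsed["features"])  # avoidable extra op

    # After: FixedLenSequenceFeature parses directly to a dense tensor.
    dense_spec = {"features": tf.io.FixedLenSequenceFeature((), tf.float32, allow_missing=True)}
    parsed = tf.io.parse_example(examples, dense_spec)
    dense = parsed["features"]  # already dense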

    Code for the pipeline with improved performance, in line with the above observations, is shown below:

    def build_dataset(file_pattern):
        # List shards, interleave reads, shuffle, batch before map (so the
        # parsing function is vectorized), then cache and prefetch.
        return tf.data.Dataset.list_files(
            file_pattern
        ).interleave(
            tf.data.TFRecordDataset,
            cycle_length=tf.data.experimental.AUTOTUNE,
            num_parallel_calls=tf.data.experimental.AUTOTUNE
        ).shuffle(
            2048
        ).batch(
            batch_size=64,
            drop_remainder=True,
        ).map(
            map_func=parse_examples_batch,
            num_parallel_calls=tf.data.experimental.AUTOTUNE
        ).cache(
        ).prefetch(
            tf.data.experimental.AUTOTUNE
        )
    
    def parse_examples_batch(examples):
        # Batched parsing: FixedLenSequenceFeature avoids tf.sparse.to_dense.
        preprocessed_sample_columns = {
            "features": tf.io.FixedLenSequenceFeature((), tf.float32, allow_missing=True),
            "booleanFeatures": tf.io.FixedLenFeature((), tf.string, ""),
            "label": tf.io.FixedLenFeature((), tf.float32, -1)
        }
        samples = tf.io.parse_example(examples, preprocessed_sample_columns)
        # Unpack the packed boolean bytes into a numeric tensor.
        bits_to_float = tf.io.decode_raw(samples["booleanFeatures"], tf.uint8)
        return (
            (samples['features'], bits_to_float),
            tf.expand_dims(samples["label"], 1)
        )
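
    As a usage note for point 3: since TF 2.0 a tf.keras model can consume this dataset directly, so no repeat() is needed. A minimal sketch, where the GCS file pattern and the model are placeholders, not part of the original answer:

    train_ds = build_dataset("gs://my-bucket/train/*.tfrecord")  # illustrative pattern

    # `model` stands for any compiled tf.keras.Model whose two inputs match the
    # (features, bits) tuple produced by parse_examples_batch.
    model.fit(train_ds, epochs=10)  # Keras infers the epoch length from the finite dataset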
    