Creating TfRecords from a list of strings and feeding a Graph in tensorflow after decoding

野的像风 2021-02-06 13:59

The aim was to create a database of TfRecords. Given: I have 23 folders, each containing 7500 images, and 23 text files, each with 7500 lines describing the features of the 7500 images i

1 Answer
  •  日久生厌  2021-02-06 14:45

    To solve this problem, both the coordinator and the queue runners have to be started inside a Session. Additionally, since the number of epochs is tracked internally by the string_input_producer, it is stored as a local variable rather than a global one, so that local variable must be initialized before the queue runners start enqueuing the file names into the Queue. Here is the code:

    filename_queue = tf.train.string_input_producer(tfrecords_filename, num_epochs=num_epoch, shuffle=False, name='queue')
    reader = tf.TFRecordReader()
    
    key, serialized_example = reader.read(filename_queue)
    features = tf.parse_single_example(
        serialized_example,
        # Defaults are not specified since both keys are required.
        features={
            'height': tf.FixedLenFeature([], tf.int64),
            'width': tf.FixedLenFeature([], tf.int64),
            'image_raw': tf.FixedLenFeature([], tf.string),
            'annotation_raw': tf.FixedLenFeature([], tf.string)
        })
    ...
    # The num_epochs counter created by string_input_producer is a *local*
    # variable, so local variables must be initialized as well.
    init_op = tf.group(tf.local_variables_initializer(),
                       tf.global_variables_initializer())
    with tf.Session() as sess:
        sess.run(init_op)
    
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)
    

    And now it should work.
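
    For reference, here is a minimal sketch of how that session block could be completed. It assumes that image and annotation are the tensors decoded from features (one way to produce them is sketched further below), and the loop body is only a placeholder for your own consumption code:

    with tf.Session() as sess:
        sess.run(init_op)

        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(coord=coord)

        try:
            while not coord.should_stop():
                # Consume one example per iteration (or one batch, if the
                # batching ops below are used instead).
                img, ann = sess.run([image, annotation])
        except tf.errors.OutOfRangeError:
            # Raised once num_epoch epochs of file names have been dequeued.
            print('Done -- epoch limit reached.')
        finally:
            coord.request_stop()
            coord.join(threads)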

    Now, to gather a batch of images before feeding them into the network, we can use tf.train.shuffle_batch or tf.train.batch. Both work; the difference is simply that one shuffles the images and the other does not. Note, however, that using more than one thread with tf.train.batch may still shuffle the data samples, because of the race between the threads that enqueue file names. In any case, the following code should be inserted directly after initializing the Queue:

    min_after_dequeue = 100
    num_threads = 1
    # min_after_dequeue only matters for tf.train.shuffle_batch; here it simply
    # pads the queue capacity.
    capacity = min_after_dequeue + num_threads * batch_size
    label_batch, images_batch = tf.train.batch(
        [annotation, image],
        shapes=[[], [112, 112, 3]],
        batch_size=batch_size,
        capacity=capacity,
        num_threads=num_threads)
    

    Note that the shapes of the tensors may be different in your case. Here it happened that the reader was decoding a colored image of size [112, 112, 3], and the annotation has shape [] (a scalar string; there is no deeper reason, that was just this particular case).
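
    For completeness, here is one way to turn the parsed features into the image and annotation tensors that are batched above. This is only a sketch, assuming the image was serialized as raw uint8 bytes and the annotation as a plain byte string when the TFRecords were written:

    # Decode the raw image bytes and give the tensor its static shape,
    # which must match the [112, 112, 3] entry passed to `shapes`.
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    image = tf.reshape(image, [112, 112, 3])

    # The annotation stays a scalar tf.string tensor (shape []),
    # matching the [] entry passed to `shapes`.
    annotation = features['annotation_raw']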

    Finally, the tf.string datatype can be treated as a string. In reality, after evaluating the annotation tensor, you will see that it is handled as a binary string (this is how it is really treated in TensorFlow). In my case that string was just a comma-separated set of features related to that particular image. So, in order to extract a specific feature, here is an example:

    # The output of tf.string_split is not a dense tensor; it is a SparseTensor.
    # Its `values` property holds the actual split strings as a 1-D tensor.
    # Note that every character of `delimiter` is treated as a split character,
    # so ', ' splits on both commas and spaces.
    label_batch_splitted = tf.string_split(label_batch, delimiter=', ')
    label_batch_values = tf.reshape(label_batch_splitted.values, [batch_size, -1])
    # string_to_number converts the feature strings into float32, as needed here.
    label_batch_numbers = tf.string_to_number(label_batch_values, out_type=tf.float32)
    # tf.slice extracts the particular feature (column index 3) being looked for.
    confidences = tf.slice(label_batch_numbers, begin=[0, 3], size=[-1, 1])
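
    As a quick sanity check, the resulting tensor can be evaluated like any other. This is a sketch assuming the session, coordinator, and queue runners from the first snippet are already running:

    # Each evaluation dequeues one batch and returns a float32 array
    # of shape (batch_size, 1) holding the selected feature.
    conf_values = sess.run(confidences)
    print(conf_values.shape)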
    

    Hope this answer helps.
