TensorFlow Lite model gives a very different accuracy value compared to the Python model

被撕碎了的回忆 2021-02-05 10:58

I am using TensorFlow 1.10 and Python 3.6.

My code is based on the premade iris classification model provided by TensorFlow, which means I am using a TensorFlow DNN premade classifier.
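For reference, the premade setup in question looks roughly like the sketch below (based on the standard tf.estimator.DNNClassifier iris example; the feature names and hidden_units are taken from that example, not from the asker's actual code):

    import tensorflow as tf

    # Feature columns for the four iris measurements (from the premade estimator example)
    feature_columns = [
        tf.feature_column.numeric_column(key=key)
        for key in ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']
    ]

    # Premade DNN classifier: two hidden layers of 10 units, three iris species
    classifier = tf.estimator.DNNClassifier(
        feature_columns=feature_columns,
        hidden_units=[10, 10],
        n_classes=3)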

2 Answers
  •  再見小時候
    2021-02-05 11:30

    This question is answered here; that answer might help.

    As mentioned in the shared answer, doing some pre-processing on the image before it is fed into "interpreter.invoke()" solves the issue, if that was the problem in the first place.

    To elaborate on that, here is a block quote from the shared link:

    The code below is what I meant by pre-processing:

    import cv2
    import numpy as np

    # file_name, interpreter, input_tensor_index, output() and result
    # are defined earlier in the full script.
    test_image = cv2.imread(file_name)
    # Resize to the network's expected input size (299x299 for Inception v3)
    test_image = cv2.resize(test_image, (299, 299), interpolation=cv2.INTER_AREA)
    # Scale pixels to [0, 1], add a batch dimension, and cast to float32
    test_image = np.expand_dims(test_image / 255, axis=0).astype(np.float32)

    interpreter.set_tensor(input_tensor_index, test_image)
    interpreter.invoke()

    digit = np.argmax(output()[0])
    # print(digit)
    prediction = result[digit]
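
    (Note: output() and result are not defined in the snippet above. A plausible reconstruction, which is my assumption and not part of the original answer, using the standard tf.lite Interpreter API:)

    # Assumption: output() reads the first output tensor, result maps class indices to labels
    output_details = interpreter.get_output_details()
    output = lambda: interpreter.get_tensor(output_details[0]['index'])
    result = [line.strip() for line in open(label_file)]  # label_file is a hypothetical labels-file path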

    As you can see, there are two crucial pre-processing steps applied to the image once it is read using "imread()":

    i) The image should be resized to the "input_height" and "input_width" of the input image/tensor that was used during training. In my case (Inception v3) this was 299 for both "input_height" and "input_width". (Read the documentation of the model for this value, look for this variable in the file you used to train or retrain the model, or query the converted model itself, as shown in the sketch after point ii.)

    ii) The next command in the above code is:

    test_image = np.expand_dims(test_image / 255, axis=0).astype(np.float32)

    I got this from the general formula used in the model code:

    test_image = np.expand_dims((test_image - input_mean) / input_std, axis=0).astype(np.float32)

    Reading the documentation revealed that for my architecture input_mean = 0 and input_std = 255.
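
    Rather than digging through the training files, you can also query the converted .tflite model itself for the expected input shape and dtype. A minimal sketch, assuming the standard tf.lite Interpreter API ("model.tflite" is a placeholder path; in TF 1.10 the class lives under tf.contrib.lite.Interpreter):

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    print(input_details[0]['shape'])  # e.g. [1 299 299 3] -> batch, input_height, input_width, channels
    print(input_details[0]['dtype'])  # e.g. <class 'numpy.float32'> -> float input, so scale pixels as above

    If the reported dtype is uint8, the model is quantized and the /255 float scaling should not be applied; raw 0-255 pixel values are fed instead.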

    Hope this helps.
