The "Load a test image" section of the tutorial is wrong: it normalizes the input to 0…2, which is usually not what models expect.

https://docs.tvm.ai/tutorials/frontend/from_tflite.html

image_data[:, :, :, 0] = 2.0 / 255.0 * image_data[:, :, :, 0] - 1

Images come in with data in 0…255. The line above normalizes that to 0…2, which is rarely the case. Usually NNs expect images normalized to 0…1 or left as raw 0…255; MobileNet and other ImageNet-based NNs each have their own expected input range.

Instead, the tutorial should explain to the user that they should normalize to the range expected by their NN.
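A minimal sketch of what such per-model normalization could look like. The helper name and the `mode` labels are illustrative, not part of the tutorial or of TVM:

```python
import numpy as np

def normalize(image_u8, mode):
    """Normalize a uint8 NHWC image to the range a given model expects.
    The mode names here are hypothetical labels for common conventions."""
    x = image_u8.astype(np.float32)
    if mode == "zero_one":        # models trained on [0, 1] input
        return x / 255.0
    if mode == "minus_one_one":   # TF-slim style [-1, 1] input (same as 2/255*x - 1)
        return x / 127.5 - 1.0
    if mode == "raw":             # models trained on raw 0..255 input
        return x
    raise ValueError("unknown normalization mode: %s" % mode)

img = np.random.randint(0, 256, (1, 224, 224, 3), dtype=np.uint8)
print(normalize(img, "zero_one").max() <= 1.0)
print(normalize(img, "minus_one_one").min() >= -1.0)
```

The point is that the correct `mode` depends entirely on how the model was trained, which is what the tutorial should tell the reader to check.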

Additionally, it is WRONG to access pixels in Python. The whole idea was to use Python for data management and topology construction, and to let C/C++ do the heavy lifting such as per-pixel operations. Teaching users to do this in Python is wrong and shouldn't be in the tutorial.

I think you should notice that there is a comment explaining this line in the TensorFlow source: https://github.com/tensorflow/models/blob/edb6ed22a801665946c63d650ab9a0b23d98e1b1/research/slim/preprocessing/inception_preprocessing.py#L243. It describes why this is done, and you should adapt the preprocessing to your own model. BTW, this line does not produce [0, 2]; it produces [-1, 1].
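This is easy to verify by applying the tutorial's formula to every possible uint8 pixel value:

```python
import numpy as np

# All possible 0..255 pixel values, pushed through the tutorial's formula.
x = np.arange(256, dtype=np.float32)
y = 2.0 / 255.0 * x - 1

print(y.min(), y.max())  # -1.0 1.0
```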

In this tutorial, our goal is to show the pipeline: how to load a TFLite model (here, MobileNet) and predict an image correctly. We cannot cover everything in a single tutorial, for example how to preprocess more efficiently using C++.