TVM gives different results on different machines

Hi, has anyone run into this issue?

I built TVM following the install guide: https://tvm.apache.org/docs/install/from_source.html

However, when I run the demo code in tvm/tutorials/frontend (from_mxnet.py / deploy-prequantized-tflite.py), the results differ noticeably between machines.

For example, with from_mxnet.py:

machine 1: Ubuntu 16.04 / i5-8250U, top1 result is 0.29184738 [282]

machine 2: Ubuntu 16.04 / i5-3570, top1 result is 0.3799484 [282]

I see the same problem with deploy-prequantized-tflite.py (from_tflite.py does not have this problem).

By the way, on the same machine I get the same result regardless of target, e.g. llvm or cuda.

Can anyone offer some suggestions?
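When debugging this kind of mismatch, it can help to compare the full output vectors rather than only the top-1 probability, since the class index can stay stable while the scores drift. A small numpy sketch (the function name `compare_outputs` is mine, not a TVM API):

```python
import numpy as np

def compare_outputs(out_a, out_b, top_k=5):
    """Compare two model output vectors: top-k index agreement and max abs diff."""
    top_a = np.argsort(out_a)[::-1][:top_k]
    top_b = np.argsort(out_b)[::-1][:top_k]
    return {
        "max_abs_diff": float(np.max(np.abs(out_a - out_b))),
        "top_k_match": bool(np.array_equal(top_a, top_b)),
    }

# Toy example: two score vectors that agree on ranking but not on values,
# like the 0.29 vs 0.38 at index 282 reported above.
a = np.array([0.10, 0.29, 0.05, 0.56])
b = np.array([0.08, 0.38, 0.04, 0.50])
print(compare_outputs(a, b, top_k=2))
```

If `top_k_match` holds but `max_abs_diff` is large, the divergence is usually in preprocessing or numerics, not in the model itself.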

I ran into a similar issue when I started using TVM. It was not because of TVM but because of PIL. PIL is quite old, and I found that when it is used to resize an image, it alters the pixel values, which influences the results. I switched to Pillow (I use Python 3; I don’t know if you will get the same results with Python 2.7) and it fixed the issue.
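The resize filter really does change the numbers the model sees: Pillow’s `Image.resize` takes a `resample` argument, and different filters produce different pixel values. A minimal numpy-only sketch (Pillow itself not required here) showing that two common 2x downsampling strategies disagree on the same image:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((4, 4))  # stand-in for a grayscale image

# Nearest-neighbour 2x downsample: keep the top-left pixel of each 2x2 block
nearest = img[::2, ::2]

# Box-filter 2x downsample: average each 2x2 block
box = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# The two resized images differ, so downstream scores differ too
print(np.max(np.abs(nearest - box)))
```

This is why pinning the Pillow version and passing an explicit filter (e.g. `resample=Image.BILINEAR`) makes preprocessing reproducible across machines.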

However, you may hit another issue with quantized models, as TVM only uses int8 as the data type if your device has instructions to manipulate int8.
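One rough way to see whether a given x86 CPU advertises the instruction-set extensions that fast-int8 schedules typically look for is to inspect the `flags` line of `/proc/cpuinfo` on Linux. This is a heuristic only, not TVM’s actual dispatch logic, and the helper name is mine:

```python
def cpu_flags(cpuinfo_text):
    """Parse the first 'flags' line of /proc/cpuinfo (Linux, x86) into a set."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

# Sample text; on a real machine pass open("/proc/cpuinfo").read() instead
sample = "model name : demo\nflags : fpu sse4_1 avx2"
for isa in ("sse4_1", "avx2", "avx512_vnni"):
    print(isa, isa in cpu_flags(sample))
```

An i5-8250U and an i5-3570 support different extensions, so TVM may legalize the quantized graph differently on each, which is one plausible source of divergence.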

Thanks for the reply. This might be the problem. Which Pillow version are you using? I will check this.

TVM’s result and TFLite’s result differ on the same machine, using the quantized models from https://github.com/tensorflow/models/tree/master/research/slim/nets/mobilenet; TVM and TFLite are fed the same data after Pillow preprocessing.
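For quantized models it is usually more meaningful to compare the two runtimes after dequantizing with the output tensor’s scale and zero point, since an off-by-one in uint8 space is only one scale step in float. A sketch with made-up values (the scale, zero point, and outputs below are illustrative, not from the actual model):

```python
import numpy as np

def dequantize(q, scale, zero_point):
    """Map quantized uint8 values back to float: r = scale * (q - zero_point)."""
    return scale * (q.astype(np.int32) - zero_point)

# Hypothetical quantized outputs from the two runtimes
tvm_out = np.array([3, 130, 250], dtype=np.uint8)
tfl_out = np.array([3, 131, 249], dtype=np.uint8)
scale, zero_point = 1.0 / 256, 0

diff = np.abs(dequantize(tvm_out, scale, zero_point)
              - dequantize(tfl_out, scale, zero_point))
print(diff.max())  # a one-unit quantized mismatch is one scale step in float
```

If the max difference stays within a step or two of `scale`, the runtimes only disagree on rounding; anything larger points at a real legalization or preprocessing difference.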

I have checked the multiplier against TFLite’s. There is a slight difference, but I find it has no effect on the result.
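For context, the requantization multiplier being compared here is typically `input_scale * weight_scale / output_scale`, decomposed into a 32-bit fixed-point value plus a shift. A minimal sketch of that decomposition, modelled on TFLite’s `QuantizeMultiplier` (the function name and edge-case handling below are my simplification, assuming a positive multiplier):

```python
import math

def quantize_multiplier(real_multiplier):
    """Decompose real_multiplier into (q, shift) so that
    real_multiplier ~= q * 2**(shift - 31), with q a 31-bit fixed-point value."""
    mantissa, shift = math.frexp(real_multiplier)  # m = mantissa * 2**shift
    q = round(mantissa * (1 << 31))
    if q == (1 << 31):  # mantissa rounded up to 1.0: renormalize
        q //= 2
        shift += 1
    return q, shift

# Example: scales chosen arbitrarily for illustration
m = 0.0072
q, shift = quantize_multiplier(m)
print(q, shift, q * 2.0 ** (shift - 31))  # reconstruction is close to m
```

Tiny differences in this decomposition (last-bit rounding of `q`) shift results by at most one quantization step, which matches the observation that the small multiplier difference has no visible effect.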

I am now going to check the function helper_no_fast_int8_hw_legalization().