TVM graph quantization is giving poor accuracy


#1

Hi All,

I tried to quantize the inception model using tvm quantization.
Non-Quantized model accuracy on imagenet validation dataset is 76.1%
Quantized model accuracy on imagenet validation dataset is 15.7%

Below is the script modification we made to enable quantization:

```python
shape_dict = {'DecodeJpeg/contents': (299, 299, 3)}
dtype_dict = {'DecodeJpeg/contents': 'uint8'}
mod, params = relay.frontend.from_tensorflow(graph_def,
                                             layout=layout,
                                             shape=shape_dict)
mod = relay.quantize.quantize(mod['main'], params)
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod,
                                     target=target,
                                     target_host=target_host,
                                     params=params)
m = graph_runtime.create(graph, lib, ctx)
m.set_input('DecodeJpeg/contents', tvm.nd.array(x.astype(dtype)))
m.set_input(**params)
m.run()
```

Could someone correct me if anything is wrong in this script?
Or is current TVM quantization simply giving poor accuracy?


#2

I think @vinx13 @ziheng could help this.


#3

What is the structure of your model? I have similar test results: in my tests, the results are poor for every model except Inception_v3.


#4

@ydy I used InceptionV1 for my test and got the results above.
I am trying to run InceptionV3, but I am hitting some issues with label mapping while measuring accuracy. I am working on this part.
@ziheng @vinx13 Is it a known issue that the accuracy will be poor? And is work ongoing to improve the accuracy?


#5

A recent change may hurt accuracy; see the thread "[Quantization] Problems with recent refactoring changes in the quantization pass".
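
Aside from regressions in the pass itself, quantization accuracy in TVM is sensitive to the settings in `relay.quantize.qconfig`, which the script in #1 leaves at their defaults. Below is a minimal sketch of wrapping the quantize call in a qconfig context; the specific parameter names and values are assumptions based on the relay.quantize API of that era and should be checked against your installed TVM version:

```python
from tvm import relay

# Sketch (config fragment): control quantization via qconfig before
# calling quantize(). Parameter names assumed from TVM's relay.quantize
# API; verify against your TVM version.
with relay.quantize.qconfig(
        nbit_input=8,          # bit width for input tensors
        nbit_weight=8,         # bit width for weights
        global_scale=8.0,      # global activation scale (no calibration dataset)
        skip_conv_layers=[0]): # keep the first conv layer in float, which
                               # often helps preserve accuracy
    mod = relay.quantize.quantize(mod, params=params)
```

With a fixed `global_scale` and no calibration data, large accuracy drops like the 76.1% to 15.7% reported above are plausible; experimenting with these settings (or a per-layer calibration approach, where your TVM version supports it) is a reasonable first debugging step.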