[Quantization] Problems with recent refactoring changes in the quantization pass


#1

Hi,

After the recent changes to the quantization pass in the following commit, I am facing some issues:

  1. I get the following error:

  File "/home/tvm/tvm/python/tvm/relay/quantize/_partition.py", line 136, in add_partition_function
    if 'cuda' in _target.current_target().keys:
  AttributeError: 'NoneType' object has no attribute 'keys'

  2. If I remove the code that triggers the error in item 1, the accuracy of a previously working quantized model drops to the point that its output is no longer valid.
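For reference, the traceback suggests `_target.current_target()` returned `None` because no target scope was active when the partition function ran. A minimal defensive sketch of the check (pure Python; `is_cuda_target` is a hypothetical helper I made up for illustration, not TVM's API):

```python
def is_cuda_target(target):
    """True only when a target is set and advertises the 'cuda' key.

    Hypothetical guard: in TVM the argument would come from
    _target.current_target(), which can return None when no target
    scope is active, so `.keys` must not be accessed unconditionally.
    """
    if target is None:
        return False
    return 'cuda' in getattr(target, 'keys', ())
```

With a guard like this, a missing target simply falls through to the non-CUDA path instead of raising an AttributeError.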

@vinx13 can you have a look at this? I saw that the mentioned commit touches some accuracy-related aspects.

Thanks


#2

> TVM graph quantization is giving poor accuracy

@tico I can confirm this issue, cc @ziheng


#3

Should be fixed by https://github.com/dmlc/tvm/pull/3792


#4

Which model shows the accuracy drop?


#5

I tested calibration for resnet18 v1. Using a non-power-of-2 scale, top-1 accuracy is 0.51; it raised an error when using a power-of-2 scale.
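For context, a "power-of-2 scale" means rounding each calibration scale to the nearest power of two, which lets the scaling multiplications be lowered to bit shifts. A minimal sketch of the rounding step (the helper name is mine, not TVM's):

```python
import math

def round_scale_to_power_of_2(scale):
    """Round a positive calibration scale to the nearest power of two.

    Hypothetical helper illustrating what "power-of-2 scale" refers to;
    e.g. a calibrated scale of 0.3 becomes 2**-2 = 0.25.
    """
    return 2.0 ** round(math.log2(scale))
```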


#6

resnet18_v1 should be fine with the configuration here: https://github.com/dmlc/tvm/blob/master/tests/python/nightly/quantization/test_quantization_accuracy.py#L141


#7

Likely there is an overflow in int8 addition when custom scales are used.
After I commented out


and

resnet18 v1 works fine (acc 0.69)
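The suspected failure mode is easy to reproduce in isolation: once two int8 addends together exceed 127, the int8 sum wraps around, while accumulating in a wider type does not. A pure-Python sketch (illustrative values, not taken from the model):

```python
def add_int8(x, y):
    """Simulate two's-complement int8 addition: the sum wraps on overflow."""
    s = (x + y) & 0xFF          # keep only the low 8 bits, as int8 hardware would
    return s - 256 if s >= 128 else s

def add_accumulate_wide(x, y):
    """Accumulate in a wider type (e.g. int16/int32): int8 inputs cannot wrap."""
    return x + y
```

Here `add_int8(100, 100)` yields -56 rather than 200, which is exactly the kind of corruption that would wreck accuracy when custom scales push intermediate values near the int8 limits.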


#8

@ziheng shall we add an option to choose whether to use int8 addition, to prevent overflow?


#9

The changes made by #3543 do affect the accuracy, but by adjusting the scale we should be able to achieve matching accuracy. Let me check.


#10

Hi @vinx13, could you check the accuracy of resnet18_v2 with your change? If it makes sense, let’s add an option for whether to use int8 addition.

You can also check the configuration I used here: https://github.com/dmlc/tvm/blob/8dab80c86e26d093bc1d10b9e56d9ef9925295c3/tests/python/nightly/quantization/test_quantization_accuracy.py#L166


#11

resnet18_v2: 0.51 (there is still some accuracy drop; it was around zero before my changes)
resnet50_v2: 0.766


#12

Hi,

@vinx13 @ziheng is there any update on the accuracy issues of the quantization pass?

Thanks