Quantize annotate assertion in add_rewrite

Hi,

I am trying to quantize a TensorFlow model, but I hit an assertion in add_rewrite, shown below:

File "/home/tvm/tvm/python/tvm/relay/quantize/_annotate.py", line 263, in add_rewrite   
assert lhs_kind == QAnnotateKind.ACTIVATION TVMError: AssertionError

Any ideas on how to fix this?

Thanks

This line has been removed. https://github.com/dmlc/tvm/blob/master/python/tvm/relay/quantize/_annotate.py#L241

Can you try the latest master?

Hi,

I tried the latest master but I still get the error. In the current master the assertion is actually still there, at line 263.

Which line did you mean was removed?

Thanks

Basically, this assertion means that you are adding a bias to an activation (an int32 result), and there may be some cases missing here. You can add a check on lhs_kind and rhs_kind and then handle that case; see the sketch below.
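For illustration, here is a minimal sketch of what that branch in add_rewrite could look like, reusing the helpers already defined in _annotate.py (_forward_op, QAnnotateExpr, QAnnotateKind). The exact branch layout in your checkout may differ, so treat this as a starting point rather than a fix:

```python
# Sketch: the "both kinds are known" branch of add_rewrite in
# python/tvm/relay/quantize/_annotate.py, extended so that the mixed
# INPUT/ACTIVATION case is handled instead of tripping the assert.
if lhs_kind is not None and rhs_kind is not None:
    if lhs_kind == QAnnotateKind.INPUT and rhs_kind == QAnnotateKind.INPUT:
        # both operands are already quantized to int8
        expr = _forward_op(ref_call, [lhs_expr, rhs_expr])
        return QAnnotateExpr(expr, QAnnotateKind.INPUT)
    if lhs_kind == QAnnotateKind.ACTIVATION and rhs_kind == QAnnotateKind.ACTIVATION:
        # both operands are int32 activations
        expr = _forward_op(ref_call, [lhs_expr, rhs_expr])
        return QAnnotateExpr(expr, QAnnotateKind.ACTIVATION)
    if QAnnotateKind.ACTIVATION in (lhs_kind, rhs_kind):
        # mixed case, e.g. conv2d output (ACTIVATION) + quantized bias (INPUT):
        # this is the combination that currently hits the assert; treat the
        # result as an activation.
        expr = _forward_op(ref_call, [lhs_expr, rhs_expr])
        return QAnnotateExpr(expr, QAnnotateKind.ACTIVATION)
    raise ValueError("unhandled kind combination: {} / {}".format(lhs_kind, rhs_kind))
```

Whether annotating the mixed case as ACTIVATION keeps acceptable accuracy for your model is something you would have to verify after calibration.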

Hi,

Indeed, I have biases in my model. By handling, do you mean skipping those cases, or what actions would I need to take to handle them?

Also, I am not yet 100% sure what lhs and rhs mean in this context, or what QAnnotateKind is. Could you clarify this a bit for me?

Thanks a lot

You should handle it like the other cases. In add_rewrite, lhs and rhs are simply the left and right operands of the add being annotated, and lhs_kind/rhs_kind are their annotation kinds.
QAnnotateKind.ACTIVATION means the expression is the output of a quantized layer (e.g. the output of a quantized conv2d).
QAnnotateKind.INPUT means the expression has already been quantized to int8.
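For reference, QAnnotateKind is a small enum-like class defined in python/tvm/relay/quantize/quantize.py; it looks roughly like this (the exact fields and numeric values can differ between TVM versions, so check your checkout):

```python
# Rough paraphrase of QAnnotateKind from python/tvm/relay/quantize/quantize.py.
class QAnnotateKind(object):
    """Kind of annotation field; each kind gets its own nbit/dtype in qconfig."""
    INPUT = 1       # value quantized to the input dtype (e.g. int8)
    WEIGHT = 2      # quantized weight / constant
    ACTIVATION = 3  # intermediate result of a quantized layer (e.g. int32 conv2d output)
```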

I am still trying to figure out how to handle this. However, I realized that with the NCHW layout there is no issue; it is only with the default layout that I hit the assertion. The problem with NCHW is that I see lots of transpose operations being added to the Relay IR.
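In case it helps, requesting NCHW from the TensorFlow frontend looks roughly like this on my side (a sketch only; the .pb path, input name, and shape are placeholders rather than my actual model):

```python
import tensorflow as tf
from tvm import relay

# Load the frozen graph; "model.pb" and the input shape are placeholders.
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Ask the TF frontend for an NCHW graph instead of the default NHWC layout.
mod, params = relay.frontend.from_tensorflow(
    graph_def, layout="NCHW", shape={"input": (1, 3, 224, 224)}
)

# Quantize with the default configuration.
with relay.quantize.qconfig():
    qmod = relay.quantize.quantize(mod, params)
```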