There seems to be a bug in Relay's convolution ops: they do not support an input and weight with mixed data types (uint8 and int8).


For project reasons, we need to quantize the convolution input from float32 to uint8 and the weight from float32 to int8 using the TVM Relay quantize code. But it raises the following error:
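For context, this is roughly how we reach the mixed uint8/int8 setting. This is a hedged sketch: `func` and `params` are placeholder names for our Relay function and its parameters, and the exact `qconfig` fields (`dtype_input`, `dtype_weight`, `dtype_activation`) vary between TVM versions.

```python
from tvm import relay
from tvm.relay import quantize

# Quantize the input to uint8 and the weight to int8, which is the
# mixed-dtype combination that triggers the unification error below.
# `func` and `params` are placeholders for an existing Relay function
# and its parameter dict.
with quantize.qconfig(dtype_input="uint8",
                      dtype_weight="int8",
                      dtype_activation="int32"):
    qfunc = quantize.quantize(func, params=params)
```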

tvm/src/relay/pass/ Check failed: resolved.defined() Unable to unify parent types: TensorType([64, 3, 7, 7], uint8) and TensorType([64, 3, 7, 7], int8)

After changing

"reporter->Assign(types[1], TensorTypeNode::make(wshape, data->dtype));" to "reporter->Assign(types[1], TensorTypeNode::make(wshape, weight->dtype));",

the infer_type pass succeeds, but a new error appears, shown below. I will create a new question for it, thank you very much!

Traceback (most recent call last):
File "/home/ai/solomon/workspace/code/tvm/python/tvm/relay/backend/", line 76, in lower
return _backend._CompileEngineLower(self, key)
File "/home/ai/solomon/workspace/code/tvm/python/tvm/_ffi/_ctypes/", line 185, in __call__
ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
File "/home/ai/solomon/workspace/code/tvm/python/tvm/_ffi/", line 71, in check_call
raise TVMError(py_str(_LIB.TVMGetLastError()))
tvm._ffi.base.TVMError: TVMCall CFunc Error:
Traceback (most recent call last):
File "/home/ai/solomon/workspace/code/tvm/python/tvm/_ffi/_ctypes/", line 55, in cfun
rv = local_pyfunc(*pyargs)
File "/home/ai/solomon/workspace/code/tvm/python/tvm/relay/op/nn/", line 337, in compute_contrib_conv2d_NCHWc
data_layout, out_layout, out_dtype)
File "", line 2, in conv2d_NCHWc
File "/home/ai/solomon/workspace/code/tvm/python/tvm/", line 356, in dispatch_func
return dispatch_dict[k](*args, **kwargs)
File "", line 2, in config_dispatcher
File "/home/ai/solomon/workspace/code/tvm/python/tvm/autotvm/task/", line 199, in dispatch_func
return dispatch_dict['direct'](cfg, *args, **kwargs)
File "/home/ai/solomon/workspace/code/tvm/python/tvm/autotvm/task/", line 267, in template_call
node = f(cfg, *args, **kwargs)
File "/home/ai/solomon/workspace/code/tvm/topi/python/topi/x86/", line 377, in _declaration_conv_NCHWc
oc_chunk, _, kernel_height, kernel_width, _, oc_bn, _ = get_const_tuple(kernel.shape)
ValueError: not enough values to unpack (expected 7, got 6)

How can I fix this bug? Thank you very much.


The second problem should not be related to the first. Are you using opt_level=3 before running the quantization pass?

Either way, you can try to work around this by using opt_level < 3 to disable the layout transformation.
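The workaround could look something like the following sketch. It assumes a Relay function `func` and parameter dict `params` already exist (placeholder names, not from the post), and uses the older `relay.build_config` API; newer TVM versions use `tvm.transform.PassContext` instead.

```python
from tvm import relay

# AlterOpLayout is an opt_level-3 pass; building at opt_level = 2 keeps
# it disabled, so conv2d is not rewritten into the NCHWc form whose
# 7-element kernel shape unpacking fails above.
with relay.build_config(opt_level=2):
    graph, lib, out_params = relay.build(func, target="llvm", params=params)
```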


I changed the "AlterOpLayout" level from 3 to 4 and used opt_level = 3 after running the quantize pass (first, optimize(opt_level = 3); then, quantize; finally, with opt_level = 3). It passes now, thank you very much.
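For anyone following along, the sequence described above could be sketched as below. This is a hedged reconstruction for an older TVM API: `func` and `params` are placeholders, the entry points and return values of `relay.optimize` differ across versions, and raising "AlterOpLayout" from 3 to 4 is a manual edit to the pass-level table in the Relay build module, shown here only as a comment.

```python
from tvm import relay
from tvm.relay import quantize

# Step 1: optimize at opt_level = 3 before quantizing.
with relay.build_config(opt_level=3):
    func = relay.optimize(func, target="llvm", params=params)

# Step 2: run the quantize pass.
with quantize.qconfig():
    func = quantize.quantize(func, params=params)

# Step 3: build at opt_level = 3. With "AlterOpLayout" manually raised
# to level 4 in the pass-level table, it stays disabled here, avoiding
# the NCHWc kernel-shape error.
with relay.build_config(opt_level=3):
    graph, lib, out_params = relay.build(func, target="llvm")
```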