TVMError: Check failed: args[i].type() == Int(32): Args to call to halide function must be type Int(32)

I converted an ONNX model (exported from PyTorch) to TVM, and it gives this error:

root@tvm_demo:/workspace/onnx2tvm# python3 from_onnx.py
WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
WARNING:root:Attribute momentum is ignored in relay.sym.batch_norm
WARNING:root:Infering Reshape argument by precompute
Traceback (most recent call last):
  File "from_onnx.py", line 84, in <module>
    sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 1139, in from_onnx
    sym, params = g.from_onnx(graph, opset)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 979, in from_onnx
    op = self._convert_operator(op_name, inputs, attr, opset)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 1085, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/workspace/python/tvm/relay/frontend/onnx.py", line 357, in _impl_v1
    graph, lib, params = tvm.relay.build(func, target="llvm", params=params)
  File "/workspace/python/tvm/relay/build_module.py", line 304, in build
    graph_json, lowered_funcs, params = graph_gen.codegen(func)
  File "/workspace/python/tvm/relay/backend/graph_runtime_codegen.py", line 90, in codegen
    self._codegen(func)
  File "tvm/_ffi/_cython/./function.pxi", line 310, in tvm._ffi._cy3.core.FunctionBase.__call__
  File "tvm/_ffi/_cython/./function.pxi", line 245, in tvm._ffi._cy3.core.FuncCall
  File "tvm/_ffi/_cython/./function.pxi", line 234, in tvm._ffi._cy3.core.FuncCall3
  File "tvm/_ffi/_cython/./base.pxi", line 170, in tvm._ffi._cy3.core.CALL
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /workspace/build/libtvm.so(+0x5047ea) [0x7f97209cf7ea]
  [bt] (7) /workspace/build/libtvm.so(+0x5bef1b) [0x7f9720a89f1b]
  [bt] (6) /workspace/build/libtvm.so(+0x5d880c) [0x7f9720aa380c]
  [bt] (5) /workspace/build/libtvm.so(tvm::compute(tvm::Array<HalideIR::Expr, void>, std::function<HalideIR::Expr (tvm::Array<tvm::Var, void> const&)>, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::Map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::NodeRef, void, void>)+0x4fe) [0x7f972093be7e]
  [bt] (4) /workspace/build/libtvm.so(+0x5d8160) [0x7f9720aa3160]
  [bt] (3) /workspace/build/libtvm.so(+0x5d7fa0) [0x7f9720aa2fa0]
  [bt] (2) /workspace/build/libtvm.so(tvm::Tensor::operator()(tvm::Array<HalideIR::Expr, void>) const+0x8ee) [0x7f972081f5be]
  [bt] (1) /workspace/build/libtvm.so(HalideIR::Internal::Call::make(HalideIR::Type, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::Array<HalideIR::Expr, void>, HalideIR::Internal::Call::CallType, HalideIR::IR::FunctionRef, int)+0x48b) [0x7f9720d8d24b]
  [bt] (0) /workspace/build/libtvm.so(+0x1b7b72) [0x7f9720682b72]
  File "/workspace/3rdparty/HalideIR/src/ir/IR.cpp", line 468
TVMError: Check failed: args[i].type() == Int(32): Args to call to halide function must be type Int(32)

my code:

import onnx
import numpy as np
import tvm
import tvm.relay as relay
from tvm.contrib.download import download_testdata
from PIL import Image

def preprocess_image(image_file):
    resized_image = Image.open(image_file).resize((352, 352))
    image_data = np.asarray(resized_image).astype("float32")
    # convert HWC to CHW
    image_data = image_data.transpose((2, 0, 1))
    # after expand_dims, we have format NCHW
    image_data = np.expand_dims(image_data, axis=0)
    image_data[:,0,:,:] = 2.0 / 255.0 * image_data[:,0,:,:] - 1
    image_data[:,1,:,:] = 2.0 / 255.0 * image_data[:,1,:,:] - 1
    image_data[:,2,:,:] = 2.0 / 255.0 * image_data[:,2,:,:] - 1
    return image_data
model_path = 'mobilenetv3_2_small.onnx'
onnx_model = onnx.load(model_path)

image_file = 'cat.png'
image_data = preprocess_image(image_file)
target = 'llvm'
input_name = '0'
input_shape = (1, 3, 352, 352)
shape_dict = {input_name: input_shape}

sym, params = relay.frontend.from_onnx(onnx_model, shape_dict)

with relay.build_config(opt_level=1):
    intrp = relay.build_module.create_executor('graph', sym, tvm.cpu(0), target)

dtype = 'float32'
tvm_output = intrp.evaluate(sym)(tvm.nd.array(image_data.astype(dtype)), **params).asnumpy()
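As an aside, since all three channels in preprocess_image use the same scale and offset, the three per-channel assignments can be collapsed into one vectorized expression. A minimal numpy-only sketch (equivalent in result, not part of the original script):

```python
import numpy as np

def normalize_nchw(image_data):
    # map pixel values from [0, 255] to [-1, 1] across all channels
    # at once, equivalent to the three per-channel assignments in
    # preprocess_image above
    return 2.0 / 255.0 * image_data - 1.0
```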

@tqchen, can you give me some help? Thanks.

One dirty workaround is to modify /workspace/3rdparty/HalideIR/src/ir/IR.cpp, line 468, from

    for (size_t i = 0; i < args.size(); i++) {
        internal_assert(args[i].type() == Int(32))
            << "Args to call to halide function must be type Int(32)\n";
    }

to

    for (size_t i = 0; i < args.size(); i++) {
        internal_assert(args[i].type() == Int(32) || args[i].type() == Int(64))
            << "Args to call to halide function must be type Int(32) or Int(64)\n";
    }

However, this may introduce other risks, such as integer overflow. Do it at your own risk.