Conversion from ONNX fails for EfficientNet-B0

I have a PyTorch model, exported to ONNX, that I want to compile with TVM. The model has one input and two outputs. The PyTorch-to-ONNX export works fine (verified with onnxruntime), but importing the ONNX model into TVM fails with the error below. The network is similar to EfficientNet-B0.
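For context, the import step in my script looks roughly like the sketch below (the file name, input name, and shape here are placeholders, not my exact script):

```python
import onnx
from tvm import relay

# Placeholder file name and input shape -- the real model has 1 input and 2 outputs.
onnx_model = onnx.load("efficientnet_b0.onnx")
shape_dict = {"input": (1, 3, 224, 224)}

# This is the call that fails (see the traceback below).
mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict, dtype="float32")

# Building would be the next step, but the error is already raised inside from_onnx.
lib = relay.build(mod, target="llvm", params=params)
```

Here is the full traceback: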

```
File "to_tvm.py", line 99, in tune_and_evaluate
    mod, params, input_shape = get_network()
File "to_tvm.py", line 35, in get_network
    mod, params = relay.frontend.from_onnx(model, shape=shape_dict, dtype=dtype)
File "/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1879, in from_onnx
    mod, params = g.from_onnx(graph, opset)
File "/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1707, in from_onnx
    op = self._convert_operator(op_name, inputs, attr, opset)
File "/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1807, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
File "/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/common.py", line 417, in __call__
    return get_relay_op(op_name)(*inputs, **new_attrs)
File "/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/op/tensor.py", line 870, in clip
    return _make.clip(a, a_min, a_max)
File "/home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (3) /home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x65) [0x7fcff601fb35]
  [bt] (2) /home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x950a5b) [0x7fcff5d27a5b]
  [bt] (1) /home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::runtime::TVMPODValue_::operator double() const+0x170) [0x7fcff5782030]
  [bt] (0) /home/user/.local/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43) [0x7fcff5771bb3]
  File "/home/user/libs/tvm/include/tvm/runtime/packed_func.h", line 418
TVMError: Check failed: type_code_ == kDLFloat (8 vs. 2) : expected float but get Object
```


Hi @NarendraX9, thanks for the bug report!

This looks like a type error somewhere in the ONNX importer. Could you post an example script that reproduces the error? I'd be happy to dig into it if I can reproduce it locally.
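Something along these lines would be enough (just a skeleton; I'm assuming the efficientnet_pytorch package as a stand-in for your network, and the file/input names are placeholders):

```python
import torch
import onnx
from tvm import relay
from efficientnet_pytorch import EfficientNet  # assumed stand-in for the actual model

# Export an EfficientNet-B0 to ONNX.
model = EfficientNet.from_name("efficientnet-b0")
model.set_swish(memory_efficient=False)  # the memory-efficient swish is not ONNX-exportable
model.eval()

dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "efficientnet_b0.onnx",
                  opset_version=11,
                  input_names=["input"], output_names=["output"])

# Import into Relay -- this is where the clip error showed up in your traceback.
onnx_model = onnx.load("efficientnet_b0.onnx")
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 3, 224, 224)}, dtype="float32")
```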

Hi @mbrookhart,

I ended up changing the model by fusing the two heads into a single output, and that solved it. I'll still post an example here soon so the underlying issue can be tracked down and doesn't bite anyone else in the future.
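For anyone curious, the change was roughly along these lines (a sketch with hypothetical names, assuming the two heads produce tensors that can be concatenated along one dimension):

```python
import torch
import torch.nn as nn

class FusedHeads(nn.Module):
    """Wrap the original two-output model so the export has a single output."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        out1, out2 = self.model(x)             # the original two heads
        return torch.cat((out1, out2), dim=1)  # fused into one tensor

# fused = FusedHeads(original_model)
# torch.onnx.export(fused, torch.randn(1, 3, 224, 224),
#                   "model_fused.onnx", opset_version=11)
```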