Error when loading a TVM model from a float16 ONNX model

The log is:

Traceback (most recent call last):

  File "/home/data/git/OnlineActionRecognition/tools/model_convert/tsm_model_to_tvm/tvm_auto_tuning.py", line 174, in <module>
    tune_and_evaluate(tuning_option)

  File "/home/data/git/OnlineActionRecognition/tools/model_convert/tsm_model_to_tvm/tvm_auto_tuning.py", line 124, in tune_and_evaluate
    mod, params = get_tvm_module(torch_inputs[0])

  File "/home/data/git/OnlineActionRecognition/tools/model_convert/tsm_model_to_tvm/tvm_auto_tuning.py", line 48, in get_tvm_module
    relay_module, params = tvm.relay.frontend.from_onnx(onnx_model, shape=shape_dict, dtype="float16")

  File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1879, in from_onnx
    mod, params = g.from_onnx(graph, opset)

  File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/relay/frontend/onnx.py", line 1726, in from_onnx
    return IRModule.from_expr(func), self._params

  File "/home/cyh/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/ir/module.py", line 223, in from_expr
    return _ffi_api.Module_FromExpr(expr, funcs, defs)

  File "tvm/_ffi/_cython/./packed_func.pxi", line 308, in tvm._ffi._cy3.core.PackedFuncBase.__call__

  File "tvm/_ffi/_cython/./packed_func.pxi", line 243, in tvm._ffi._cy3.core.FuncCall

  File "tvm/_ffi/_cython/./packed_func.pxi", line 232, in tvm._ffi._cy3.core.FuncCall3

  File "tvm/_ffi/_cython/./base.pxi", line 159, in tvm._ffi._cy3.core.CALL

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x61) [0x7fc46241ff61]
  [bt] (7) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x4ec44c) [0x7fc461cba44c]
  [bt] (6) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x189) [0x7fc461cb7739]
  [bt] (5) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xdc) [0x7fc461cb70ec]
  [bt] (4) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x277) [0x7fc461cb6377]
  [bt] (3) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x1d4) [0x7fc4622b0c64]
  [bt] (2) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x86) [0x7fc4622b0486]
  [bt] (1) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::ErrorReporter::RenderErrors(tvm::IRModule const&, bool)+0x2816) [0x7fc461ca8ee6]
  [bt] (0) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7fc461bac5c2]
  [bt] (8) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x4ec44c) [0x7fc461cba44c]
  [bt] (7) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x189) [0x7fc461cb7739]
  [bt] (6) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xdc) [0x7fc461cb70ec]
  [bt] (5) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x277) [0x7fc461cb6377]
  [bt] (4) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x1d4) [0x7fc4622b0c64]
  [bt] (3) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x55) [0x7fc4622b0455]
  [bt] (2) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeSolver::Solve()+0x3b0) [0x7fc4621b0f40]
  [bt] (1) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<bool (tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xd4) [0x7fc461fad504]
  [bt] (0) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::BroadcastRel(tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0xb4d) [0x7fc46215641d]
  [bt] (8) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0x4ec44c) [0x7fc461cba44c]
  [bt] (7) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::IRModule::FromExpr(tvm::RelayExpr const&, tvm::Map<tvm::GlobalVar, tvm::BaseFunc, void, void> const&, tvm::Map<tvm::GlobalTypeVar, tvm::TypeData, void, void> const&)+0x189) [0x7fc461cb7739]
  [bt] (6) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::IRModuleNode::Add(tvm::GlobalVar const&, tvm::BaseFunc const&, bool)+0xdc) [0x7fc461cb70ec]
  [bt] (5) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::RunTypeCheck(tvm::IRModule const&, tvm::GlobalVar const&, tvm::relay::Function)+0x277) [0x7fc461cb6377]
  [bt] (4) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::IRModule const&, tvm::GlobalVar const&)+0x1d4) [0x7fc4622b0c64]
  [bt] (3) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::RelayExpr)+0x55) [0x7fc4622b0455]
  [bt] (2) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::TypeSolver::Solve()+0x3b0) [0x7fc4621b0f40]
  [bt] (1) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<bool (tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)>(bool (*)(tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xd4) [0x7fc461fad504]
  [bt] (0) /home/data/cyh_home/miniconda3/envs/py36-pytorch/lib/python3.6/site-packages/tvm-0.7.dev1-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::BroadcastRel(tvm::Array<tvm::Type, void> const&, int, tvm::Attrs const&, tvm::TypeReporter const&)+0xb4d) [0x7fc46215641d]
  File "/home/data/git/tvm/src/ir/error.cc", line 133
TVMError: 
Error(s) have occurred. The program has been annotated with them:

In `main`: 
v0.0.4
fn (%input: Tensor[(1, 24, 224, 224), float16], %base_model.conv1.weight: Tensor[(64, 3, 7, 7), float16], %base_model.bn1.weight: Tensor[(64), float16], %base_model.bn1.bias: Tensor[(64), float16], %base_model.bn1.running_mean: Tensor[(64), float16], %base_model.bn1.running_var: Tensor[(64), float16], %v344: Tensor[(1, 1, 8, 56, 56), float16], %v346: Tensor[(1, 1, 8, 56, 56), float16], %base_model.layer1.0.conv1.net.weight: Tensor[(64, 64, 1, 1), float16], %base_model.layer1.0.bn1.weight: Tensor[(64), float16], %base_model.layer1.0.bn1.bias: Tensor[(64), float16], %base_model.layer1.0.bn1.running_mean: Tensor[(64), float16], %base_model.layer1.0.bn1.running_var: Tensor[(64), float16], %base_model.layer1.0.conv2.weight: Tensor[(64, 64, 3, 3), float16], %base_model.layer1.0.bn2.weight: Tensor[(64), float16], %base_model.layer1.0.bn2.bias: Tensor[(64), float16], %base_model.layer1.0.bn2.running_mean: Tensor[(64), float16], %base_model.layer1.0.bn2.running_var: Tensor[(64), float16], %base_model.layer1.0.conv3.weight: Tensor[(256, 64, 1, 1), float16], %base_model.layer1.0.bn3.weight: Tensor[(256), float16], %base_model.layer1.0.bn3.bias: Tensor[(256), float16], %base_model.layer1.0.bn3.running_mean: Tensor[(256), float16], %base_model.layer1.0.bn3.running_var: Tensor[(256), float16], %base_model.layer1.0.downsample.0.weight: Tensor[(256, 64, 1, 1), float16], %base_model.layer1.0.downsample.1.weight: Tensor[(256), float16], %base_model.layer1.0.downsample.1.bias: Tensor[(256), float16], %base_model.layer1.0.downsample.1.running_mean: Tensor[(256), float16], %base_model.layer1.0.downsample.1.running_var: Tensor[(256), float16], %v410: Tensor[(1, 1, 32, 56, 56), float16], %v412: Tensor[(1, 1, 32, 56, 56), float16], %base_model.layer1.1.conv1.net.weight: Tensor[(64, 256, 1, 1), float16], %base_model.layer1.1.bn1.weight: Tensor[(64), float16], %base_model.layer1.1.bn1.bias: Tensor[(64), float16], %base_model.layer1.1.bn1.running_mean: Tensor[(64), float16], %base_model.layer1.1.bn1.running_var: Tensor[(64), float16], %base_model.layer1.1.conv2.weight: Tensor[(64, 64, 3, 3), float16], %base_model.layer1.1.bn2.weight: Tensor[(64), float16], %base_model.layer1.1.bn2.bias: Tensor[(64), float16], %base_model.layer1.1.bn2.running_mean: Tensor[(64), float16], %base_model.layer1.1.bn2.running_var: Tensor[(64), float16], %base_model.layer1.1.conv3.weight: Tensor[(256, 64, 1, 1), float16], %base_model.layer1.1.bn3.weight: Tensor[(256), float16], %base_model.layer1.1.bn3.bias: Tensor[(256), float16], %base_model.layer1.1.bn3.running_mean: Tensor[(256), float16], %base_model.layer1.1.bn3.running_var: Tensor[(256), float16], %v474: Tensor[(1, 1, 32, 56, 56), float16], %v476: Tensor[(1, 1, 32, 56, 56), float16], %base_model.layer1.2.conv1.net.weight: Tensor[(64, 256, 1, 1), float16], %base_model.layer1.2.bn1.weight: Tensor[(64), float16], %base_model.layer1.2.bn1.bias: Tensor[(64), float16], %base_model.layer1.2.bn1.running_mean: Tensor[(64), float16], %base_model.layer1.2.bn1.running_var: Tensor[(64), float16], %base_model.layer1.2.conv2.weight: Tensor[(64, 64, 3, 3), float16], %base_model.layer1.2.bn2.weight: Tensor[(64), float16], %base_model.layer1.2.bn2.bias: Tensor[(64), float16], %base_model.layer1.2.bn2.running_mean: Tensor[(64), float16], %base_model.layer1.2.bn2.running_var: Tensor[(64), float16], %base_model.layer1.2.conv3.weight: Tensor[(256, 64, 1, 1), float16], %base_model.layer1.2.bn3.weight: Tensor[(256), float16], %base_model.layer1.2.bn3.bias: Tensor[(256), float16], 
%base_model.layer1.2.bn3.running_mean: Tensor[(256), float16], %base_model.layer1.2.bn3.running_var: Tensor[(256), float16], %v538: Tensor[(1, 1, 32, 56, 56), float16], %v540: Tensor[(1, 1, 32, 56, 56), float16], %base_model.layer2.0.conv1.net.weight: Tensor[(128, 256, 1, 1), float16], %base_model.layer2.0.bn1.weight: Tensor[(128), float16], %base_model.layer2.0.bn1.bias: Tensor[(128), float16], %base_model.layer2.0.bn1.running_mean: Tensor[(128), float16], %base_model.layer2.0.bn1.running_var: Tensor[(128), float16], %base_model.layer2.0.conv2.weight: Tensor[(128, 128, 3, 3), float16], %base_model.layer2.0.bn2.weight: Tensor[(128), float16], %base_model.layer2.0.bn2.bias: Tensor[(128), float16], %base_model.layer2.0.bn2.running_mean: Tensor[(128), float16], %base_model.layer2.0.bn2.running_var: Tensor[(128), float16], %base_model.layer2.0.conv3.weight: Tensor[(512, 128, 1, 1), float16], %base_model.layer2.0.bn3.weight: Tensor[(512), float16], %base_model.layer2.0.bn3.bias: Tensor[(512), float16], %base_model.layer2.0.bn3.running_mean: Tensor[(512), float16], %base_model.layer2.0.bn3.running_var: Tensor[(512), float16], %base_model.layer2.0.downsample.0.weight: Tensor[(512, 256, 1, 1), float16], %base_model.layer2.0.downsample.1.weight: Tensor[(512), float16], %base_model.layer2.0.downsample.1.bias: Tensor[(512), float16], %base_model.layer2.0.downsample.1.running_mean: Tensor[(512), float16], %base_model.layer2.0.downsample.1.running_var: Tensor[(512), float16], %v604: Tensor[(1, 1, 64, 28, 28), float16], %v606: Tensor[(1, 1, 64, 28, 28), float16], %base_model.layer2.1.conv1.net.weight: Tensor[(128, 512, 1, 1), float16], %base_model.layer2.1.bn1.weight: Tensor[(128), float16], %base_model.layer2.1.bn1.bias: Tensor[(128), float16], %base_model.layer2.1.bn1.running_mean: Tensor[(128), float16], %base_model.layer2.1.bn1.running_var: Tensor[(128), float16], %base_model.layer2.1.conv2.weight: Tensor[(128, 128, 3, 3), float16], %base_model.layer2.1.bn2.weight: Tensor[(128), float16], %base_model.layer2.1.bn2.bias: Tensor[(128), float16], %base_model.layer2.1.bn2.running_mean: Tensor[(128), float16], %base_model.layer2.1.bn2.running_var: Tensor[(128), float16], %base_model.layer2.1.conv3.weight: Tensor[(512, 128, 1, 1), float16], %base_model.layer2.1.bn3.weight: Tensor[(512), float16], %base_model.layer2.1.bn3.bias: Tensor[(512), float16], %base_model.layer2.1.bn3.running_mean: Tensor[(512), float16], %base_model.layer2.1.bn3.running_var: Tensor[(512), float16], %v668: Tensor[(1, 1, 64, 28, 28), float16], %v670: Tensor[(1, 1, 64, 28, 28), float16], %base_model.layer2.2.conv1.net.weight: Tensor[(128, 512, 1, 1), float16], %base_model.layer2.2.bn1.weight: Tensor[(128), float16], %base_model.layer2.2.bn1.bias: Tensor[(128), float16], %base_model.layer2.2.bn1.running_mean: Tensor[(128), float16], %base_model.layer2.2.bn1.running_var: Tensor[(128), float16], %base_model.layer2.2.conv2.weight: Tensor[(128, 128, 3, 3), float16], %base_model.layer2.2.bn2.weight: Tensor[(128), float16], %base_model.layer2.2.bn2.bias: Tensor[(128), float16], %base_model.layer2.2.bn2.running_mean: Tensor[(128), float16], %base_model.layer2.2.bn2.running_var: Tensor[(128), float16], %base_model.layer2.2.conv3.weight: Tensor[(512, 128, 1, 1), float16], %base_model.layer2.2.bn3.weight: Tensor[(512), float16], %base_model.layer2.2.bn3.bias: Tensor[(512), float16], %base_model.layer2.2.bn3.running_mean: Tensor[(512), float16], %base_model.layer2.2.bn3.running_var: Tensor[(512), float16], %v732: Tensor[(1, 1, 64, 28, 28), 
float16], %v734: Tensor[(1, 1, 64, 28, 28), float16], %base_model.layer2.3.conv1.net.weight: Tensor[(128, 512, 1, 1), float16], %base_model.layer2.3.bn1.weight: Tensor[(128), float16], %base_model.layer2.3.bn1.bias: Tensor[(128), float16], %base_model.layer2.3.bn1.running_mean: Tensor[(128), float16], %base_model.layer2.3.bn1.running_var: Tensor[(128), float16], %base_model.layer2.3.conv2.weight: Tensor[(128, 128, 3, 3), float16], %base_model.layer2.3.bn2.weight: Tensor[(128), float16], %base_model.layer2.3.bn2.bias: Tensor[(128), float16], %base_model.layer2.3.bn2.running_mean: Tensor[(128), float16], %base_model.layer2.3.bn2.running_var: Tensor[(128), float16], %base_model.layer2.3.conv3.weight: Tensor[(512, 128, 1, 1), float16], %base_model.layer2.3.bn3.weight: Tensor[(512), float16], %base_model.layer2.3.bn3.bias: Tensor[(512), float16], %base_model.layer2.3.bn3.running_mean: Tensor[(512), float16], %base_model.layer2.3.bn3.running_var: Tensor[(512), float16], %v796: Tensor[(1, 1, 64, 28, 28), float16], %v798: Tensor[(1, 1, 64, 28, 28), float16], %base_model.layer3.0.conv1.net.weight: Tensor[(256, 512, 1, 1), float16], %base_model.layer3.0.bn1.weight: Tensor[(256), float16], %base_model.layer3.0.bn1.bias: Tensor[(256), float16], %base_model.layer3.0.bn1.running_mean: Tensor[(256), float16], %base_model.layer3.0.bn1.running_var: Tensor[(256), float16], %base_model.layer3.0.conv2.weight: Tensor[(256, 256, 3, 3), float16], %base_model.layer3.0.bn2.weight: Tensor[(256), float16], %base_model.layer3.0.bn2.bias: Tensor[(256), float16], %base_model.layer3.0.bn2.running_mean: Tensor[(256), float16], %base_model.layer3.0.bn2.running_var: Tensor[(256), float16], %base_model.layer3.0.conv3.weight: Tensor[(1024, 256, 1, 1), float16], %base_model.layer3.0.bn3.weight: Tensor[(1024), float16], %base_model.layer3.0.bn3.bias: Tensor[(1024), float16], %base_model.layer3.0.bn3.running_mean: Tensor[(1024), float16], %base_model.layer3.0.bn3.running_var: Tensor[(1024), float16], %base_model.layer3.0.downsample.0.weight: Tensor[(1024, 512, 1, 1), float16], %base_model.layer3.0.downsample.1.weight: Tensor[(1024), float16], %base_model.layer3.0.downsample.1.bias: Tensor[(1024), float16], %base_model.layer3.0.downsample.1.running_mean: Tensor[(1024), float16], %base_model.layer3.0.downsample.1.running_var: Tensor[(1024), float16], %v862: Tensor[(1, 1, 128, 14, 14), float16], %v864: Tensor[(1, 1, 128, 14, 14), float16], %base_model.layer3.1.conv1.net.weight: Tensor[(256, 1024, 1, 1), float16], %base_model.layer3.1.bn1.weight: Tensor[(256), float16], %base_model.layer3.1.bn1.bias: Tensor[(256), float16], %base_model.layer3.1.bn1.running_mean: Tensor[(256), float16], %base_model.layer3.1.bn1.running_var: Tensor[(256), float16], %base_model.layer3.1.conv2.weight: Tensor[(256, 256, 3, 3), float16], %base_model.layer3.1.bn2.weight: Tensor[(256), float16], %base_model.layer3.1.bn2.bias: Tensor[(256), float16], %base_model.layer3.1.bn2.running_mean: Tensor[(256), float16], %base_model.layer3.1.bn2.running_var: Tensor[(256), float16], %base_model.layer3.1.conv3.weight: Tensor[(1024, 256, 1, 1), float16], %base_model.layer3.1.bn3.weight: Tensor[(1024), float16], %base_model.layer3.1.bn3.bias: Tensor[(1024), float16], %base_model.layer3.1.bn3.running_mean: Tensor[(1024), float16], %base_model.layer3.1.bn3.running_var: Tensor[(1024), float16], %v926: Tensor[(1, 1, 128, 14, 14), float16], %v928: Tensor[(1, 1, 128, 14, 14), float16], %base_model.layer3.2.conv1.net.weight: Tensor[(256, 1024, 1, 1), float16], 
%base_model.layer3.2.bn1.weight: Tensor[(256), float16], %base_model.layer3.2.bn1.bias: Tensor[(256), float16], %base_model.layer3.2.bn1.running_mean: Tensor[(256), float16], %base_model.layer3.2.bn1.running_var: Tensor[(256), float16], %base_model.layer3.2.conv2.weight: Tensor[(256, 256, 3, 3), float16], %base_model.layer3.2.bn2.weight: Tensor[(256), float16], %base_model.layer3.2.bn2.bias: Tensor[(256), float16], %base_model.layer3.2.bn2.running_mean: Tensor[(256), float16], %base_model.layer3.2.bn2.running_var: Tensor[(256), float16], %base_model.layer3.2.conv3.weight: Tensor[(1024, 256, 1, 1), float16], %base_model.layer3.2.bn3.weight: Tensor[(1024), float16], %base_model.layer3.2.bn3.bias: Tensor[(1024), float16], %base_model.layer3.2.bn3.running_mean: Tensor[(1024), float16], %base_model.layer3.2.bn3.running_var: Tensor[(1024), float16], %v990: Tensor[(1, 1, 128, 14, 14), float16], %v992: Tensor[(1, 1, 128, 14, 14), float16], %base_model.layer3.3.conv1.net.weight: Tensor[(256, 1024, 1, 1), float16], %base_model.layer3.3.bn1.weight: Tensor[(256), float16], %base_model.layer3.3.bn1.bias: Tensor[(256), float16], %base_model.layer3.3.bn1.running_mean: Tensor[(256), float16], %base_model.layer3.3.bn1.running_var: Tensor[(256), float16], %base_model.layer3.3.conv2.weight: Tensor[(256, 256, 3, 3), float16], %base_model.layer3.3.bn2.weight: Tensor[(256), float16], %base_model.layer3.3.bn2.bias: Tensor[(256), float16], %base_model.layer3.3.bn2.running_mean: Tensor[(256), float16], %base_model.layer3.3.bn2.running_var: Tensor[(256), float16], %base_model.layer3.3.conv3.weight: Tensor[(1024, 256, 1, 1), float16], %base_model.layer3.3.bn3.weight: Tensor[(1024), float16], %base_model.layer3.3.bn3.bias: Tensor[(1024), float16], %base_model.layer3.3.bn3.running_mean: Tensor[(1024), float16], %base_model.layer3.3.bn3.running_var: Tensor[(1024), float16], %v1054: Tensor[(1, 1, 128, 14, 14), float16], %v1056: Tensor[(1, 1, 128, 14, 14), float16], %base_model.layer3.4.conv1.net.weight: Tensor[(256, 1024, 1, 1), float16], %base_model.layer3.4.bn1.weight: Tensor[(256), float16], %base_model.layer3.4.bn1.bias: Tensor[(256), float16], %base_model.layer3.4.bn1.running_mean: Tensor[(256), float16], %base_model.layer3.4.bn1.running_var: Tensor[(256), float16], %base_model.layer3.4.conv2.weight: Tensor[(256, 256, 3, 3), float16], %base_model.layer3.4.bn2.weight: Tensor[(256), float16], %base_model.layer3.4.bn2.bias: Tensor[(256), float16], %base_model.layer3.4.bn2.running_mean: Tensor[(256), float16], %base_model.layer3.4.bn2.running_var: Tensor[(256), float16], %base_model.layer3.4.conv3.weight: Tensor[(1024, 256, 1, 1), float16], %base_model.layer3.4.bn3.weight: Tensor[(1024), float16], %base_model.layer3.4.bn3.bias: Tensor[(1024), float16], %base_model.layer3.4.bn3.running_mean: Tensor[(1024), float16], %base_model.layer3.4.bn3.running_var: Tensor[(1024), float16], %v1118: Tensor[(1, 1, 128, 14, 14), float16], %v1120: Tensor[(1, 1, 128, 14, 14), float16], %base_model.layer3.5.conv1.net.weight: Tensor[(256, 1024, 1, 1), float16], %base_model.layer3.5.bn1.weight: Tensor[(256), float16], %base_model.layer3.5.bn1.bias: Tensor[(256), float16], %base_model.layer3.5.bn1.running_mean: Tensor[(256), float16], %base_model.layer3.5.bn1.running_var: Tensor[(256), float16], %base_model.layer3.5.conv2.weight: Tensor[(256, 256, 3, 3), float16], %base_model.layer3.5.bn2.weight: Tensor[(256), float16], %base_model.layer3.5.bn2.bias: Tensor[(256), float16], %base_model.layer3.5.bn2.running_mean: Tensor[(256), float16], 
%base_model.layer3.5.bn2.running_var: Tensor[(256), float16], %base_model.layer3.5.conv3.weight: Tensor[(1024, 256, 1, 1), float16], %base_model.layer3.5.bn3.weight: Tensor[(1024), float16], %base_model.layer3.5.bn3.bias: Tensor[(1024), float16], %base_model.layer3.5.bn3.running_mean: Tensor[(1024), float16], %base_model.layer3.5.bn3.running_var: Tensor[(1024), float16], %v1182: Tensor[(1, 1, 128, 14, 14), float16], %v1184: Tensor[(1, 1, 128, 14, 14), float16], %base_model.layer4.0.conv1.net.weight: Tensor[(512, 1024, 1, 1), float16], %base_model.layer4.0.bn1.weight: Tensor[(512), float16], %base_model.layer4.0.bn1.bias: Tensor[(512), float16], %base_model.layer4.0.bn1.running_mean: Tensor[(512), float16], %base_model.layer4.0.bn1.running_var: Tensor[(512), float16], %base_model.layer4.0.conv2.weight: Tensor[(512, 512, 3, 3), float16], %base_model.layer4.0.bn2.weight: Tensor[(512), float16], %base_model.layer4.0.bn2.bias: Tensor[(512), float16], %base_model.layer4.0.bn2.running_mean: Tensor[(512), float16], %base_model.layer4.0.bn2.running_var: Tensor[(512), float16], %base_model.layer4.0.conv3.weight: Tensor[(2048, 512, 1, 1), float16], %base_model.layer4.0.bn3.weight: Tensor[(2048), float16], %base_model.layer4.0.bn3.bias: Tensor[(2048), float16], %base_model.layer4.0.bn3.running_mean: Tensor[(2048), float16], %base_model.layer4.0.bn3.running_var: Tensor[(2048), float16], %base_model.layer4.0.downsample.0.weight: Tensor[(2048, 1024, 1, 1), float16], %base_model.layer4.0.downsample.1.weight: Tensor[(2048), float16], %base_model.layer4.0.downsample.1.bias: Tensor[(2048), float16], %base_model.layer4.0.downsample.1.running_mean: Tensor[(2048), float16], %base_model.layer4.0.downsample.1.running_var: Tensor[(2048), float16], %v1248: Tensor[(1, 1, 256, 7, 7), float16], %v1250: Tensor[(1, 1, 256, 7, 7), float16], %base_model.layer4.1.conv1.net.weight: Tensor[(512, 2048, 1, 1), float16], %base_model.layer4.1.bn1.weight: Tensor[(512), float16], %base_model.layer4.1.bn1.bias: Tensor[(512), float16], %base_model.layer4.1.bn1.running_mean: Tensor[(512), float16], %base_model.layer4.1.bn1.running_var: Tensor[(512), float16], %base_model.layer4.1.conv2.weight: Tensor[(512, 512, 3, 3), float16], %base_model.layer4.1.bn2.weight: Tensor[(512), float16], %base_model.layer4.1.bn2.bias: Tensor[(512), float16], %base_model.layer4.1.bn2.running_mean: Tensor[(512), float16], %base_model.layer4.1.bn2.running_var: Tensor[(512), float16], %base_model.layer4.1.conv3.weight: Tensor[(2048, 512, 1, 1), float16], %base_model.layer4.1.bn3.weight: Tensor[(2048), float16], %base_model.layer4.1.bn3.bias: Tensor[(2048), float16], %base_model.layer4.1.bn3.running_mean: Tensor[(2048), float16], %base_model.layer4.1.bn3.running_var: Tensor[(2048), float16], %v1312: Tensor[(1, 1, 256, 7, 7), float16], %v1314: Tensor[(1, 1, 256, 7, 7), float16], %base_model.layer4.2.conv1.net.weight: Tensor[(512, 2048, 1, 1), float16], %base_model.layer4.2.bn1.weight: Tensor[(512), float16], %base_model.layer4.2.bn1.bias: Tensor[(512), float16], %base_model.layer4.2.bn1.running_mean: Tensor[(512), float16], %base_model.layer4.2.bn1.running_var: Tensor[(512), float16], %base_model.layer4.2.conv2.weight: Tensor[(512, 512, 3, 3), float16], %base_model.layer4.2.bn2.weight: Tensor[(512), float16], %base_model.layer4.2.bn2.bias: Tensor[(512), float16], %base_model.layer4.2.bn2.running_mean: Tensor[(512), float16], %base_model.layer4.2.bn2.running_var: Tensor[(512), float16], %base_model.layer4.2.conv3.weight: Tensor[(2048, 512, 1, 1), float16], 
%base_model.layer4.2.bn3.weight: Tensor[(2048), float16], %base_model.layer4.2.bn3.bias: Tensor[(2048), float16], %base_model.layer4.2.bn3.running_mean: Tensor[(2048), float16], %base_model.layer4.2.bn3.running_var: Tensor[(2048), float16], %new_fc.weight: Tensor[(60, 2048), float16], %new_fc.bias: Tensor[(60), float16]) {
  %0 = reshape(%input, newshape=[8, 3, 224, 224]);
...
  %499 = nn.avg_pool2d(%498, pool_size=[7, 7], padding=[0, 0, 0, 0]);
  %500 = reshape(%499, newshape=[8, -1]);
  %501 = nn.batch_flatten(%500);
  %502 = multiply(1f, %501) an internal invariant was violated while typechecking your program [16:23:08] /home/data/git/tvm/src/relay/op/type_relations.cc:112: Check failed: t0->dtype == t1->dtype (float32 vs. float16) : 
; ;
  %503 = nn.dense(%502, %new_fc.weight, units=60);
  %504 = multiply(1f, %new_fc.bias) an internal invariant was violated while typechecking your program [16:23:08] /home/data/git/tvm/src/relay/op/type_relations.cc:112: Check failed: t0->dtype == t1->dtype (float32 vs. float16) : 
; ;
  %505 = nn.bias_add(%503, %504);
  %506 = reshape(%505, newshape=[1, 8, 60]);
  %507 = mean(%506, axis=[1], keepdims=True);
  squeeze(%507, axis=[1])
}

It seems the multiply op raised the error. However, I do not know what `1f` means.
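(In the Relay text format, `1f` denotes a float32 scalar constant, so the check fails because a float32 constant is multiplied with a float16 tensor. Below is a hypothetical minimal reproduction of the same mismatch, independent of the ONNX importer; the shape and variable names are made up.)

```python
import tvm
from tvm import relay

# Hypothetical minimal reproduction: a float32 scalar constant (printed as "1f"
# in the Relay text format) multiplied with a float16 tensor fails type
# inference with the same "float32 vs. float16" check.
x = relay.var("x", shape=(8, 2048), dtype="float16")
y = relay.multiply(relay.const(1.0), x)   # relay.const(1.0) defaults to float32
mod = tvm.IRModule.from_expr(relay.Function([x], y))
mod = relay.transform.InferType()(mod)    # raises: Check failed: t0->dtype == t1->dtype
```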

Have you solved the issue?

Hi @chenyihang,

If the issue has not been solved, please look at the operator converter for Gemm in TVM, in the file tvm/relay/frontend/onnx.py around line 443.
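For context, the Gemm converter scales the flattened input by the `alpha` attribute and the bias by `beta`, and the scalar constants it creates default to float32, which is where the `multiply(1f, ...)` mismatch above comes from. Below is a minimal sketch of the idea behind a fix, assuming the constants are built in the same dtype as the inputs; `convert_gemm` and its `dtype` parameter are hypothetical names, not the actual converter code in onnx.py:

```python
from tvm import relay

def convert_gemm(a, b, c, alpha=1.0, beta=1.0, trans_a=0, trans_b=0, dtype="float16"):
    """Hypothetical sketch of a Gemm conversion: Y = alpha * A' * B' + beta * C,
    keeping the alpha/beta constants in the same dtype as the graph tensors."""
    if trans_a:
        a = relay.transpose(a, axes=(1, 0))
    if not trans_b:
        b = relay.transpose(b, axes=(1, 0))   # nn.dense expects weight shape (units, in_features)
    a = relay.nn.batch_flatten(a)
    if alpha != 1.0:                          # skip the multiply entirely when alpha == 1
        a = relay.multiply(relay.const(alpha, dtype=dtype), a)
    out = relay.nn.dense(a, b)
    if beta != 1.0:
        c = relay.multiply(relay.const(beta, dtype=dtype), c)
    return relay.nn.bias_add(out, c)
```

With alpha == beta == 1.0 the scaling multiplications can simply be skipped, which avoids creating the mismatched constant altogether.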

I got it. Thank you!