ONNX error when compiling with nnvm


#1

So I am new to the TVM/NNVM compiler and I am trying to run some ONNX pretrained models, but I keep getting this error no matter which model I use. I downloaded them from the official ONNX repository but I can't make it work. Can anyone help me?

Traceback (most recent call last):
  File "imagenet_classify.py", line 180, in <module>
    main()
  File "imagenet_classify.py", line 126, in main
    net, args.target, shape={"data": img_shape}, params=params)
  File "/.local/lib/python3.5/site-packages/nnvm-0.8.0-py3.5.egg/nnvm/compiler/build_module.py", line 250, in build
    shape, dtype = _update_shape_dtype(shape, dtype, params)
  File "/.local/lib/python3.5/site-packages/nnvm-0.8.0-py3.5.egg/nnvm/compiler/build_module.py", line 136, in _update_shape_dtype
    "%s: dtype not expected %s vs %s" % (k, dtype, v.dtype))
ValueError: reshape_attr_tensor421: dtype not expected float32 vs int64

Has anyone been successful converting a TensorFlow ResNet50 model to NNVM?
#2

Passing the dtype arg to build as a dictionary instead of a string should solve this.

Passing a string (e.g. 'float64') assumes the same dtype is used across the whole model.
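
For example, a rough sketch reusing the variable names from the traceback above:

# per-input dtype dict instead of a single dtype string
graph, lib, params = nnvm.compiler.build(
    net, args.target, shape={"data": img_shape},
    dtype={"data": "float32"}, params=params)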


#3

Thank you so much. You saved me :smiley:


#4

welcome :slight_smile:


#5

Hi srkreddy, will passing a string ('float64') decrease performance or not?
We will run the model on ARMv8.


#6

I am not sure I understood your question.

You need not always pass float64. It should be the same as the model's input dtype.


#7

A dict? Like this: {"dtype": "float32"}?


#8

dtype_dict = {'input0': 'float32'}

and

nnvm.compiler.build(dtype=dtype_dict, ...)
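
For reference, here is a fuller sketch for the ONNX case from the first post (the file name "resnet50.onnx", the input name "data", and the input shape are assumptions; use your model's actual input name and shape):

import onnx
import nnvm
import nnvm.frontend
import nnvm.compiler

# load the pretrained ONNX model and convert it to an NNVM graph
onnx_model = onnx.load("resnet50.onnx")            # assumed file name
net, params = nnvm.frontend.from_onnx(onnx_model)

# per-input shape and dtype dicts; "data" is an assumed input name
shape_dict = {"data": (1, 3, 224, 224)}
dtype_dict = {"data": "float32"}
graph, lib, params = nnvm.compiler.build(
    net, "llvm", shape=shape_dict, dtype=dtype_dict, params=params)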


#9

Thank you! I am a newcomer to TVM and I am very interested in how it works internally, so I am going to explore further. Thank you again.