AutoTVM error when loading ONNX model and invoking NNVM compiler

Hi,

I recently upgraded TVM to the latest on git with all its submodules. I am using PyTorch 1.0 and ONNX 1.4.1. I was able to build TVM with target "llvm" on my Mac; building and installing both the C++ and Python components went smoothly.

I was trying to execute a script that loads an ONNX model and instantiates the NNVM compiler, following the steps listed in: (I only changed the target on line 70 to 'llvm')

However I am getting this error message:

>>> from PIL import Image
>>> img_url = 'https://github.com/dmlc/mxnet.js/blob/master/data/cat.png?raw=true'
>>> download(img_url, 'cat.png')
File cat.png existed, skip.
>>> img = Image.open('cat.png').resize((224, 224))
>>> img_ycbcr = img.convert("YCbCr")  # convert to YCbCr
>>> img_y, img_cb, img_cr = img_ycbcr.split()
>>> x = np.array(img_y)[np.newaxis, np.newaxis, :, :]

>>> import nnvm.compiler
>>> target = 'llvm'
>>> input_name = sym.list_input_names()[0]
>>> shape_dict = {input_name: x.shape}
>>> with nnvm.compiler.build_config(opt_level=3):
...     graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
...
Traceback (most recent call last):
  File "<stdin>", line 2, in <module>
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/nnvm/python/nnvm/compiler/build_module.py", line 245, in build
    tophub_context = autotvm.tophub.context(target)
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/tophub.py", line 84, in context
    best_context.load(os.path.join(AUTOTVM_TOPHUB_ROOT_PATH, filename))
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/task/dispatcher.py", line 279, in load
    for inp, res in records:
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/record.py", line 168, in load_from_file
    yield decode(row)
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/record.py", line 116, in decode
    row = json.loads(row)
  File "/Users/pallabdatta/anaconda3/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/Users/pallabdatta/anaconda3/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/pallabdatta/anaconda3/lib/python3.7/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 426 (char 425)
>>>

Can you please help me understand why I am getting this error? Thanks a lot in advance,
pallab

I am not sure what is wrong in your particular case; it seems to be due to a corrupted record.

Can you try the Relay version instead? https://docs.tvm.ai/tutorials/frontend/from_onnx.html If you can provide reproduction steps, perhaps someone can take a look.
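Since both tracebacks die while parsing a downloaded TopHub log, one workaround that often helps with a corrupted record is simply deleting the cached files so TVM re-downloads them on the next build. A minimal sketch, assuming the default cache location of `~/.tvm/tophub` (verify `AUTOTVM_TOPHUB_ROOT_PATH` in `python/tvm/autotvm/tophub.py` for your checkout, since this path is an assumption):

```python
import os
import shutil

# Assumed default AutoTVM TopHub cache directory; check
# AUTOTVM_TOPHUB_ROOT_PATH in your TVM tree if it differs.
cache_dir = os.path.expanduser("~/.tvm/tophub")

if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)  # drop cached records; TVM re-fetches them on demand
    print("Removed", cache_dir)
else:
    print("No TopHub cache found at", cache_dir)
```

After this, re-running `nnvm.compiler.build` (or the Relay build) should trigger a fresh download of the pre-tuned records.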

Hi @tqchen

I tried the Relay version as suggested in your link and failed at the "execute on TVM" step, hitting the same error message (snippet and traceback below).
How do I fix the corrupted record? Can I still run the NNVM compiler build?
You can reproduce this with a simple node (say, a conv2d instantiated and exported to ONNX using ONNX 1.4.1).
Please let me know. Thanks so much for responding and taking a look at it.

dtype = 'float32'
tvm_output = intrp.evaluate(sym)(tvm.nd.array(x.astype(dtype)), **params).asnumpy()

Traceback (most recent call last):
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/relay/backend/interpreter.py", line 211, in evaluate
    return self._make_executor(expr)
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/relay/build_module.py", line 414, in _make_executor
    graph_json, mod, params = build(func, target=self.target)
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/relay/build_module.py", line 269, in build
    tophub_context = autotvm.tophub.context(target)
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/tophub.py", line 84, in context
    best_context.load(os.path.join(AUTOTVM_TOPHUB_ROOT_PATH, filename))
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/task/dispatcher.py", line 279, in load
    for inp, res in records:
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/record.py", line 168, in load_from_file
    yield decode(row)
  File "/Users/pallabdatta/nnvm_tvm_npcompiler/np-compiler/tvm/python/tvm/autotvm/record.py", line 116, in decode
    row = json.loads(row)
  File "/Users/pallabdatta/anaconda/lib/python3.6/json/__init__.py", line 354, in loads
    return _default_decoder.decode(s)
  File "/Users/pallabdatta/anaconda/lib/python3.6/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/Users/pallabdatta/anaconda/lib/python3.6/json/decoder.py", line 355, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 426 (char 425)

In my use case I am actually more interested in running the NNVM compiler build rather than Relay, so can we make sure the fix accounts for the NNVM build? Thanks so much.
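To locate (rather than just delete) the corrupted record, you can scan the downloaded TopHub log line by line, since each line should be one self-contained JSON object. A small sketch using only the standard library; `find_bad_lines` is a hypothetical helper name, and the path argument would be whichever log file the traceback's `load_from_file` call points at:

```python
import json

def find_bad_lines(path):
    """Return 1-based line numbers in a TopHub log that fail to parse as JSON."""
    bad = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            try:
                json.loads(line)
            except json.JSONDecodeError:
                bad.append(lineno)
    return bad
```

Any reported line can then be inspected or removed by hand. An "Unterminated string" like the one in the traceback usually means the file was truncated mid-download, in which case deleting the file and letting TVM re-fetch it is simpler than repairing it.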

See smallnet.onnx at:

https://drive.google.com/drive/folders/16Fe-9iR6Z-181u_WmpG0j7YsozbXV-u1

import nnvm
import tvm
import onnx
import numpy as np

onnx_graph = onnx.load('smallnet.onnx')
sym, params = nnvm.frontend.from_onnx(onnx_graph)

a = np.random.randn(2, 3, 16, 16)

import nnvm.compiler
target = 'llvm'
shape_dict = {'0': a.shape}
opt_level = 0
dtype_dict = {'0': 'float32'}
with nnvm.compiler.build_config(opt_level=opt_level):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, dtype=dtype_dict, params=params)

Hi @pallabdatta, what are the Mac machine requirements to install TVM?

I have a Mac mini with 500 GB storage, 4 GB RAM, and an Intel Core i5 processor. Is it possible to install TVM on my Mac mini?

OS: MacOS Mojave 10.14.2
Processor: Intel Core i5 2.5 GHz
Graphics: Intel HD Graphics 4000 1536 MB
RAM: 4GB DDR3
Storage: 500GB SATA

Yeah, I have a Mac laptop with an Intel Core i7 processor, the latest macOS 10.14.5, and the latest SDK. Yes, it should be possible to install TVM on your Mac mini.

@pallabdatta Thanks for the suggestion.