[solved][ONNX] Error when deploying on Raspberry pi 4

Hello,

I am currently trying to deploy an ONNX model on a Raspberry Pi 4, based on the two tutorials available on the TVM doc website (“Compile ONNX Models” and “Deploy a Pretrained Model on Raspberry Pi”).

I get the following error no matter which ONNX model and corresponding inputs I use, so the problem is likely in the way I compile the model.

Traceback (most recent call last):
  File "tvm_discuss.py", line 61, in <module>
    module.set_input('data', tvm.nd.array(x.astype('float32')))
  File "tvm/python/tvm/contrib/graph_runtime.py", line 149, in set_input
    self._get_input(key).copyfrom(value)
  File "tvm/python/tvm/_ffi/_ctypes/function.py", line 210, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) pi/tvm/build/libtvm_runtime.so(tvm::runtime::RPCSession::ServerLoop()+0xf0) [0xb01b8ab0]
  [bt] (7) pi/tvm/build/libtvm_runtime.so(tvm::runtime::RPCSession::HandleUntilReturnEvent(tvm::runtime::TVMRetValue*, bool, tvm::runtime::PackedFunc const*)+0x154) [0xb01b87cc]
  [bt] (6) tvm/build/libtvm_runtime.so(tvm::runtime::RPCSession::EventHandler::HandleNextEvent(tvm::runtime::TVMRetValue*, bool, tvm::runtime::PackedFunc const*)+0x23c) [0xb01be234]
  [bt] (5) pi/tvm/build/libtvm_runtime.so(tvm::runtime::RPCSession::EventHandler::HandleRecvPackedSeqArg()+0x368) [0xb01bd2f4]
  [bt] (4) pi/tvm/build/libtvm_runtime.so(tvm::runtime::RPCSession::EventHandler::SwitchToState(tvm::runtime::RPCSession::EventHandler::State)+0x1f8) [0xb01bcc08]
  [bt] (3) pi/tvm/build/libtvm_runtime.so(tvm::runtime::RPCSession::EventHandler::HandlePackedCall()+0x4d0) [0xb01b6028]
  [bt] (2) pi/tvm/build/libtvm_runtime.so(+0x90b00) [0xb01c8b00]
  [bt] (1) pi/tvm/build/libtvm_runtime.so(+0x90a04) [0xb01c8a04]
  [bt] (0) pi/tvm/build/libtvm_runtime.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x38) [0xb01597b4]
TVMError: Except caught from RPC call: [10:00:04] pi/tvm/src/runtime/graph/graph_runtime.cc:450: Check failed: in_idx >= 0 (-1 vs. 0) : 
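For context on the error itself: `Check failed: in_idx >= 0 (-1 vs. 0)` comes from the graph runtime failing to find an input with the requested name; its index lookup returns -1 for unknown names. A minimal pure-Python sketch of that lookup (illustrative only, not TVM's actual code):

```python
def get_input_index(input_names, key):
    """Return the position of `key` among the graph's input names,
    or -1 if it is absent (mimicking graph_runtime's lookup)."""
    for idx, name in enumerate(input_names):
        if name == key:
            return idx
    return -1

# The model below is compiled with shape_dict = {'1': x.shape}, so the
# graph's only input is named '1'; looking up 'data' yields -1, which
# trips the runtime's CHECK(in_idx >= 0).
print(get_input_index(['1'], 'data'))  # -1
print(get_input_index(['1'], '1'))     # 0
```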

I compile and run the model using the following code:

# Imports as in the TVM tutorials; `onnx_model` and the input array `x`
# are assumed to be loaded/prepared beforehand.
import tvm
from tvm import relay, rpc
from tvm.contrib import graph_runtime as runtime, util

input_name = '1'
shape_dict = {input_name: x.shape}
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

target = tvm.target.create('llvm -device=arm_cpu -model=bcm2711 -target=arm-linux-gnueabihf -mattr=+neon')
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target, params=params)

# Save the library at local temporary directory.
tmp = util.tempdir()
lib_fname = tmp.relpath('net.tar')
lib.export_library(lib_fname)

# Connect to the Raspberry Pi
host = '192.168.20.147'
port = 9090
remote = rpc.connect(host, port)  # create and return RPC session

# upload the library to remote device and load it
remote.upload(lib_fname)
rlib = remote.load_module('net.tar')

# create the remote runtime module
ctx = remote.cpu(0)
module = runtime.create(graph, rlib, ctx)

# set parameter (upload params to the remote device. This may take a while)
module.set_input(**params)
# set input data
module.set_input('data', tvm.nd.array(x.astype('float32')))
# run
module.run()

Moreover, compiling and running the ONNX model on my host machine using only relay.build_module.create_executor works fine, so the model itself is valid.

[edit]: My bad, I had a conflict in the input names. I changed it and it worked.

Hi!

target = tvm.target.arm_cpu('rasp3b')

Why can't the lib be saved as a '.so' file instead of having to be saved as a '.tar' file?

Hi!

The target you specified is the one for a Raspberry Pi 3B and is the short form of target = tvm.target.create('llvm -device=arm_cpu -model=bcm2837 -target=armv7l-linux-gnueabihf -mattr=+neon'). As I am using a Raspberry Pi 4, I just updated the target triple (found by running gcc -v on the Pi) and the model (available on the internet).

I don’t know why the library is written to a .tar file instead of a .so file.
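My guess (an assumption based on the cross-compilation workflow, not verified against TVM's sources): a .so would have to be linked on the x86 host with an ARM cross-linker, whereas a .tar merely bundles the relocatable object files, so remote.load_module can link them on the Pi with its own local compiler. The member names in this sketch are hypothetical; the point is only that net.tar is a plain archive of objects:

```python
import os
import tarfile
import tempfile

tmp = tempfile.mkdtemp()
members = ["lib0.o", "devc.o"]  # hypothetical member names

# Stand in for the compiler's object-file output with empty files.
for name in members:
    open(os.path.join(tmp, name), "wb").close()

# "Exporting" to net.tar is just archiving the objects; no ARM linker
# is needed on the host, since linking happens later on the device.
tar_path = os.path.join(tmp, "net.tar")
with tarfile.open(tar_path, "w") as tar:
    for name in members:
        tar.add(os.path.join(tmp, name), arcname=name)

with tarfile.open(tar_path) as tar:
    names = sorted(tar.getnames())
print(names)  # ['devc.o', 'lib0.o']
```

If you do want a .so built on the host, lib.export_library can reportedly be given a cross-compiler wrapper such as tvm.contrib.cc.cross_compiler('arm-linux-gnueabihf-g++'), assuming that toolchain is installed; the resulting .so then loads directly on the Pi.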

Have you installed Python on the ARM device? Do you have WeChat?

Yes, Python is installed on the ARM device.

No, text chat only.

Have you solved the issue?