Run TVM module on RK3399 without RPC server


Hello, I can't use the RPC way to control my board, because my host PC is at home and my Firefly-RK3399 board is in my office; they are not on the same network segment. Is there any alternative to RPC? I can configure the Mali target, build the output graph / params / library, export them, and finally copy these to my RK3399 board. Does the TVM runtime on the RK3399 have an interface to load these outputs built on the PC, without using RPC? I am new to TVM. Thanks.


You can definitely load all the deployed files with or without RPC. Here is a typical example. It shows that you can use either C++ or Python to do that. Hope this helps.


On a side note, if you can manage to copy things over, does that mean you can set up an SSH tunnel to your office subnet? That might make things easier.
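To make the tunneling suggestion concrete: if any machine in the office is reachable over SSH, a local port forward lets the usual RPC flow work across network segments. A sketch, where the gateway host name, the board IP, and the port are placeholders, not details from this thread:

```shell
# Forward local port 9090 to the board's RPC server port through an
# SSH-reachable gateway in the office network.
# "user@office-gateway" and "board-ip" are placeholder names.
ssh -L 9090:board-ip:9090 user@office-gateway

# The board's RPC server is then reachable from the home PC as
# localhost:9090, e.g. in Python: tvm.rpc.connect("localhost", 9090)
```

This keeps the normal tune/deploy workflow intact and only changes how the host reaches the board.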


Yes, thanks. The link you mentioned is for the board side. On the PC side I use:

with nnvm.compiler.build_config(opt_level=3):
    # the build call was stripped by the forum; "net" stands for the model symbol
    graph, lib, params = nnvm.compiler.build(net, target=target,
        shape={"data": data_shape}, params=params, target_host=target_host)


This exports a .tar file (with .cc and .o files inside), but I need a .so file.

I can get the JSON and params by:

with open(cfg.json_path, "w") as fo:
    fo.write(graph.json())  # write calls were stripped; this is the standard NNVM idiom
with open(cfg.params_path, "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))


I can copy things over because I use TeamViewer to reach my home PC, and that software has an FTP function. In addition, I prefer to use TVM without RPC because I want my board to run deep learning independently after the debugging stage of my project is finished. I found this link: [How to create arm-based .so file without RPC on my laptop (ubuntu)]. Can I try this way?


You can export a shared library like this:

path_lib = os.path.join(thisdir, "")  # the filename was stripped; put a .so path here
lib.export_library(path_lib)

where your lib should be built with the NNVM compiler like this:

graph, lib, params = nnvm.compiler.build(net, target, shape_dict, params=nnvm_params)

And in C/C++, you can load the .so file like this:

tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("");

Similarly, in Python you can load it with:

loaded_lib = tvm.module.load("")


Thanks a lot, I found it. Now I can get the .so, .json, and .params successfully. My build code is below:

target_host = "llvm -target=aarch64-linux-gnu"
target = tvm.target.mali()  # Mali GPU target; the original value was stripped by the forum
with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(net, target=target,
        shape={"data": data_shape}, params=params, target_host=target_host)
lib.export_library("net.so")  # filename assumed, to match net.json / net.params
with open("net.json", "w") as fo:
    fo.write(graph.json())
with open("net.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))

On my board side (RK3399):

import tvm

loaded_lib = tvm.module.load("")
loaded_json = open("net.json").read()
loaded_params = bytearray(open("net.params", "rb").read())
fcreate = tvm.get_global_func("tvm.graph_runtime.create")
ctx = tvm.gpu(0)
gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]
gmodule["load_params"](loaded_params)  # load the saved params; this line appears stripped
set_input("x", tvm.nd.array(x_np))
run()  # a run() call before get_output also appears to have been dropped
out = tvm.nd.empty(shape)
get_output(0, out)

When I run it on my board side, it reports an error:

Traceback (most recent call last):
  File "", line 12, in
    gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
  File "/home/firefly/Documents/tvm/python/tvm/_ffi/_ctypes/", line 185, in call
    ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
  File "/home/firefly/Documents/tvm/python/tvm/_ffi/", line 68, in check_call
    raise TVMError(py_str(_LIB.TVMGetLastError()))
tvm._ffi.base.TVMError: [07:05:11] /home/firefly/Documents/tvm/src/runtime/ Check failed: allow_missing Device API gpu is not enabled.

It looks like my Mali is not enabled, but when I test my Mali with clpeak and clinfo, it works and OpenCL is installed. Also, I installed the TVM runtime with OpenCL enabled. Do you know what other reasons there could be?
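For reference, the "Device API ... is not enabled" check fires when the runtime was built without the corresponding backend. The OpenCL backend is switched on in config.cmake before building the runtime; a standard TVM build fragment (which, from the description above, was apparently already applied here):

```cmake
# in <tvm>/build/config.cmake, before running cmake and rebuilding the runtime:
set(USE_OPENCL ON)
```

Note the error above names the "gpu" device API specifically, so the backend it is missing is the one requested by the context passed to the graph runtime, not necessarily OpenCL.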


I tried tvm.cl(0) instead of tvm.gpu(0), and it works. Is tvm.gpu just for CUDA?
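Yes: TVM contexts carry a DLPack device-type code, and the graph runtime picks its device API from that code. tvm.gpu(0) requests the CUDA device API, while a Mali GPU driven through OpenCL needs the OpenCL code, which tvm.cl(0) produces. A small self-contained sketch of the relevant codes (the integers are standard DLDeviceType constants; no TVM install is needed to see the mapping):

```python
# DLPack DLDeviceType codes; TVM contexts (tvm.cpu(), tvm.gpu(), tvm.cl())
# wrap these integers, and the graph runtime selects the device API from them.
DL_DEVICE_TYPES = {
    "cpu": 1,     # kDLCPU    -> CPU device API
    "gpu": 2,     # kDLGPU    -> CUDA device API (what tvm.gpu(0) requests)
    "opencl": 4,  # kDLOpenCL -> OpenCL device API (tvm.cl(0), used for Mali)
}

def device_type_for_mali():
    """Device-type code the graph runtime needs for a Mali GPU via OpenCL."""
    return DL_DEVICE_TYPES["opencl"]

print(device_type_for_mali())  # -> 4
```

So a runtime built with OpenCL but without CUDA rejects device type 2 ("Device API gpu is not enabled") while accepting device type 4, which matches the behavior seen above.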


Hi @emelife, were you able to fix this issue? I am facing the same issue.