Run tvm module on rk3399 without RPC server

Hello, I can't use RPC to control my board, because my host PC is at home and my Firefly RK3399 board is in my office; they are not on the same network (or subnet). So is there any alternative to RPC? I can configure the Mali target, build the output graph/params/library, and then export them, and finally copy these to my RK3399 board. Does the TVM runtime on the RK3399 have an interface to load these outputs built on the PC, without going through RPC? I am new to TVM. Thanks.

You can definitely load all the deployed files with or without RPC. Here is a typical example: https://docs.tvm.ai/deploy/nnvm.html . It shows that you can use either C++ or Python to do that. Hope this helps.
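
To make the Python path on that page concrete, here is a minimal sketch of loading and running a deployed module without RPC, using the NNVM-era API. The file names (deploy.so / deploy.json / deploy.params), the input name "data", and the shapes are placeholders I chose, not anything the runtime requires:

import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Load the three artifacts produced at build time (names are assumptions).
lib = tvm.module.load("deploy.so")
graph_json = open("deploy.json").read()
params_bytes = bytearray(open("deploy.params", "rb").read())

# Create the graph runtime on the desired device and run one inference.
ctx = tvm.cpu(0)  # use tvm.cl(0) for a Mali GPU via OpenCL
module = graph_runtime.create(graph_json, lib, ctx)
module.load_params(params_bytes)
module.set_input("data", tvm.nd.array(np.zeros((1, 3, 224, 224), dtype="float32")))
module.run()
out = module.get_output(0, tvm.nd.empty((1, 1000), dtype="float32"))
print(out.asnumpy())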

On a side note, if you can manage to copy things over, does that mean you could set up an SSH tunnel to your office subnet? That might make things easier.

Yes, thanks. The link you gave covers the board side. On the PC side I use:

with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(net, target=target,
                                             shape={"data": data_shape},
                                             params=params, target_host=target_host)

lib.export_library(lib_fname)

This exports a .tar file (with .cc and .o files inside), but I need a .so file.
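
As far as I can tell, export_library only falls back to packaging a .tar of object files when it has no compiler to link with; handing it a cross-compiler should produce a .so directly. A minimal sketch, where the toolchain name is an assumption (use whichever aarch64 g++ you have installed):

# Sketch: link a .so for aarch64 instead of packaging a .tar of objects.
# The cross-compiler name is an assumption, not a TVM default.
lib.export_library("net.so", cc="aarch64-linux-gnu-g++")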

I can get the JSON and params with:

with open(cfg.json_path, "w") as fo:
    fo.write(graph.json())
with open(cfg.params_path, "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))
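
If it helps, you can sanity-check the saved params on the host by round-tripping them; as far as I know, load_param_dict is the NNVM-era inverse of save_param_dict:

# Sketch: round-trip the serialized params to verify they were saved correctly.
import nnvm.compiler

param_bytes = bytearray(open(cfg.params_path, "rb").read())
loaded_params = nnvm.compiler.load_param_dict(param_bytes)
print({k: v.shape for k, v in loaded_params.items()})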

I can copy things over because I use TeamViewer to reach my home PC, and that software has an FTP function. In addition, I prefer to use TVM without RPC because I want my board to run deep learning independently after the debugging stage of my project is finished. I found this link: [How to create arm-based .so file without RPC on my laptop (ubuntu)]. Can I try this way?

You can export a shared library like this:

import os

path_lib = os.path.join(thisdir, "deploy.so")
lib.export_library(path_lib)

where your lib should be built with the nnvm compiler, like this:

graph, lib, params = nnvm.compiler.build(nnvm_sym, target, shape_dict, params=nnvm_params)

And in C/C++, you could load the .so file as follows:

tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("deploy.so");

Similarly, in Python you could load it with:

loaded_lib = tvm.module.load("deploy.so")

Thanks a lot, I found it. Now I get the .so, .json, and .params successfully. My build function is below:

target_host = "llvm -target=aarch64-linux-gnu"
target = tvm.target.mali()
with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(net, target=target,
                                             shape={"data": data_shape},
                                             params=params, target_host=target_host)
lib.export_library("net.so", cc="/opt/gcc-linaro-4.9-2016.02-x86_64_aarch64-linux-gnu/bin/aarch64-linux-gnu-g++")
with open("net.json", "w") as fo:
    fo.write(graph.json())
with open("net.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))

On my board side (rk3399):

import tvm

loaded_lib = tvm.module.load("net.so")
loaded_json = open("net.json").read()
loaded_params = bytearray(open("net.params", "rb").read())
fcreate = tvm.get_global_func("tvm.graph_runtime.create")
ctx = tvm.gpu(0)
gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
set_input, get_output, run = gmodule["set_input"], gmodule["get_output"], gmodule["run"]
set_input("x", tvm.nd.array(x_np))
gmodule["load_params"](loaded_params)
run()
out = tvm.nd.empty(shape)
get_output(0, out)
print(out.asnumpy())

Running it on my board side, it reports an error:

Traceback (most recent call last):
  File "test.py", line 12, in <module>
    gmodule = fcreate(loaded_json, loaded_lib, ctx.device_type, ctx.device_id)
  File "/home/firefly/Documents/tvm/python/tvm/_ffi/_ctypes/function.py", line 185, in __call__
    ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
  File "/home/firefly/Documents/tvm/python/tvm/_ffi/base.py", line 68, in check_call
    raise TVMError(py_str(_LIB.TVMGetLastError()))
tvm._ffi.base.TVMError: [07:05:11] /home/firefly/Documents/tvm/src/runtime/c_runtime_api.cc:90: Check failed: allow_missing Device API gpu is not enabled.

It looks like my Mali isn't enabled, but when I test the Mali with clpeak and clinfo it works, and OpenCL is installed. Also, I built the TVM runtime with OpenCL enabled. Do you know what else could cause this?

I tried tvm.cl(0) instead of tvm.gpu(0), and it works. Is tvm.gpu just for CUDA?
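
That matches my understanding: tvm.gpu(0) selects the CUDA device type in the runtime, while OpenCL devices such as the Mali are addressed with tvm.cl(0). A sketch of picking the context from the target string; the mapping is my reading of the NNVM-era runtime, not something confirmed in this thread:

import tvm

def context_for(target_str, dev_id=0):
    # Assumed mapping: "gpu" means CUDA in TVM's device-type enum,
    # while a Mali GPU is reached through the OpenCL device type.
    if "cuda" in target_str:
        return tvm.gpu(dev_id)
    if "opencl" in target_str or "mali" in target_str:
        return tvm.cl(dev_id)
    return tvm.cpu(dev_id)

ctx = context_for(str(tvm.target.mali()))  # yields an OpenCL context on rk3399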

Hi @emelife, were you able to fix this issue? I am also facing the same issue.