How to create an ARM-based .so file without RPC on my laptop (Ubuntu)


#1

Hello,

Following the get_started.py guide on GitHub, I successfully created .so, .params, and .json files on my laptop (Ubuntu 14.04). The question is: how can I create ARM-based files locally without using RPC (due to network limitations)?


#2

There are a few possible answers to your question:

  • First of all, RPC is highly recommended; it makes experimentation much easier. If you have Ubuntu on your board, then depending on your networking conditions:
    • If you can ssh to your board, you can directly set up a tunnel.
    • You don’t strictly need to start the RPC server on your board; we also provide an rpc_proxy which you can start on your laptop (which means only the proxy server needs a static IP).
  • You could try to set up a cross compiler on your laptop; this usually works well for Android.
  • If none of the above works, you can save your model as a .o file, copy it over to the ARM board, and manually create a shared library using gcc.
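The last option can be sketched as follows (a sketch only; the filenames are examples, and `lib` is assumed to be a module built for the ARM target as in get_started.py):

```python
def link_shared_cmd(output, obj, cc="gcc"):
    # The link command that turns the saved object file into a shared
    # library on the board; -shared and -fPIC make it loadable as a .so.
    return [cc, "-shared", "-fPIC", "-o", output, obj]

# Host side (x86), after building for the ARM target:
#     lib.save("model.o")
# Board side (ARM), using the board's native gcc:
#     gcc -shared -fPIC -o model.so model.o
# which is exactly link_shared_cmd("model.so", "model.o")
```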

#3

First of all, many thanks for your answer.
Unfortunately, my experiment environment is essentially offline, which means I cannot use RPC or any other remote approach. The core of what I want to achieve is to create ARM .so files and build an ARM executable for an inference model, so the third way is not appropriate for me.
Let me give you a brief summary of what I’ve done.

(In x86, target machine is x86)

  1. Got the checkpoint files after training my own MXNet model.
    (1.1 Executed the makefiles of TVM and NNVM to get the shared libraries that would be used)
  2. Loaded the checkpoint files to create the shared library, json, and params files, as get_started.py in nnvm does.
  3. Loaded the .so, json, and params files into a cc file, as cpp_deploy.cc in tvm describes, and generated an executable program (with libtvm_runtime.so) using gcc.

(In x86, target machine is ARM)
At step 1.1, I could not get the shared libraries mentioned above, because an error occurred during make in tvm after I changed llvm-config in config.mk and set CXX in the Makefile to the cross compiler. I’ve tried several times but am still stuck at this step. Could you give me some details on how to edit config.mk and the Makefile in tvm to create ARM shared libraries and the json and params files?

Thanks.


#4

OK, because you said experiment instead of deployment, I thought you wanted to experiment continuously, in which case RPC is preferred.

What you want is more like a deployment scenario.

You SHOULD NOT use a cross compiler when you build tvm itself, because tvm runs on your host (x86) side.
You just want to pack everything TVM creates into an ARM shared library. In that case, you just need to pass a cross-compiler function when you call export_library. The most common example is again Android deployment, which you should check out: http://docs.tvmlang.org/how_to/deploy.html#build-model-for-android-target Note that ndk.create_shared there is a thin wrapper that takes the input and output arguments and invokes the NDK cross compiler.

This will give you a .so file that can run on your target platform, assuming your cross compiler is set up correctly. You also need to cross-compile the tvm runtime library and link it into your executable on ARM. This runtime needs to be cross-compiled separately; one easy way to do so is to compile the tvm_runtime_pack.cc at https://github.com/dmlc/tvm/tree/master/apps/howto_deploy


#5

Sorry about the confusion between experiment and deployment. After reading your suggestions, I roughly understand what I should do next. One more question: the example you gave targets Android deployment rather than ordinary ARM boards. If I want to achieve general ARM deployment, what is the substitute for ndk.create_shared? My own cross-compile toolchain, or something else? I’m just a little confused about the parameters (fcompile and kwargs).

Thank you.


#6

The fcompile argument should be a function like http://docs.tvmlang.org/api/python/contrib.html#tvm.contrib.cc.create_shared, except that the compiler should change. You can likely pass in:

module.export_library("xx.so", contrib.cc.create_shared, cc="path/to/cross-compiler")
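For a custom toolchain, fcompile just has to follow the same calling convention: the output path first, then the object file(s), with any extra keyword arguments forwarded from export_library. A hypothetical stand-in, sketched here for illustration (my_cross_compile and the compiler path are made up, not part of TVM):

```python
import subprocess

def my_cross_compile(output, objects, **kwargs):
    # Same calling convention as contrib.cc.create_shared: output .so
    # path, then the object file(s) produced by TVM; keyword arguments
    # such as cc= are forwarded by export_library.
    cc = kwargs.get("cc", "g++")
    objs = [objects] if isinstance(objects, str) else list(objects)
    subprocess.run([cc, "-shared", "-fPIC", "-o", output] + objs, check=True)

# module.export_library("xx.so", my_cross_compile,
#                       cc="/path/to/arm-linux-gnueabi-g++")
```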

#7

First of all, thanks for your patience, Dr. Chen, but I’m still struggling with some stumbling blocks.

As I understand it, I should cross-compile the cc file as you said, get the required runtime .so file, and then link it into my executable. But I noticed the comment written there: You only have to use this file to compile libtvm_runtime to include in your project. Copy this file into your project which depends on tvm runtime. Does it mean I need to cross-compile my main cc file and this tvm_runtime_pack.cc together as input to generate my executable?


#8

Yes, if you want to pack everything together in one executable.
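Concretely, that combined build might look like the following (a sketch; the include paths and toolchain location are assumptions that depend on where tvm is checked out and which cross compiler you use):

```python
# Cross-compile the deployment code and the bundled runtime together
# into one ARM executable (paths below are examples, not canonical).
cmd = [
    "/opt/arm/bin/armeb-linux-gnueabi-g++",   # your cross compiler
    "-std=c++11", "-O2",
    "-I", "tvm/include",
    "-I", "tvm/dlpack/include",
    "-I", "tvm/dmlc-core/include",
    "cpp_deploy.cc",                          # your main cc file
    "tvm/apps/howto_deploy/tvm_runtime_pack.cc",
    "-o", "deploy_arm",
    "-ldl", "-lpthread",                      # the runtime needs dl and pthread
]
# Run this where the toolchain is installed, e.g.:
#     subprocess.run(cmd, check=True)
```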


#9

It works! Thanks!
As you said above, I only needed to change cc to my cross-compiler path, as shown below:

module.export_library("depress_deploy.so", contrib.cc.create_shared, cc="/opt/arm/bin/armeb-linux-gnueabi-g++")

So the target in my code should also be changed to the same triple as cc, is that right?

target = "llvm -target=armeb-linux-gnueabi"

#10

Yes, you need to set the target triple correctly.
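One way to sanity-check this is to compare the triple in the target string against the cross compiler's name prefix (a heuristic helper for illustration, not a TVM API):

```python
import os
import re

def triples_match(target, cc):
    # Extract the triple from an "llvm -target=<triple>" string and check
    # that the cross-compiler binary name starts with the same triple.
    m = re.search(r"-target=([\w.-]+)", target)
    if m is None:
        return False
    return os.path.basename(cc).startswith(m.group(1))

# triples_match("llvm -target=armeb-linux-gnueabi",
#               "/opt/arm/bin/armeb-linux-gnueabi-g++")   # -> True
```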


#11

Thanks so much Dr. Chen!


#12

Check out http://docs.tvmlang.org/tutorials/deployment/cross_compilation_and_rpc.html#sphx-glr-tutorials-deployment-cross-compilation-and-rpc-py which has comments about the target.


#13

Excuse me, I also got the checkpoint from an MXNet model, but I ran into errors.
First, when I used a gluoncv model, the code was:

block = gluoncv.model_zoo.get_model('ssd_512_mobilenet1.0_voc', pretrained=True)
net, params = nnvm.frontend.from_mxnet(block)

NNVMError: Cannot find argument ‘axes’, Possible Arguments:

axis : tuple of <int>, optional, default=[] List of axes on which input data will be sliced according to the corresponding size of the second input. By default will slice on all axes. Negative axes are supported. , in operator slice_like(name="", axes="(2, 3)")

Second, when I loaded the model from my saved checkpoint:

sym, arg_params, aux_params = load_checkpoint("ssd_resnet50_512", 0)
net, params = from_mxnet(sym, arg_params, aux_params)

Operator: _contrib_MultiBoxTarget is not supported in nnvm.

reference: https://docs.tvm.ai/tutorials/nnvm/deploy_ssd.html
Do you know how to fix this?
Thanks very much.


#14

Hi,

@tqchen is this still supposed to work? I had to change contrib.cc.create_shared to tvm.contrib.cc.create_shared.

If I run the following command I get an error:

func.export_library("test.so", tvm.contrib.cc.create_shared, cc="/usr/bin/aarch64-linux-gnu-g++")

RuntimeError: Compilation error:
/tmp/tmpgitwk1fj/lib.o: error adding symbols: File in wrong format
collect2: error: ld returned 1 exit status

It seems that it is not correctly cross-compiling the file. Any ideas?
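The “File in wrong format” error from the linker usually means the object file TVM produced is still x86, typically because the target triple was not set when the module was built. One way to check which architecture lib.o was actually compiled for is to read its ELF e_machine field (a small diagnostic sketch, equivalent to running `file` on it):

```python
import struct

# ELF e_machine values for the architectures discussed in this thread.
EM_X86_64 = 0x3E
EM_AARCH64 = 0xB7

def elf_machine(path):
    # The ELF identification block occupies bytes 0-15; e_type sits at
    # offset 16 and e_machine (the target architecture) at offset 18.
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # EI_DATA (byte 5) says whether the file is little- or big-endian.
    endian = "<" if header[5] == 1 else ">"
    return struct.unpack_from(endian + "H", header, 18)[0]

# If elf_machine("/tmp/tmpgitwk1fj/lib.o") returns EM_X86_64 rather than
# EM_AARCH64, the module was built without the ARM target triple.
```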