TVM NDK arm64 compile without RPC

Hi

I’m on an Ubuntu 16.04 server with LLVM 6.0 and android-ndk-r16b-linux-x86_64. As a test, I’m using NNVM to compile a simple convnet for target = 'llvm -target=arm64-linux-android' with target_host=None (CPU only for now; is this correct?). I want to create the shared library, graph, and params locally without RPC (as described in http://www.tvmlang.org/2017/11/08/android-rpc-introduction.html) and bundle them into an Android app.

I have already created the arm64 toolchain with ./make_standalone_toolchain.py, exported TVM_NDK_CC=/my_toolchain/bin/aarch64-linux-android-clang, and then used tvm.contrib.ndk for lib.export_library(path, ndk.create_shared). The build completes, but since I don’t have a remote ctx and I use the mx.cpu ctx when loading the pretrained params, I suspect the build isn’t actually targeting Android. Am I right?

So is it possible to build for Android without RPC on a server?

I’d appreciate any help with these issues.
Thanks

Refer to the code below to save a compiled model for the Android phone platform in the CPU or GPU (OpenCL) flavor.
It will save deploy_lib.so, deploy_graph.json, and deploy_param.params for your model; the same files can be deployed on an Android phone using the android_deploy application.

import os
import numpy as np
import tvm
from tvm.contrib import ndk
import nnvm.frontend
import nnvm.compiler
from nnvm.testing.darknet import __darknetffi__

# download model and framework library
#wget -O libdarknet.so 'https://github.com/siju-samuel/darknet/blob/master/lib/libdarknet.so?raw=true'
#wget -O extraction.cfg 'https://github.com/pjreddie/darknet/blob/master/cfg/extraction.cfg?raw=true'
#wget -O extraction.weights 'http://pjreddie.com/media/files/extraction.weights?raw=true'

darknet_lib = __darknetffi__.dlopen('./libdarknet.so')
net = darknet_lib.load_network('./extraction.cfg'.encode('utf-8'), './extraction.weights'.encode('utf-8'), 0)

# darknet expects NCHW input; only the shape is needed for compilation
data = np.empty([1, net.c, net.h, net.w], np.float32)
shape = {'data': data.shape}

# GET model from frameworks
sym, params = nnvm.frontend.darknet.from_darknet(net, np.float32)

exec_gpu = True
opt_level = 0
arch = "arm64"
if exec_gpu:
    # Mobile GPU
    target = 'opencl'
    target_host = "llvm -target=%s-linux-android" % arch
else:
    # Mobile CPU
    target = "llvm -target=%s-linux-android" % arch
    target_host = None

print('Build Graph...')
with nnvm.compiler.build_config(opt_level=opt_level, add_pass=None):
    graph, lib, params = nnvm.compiler.build(sym, target, shape, params=params, target_host=target_host)

lib.export_library("deploy_lib.so", ndk.create_shared)
with open("deploy_graph.json", "w") as fo:
    fo.write(graph.json())
with open("deploy_param.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))
print('Save complete...')
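For reference, the three saved files are consumed on the device side with the TVM graph runtime. Below is a minimal sketch of that loading pattern, assuming the file names produced by the script above; note that deploy_lib.so was cross-compiled for arm64, so it can only be loaded on the phone itself (or over RPC), not on the x86 build host, and tvm.cpu(0) here stands in for whatever device context you actually run on (e.g. tvm.cl(0) for the OpenCL flavor).

```python
import numpy as np
import tvm
from tvm.contrib import graph_runtime

# load the three artifacts saved by the build script above
loaded_lib = tvm.module.load("deploy_lib.so")
with open("deploy_graph.json") as f:
    loaded_graph = f.read()
with open("deploy_param.params", "rb") as f:
    loaded_params = bytearray(f.read())

# placeholder context: use the real device context on the phone
ctx = tvm.cpu(0)

# build the runtime module and load the weights
module = graph_runtime.create(loaded_graph, loaded_lib, ctx)
module.load_params(loaded_params)

# feed an input matching the shape dict used at compile time
# (1, net.c, net.h, net.w); zeros here are just a placeholder
module.set_input("data", tvm.nd.array(
    np.zeros((1, net.c, net.h, net.w), dtype=np.float32)))
module.run()
out = module.get_output(0)
```

The android_deploy application does the equivalent of this through the TVM Java bindings, so if the files load and run there, the build targeted the right architecture.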

Great! Thanks for the clarification, Pariksheet.