How to build TVM as a static library and do inference using C++?

Error Message:

tvm/src/runtime/module.cc:58: Check failed: f != nullptr Loader of (module.loadfile_so) is not presented.
terminate called after throwing an instance of 'dmlc::Error'
  what():  [16:49:05] tvm/src/runtime/module.cc:58: Check failed: f != nullptr Loader of (module.loadfile_so) is not presented.

Reproducing steps:

  1. Get the pretrained resnet18 model and dump the necessary files: the graph JSON, the parameters, and the library file
# -*- coding: utf-8 -*-

import mxnet
import nnvm
import tvm
import numpy as np
import os

from mxnet.gluon.model_zoo.vision import get_model
from PIL import Image
import matplotlib.pyplot as plt
project_root = r'/home/xxx/data/tvm_demo'
block = get_model('resnet18_v1', pretrained=True)
synset_name = os.path.join(project_root, 'imagenet1000_clsid_to_human.txt')
img_name = os.path.join(project_root, 'cat.png')
with open(synset_name) as f:
    synset = eval(f.read())
image = Image.open(img_name).resize((224, 224))
plt.imshow(image)
# plt.show()

def transform_image(image):
    image = np.array(image) - np.array([123., 117., 104.])
    image /= np.array([58.395, 57.12, 57.375])
    image = image.transpose((2, 0, 1))
    image = image[np.newaxis, :]
    return image

x = transform_image(image)
print('x', x.shape)

sym, params = nnvm.frontend.from_mxnet(block)

sym = nnvm.sym.softmax(sym)

import nnvm.compiler

target = 'llvm'
shape_dict = {'data': x.shape}
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

# dumping model files
print("Dumping model files...")
lib_path = os.path.join(project_root, 'resnet18_deploy.so')
lib.export_library(lib_path)

graph_json_path = os.path.join(project_root, 'resnet18.json')
with open(graph_json_path, 'w') as fo:
    fo.write(graph.json())

param_path = os.path.join(project_root, 'resnet18.params')
with open(param_path, 'wb') as fo:
    fo.write(nnvm.compiler.save_param_dict(params))

from tvm.contrib import graph_runtime
ctx = tvm.cpu(0)
dtype = 'float32'
m = graph_runtime.create(graph, lib, ctx)

# set inputs
data_x = tvm.nd.array(x.astype(dtype))
cat_file = os.path.join(project_root, 'cat.bin')
data_x.asnumpy().tofile(cat_file)
m.set_input('data', data_x)
m.set_input(**params)
# execute
m.run()
# get outputs
tvm_output = m.get_output(0, tvm.nd.empty((1000,), dtype))
top1 = np.argmax(tvm_output.asnumpy())
print('TVM prediction top-1:', top1, synset[top1])

The above script works well.

  2. Build TVM as a static library (using a Bazel-like build system) and get libtvm.a

  3. Test C++ inference using the code from this page and get the error above.

Could you tell me how to fix this problem, or am I doing something wrong?
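For reference, the failing C++ side follows the standard graph-runtime deployment pattern; here is a minimal sketch of it (file names taken from the Python script above; the failing call is marked in the comments):

#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>
#include <fstream>
#include <sstream>
#include <string>

int main() {
  // Load the compiled module. This is the call that fails with
  // "Loader of (module.loadfile_so) is not presented" when the
  // loader registration is missing from the linked runtime.
  tvm::runtime::Module mod_dylib =
      tvm::runtime::Module::LoadFromFile("resnet18_deploy.so");

  // Read the graph JSON dumped by the Python script.
  std::ifstream json_in("resnet18.json");
  std::stringstream json_ss;
  json_ss << json_in.rdbuf();
  std::string json_data = json_ss.str();

  // Create a graph runtime instance on CPU.
  int device_type = kDLCPU;
  int device_id = 0;
  tvm::runtime::Module mod = (*tvm::runtime::Registry::Get(
      "tvm.graph_runtime.create"))(json_data, mod_dylib, device_type,
                                   device_id);

  // set_input / load_params / get_output are driven the same way,
  // as PackedFuncs fetched from the graph runtime module.
  tvm::runtime::PackedFunc run = mod.GetFunction("run");
  run();
  return 0;
}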

Check your libtvm.a contents. It should look like below:

#include "src/runtime/c_runtime_api.cc"
#include "src/runtime/cpu_device_api.cc"
#include "src/runtime/workspace_pool.cc"
#include "src/runtime/module_util.cc"
#include "src/runtime/module.cc"
#include "src/runtime/registry.cc"
#include "src/runtime/file_util.cc"
#include "src/runtime/threading_backend.cc"
#include "src/runtime/thread_pool.cc"
#include "src/runtime/dso_module.cc"
#include "src/runtime/system_lib_module.cc"
#include "src/runtime/graph/graph_runtime.cc"

module.cc registers the loadfile_so packed function.

It seems like the build file is fine; see below!

You may verify it using Bazel!

And I found the loadfile_so packed function defined in dso_module.cc#106, not in module.cc.

Yes, it’s in dso_module.cc and called from module.cc.
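For context, the registration in dso_module.cc happens through TVM_REGISTER_GLOBAL, which expands to a file-scope static object whose constructor inserts the PackedFunc into the global registry during static initialization. Roughly (paraphrased, not an exact copy of the source):

// Paraphrased sketch of the registration in src/runtime/dso_module.cc.
// If the linker drops dso_module.cc.o from the archive, this static
// initializer never runs, module.cc's lookup of "module.loadfile_so"
// returns nullptr, and you get the error above.
TVM_REGISTER_GLOBAL("module.loadfile_so")
.set_body([](TVMArgs args, TVMRetValue* rv) {
    std::shared_ptr<DSOModuleNode> n = std::make_shared<DSOModuleNode>();
    n->Init(args[0]);
    *rv = Module(n);
  });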

Please grab some debug output with the patch below.

diff --git a/src/runtime/registry.cc b/src/runtime/registry.cc
index 3f728283..77234d30 100644
--- a/src/runtime/registry.cc
+++ b/src/runtime/registry.cc
@@ -45,6 +45,7 @@ Registry& Registry::set_body(PackedFunc f) {  // NOLINT(*)
 }
 
 Registry& Registry::Register(const std::string& name, bool override) {  // NOLINT(*)
+  LOG(WARNING) << "Register:" << name;
   Manager* m = Manager::Global();
   std::lock_guard<std::mutex> lock(m->mutex);
   auto it = m->fmap.find(name);

Output:

[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._Enabled
[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._GetSource
[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._ImportsSize
[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._GetImport
[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._GetTypeKey
[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._LoadFromFile
[20:05:03] tvm/src/runtime/registry.cc:48: Register:module._SaveToFile
[20:05:03] tvm/src/runtime/registry.cc:48: Register:__tvm_set_device
[20:05:03] tvm/src/runtime/registry.cc:48: Register:_GetDeviceAttr

You should get the list below from Register:

[19:38:04] src/runtime/registry.cc:48: Register:__tvm_set_device
[19:38:04] src/runtime/registry.cc:48: Register:_GetDeviceAttr
[19:38:04] src/runtime/registry.cc:48: Register:device_api.cpu
[19:38:04] src/runtime/registry.cc:48: Register:module._Enabled
[19:38:04] src/runtime/registry.cc:48: Register:module._GetSource
[19:38:04] src/runtime/registry.cc:48: Register:module._ImportsSize
[19:38:04] src/runtime/registry.cc:48: Register:module._GetImport
[19:38:04] src/runtime/registry.cc:48: Register:module._GetTypeKey
[19:38:04] src/runtime/registry.cc:48: Register:module._LoadFromFile
[19:38:04] src/runtime/registry.cc:48: Register:module._SaveToFile
[19:38:04] src/runtime/registry.cc:48: Register:module.loadfile_so
[19:38:04] src/runtime/registry.cc:48: Register:module._GetSystemLib
[19:38:04] src/runtime/registry.cc:48: Register:tvm.graph_runtime.create
[19:38:04] src/runtime/registry.cc:48: Register:tvm.graph_runtime.remote_create

Could you check the symbols in your libtvm.a for the other functions in ./src/runtime/dso_module.cc?

@qingyuanxingsi

android native deploy shows how to run on Android OS using native function calls. The library you need to deploy on the target (Android) should be precompiled with the proper target and target host, and should be exported using the NDK toolchain.

Please refer to this link to compile CPU- and GPU-flavor versions for the Android target.

@dayanandasiet My use case is deploying a deep learning model on an ordinary CPU (TensorFlow/Caffe won't work due to protobuf version issues). Could you give a more detailed tutorial on how to deploy an MXNet model (like a pretrained resnet18) to a CPU environment? That would be very helpful! Building a TVM static library is preferred in my production environment.

And the key is not the Python part; how to deploy a model in a C++ environment is the key here!

nm tvm/libtvm.a | grep module
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
module_util.cc.o:
0000000000000c3c t _GLOBAL__sub_I_module_util.cc
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
module.cc.o:
0000000000003978 t _GLOBAL__sub_I_module.cc
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
dso_module.cc.o:
0000000000000677 t _GLOBAL__sub_I_dso_module.cc
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
system_lib_module.cc.o:
00000000000006e5 t _GLOBAL__sub_I_system_lib_module.cc
0000000000000000 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000030 d _ZN3tvm7runtime6symbolL15tvm_module_mainE
0000000000000038 d _ZN3tvm7runtime6symbolL14tvm_module_ctxE
0000000000000068 d _ZN3tvm7runtime6symbolL15tvm_module_mainE

Will this be of any help for debugging this case?

Can you try including all the files inside your C++ app instead of linking a separate libtvm.a?

Or

Create a tvm.cc which just includes all the files above, and build libtvm.a from this single file. I use this approach in a different build environment and it works; see the verification sketch below.
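Once the app links, you can confirm the loader actually made it in with a quick check like this (a sketch; Registry::Get returns nullptr for names that were never registered):

#include <tvm/runtime/registry.h>
#include <iostream>

int main() {
  // nullptr here means the static registration in dso_module.cc
  // was dropped by the linker and LoadFromFile will fail.
  const tvm::runtime::PackedFunc* f =
      tvm::runtime::Registry::Get("module.loadfile_so");
  std::cout << "module.loadfile_so is "
            << (f != nullptr ? "registered" : "MISSING") << std::endl;
  return f != nullptr ? 0 : 1;
}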


Building one single file works now, much thanks! And could you tell me why it is like this?

And if I want to deploy to a CPU target, what should I set the target parameter to in the Python script?

Great.
The multi-file issue may be because the linker only pulls an object file out of a static archive when one of its symbols is directly referenced; nothing references the static registrations in files like dso_module.cc, so their object files get dropped, while a single amalgamated object file keeps them all.

target='llvm' should be good enough for a CPU target.

Hi, I am also getting the same error:

tvm/src/runtime/module.cc:58: Check failed: f != nullptr Loader of (module.loadfile_so) is not presented.
terminate called after throwing an instance of 'dmlc::Error'
what(): [16:49:05] tvm/src/runtime/module.cc:58: Check failed: f != nullptr Loader of (module.loadfile_so) is not presented.

I have combined all the .cc.o files into one .a file, then used this archive to build the application. The application builds fine, but when I deploy it on the ARM device it shows that error, whereas it works fine with libtvm_runtime.so.
I did not understand your above-mentioned solution:
"Create a tvm.cc which just includes all the files above, and build libtvm.a from this single file. I use this approach in a different build environment and it works."

Just to be sure: while building TVM as a static library, we should be using the lib.so (for arm64) generated from the Python file, right? Will that work with C++ deployment?
Thanks

There are two parts to deployment:

  • TVM runtime

The above solution is about how we build a static TVM runtime and compile it into our final executable.
Instead of building the individual runtime files and then archiving them, I suggested including all the .cc files in one file and then making the archive.

  • Compiled module (graph, params, model)
    Here the model is the lib.so (the Python build output), which is just the compiled module, not the runtime; see the sketch below.
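The two parts meet at run time roughly like this (a sketch; file name taken from the Python script above):

// libtvm.a (the runtime) is linked into the executable at build time;
// lib.so (the compiled module) is a plain shared object that the
// runtime opens at run time via the module.loadfile_so loader.
tvm::runtime::Module mod =
    tvm::runtime::Module::LoadFromFile("resnet18_deploy.so");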

I have the same error. Can you help me with how to build all the files into one single file and get libtvm.a as a dependency? Thanks!

Hi,

Could someone clarify how to actually build TVM as a static library and then link it against a given module (graph, params, model)?

Thanks

You may refer to https://github.com/dmlc/tvm/blob/master/apps/android_deploy/app/src/main/jni/tvm_runtime.h