How to make module.so statically link glibc?


#1

I have exported module.so on Ubuntu and want to deploy it on CentOS. The problem is that Ubuntu's glibc version is newer than CentOS's. I know how to link statically when compiling normal C/C++, but how do I control LLVM in TVM?


#2

You can create a statically linked library using the llvm --system-lib target. Here’s an example of how to do that.
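A minimal sketch of that flow (using the nnvm API from this thread's era; sym, params, and shape_dict stand for whatever your frontend produced, and the file name is an example):

```python
def build_system_lib(sym, params, shape_dict):
    """Build a module with --system-lib so the generated code registers
    itself into a global registry instead of needing dlopen at runtime."""
    import nnvm.compiler  # deferred import: only needed when actually building

    target = "llvm --system-lib"
    graph, lib, params = nnvm.compiler.build(
        sym, target, shape_dict, params=params)
    # Save a relocatable object file; link it statically into your binary.
    lib.save("deploy_static.o")
    return graph, lib, params
```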


#3

@nhynes The example given is too simple; can you give a more detailed example of how to generate the system lib using a real pretrained model, like this one:

import nnvm.compiler
target = 'cuda'
shape_dict = {'data': x.shape}
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

Will it work if I simply change target to llvm --system-lib?

And what should I do with a Bazel-like build system, as described on this page:

Bundle the compiled library into your project in system module mode.

Forgive me if these questions are too basic!


#4

@qingyuanxingsi the below should work to export as a system module.

target = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
lib.save("deploy_static.o")


#5

To add to what @srkreddy1238 suggested: if we want to build a CUDA library, the key is to instead set target_host to llvm --system-lib. Then you will be able to get the module directly, and can create the graph runtime with it.
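In code, this suggestion amounts to roughly the following (a sketch against the nnvm API of the time; sym, params, and shape_dict are placeholders for your own model):

```python
def build_cuda_system_lib(sym, params, shape_dict):
    """For a CUDA library, keep target as cuda but move --system-lib
    onto target_host, so the host-side wrapper code is what gets
    registered into the system module registry."""
    import nnvm.compiler  # deferred import: only needed when actually building

    target = "cuda"
    target_host = "llvm --system-lib"
    graph, lib, params = nnvm.compiler.build(
        sym, target, shape_dict, params=params, target_host=target_host)
    return graph, lib, params
```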

Tianqi


#6

see also https://docs.tvm.ai/deploy/nnvm.html


#7

Follow-up on an issue with actionable items: https://github.com/dmlc/tvm/issues/1523


#8

@tqchen @srkreddy1238 Many thanks, I will try it!


#9

Hi @srkreddy1238 @tqchen @qingyuanxingsi, I am compiling a TVM model for CUDA. Below is the Python snippet used to build the model. It generates the .o, .json and .params files after saving them, but when I try to run inference in C++ it gives a "function fuse_transpose_kernel0 not found" error.
target = 'cuda --system-lib'
target_host = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params, target_host=target_host)


Module.cc:92: Check failed: f != nullptr Cannot find function fuse_transpose_kernel0 in the imported modules or global registry


#10

Ref.

The make rule for building your final executable should look like this:
it should include the object file produced by compilation and should link tvm_runtime as well.

Hope this helps.


#11

I have linked my model.o file while compiling, and I am able to run it for LLVM as a system module, but for CUDA and OpenCL I am facing a problem. For non-CPU targets we have to pass some extra options while building the model; how do I use them here?


#12

Understood. CUDA compilation will generate a PTX file too.

cc @tqchen Do we have a reference to compile and deploy CUDA as system module ?


#13

You will need to understand what is happening behind https://github.com/dmlc/tvm/blob/master/python/tvm/module.py#L135

We embed the PTX and OpenCL binaries in a C file, and compile that together with the .so.
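As a rough illustration of what that embedding step does (a simplified stand-in, not TVM's actual codegen in module.py; the function and symbol names here are made up), the device binary is rendered as a C byte array that can then be compiled and linked into the shared library:

```python
def pack_blob_to_c(blob: bytes, symbol: str = "demo_dev_blob") -> str:
    """Render a device binary (e.g. PTX) as C source defining a byte
    array, so it can be compiled and linked into the shared library."""
    body = ",".join(str(b) for b in blob)
    return "const unsigned char %s[%d] = {%s};\n" % (symbol, len(blob), body)

# Example: embed a tiny fake "binary" as a C array.
print(pack_blob_to_c(b"\x01\x02", "demo_blob"))
# prints: const unsigned char demo_blob[2] = {1,2};
```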


#14

No @tqchen @srkreddy1238, I am not understanding; I just want to compile a TVM model for CUDA and OpenCL. And what is PTX?

target = 'cuda --system-lib'
target_host = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params, target_host=target_host)

Are the above steps correct to compile a TVM model for CUDA?
I set target to 'cuda --system-lib' and
target_host to 'llvm --system-lib'.


#15

@myproject24

I see you are using lib.save to export the module with LLVM.

Try using lib.export_library("net.tar") (the ".tar" extension is important here).
This creates net.tar with lib.o and dev.cc inside.

Include these two files in your final build.
Hope this helps.


#16

You mean like below?

target ='cuda --system-lib'
target_host = 'llvm --system-lib'
ctx = tvm.gpu(0)        

sym, params = nnvm.frontend.from_tensorflow(graph_def,'NCHW',shape=shape_dict)

##############Compile###########################################

with nnvm.compiler.build_config(opt_level=opt_level):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, dtype_dict, params=params, target_host=target_host)

################### save module #########################################################

from tvm.contrib import util

temp = util.tempdir()
path_lib = temp.relpath("deploy_lib.tar")
lib.export_library(path_lib)
with open(temp.relpath("deploy_graph.json"), "w") as fo:
    fo.write(graph.json())
with open(temp.relpath("deploy_param.params"), "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))
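For completeness, loading those saved artifacts back for inference in Python looks roughly like this (a sketch assuming the tvm/nnvm APIs of that era; the function name and arguments are illustrative):

```python
def load_and_run(lib_path, graph_path, params_path, input_name, input_data):
    """Load an exported module, graph JSON and params, then run inference."""
    import tvm  # deferred imports: only needed at runtime
    from tvm.contrib import graph_runtime

    lib = tvm.module.load(lib_path)  # e.g. the exported .tar
    graph_json = open(graph_path).read()
    params_bytes = open(params_path, "rb").read()

    module = graph_runtime.create(graph_json, lib, tvm.gpu(0))
    module.load_params(params_bytes)
    module.set_input(input_name, input_data)
    module.run()
    return module.get_output(0)
```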


#17

Thank you @srkreddy1238 and @tqchen my issue is resolved.


#18

@srkreddy1238 @tqchen Can I run inference in Python as a system module too?


#19

What do you mean by Python with the system module?


#20

Yes; just like what I am doing in C++ with the system module, how do I do the same in Python?