How to make module.so statically link glibc?

You can create a statically linked library using the llvm --system-lib target. Here’s an example of how to do that.
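A minimal sketch of that route (the operator and names here are illustrative, using the same-era TVM API as the rest of this thread):

import tvm

# Build a trivial add-one operator. With 'llvm --system-lib' the generated
# code registers itself in a global registry when the object file is linked
# into the final executable, so no dlopen of a .so is needed.
n = tvm.var("n")
A = tvm.placeholder((n,), name="A")
B = tvm.compute(A.shape, lambda i: A[i] + 1.0, name="B")
s = tvm.create_schedule(B.op)
lib = tvm.build(s, [A, B], target="llvm --system-lib", name="addone")
lib.save("addone.o")  # link this object file into your application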


@nhynes The example given is too simple. Can you give a more detailed example of how to generate the system lib using a real pretrained model, like this one:

import nnvm.compiler

# sym and params come from an nnvm frontend importer; x is a sample input array
target = 'cuda'
shape_dict = {'data': x.shape}
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)

Will it work if I simply change target to 'llvm --system-lib'?

And what should I do with a Bazel-like build system, as described on this page?

Bundle the compiled library into your project in system module mode.

Forgive me if these questions are too basic!

@qingyuanxingsi The following should work to export as a system module.

target = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params)
lib.save("deploy_static.o")

To add to what @srkreddy1238 suggested: if we want to build a CUDA library, the key is instead to set target_host to llvm --system-lib. Then you will be able to get the module directly and create a graph runtime with it, as in the sketch below.
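A sketch of that combination, reusing sym, shape_dict, and params from the snippets above:

import nnvm.compiler

# Device code targets CUDA; the host-side wrapper code is emitted as a
# system module that registers itself when linked into the executable.
target = 'cuda'
target_host = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict,
                                         params=params, target_host=target_host)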

Tianqi

see also https://docs.tvm.ai/deploy/nnvm.html

Follow-up on an issue with actionable items: https://github.com/dmlc/tvm/issues/1523

@tqchen @srkreddy1238 Many thanks, I will try it!

Hi @srkreddy1238 @tqchen @qingyuanxingsi, I am compiling a TVM model for CUDA; below is the Python snippet to build the model. It generates the .o, .json, and .params files, but when I try to run inference with them in C++ I get a "fuse_transpose_kernel0 function not found" error.

target = 'cuda --system-lib'
target_host = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params, target_host=target_host)


The make rule for building your final executable should include the object file produced by the compilation, and it should link in the tvm_runtime as well.

Hope this helps.

I have linked my model.o file while compiling, and I am able to execute it with llvm as a system module, but with CUDA and OpenCL I am facing a problem. For non-CPU targets we have to use some extra options while building the model; how should I use them here?

Understood. CUDA compilation will generate PTX too.

cc @tqchen Do we have a reference for compiling and deploying CUDA as a system module?

You will need to understand what is happening behind https://github.com/dmlc/tvm/blob/master/python/tvm/module.py#L135

We embed the PTX and OpenCL binaries in a C file, and compile that together with the .so.
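For example, a sketch of the observable effect (the file name is illustrative):

# For a GPU target the returned lib carries imported device modules
# (e.g. the PTX for CUDA); export_library writes the host object plus a
# generated dev.cc that embeds that device blob.
print(lib.imported_modules)
lib.export_library("net.tar")  # the tar will contain lib.o and dev.cc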

No, @tqchen @srkreddy1238, I do not understand; I just want to compile a TVM model for CUDA and OpenCL. And what is PTX?

target = 'cuda --system-lib'
target_host = 'llvm --system-lib'
graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, params=params, target_host=target_host)

Are the above steps correct to compile a TVM model for CUDA?
I set target to 'cuda --system-lib' and
target_host to 'llvm --system-lib'.

@myproject24

I see you are using lib.save to export the module with LLVM.

Try using lib.export_library("net.tar") (the ".tar" extension is important here).
This creates net.tar with lib.o and dev.cc inside.

Include these two files in your final build.
Hope this helps.

You mean like below?

import nnvm
import nnvm.frontend
import nnvm.compiler
import tvm

target = 'cuda --system-lib'
target_host = 'llvm --system-lib'
ctx = tvm.gpu(0)

sym, params = nnvm.frontend.from_tensorflow(graph_def, 'NCHW', shape=shape_dict)

############## Compile ##############

with nnvm.compiler.build_config(opt_level=opt_level):
    graph, lib, params = nnvm.compiler.build(sym, target, shape_dict, dtype_dict,
                                             params=params, target_host=target_host)

############## Save the module ##############

from tvm.contrib import util

temp = util.tempdir()
# relpath takes a file name inside the temporary directory, not an absolute path
path_lib = temp.relpath("deploy_lib.tar")
lib.export_library(path_lib)
with open(temp.relpath("deploy_graph.json"), "w") as fo:
    fo.write(graph.json())
with open(temp.relpath("deploy_param.params"), "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))
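Note that util.tempdir() creates a temporary directory that is cleaned up when the temp object is garbage collected, so for actual deployment you would write these three files to a persistent location instead.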

Thank you @srkreddy1238 and @tqchen, my issue is resolved.

@srkreddy1238 @tqchen Can I run inference in Python as a system module too?

What do you mean by Python with a system module?

Yes; can I do in Python what I am doing in C++ with the system module?

In Python you can use the shared lib directly. Why do you need the system-lib module approach?
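For reference, a sketch of the direct shared-lib route in Python, using the file names from the save step above (the input name and shapes are illustrative):

import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Load the three artifacts written by export_library / graph.json / save_param_dict.
loaded_lib = tvm.module.load("deploy_lib.tar")
loaded_graph = open("deploy_graph.json").read()
loaded_params = bytearray(open("deploy_param.params", "rb").read())

module = graph_runtime.create(loaded_graph, loaded_lib, tvm.gpu(0))
module.load_params(loaded_params)
module.set_input('data', tvm.nd.array(np.zeros((1, 3, 224, 224), dtype='float32')))
module.run()
out = module.get_output(0, tvm.nd.empty((1, 1000), 'float32'))  # illustrative output shape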