How to extract functions from a TVM module

I have a compiled TVM module and need to extract all the functions present inside it. What is the way to list the functions inside the module? Basically, I am trying to deploy the module in C++ and need to call those functions.
@srkreddy1238 @FrozenGene @tqchen kindly help me on this question

I also have a similar question. My first attempt was to print the source code via `lib.imported_modules[0].get_source()` and also `lib.get_source()`, which output two sets of code: the first is the CUDA kernel code, the second is the host's glue for checking shapes and the like.
Those sources can actually be compiled and statically linked after some manipulation.
But it seems we cannot find the source code of the PackedFunc that prepares the context and launches the CUDA kernel.

Something like this tutorial, using either tvm::runtime::Module::GetFunction or TVMFuncListGlobalNames, should help?

Yes, but we should know the functions present inside the module before calling GetFunction. In the TVM tutorial they used an example like mod.GetFunction("set_input"); here set_input is a function name present inside the module. My question is: how do I get the list of such functions inside a module?

@srkreddy1238 @FrozenGene @tqchen Any suggestion on this question?

Dump the TVM graph as JSON, which contains a func_name attribute that is nothing but the symbol name in the compiled library. See the reference below.

{
  "nodes": [
    {
      "op": "null",
      "name": "input",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "fused_nn_pad_19",
      "attrs": {
        "func_name": "fused_nn_pad_19",
        "flatten_data": "0",
        "num_inputs": "1",
        "num_outputs": "1"
      },
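Given such a graph.json, the func_name values can be collected with a few lines of stdlib Python. This is only a sketch against a hypothetical, pared-down graph file; a real one has many more nodes:

```python
import json

# Hypothetical minimal graph.json content, mirroring the excerpt above.
graph_json = """
{
  "nodes": [
    {"op": "null", "name": "input", "inputs": []},
    {"op": "tvm_op", "name": "fused_nn_pad_19",
     "attrs": {"func_name": "fused_nn_pad_19", "flatten_data": "0",
               "num_inputs": "1", "num_outputs": "1"},
     "inputs": [[0, 0, 0]]}
  ]
}
"""

graph = json.loads(graph_json)
# Only "tvm_op" nodes map to compiled symbols; "null" nodes are inputs/params.
func_names = [n["attrs"]["func_name"]
              for n in graph["nodes"] if n["op"] == "tvm_op"]
print(func_names)  # ['fused_nn_pad_19']
```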

Thank you @srkreddy1238
Here the function name is "func_name": "fused_nn_pad_19"?
And I would use it as mod.GetFunction("fused_nn_pad_19");?

@myproject24

Those are the symbol names which the TVM runtime extracts when the library is loaded.
I couldn't understand your objective in calling them directly. Can you explain?

@srkreddy1238 Basically I need to deploy the TVM module using the C++ API, and for this I should know the functions inside the module. My question is how to extract the functions from the module, or how to know which functions the module contains.
Like "set_input", "get_output" and "run", which I can pass to mod.GetFunction() and then perform operations.

I just need the function names from the TVM module so I can call those functions from the C++ API.

I can't imagine a case where you should call these functions directly. Basically, when you deploy using the C++ API, you just call set_input, run and get_output. When you call run, the TVM runtime parses deploy_graph.json, finds each tvm_op and its related compiled function, like fuse_conv2d or fuse_conv2d_1 (every model has its own unique compiled functions). Everything, including data movement, is handled by the TVM runtime.

@FrozenGene Thanks for your reply.

After calling run, how do I call tvm_op operations in C++? For example, if I want to call the fuse_conv2d operation, how do I call it?

Is this some customised deployment?

ref. apps/howto_deploy/
C++ deployment doesn't need any details about the function names inside the lib.
The run method of the graph runtime calls all these functions in a loop.
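That loop can be pictured with a stdlib-only sketch (not the actual graph runtime code, and the graph below is made up): the runtime walks the node list of graph.json in order and invokes each tvm_op's compiled function:

```python
import json

# Hypothetical tiny graph; nodes in graph.json are already in
# topological order, so each op's inputs are ready when it runs.
graph = json.loads("""
{
  "nodes": [
    {"op": "null",   "name": "input", "inputs": []},
    {"op": "tvm_op", "name": "pad0",  "attrs": {"func_name": "fuse_pad"},
     "inputs": [[0, 0, 0]]},
    {"op": "tvm_op", "name": "conv0", "attrs": {"func_name": "fuse_conv2d"},
     "inputs": [[1, 0, 0]]}
  ]
}
""")

order = []
for node in graph["nodes"]:
    if node["op"] == "tvm_op":
        # The real runtime looks up this symbol in the compiled module
        # and calls it with the buffers wired up from "inputs".
        order.append(node["attrs"]["func_name"])
print(order)  # ['fuse_pad', 'fuse_conv2d']
```

This is why user code never needs the fused function names: run resolves and calls them itself.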

@srkreddy1238 @FrozenGene When I try to get the output from the module I get the error below. I am running the TVM tutorial C++ example to deploy the module, and in that example get_output(0, y); gives this error:

terminate called after throwing an instance of 'dmlc::Error'
what(): [13:42:14] /home/ubuntu/tvm_opencl/tvm/src/runtime/graph/graph_runtime.cc:121: Check failed: data->shape[j] == data_out->shape[j] (256 vs. 1000)

Stack trace returned 7 entries:
[bt] (0) /usr/lib32/libtvm_runtime.so(+0x1198d) [0x7ffff7b1198d]
[bt] (1) /usr/lib32/libtvm_runtime.so(+0x125dd) [0x7ffff7b125dd]
[bt] (2) /usr/lib32/libtvm_runtime.so(+0x7b214) [0x7ffff7b7b214]
[bt] (3) /usr/lib32/libtvm_runtime.so(+0x7bfc2) [0x7ffff7b7bfc2]
[bt] (4) /home/ubuntu/Mallappa/NNVM/NNVMDeploy/testdeploy() [0x402b04]
[bt] (5) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff6d9d830]
[bt] (6) /home/ubuntu/Mallappa/NNVM/NNVMDeploy/testdeploy() [0x402fa9]

Thread 1 “testdeploy” received signal SIGABRT, Aborted.
0x00007ffff6db2428 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:54
54 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.

What is the model output type and shape?

Alternatively, don't pass the second argument, in which case get_output returns an NDArray.

Below is how I allocate the output tensor, including its shape.

DLTensor* y;
int out_ndim = 2;
int dtype_code = kDLFloat;  // floating point
int dtype_bits = 32;        // i.e. float32
int dtype_lanes = 1;
int device_type = 4;        // 4 == kDLOpenCL in dlpack.h
int device_id = 0;
int64_t out_shape[2] = {1, 1000};
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes,
              device_type, device_id, &y);
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
CHECK(get_output != nullptr);
get_output(0, y);

The output shape seems to be {1, 256}, not {1, 1000}, hence this error.

Thank you @srkreddy1238.

And @srkreddy1238 @FrozenGene, I am trying to set the input of the TVM module from a PPM image file. I have deployed the same TVM module in both Python and C++, but I am getting different results from the two.

Can you share the graph.json if possible? I will have a look at the input and output signatures.

@srkreddy1238 I can't share the complete JSON file, it's company confidential; I have shared a snippet of it.

{
  "nodes": [
    {
      "op": "null",
      "name": "input",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "transpose0",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_transpose",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[0, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "pad0",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_pad",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[1, 0, 0]]
    },
    {
      "op": "null",
      "name": "SENet/first_layer/first_layer_conv1/conv2d/kernel",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "transpose1",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_transpose_1",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[3, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_conv1/conv2d/Conv2D",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_conv2d",
        "num_inputs": "2",
        "num_outputs": "1"
      },
      "inputs": [[2, 0, 0], [4, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "transpose2",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_transpose_2",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[5, 0, 0]]
    },
    {
      "op": "null",
      "name": "SENet/first_layer/first_layer_batch1/moving_variance",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_add_eps",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse___add_scalar__",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[7, 0, 1]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_sqrt",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse_sqrt",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[8, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_div",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse___rdiv_scalar__",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[9, 0, 0]]
    },
    {
      "op": "null",
      "name": "SENet/first_layer/first_layer_batch1/gamma",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_gamma_mul_div",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse_elemwise_mul",
        "num_inputs": "2",
        "num_outputs": "1"
      },
      "inputs": [[10, 0, 0], [11, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_a_mul_data",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_broadcast_mul",
        "num_inputs": "2",
        "num_outputs": "1"
      },
      "inputs": [[6, 0, 0], [12, 0, 0]]
    },
    {
      "op": "null",
      "name": "SENet/first_layer/first_layer_batch1/moving_mean",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_neg_mean",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse_negative",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[14, 0, 1]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_neg_mean_mul_a",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse_elemwise_mul",
        "num_inputs": "2",
        "num_outputs": "1"
      },
      "inputs": [[15, 0, 0], [12, 0, 0]]
    },
    {
      "op": "null",
      "name": "SENet/first_layer/first_layer_batch1/beta",
      "inputs": []
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_add_beta",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse_elemwise_add",
        "num_inputs": "2",
        "num_outputs": "1"
      },
      "inputs": [[16, 0, 0], [17, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/first_layer_batch1/FusedBatchNorm_out",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_broadcast_add",
        "num_inputs": "2",
        "num_outputs": "1"
      },
      "inputs": [[13, 0, 0], [18, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "SENet_2/first_layer/Relu",
      "attrs": {
        "flatten_data": "1",
        "func_name": "fuse_relu",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[19, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "transpose3",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_transpose_3",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[20, 0, 0]]
    },
    {
      "op": "tvm_op",
      "name": "pad1",
      "attrs": {
        "flatten_data": "0",
        "func_name": "fuse_pad_1",
        "num_inputs": "1",
        "num_outputs": "1"
      },
      "inputs": [[21, 0, 0]]
    },

I can understand it's confidential, and I don't need to see the complete graph.

Just look at the shape and dltype sections for the input and output nodes.

For example, InceptionV3 has info like the below.


    "dltype": ["list_str", [
        "float32",
        "float32",
        "float32",
        ...
        "float32",
        "float32",
        "float32",
        "float32"]],
    "shape": ["list_shape", [
        [1, 299, 299, 3],
        [3, 3, 3, 32],
        ...
        [1, 1001],
        [1, 1001],
        [1, 1001]]]
  }
}

Here
the input shape is [1, 299, 299, 3] with dtype 'float32',
and the output shape is [1, 1001] with dtype 'float32'.

Check these sections and allocate the output array accordingly.
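A quick way to read those sections before writing the C++ side is a few lines of stdlib Python. This is only a sketch against a hypothetical pared-down graph.json (a real file has one entry per node, and identifies the output entries via its heads field):

```python
import json

# Hypothetical minimal "attrs" excerpt, mirroring the InceptionV3 example.
graph = json.loads("""
{
  "attrs": {
    "dltype": ["list_str", ["float32", "float32", "float32"]],
    "shape": ["list_shape", [[1, 299, 299, 3], [3, 3, 3, 32], [1, 1001]]]
  }
}
""")

# Both fields are [tag, payload] pairs; the payload lists align by node index.
shapes = graph["attrs"]["shape"][1]
dtypes = graph["attrs"]["dltype"][1]

# Entry 0 is typically the graph input; the last entry here is the output.
print("input :", shapes[0], dtypes[0])   # [1, 299, 299, 3] float32
print("output:", shapes[-1], dtypes[-1]) # [1, 1001] float32
```

The output entry is what your TVMArrayAlloc shape and dtype arguments must match.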

Thank you @srkreddy1238, I understand the graph now.

Do you know how to convert a 3D array to a 4D array using Mat::reshape in OpenCV? In Python it looks like this:
x = Image.open('cat.png')
x = np.array(x)
x = np.reshape(x, (1, 64, 64, 3))
Basically, x here is a 3D RGB array which is reshaped into a 4D array with numpy.reshape. Can I do the same in C++ using Mat::reshape() in OpenCV?
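For reference, all that reshape does is prepend a batch dimension of 1; no pixel data moves. A minimal numpy sketch (using a zero array as a stand-in for the decoded cat.png):

```python
import numpy as np

# Stand-in for np.array(Image.open('cat.png')): a 64x64 RGB image.
x = np.zeros((64, 64, 3), dtype=np.uint8)

# Reshape only changes the shape metadata; the buffer is untouched.
x4 = np.reshape(x, (1, 64, 64, 3))
# Equivalent and often clearer:
x4b = x[np.newaxis, ...]

print(x4.shape)   # (1, 64, 64, 3)
print(x4b.shape)  # (1, 64, 64, 3)
```

On the C++ side, note that a cv::Mat is 2D-with-channels rather than a true 4D tensor, so rather than Mat::reshape people often copy into a 4D blob; also note that helpers like cv::dnn::blobFromImage produce NCHW layout, not the NHWC layout shown above, so the layout must match what the model expects.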

Thank you.