How to deploy two different TVM-compiled models in C++ statically?


Hi @srkreddy1238, @tqchen, is it possible to deploy two different TVM-compiled models through the C++ API at the same time? For example, I have two TVM-compiled models, one for face detection and one for object detection; can I run inference with both at the same time?
Note: the models should be linked statically, not dynamically.


Check out

Likely you will want to use the system module. What you want is possible, but it takes a bit of effort. Note that TVM’s graph runtime takes two inputs:

  • graph_json, the graph JSON file
  • lib, the TVM module containing all the functions needed by the graph.

To deploy two models together, we somehow need to combine the generated code to create a single module that contains the functions needed by both. Then we can create two graph runtimes, one for each model.
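The steps above can be sketched in C++ as follows. This is a hypothetical sketch, not code from the thread: the file names (`face_detection.json`, `object_detection.json`, `face_model.o`, `object_model.o`) are made up, and the registry keys (`runtime.SystemLib`, `tvm.graph_runtime.create`) vary across TVM versions (older releases used `module._GetSystemLib`), so check the names against your TVM checkout.

```cpp
#include <fstream>
#include <sstream>
#include <string>

#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/packed_func.h>
#include <tvm/runtime/registry.h>

// Build sketch (hypothetical file names), assuming both models were
// compiled with the --system-lib option so their generated functions
// register themselves into the global system library:
//   g++ main.cc face_model.o object_model.o -ltvm_runtime -ldl -lpthread -o app

// Helper: read a whole file into a string.
static std::string ReadFile(const std::string& path) {
  std::ifstream in(path);
  std::stringstream ss;
  ss << in.rdbuf();
  return ss.str();
}

int main() {
  // One system-lib module that contains the generated functions of BOTH
  // models, because face_model.o and object_model.o were linked in.
  tvm::runtime::Module syslib =
      (*tvm::runtime::Registry::Get("runtime.SystemLib"))();

  // Each model keeps its own graph JSON.
  std::string face_json = ReadFile("face_detection.json");
  std::string obj_json  = ReadFile("object_detection.json");

  int device_type = kDLCPU;
  int device_id = 0;

  // Two graph runtimes, one per model, sharing the same module.
  tvm::runtime::Module face_rt =
      (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(
          face_json, syslib, device_type, device_id);
  tvm::runtime::Module obj_rt =
      (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(
          obj_json, syslib, device_type, device_id);

  // From here on, load each model's params and call set_input / run /
  // get_output on the corresponding runtime, exactly as in
  // single-model deployment.
  tvm::runtime::PackedFunc face_run = face_rt.GetFunction("run");
  tvm::runtime::PackedFunc obj_run  = obj_rt.GetFunction("run");
  (void)face_run;
  (void)obj_run;
  return 0;
}
```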


@tqchen thank you for your suggestion. I am able to deploy a single TVM-compiled model both dynamically and statically, and the links shared above give a good idea of single-model deployment in C++. What I still need help with is deploying two models in one C++ program; do you have any further suggestions or samples?


My comment above is for two models. You need to somehow generate a module that contains the functions used by both models (in normal C code this could be as simple as linking everything together), plus the two graph JSON files, one per model.


Hi @tqchen, are there any samples showing how to generate a module that contains the functions used by two different models?


@tqchen the symbol names in each model’s .o are tvm_runtime_create, tvm_runtime_run, etc., so if there are two models, say a.o and b.o, linking a.o, b.o, runtime.o, and main.o into the final executable produces duplicate symbols and a link error.

Use case: using MobileNetV1 and MobileNetV2 in the same app, how do we tell them apart? By their JSON and params files? There should be some way to distinguish them at the .o or .so level.