Where can I find any C++ sample?


I want to integrate my C++ code into TVM, but I couldn't find any C++ docs or C++-related samples. I tried to implement an A+B computation with the TVM C++ API, just like this https://discuss.tvm.ai/t/lower-function-crash-in-c-api/778, but it failed at runtime with a strange memory access exception.
So maybe we need some C++ samples or some documentation about how to use the C++ API.

How to traverse HalideIR with the TVM C++ API

Hi @blueyi, we do have some C++ sample code that should help you integrate TVM into your current system. Please check out the sample code here.
There is also an introductory document describing how to deploy a TVM module on your system:
Deploy TVM Module using C++ API.

In addition, I personally have a running sample that enables inference of MobileNetV2 on top of TVM; it's located here.


Thanks a lot. Are there any samples showing how to use IRVisitor? I have found many samples for HalideIR, but the HalideIR in TVM has changed a lot. Thanks.


src/pass/simple_passes.cc has some simple visitor code: IRSideEffect detects side-effecting operations, and ExprUseVarVisitor tracks variable usage.

More involved examples include IRVerifySSA, which checks whether the IR is in SSA form, and LinearAccessPatternFinder, which generates a linearized sequence of instructions.

Or you can try this search to find all the subclasses of IRVisitor in the TVM code base.
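The pattern those passes follow can be sketched without any TVM headers. Below is a minimal, TVM-independent analogue of an ExprUseVarVisitor-style pass; all the types here (Node, Var, Add, Visitor) are hypothetical stand-ins for the real IR classes. The base visitor recurses by default, so a subclass overrides only the Visit_ overloads it cares about, which is the same shape as tvm::ir::IRVisitor.

```cpp
#include <memory>
#include <string>

// Toy expression tree standing in for TVM's HalideIR nodes.
struct Node {
    virtual ~Node() = default;
};

struct Var : Node {
    std::string name;
    explicit Var(std::string n) : name(std::move(n)) {}
};

struct Add : Node {
    std::shared_ptr<Node> a, b;
    Add(std::shared_ptr<Node> x, std::shared_ptr<Node> y)
        : a(std::move(x)), b(std::move(y)) {}
};

// One Visit_ overload per node type; the default Add case recurses
// into children, so subclasses only override what they care about.
struct Visitor {
    virtual ~Visitor() = default;
    void Visit(const Node* n) {
        if (auto* v = dynamic_cast<const Var*>(n)) Visit_(v);
        else if (auto* a = dynamic_cast<const Add*>(n)) Visit_(a);
    }
    virtual void Visit_(const Var* /*op*/) {}
    virtual void Visit_(const Add* op) {
        Visit(op->a.get());
        Visit(op->b.get());
    }
};

// Analogue of ExprUseVarVisitor: records whether a named variable
// occurs anywhere in the expression.
struct UseVarVisitor : Visitor {
    std::string target;
    bool used = false;
    explicit UseVarVisitor(std::string t) : target(std::move(t)) {}
    void Visit_(const Var* op) override {
        if (op->name == target) used = true;
    }
};

bool uses_var(const Node* e, const std::string& name) {
    UseVarVisitor v(name);
    v.Visit(e);
    return v.used;
}
```

For an A + B tree, uses_var reports true for "A" and "B" and false for anything else; the real TVM visitors differ only in having many more node types to dispatch on.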


It’s really useful, thanks.


I want to visit all of the IR nodes in TVM and then transform them into another back-end language's IR nodes. Can I just use tvm::ir::PostOrderVisit() to get each IR node recursively? Does this function visit the IR nodes in the correct order?


Yes, tvm::ir::PostOrderVisit traverses the tree recursively and applies your lambda function to each node exactly once. You can build your own tree inside the lambda function. You might also be interested in ExprFunctor/StmtFunctor.
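To illustrate that guarantee (children before parents, and each node exactly once even when a node is shared by several parents), here is a small TVM-independent analogue of PostOrderVisit; the Expr type is a hypothetical stand-in, not the real TVM node.

```cpp
#include <functional>
#include <memory>
#include <string>
#include <unordered_set>
#include <vector>

// Hypothetical stand-in for a TVM expression node.
struct Expr {
    std::string label;                        // e.g. "ax0", "A(ax0)", "Add"
    std::vector<std::shared_ptr<Expr>> kids;
};

namespace detail {
inline void PostOrderVisitImpl(const Expr* e,
                               const std::function<void(const Expr&)>& fvisit,
                               std::unordered_set<const Expr*>& seen) {
    if (!seen.insert(e).second) return;    // each node exactly once
    for (const auto& k : e->kids) PostOrderVisitImpl(k.get(), fvisit, seen);
    fvisit(*e);                            // children first, then the node
}
}  // namespace detail

// Analogue of tvm::ir::PostOrderVisit.
inline void PostOrderVisit(const Expr& root,
                           const std::function<void(const Expr&)>& fvisit) {
    std::unordered_set<const Expr*> seen;
    detail::PostOrderVisitImpl(&root, fvisit, seen);
}
```

On a tree for A(ax0) + B(ax0) where both calls share the same ax0 node, the callback fires in the order ax0, A(ax0), B(ax0), Add: leaf-to-root, with the shared variable visited once.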


@wweic Thanks for your reply.
I've attempted to use PostOrderVisit to traverse the tree of IR nodes. I got some information about the TVM IR nodes, but I don't think it's what I need.
Maybe the entry point I traverse from is incorrect.
The following is the code I used.
Could you give me some hints about what I'm doing wrong?
Thanks a lot.

auto n = tvm::var("n");
tvm::Array<tvm::Expr> shape;
shape.push_back(n);  // 1-D tensors of symbolic length n

// define algorithm
auto A = tvm::placeholder(shape, tvm::Float(32), "A");
auto B = tvm::placeholder(shape, tvm::Float(32), "B");
tvm::Tensor C = tvm::compute(A->shape, [&A, &B](tvm::Expr i) { return A[i] + B[i]; }, "C");

// set schedule
tvm::Schedule s = tvm::create_schedule({ C->op });

//	tvm::BuildConfig config();
tvm::BuildConfig config(std::make_shared<tvm::BuildConfigNode>());
auto target = tvm::Target::create("llvm");
auto target_host = tvm::Target::create("llvm");
auto args = tvm::Array<tvm::Tensor>({ A, B, C });

std::unordered_map<tvm::Tensor, tvm::Buffer> binds;
tvm::Array<tvm::LoweredFunc> lowered = tvm::lower(s, args, "fadd", binds, config);

auto body = C->op.as<tvm::ComputeOpNode>()->body;
dPrint(body.size(), "body size");
int nCnt = 0;

auto fvisit = [&nCnt](const tvm::NodeRef& n) {
    std::cout << "\n==fvisit==: " << ++nCnt;
    dPrintNode(*n.node_, ", ");
    if (const tvm::Variable* var = n.as<tvm::Variable>()) {
        std::cout << "NodeRef-type: " << var->_type_key << std::endl;
        std::cout << "var-name_hint: " << var->name_hint << std::endl;
    }
    if (const tvm::ir::Call* call = n.as<tvm::ir::Call>()) {
        std::cout << "NodeRef-type: " << call->_type_key << std::endl;
        std::cout << "call-name: " << call->name << std::endl;
        std::cout << "call-type: " << call->type << std::endl;
        if (call->func.defined()) {
            std::cout << "Function-name: " << call->func->func_name() << std::endl;
            tvm::Tensor t = tvm::Operation(call->func.node_).output(call->value_index);
        }
        std::cout << "args-size: " << call->args.size() << std::endl;
        std::cout << "args: " << std::endl;
        for (const auto& e : call->args) {
            std::cout << "arg-expr-value: " << e << std::endl;
            std::cout << "arg-expr-type: " << e.type() << std::endl;
        }
    }
    if (const tvm::ir::Add* add = n.as<tvm::ir::Add>()) {
        std::cout << "NodeRef-type: " << add->_type_key << std::endl;
        std::cout << "type: " << add->type << std::endl;
        std::cout << "Expr-a: " << add->a << ", type: " << add->a.type() << std::endl;
        std::cout << "Expr-b: " << add->b << ", type: " << add->b.type() << std::endl;
    }
};

for (const auto& e : body) {
    std::cout << "Expr-type: " << e.type() << std::endl;
    tvm::ir::PostOrderVisit(e, fvisit);
}


I tweaked your code a bit, and it seems like it's traversing the tree in the right order (from leaf node to root node).

stmt is: 
produce C {
  for (ax0, 0, n) {
    C[ax0] = (A[ax0] + B[ax0])
  }
}

body size: 1
Expr-type: float32

==fvisit==: 1 -> ax0
NodeRef-type: Variable
var-name_hint: ax0

==fvisit==: 2 -> A(ax0)
NodeRef-type: Call
call-name: A
call-type: float32
Function-name: A
args-size: 1
arg-expr-value: ax0
arg-expr-type: int32

==fvisit==: 3 -> B(ax0)
NodeRef-type: Call
call-name: B
call-type: float32
Function-name: B
args-size: 1
arg-expr-value: ax0
arg-expr-type: int32

==fvisit==: 4 -> (A(ax0) + B(ax0))
NodeRef-type: Add
type: float32
Expr-a: A(ax0), type: float32
Expr-b: B(ax0), type: float32

Which part do you think is wrong?



Thanks for your reply.
There are two things I can't understand clearly:

  1. In my understanding, the IR should include every detail related to the computation, such as the for-loop node with its range (the range of ax0 should be [0, n]), etc. But the result above shows that A and B are Call nodes instead of something like Tensor, or maybe something else.
  2. In the end, how can I get more details about the IR nodes? Would it help more to traverse all IR nodes by inheriting from IRVisitor and implementing all of the virtual Visit_ functions for each node type?

  1. The Call node is expected. A and B are of type Tensor. When you subscript a Tensor, you get back a Slice, so A[i] is of type Slice. If you apply binary operations to a Slice, it is converted to a Call. If you care about For, then add your listener code for it in the lambda function.
  2. You should be able to get the same information with PostOrderVisit; just add all the listener code in your lambda function. Though I would slightly prefer writing your own class that inherits from IRVisitor.
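The "listener in the lambda" idea can be sketched the same way. Using hypothetical, TVM-independent stand-ins for Stmt and For (the real tvm::ir::For carries loop_var, min, and extent as Exprs, simplified to ints here), the lambda checks each visited node's type and records loop ranges:

```cpp
#include <functional>
#include <memory>
#include <string>
#include <vector>

// Hypothetical stand-ins for TVM statement nodes.
struct Stmt {
    virtual ~Stmt() = default;
    std::vector<std::shared_ptr<Stmt>> kids;
};

struct For : Stmt {
    std::string loop_var;
    int min, extent;   // the real node holds Exprs, not ints
    For(std::string v, int mn, int ext)
        : loop_var(std::move(v)), min(mn), extent(ext) {}
};

struct Store : Stmt {};   // placeholder for the loop body

// Post-order walk: children first, then the node itself.
inline void PostOrderVisit(const Stmt& s,
                           const std::function<void(const Stmt&)>& fvisit) {
    for (const auto& k : s.kids) PostOrderVisit(*k, fvisit);
    fvisit(s);
}

// Listener for For nodes inside the lambda, analogous to checking
// n.as<tvm::ir::For>() in a real PostOrderVisit callback.
inline std::vector<std::string> collect_loops(const Stmt& root) {
    std::vector<std::string> out;
    PostOrderVisit(root, [&out](const Stmt& n) {
        if (const auto* f = dynamic_cast<const For*>(&n)) {
            out.push_back(f->loop_var + " in [" + std::to_string(f->min) +
                          ", " + std::to_string(f->min + f->extent) + ")");
        }
    });
    return out;
}
```

For the loop printed above, for (ax0, 0, n), such a listener would recover the range [0, n) that the Call-based view of A and B does not show directly.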


Continuing the discussion from Where can I find any C++ sample?:

Thanks for your reply, it’s really useful for me.
I have used the tvm::Operation method like this: tvm::Tensor t = tvm::Operation(call->func.node_).output(call->value_index); to get a tensor from a call. Is this method correct?


My 2 cents:

A simple model and runner in C++, plus the build script: https://github.com/grwlf/nixtvm/tree/master/src/mironov/tvm0