What is the relation among topi, nnvm, tvm, vta

TVM contains several components/packages: topi, nnvm, tvm, and vta. I would like to know the relation among them, i.e. which is more abstract than which. I think it should be topi >> nnvm >> tvm >> vta, but I could not find any direct confirmation in the documents.

Also, is there a diagram showing the lifetime of these different representations (maybe they are not representations)? I mean, LLVM has something like C/C++/… ==> AST ==> LLVM IR ==> LLVM MIR ==> ELF/…, and I think TVM should have something similar.

Can anyone who is familiar with TVM give me an answer? Thanks :slight_smile:


You can think of:

  • NNVM as the graph optimizer
  • TOPI as the tensor operator library that NNVM calls into (think of it as an in-house cuDNN)
  • TVM as the DSL used to describe and schedule TOPI implementations. TVM generates code for several backends, such as LLVM or CUDA.
  • VTA as a customizable hardware accelerator backend: to program it, TVM generates LLVM code that calls into a VTA runtime, and this runtime JIT-compiles the VTA instruction stream and microkernels.
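To make the division of labor above concrete, here is a purely illustrative Python analogy (not TVM's real API; all names are invented for this sketch): a dict of operator implementations plays the role of TOPI, a list of nodes plays the role of an NNVM graph, and a small executor stands in for the compiled runtime.

```python
# Purely illustrative analogy of the stack layers -- NOT TVM's real API.
# "TOPI" analog: a library of named tensor operators.
topi = {
    "add": lambda a, b: [x + y for x, y in zip(a, b)],
    "mul": lambda a, b: [x * y for x, y in zip(a, b)],
}

# "NNVM" analog: a computation graph as a list of (op, input names, output name).
graph = [
    ("add", ("x", "y"), "t"),
    ("mul", ("t", "y"), "out"),
]

def run(graph, inputs):
    """Executor analog: walk the graph, dispatching each node to the operator library."""
    env = dict(inputs)
    for op, args, out in graph:
        env[out] = topi[op](*(env[a] for a in args))
    return env

env = run(graph, {"x": [1, 2], "y": [3, 4]})
print(env["out"])  # [12, 24]
```

In the real stack, of course, NNVM does not interpret the graph node by node: it optimizes it and emits calls into TOPI kernels that TVM has compiled for the chosen backend.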

I hope this helps navigate the complexity.

Can I draw these conclusions:

  1. TOPI functions are instructions (or primitive operators).
  2. NNVM specifies the algorithm, i.e. it defines a graph that shows how the TOPI functions are organized.
  3. NNVM also has a few passes to optimize the graph.
  4. TVM (the code in the src folder?) will optimize the NNVM graph again and produce the schedule of the TOPI functions (I am not quite sure about this one).
  5. VTA is a way to specify the details of a backend, but tvm generates canonical code for all the different backends (the syntax is the same, but the compiler may make different choices of instructions according to the VTA specification. Am I right?).
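Regarding point 3, a graph pass simply rewrites the graph into an equivalent but cheaper one before any code is generated. The following toy pass (invented for illustration, not NNVM code) folds out identity operations such as `x * 1` and `a + 0` from a tiny expression graph:

```python
# A toy "graph pass" in the spirit of NNVM's optimizations -- NOT real NNVM code.
# Graph: list of (output_name, op, operand names or constants).
graph = [
    ("a", "mul", ("x", 1)),    # identity: x * 1
    ("b", "add", ("a", 0)),    # identity: a + 0
    ("y", "add", ("b", "x")),
]

def simplify(graph):
    """Remove mul-by-1 and add-0 nodes, rewiring their consumers."""
    alias = {}   # node name -> the simpler name it reduces to
    out = []
    for name, op, args in graph:
        # Resolve operands through aliases created by earlier rewrites.
        args = tuple(alias.get(a, a) for a in args)
        if op == "mul" and args[1] == 1:
            alias[name] = args[0]
        elif op == "add" and args[1] == 0:
            alias[name] = args[0]
        else:
            out.append((name, op, args))
    return out

print(simplify(graph))  # [('y', 'add', ('x', 'x'))]
```

Real NNVM passes work the same way in spirit (operator fusion, layout transformation, dead-node elimination), just over a richer graph representation.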

I am confused about the steps of the whole compilation stack and the representation produced after each step. Do you have any documents about this? I have read most of the documents, but I think they each cover a different part. I need something general that shows the whole picture of TVM, i.e. how the different parts work together to compile a user's specification of a DL algorithm. I guess I will not be able to understand these terms before I know the whole stack well. Thanks for your patience :slight_smile:

Maybe you can refer to this picture:


@ricann Thanks.

Yeah, I think this is close to what I wanted. Now I understand the relation among the different components of the TVM stack: NNVM is for computation-graph optimization, TOPI is for tensor computation description, and TVM (the src folder) is for the schedule space and optimizations.

It would be better if someone could tell me what kind of representation is generated by each component. I guess the representations produced by computation-graph optimization and by the schedule-space optimizations will definitely be different. Maybe computation-graph optimization and tensor computation description also generate different representations (I do not know yet).

There would be value in drilling through the stack, and understanding all of the different IRs indeed, as discussed here as well: TVM-VTA Architecture Scope and Roadmap

Just to clarify, so as not to add any confusion: TOPI not only defines the tensor computation, but also the schedule space for each NN operator (e.g. conv2d). The point of defining a schedule space is AutoTVM, the autotuner built for TVM, which automatically tunes those libraries for each backend.
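The idea of a schedule space and a tuner can be sketched in a few lines of plain Python (this is an invented analogy with a simulated cost model, not the real AutoTVM API): the space is a set of knob values, and the tuner tries candidates and keeps the cheapest.

```python
# A toy sketch of AutoTVM-style tuning -- NOT the real AutoTVM API.
import itertools

# Hypothetical schedule space for a tiled loop: tile sizes for two axes.
space = {"tile_x": [1, 2, 4, 8], "tile_y": [1, 2, 4, 8]}

def cost(cfg, size=64, cache_lines=16):
    """Simulated cost: loop overhead shrinks with bigger tiles, but tiles
    that overflow the 'cache' pay a large penalty. A real tuner measures
    actual run times on the target hardware instead."""
    footprint = cfg["tile_x"] * cfg["tile_y"]
    overhead = (size // cfg["tile_x"]) * (size // cfg["tile_y"]) * 0.01
    return overhead + (10 if footprint > cache_lines else 0)

def tune(space):
    """Exhaustively search the space (real tuners sample and learn instead)."""
    keys = sorted(space)
    best_cfg, best_cost = None, float("inf")
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        c = cost(cfg)
        if c < best_cost:
            best_cfg, best_cost = cfg, c
    return best_cfg

print(tune(space))  # {'tile_x': 2, 'tile_y': 8}
```

The point is that TOPI only has to declare the knobs (tile sizes, unroll factors, …); the tuner then finds good values per backend, instead of a human hand-tuning each kernel.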

TVM is the DSL in which the TOPI operators are written.


Thanks :slight_smile:

Do you have any example that requires only a CPU and can show the differences among most of these components?
Thanks :slight_smile: