Reuse the code from VTA

Hi, @tqchen our team is planning to use TVM and some of the VTA code for our AI chips.

Do you think it is possible to reuse most of the code in the vta folder by enhancing the tensorization-related passes to target our tensor instructions?

    (1, ir_pass.annotate_alu_coproc_scope),
    (1, lambda x: tvm.ir_pass.LiftAttrScope(x, "coproc_uop_scope", True)),
    (1, lift_coproc_scope),
    (1, ir_pass.inject_coproc_sync),

Yes, one of VTA's goals is to serve as a blueprint for ASIC compilers.

@tqchen
This is great!

In my understanding, VTA is a backend of TVM that reuses the frontend and middle end. The output IR is therefore a combination of basic Halide-style ops and for loops; VTA then inserts DMA ops and tensorizes the basic ops.

Do you think it would be reasonable to support intrinsics in the TVM low-level IR that can be easily lowered into the instructions commonly found in AI chips?

By the way, how effective is the current tensorization? Does it support detecting user-implemented compute nodes defined with lambda expressions? What is the theory behind it?

Hi, I have just started studying TVM. Apart from VTA's hand-designed code such as vta.cc, can TVM automatically generate code for an FPGA the way it generates cuDNN code?

Please see https://docs.tvm.ai/vta/tutorials/index.html, which contains a detailed guide on how this happens.