What is the long-term development plan for TVM? I have some questions


#1

I have some doubts about the direction of TVM's development.

Q1: How complete is TVM's operator coverage? Operators such as AddN and TensorArrayV3 are not supported, so TVM cannot run Faster R-CNN, SSD, and other such models. Yet these models are very general and common.
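A quick way to surface such gaps before committing to a compiler stack is to diff the model graph's op list against the frontend's supported set. Below is a minimal illustrative sketch; the op names and the `SUPPORTED_OPS` set are hypothetical examples, not TVM's actual frontend tables.

```python
# Illustrative sketch (hypothetical op sets, not TVM's real coverage tables):
# find which ops in a model graph a given frontend cannot convert.

SUPPORTED_OPS = {"Conv2D", "BiasAdd", "Relu", "MaxPool", "MatMul", "Softmax"}

def unsupported_ops(graph_ops):
    """Return the ops in the graph that are missing from the supported set."""
    return sorted(set(graph_ops) - SUPPORTED_OPS)

# Detection models often rely on ops like AddN and TensorArrayV3:
ssd_like_graph = ["Conv2D", "Relu", "AddN", "TensorArrayV3", "MaxPool"]
print(unsupported_ops(ssd_like_graph))  # -> ['AddN', 'TensorArrayV3']
```

Running a check like this against each frontend makes the coverage question concrete: any non-empty result is a model the stack cannot yet import end to end.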

Q2: According to our test results for INT8 computation, which is newly supported in TVM, there is no significant advantage over TensorRT on a GTX 1080 Ti GPU. In our TX2 tests, TVM does not support FP16 computation, and its FP32 performance is far behind TensorRT's.

Q3: Does TVM have a long-term, reliable plan to support commercial use? For example, comprehensive support for front-end frameworks and back-end hardware, as well as matching TensorRT's performance on ARM platforms, in edge-computing scenarios similar to the TX2.

I am very optimistic about the future of TVM, but I think these issues need to be considered.


#2

Some of these suggestions are fair, and since TVM is a community project, you are more than welcome to contribute to some of these specific aspects.

In terms of FP32 performance, https://tvm.ai/2018/10/03/auto-opt-all has some comparisons showing that performance is comparable.

See https://docs.tvm.ai/tutorials/frontend/deploy_ssd_gluoncv.html#sphx-glr-tutorials-frontend-deploy-ssd-gluoncv-py for an example of SSD support.