Does NNVM+TVM support training-process optimization on different targets?

Hello,

I just installed NNVM+TVM and worked through the tutorials and examples. For inference, NNVM+TVM works well, and it is really useful for deploying trained models on different targets.

Now I have a specific scenario for the training process: I start with a compute graph built by a deep learning framework such as MXNet or Caffe. I also have a deep learning accelerator that does not support these frameworks yet. The IRs of these frameworks differ from one another, and porting each framework individually would be an onerous task. NNVM+TVM offers reusable computation graph optimization and compilation for different deep learning systems, so I think it may be a feasible scheme for compiling and optimizing neural networks built with different frameworks and then training them on our accelerator.

However, I couldn't find any tutorial or example related to the training process. Does NNVM+TVM support training-process optimization for different targets? Thanks.