Hi Community members,
I am quite new to TVM and have some very basic questions. I have gone through the documentation available on the webpage, but a few things are still not clear to me, so I am asking on this discussion forum. Apologies in advance if any of these questions seem trivial.
- Does TVM support training, or is it currently only for inference, with training to be added in the future?
- Does TVM also support quantization?
- In the case of the OpenCL or CUDA backend, do we get LLVM IR generated somewhere? I read somewhere that LLVM IR is generated for CPU-like targets such as x86/ARM/AMDGPU, while source code is generated for OpenCL/CUDA. So if I want to see what optimizations happened for the OpenCL backend, where should I look?
Thanks,
Tarun