All models are currently executed with the float32 datatype. Are quantized models (say, INT8) also supported? If yes, please brief me about them.
Do you mean supporting existing INT8 quantized models (for example, TFLite), or supporting quantization of FP32 models?
If you could answer both questions, that would be great. Regarding the first one, I found that work is in progress for existing INT8 quantized models; am I right?
Please comment on the second one as well.
Yes, the first one is a work in progress.
The second one is also in progress; https://github.com/dmlc/tvm/pull/2116 is the PR for it.
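For readers unfamiliar with what quantizing an FP32 model to INT8 involves, here is a minimal, self-contained sketch of symmetric per-tensor post-training quantization. This illustrates the general idea behind the automatic-quantization work; the function names below are purely illustrative and are not TVM APIs.

```python
import numpy as np

def quantize_int8(x):
    """Map an FP32 array to INT8 using a per-tensor symmetric scale.

    The value with the largest magnitude maps to +/-127; everything
    else is rounded to the nearest representable INT8 step.
    """
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an FP32 approximation from the INT8 values and scale."""
    return q.astype(np.float32) * scale

weights = np.array([-1.5, -0.25, 0.0, 0.5, 1.5], dtype=np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# The round-trip error is bounded by half a quantization step (scale / 2).
```

Real quantization passes (including the one in the PR above) additionally handle per-channel scales, zero points for asymmetric ranges, and calibration of activation ranges, but the core weight transformation follows this pattern.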