Does TVM support frontend conversion to FLOAT16?


I noticed that “from_mxnet” has a “dtype” parameter, but if it is set to “float16”, the model fails to convert:

Check failed: t0->dtype == t1->dtype (float16 vs. float32) :

And “from_tensorflow” has no “dtype” parameter at all, so I want to know whether TVM supports frontend conversion to float16, or whether something like quantization is needed to support it.

Thanks a lot!


It seems TVM does not support converting a model to fp16 directly in the frontend.
There are two ways:

  1. convert the model to fp16 outside TVM.
  2. modify the TVM code to support the conversion.
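Option 1 above can be done framework-side before import. As a minimal sketch of the idea with plain NumPy (the parameter names and shapes here are illustrative, not from any real checkpoint), you cast every floating-point weight array to float16 while leaving integer arrays alone:

```python
import numpy as np

def cast_params_to_fp16(params):
    """Cast every floating-point weight array in a parameter dict to float16.

    Non-floating arrays (e.g. integer index tables) are left untouched.
    """
    casted = {}
    for name, arr in params.items():
        if np.issubdtype(arr.dtype, np.floating):
            casted[name] = arr.astype(np.float16)
        else:
            casted[name] = arr
    return casted

# Illustrative parameter dict standing in for a real checkpoint.
params = {
    "conv0_weight": np.random.randn(8, 3, 3, 3).astype(np.float32),
    "fc0_bias": np.zeros(10, dtype=np.float32),
    "lookup_idx": np.arange(5, dtype=np.int32),
}
fp16_params = cast_params_to_fp16(params)
print(fp16_params["conv0_weight"].dtype)  # float16
print(fp16_params["lookup_idx"].dtype)    # int32 (unchanged)
```

In practice you would use the framework's own cast utility (for example, Gluon models can be cast wholesale before export), then feed the casted model to the TVM frontend.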


Thanks so much! I have another, maybe silly, question: is there anywhere to download pretrained FP16 models? I’ve looked at almost all the official sites and they only offer FP32 and INT8.

Thanks again!