I noticed that “from_mxnet” has a “dtype” parameter, but if I set it to “float16”, the model fails to convert with this error:
Check failed: t0->dtype == t1->dtype (float16 vs. float32) :
In contrast, “from_tensorflow” has no “dtype” parameter at all. So I want to know whether TVM’s frontends support conversion to float16, or whether it needs something like quantization to support float16.
Thanks a lot!