[TVM TensorFlow] How to convert an fp32 TensorFlow model to fp16 and run it on an NVIDIA GPU?

Hi all, I currently want to convert an fp32 TensorFlow model to fp16 and then run it on an NVIDIA GPU. As far as I know, this is very easy to do with TensorRT: just set the inference data type to fp16 and TensorRT will generate an fp16 inference engine. But for TVM I haven't seen such usage. I only found this post, https://discuss.tvm.ai/t/relay-automatic-fp16-downcasting/3952, and the idea there isn't very clear to me. Could someone give some advice or suggestions? Thanks a lot!
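
For reference, here is roughly the flow I am imagining, a minimal sketch assuming the `ToMixedPrecision` pass that newer TVM versions ship in Relay (older versions may need the manual downcast approach from the thread above). The model file name, input name, and input shape are placeholders for my actual model. Is this the right way to do it?

```python
import tensorflow as tf
import tvm
from tvm import relay

# Load a frozen TensorFlow graph ("model_fp32.pb", the input name
# "input", and its shape below are placeholders for the real model).
with tf.io.gfile.GFile("model_fp32.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

mod, params = relay.frontend.from_tensorflow(
    graph_def, shape={"input": (1, 224, 224, 3)}
)

# Downcast the fp32 module to fp16 with the mixed-precision pass,
# then build for an NVIDIA GPU target.
mod = relay.transform.InferType()(mod)
mod = relay.transform.ToMixedPrecision("float16")(mod)

target = tvm.target.cuda()
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```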