About support for FP16

I recently ran into problems converting a newly trained FP16 MXNet model into a TVM Relay function.
First, the basic setup:

1. I trained the model following the tutorials provided by MXNet.
2. The test code is the official code from the TVM tutorial.

The problems are as follows:

1. When testing on the server, I found I couldn't run the model on the GPU.
2. The parameter dtypes of my conversion output are a mix of FP16 and FP32, and most of them are FP32.
The error message:

```
File "/root/workspace/tvm/python/tvm/relay/frontend/mxnet.py", line 1156, in _update_shape_dtype
    "%s: dtype not expected %s vs %s" % (k, dtype, v.dtype))
ValueError: bn_data_gamma: dtype not expected float16 vs float32
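```

For reference, this is roughly the conversion call that produces the error (the checkpoint prefix, epoch, and input shape below are placeholders, not my exact values):

```python
import mxnet as mx
from tvm import relay

# Load the trained FP16 checkpoint (prefix/epoch are placeholders).
sym, arg_params, aux_params = mx.model.load_checkpoint("model", 0)

shape_dict = {"data": (1, 3, 224, 224)}

# Requesting float16 here fails in _update_shape_dtype, because some
# saved parameters (e.g. bn_data_gamma) are still float32.
mod, params = relay.frontend.from_mxnet(
    sym,
    shape_dict,
    dtype="float16",
    arg_params=arg_params,
    aux_params=aux_params,
)
```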

Is there any solution to this issue? How can I use MXNet mixed-precision models in TVM?
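
The only workaround I can think of is forcing every parameter to a single dtype before conversion. A rough, untested sketch (reusing the names from the snippet above; whether this is safe for accuracy is exactly what I'm unsure about, since mixed-precision training deliberately keeps BatchNorm tensors in FP32):

```python
# Cast all remaining float32 parameters to float16 so that every
# param matches the dtype requested in from_mxnet. This may hurt
# BatchNorm numerical stability.
def cast_params(params, dtype="float16"):
    return {k: v.astype(dtype) for k, v in params.items()}

arg_params = cast_params(arg_params)
aux_params = cast_params(aux_params)

mod, params = relay.frontend.from_mxnet(
    sym, shape_dict, dtype="float16",
    arg_params=arg_params, aux_params=aux_params)
```

Is something like this the intended approach, or does TVM have a proper way to keep the model genuinely mixed-precision?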