[codegen_cuda] How to write a unittest for my PR?

I have fixed an error that occurs when compiling a float16 model for CUDA.
A related question on TVM Discuss can be found here.
Can someone check my pull request?
#3811

Here are the error details.

/tmp/tmpz_0pydlm/my_kernel.cu(9890): error: more than one instance of overloaded function "max" matches the argument list:
            function "max(int, int)"
            function "max(unsigned int, unsigned int)"
            function "max(int, unsigned int)"
            function "max(unsigned int, int)"
            function "max(long, long)"
            function "max(unsigned long, unsigned long)"
            function "max(long, unsigned long)"
            function "max(unsigned long, long)"
            function "max(long long, long long)"
            function "max(unsigned long long, unsigned long long)"
            function "max(long long, unsigned long long)"
            function "max(unsigned long long, long long)"
            function "max(float, float)"
            argument types are: (half, __half)

I have submitted pull request #3811.
@cchung100m told me I need to add a unittest in tests/python/unittest/test_codegen_cuda.py.
Do I need to implement the unittest myself?
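For reference, here is a rough sketch of the kind of test I have in mind (not necessarily what ends up in the PR). It assumes the current tvm.placeholder/tvm.compute Python API, and the test name test_cuda_fp16_max is just a placeholder I picked:

import numpy as np
import tvm
from tvm.contrib import nvcc

def test_cuda_fp16_max():
    # Skip gracefully when CUDA or fp16 support is not available.
    if not tvm.gpu(0).exist or not tvm.module.enabled("cuda"):
        print("Skip because CUDA is not enabled")
        return
    if not nvcc.have_fp16(tvm.gpu(0).compute_version):
        print("Skip because the GPU does not support fp16")
        return

    n = 1024
    dtype = "float16"
    A = tvm.placeholder((n,), name="A", dtype=dtype)
    B = tvm.placeholder((n,), name="B", dtype=dtype)
    # Elementwise max on float16 is what used to trigger the
    # "more than one instance of overloaded function max" error.
    C = tvm.compute((n,), lambda i: tvm.max(A[i], B[i]), name="C")

    s = tvm.create_schedule(C.op)
    bx, tx = s[C].split(C.op.axis[0], factor=64)
    s[C].bind(bx, tvm.thread_axis("blockIdx.x"))
    s[C].bind(tx, tvm.thread_axis("threadIdx.x"))

    # Building for the cuda target compiles the generated kernel,
    # so without the fix the build itself should fail.
    func = tvm.build(s, [A, B, C], "cuda")

    ctx = tvm.gpu(0)
    a = tvm.nd.array(np.random.uniform(size=n).astype(dtype), ctx)
    b = tvm.nd.array(np.random.uniform(size=n).astype(dtype), ctx)
    c = tvm.nd.array(np.zeros(n, dtype=dtype), ctx)
    func(a, b, c)
    np.testing.assert_allclose(c.asnumpy(),
                               np.maximum(a.asnumpy(), b.asnumpy()))

if __name__ == "__main__":
    test_cuda_fp16_max()

The skip checks at the top follow the pattern used by the other CUDA tests, so the test does not fail on machines without a suitable GPU.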

Great work, man!
But I have another question. The max/min functions are defined for almost every other type (e.g. int, long, float, …) in CUDA. Why are they not defined for the half type?

Perhaps because half is not a basic dtype on the CPU?
Actually, CUDA does define many other functions for half, such as the greater-than and less-than comparisons, etc.

I have added a unit test for my PR here: #3811
Could anybody take a look?
This unit test passes on all of my machines.
If the PR is not applied to your TVM build, you will get an error when running this unittest; otherwise it will succeed.