[SOLVED] Compile an 'int32' model instead of 'float32'

I have seen the tutorials, and they only support float32 models. I am looking for a model that can support int32. Setting the input datatype to int32 gives wrong inference results.

I think the post title and post body are asking two slightly different questions. If the question is whether we can compile an int32 model, either natively int32 or converted to int32, the answer is yes: we support the int32 datatype. If the question is whether we can quantize a floating-point model to int32, the answer is "sort of": there is currently a quantization pass PR here that has not been merged yet.
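For reference, here is a minimal sketch of compiling and running a graph with a natively int32 input. The original thread does not name the compiler stack, so TVM Relay is used here purely as an illustration of declaring the input dtype explicitly; the shape, operator, and variable names are made up for the example.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# Hypothetical graph: the input is declared with dtype="int32",
# so the whole model is natively int32 rather than float32.
x = relay.var("x", shape=(1, 8), dtype="int32")
y = relay.add(x, relay.const(1, dtype="int32"))
mod = tvm.IRModule.from_expr(relay.Function([x], y))

# Compile for CPU and run with an int32 input tensor.
lib = relay.build(mod, target="llvm")
rt = graph_executor.GraphModule(lib["default"](tvm.cpu(0)))
rt.set_input("x", np.arange(8, dtype="int32").reshape(1, 8))
rt.run()
print(rt.get_output(0).numpy())  # [[1 2 3 4 5 6 7 8]]
```

The key point is that the input dtype must be declared as int32 at graph-definition time; feeding int32 data into a graph whose signature is still float32 is one way to get the wrong inference results described above.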

Thanks, I got my answer.