How to use int8 ops

I have tested the batch_matmul op with int8 and it shows a great speedup. How can I quantize the batch_matmul ops in my neural network and then use this tuned int8 schedule to run inference on them?
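To make concrete what I mean by quantizing batch_matmul, here is a plain-NumPy sketch of symmetric per-tensor int8 quantization with int32 accumulation. The shapes, scales, and helper names are illustrative only, not from any particular framework; B follows the (batch, N, K) transposed layout that batch_matmul typically uses:

```python
import numpy as np

def quantize_int8(x):
    # symmetric per-tensor quantization: map max |x| to 127
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 16, 32)).astype("float32")  # (batch, M, K)
b = rng.standard_normal((4, 24, 32)).astype("float32")  # (batch, N, K)

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# int8 batch_matmul with int32 accumulation: out[i] = qa[i] @ qb[i].T
acc = np.einsum("bmk,bnk->bmn", qa.astype(np.int32), qb.astype(np.int32))
out = acc.astype("float32") * (sa * sb)  # dequantize back to float32

# compare against the float32 reference
ref = np.einsum("bmk,bnk->bmn", a, b)
print(out.shape, np.abs(out - ref).max())
```

This is just the arithmetic I would like the int8 schedule to perform; my question is how to get the quantization pass to rewrite batch_matmul this way inside a real network.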