TVM and BLAS libraries

Yes, that’s correct. Also set USE_OPENMP to gnu.


@haichen, could you kindly share your protocol for reproducing the BERT base model?

I plan to write a blog post about how to reproduce BERT base model performance using TVM. I’ll let you know after I post it.


@haichen @gasgallo I’m in the same situation as @gasgallo: there is no performance improvement with MKL-DNN. The model is a UNet CNN model, and here are the relevant options in config.cmake:

set(USE_BLAS mkl)
set(USE_MKL_PATH /home/abc/sdk/intel/mkl)
set(USE_MKLDNN /home/abc/sdk/dnnl_lnx_1.1.1_cpu_gomp)
set(USE_OPENMP gnu)
  1. When running with llvm, the inference time is about 400 ms.
  2. When running with llvm -libs=cblas, the inference time is about 400 ms. No improvement.
  3. When running with llvm -mcpu=skylake, the inference time is about 200 ms. A large improvement.

It seems MKL-DNN doesn’t take effect. However, when I use the MXNet framework with MKL-DNN, it does bring a big improvement.
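For reference, here is a minimal sketch of how such a target comparison can be scripted with TVM’s Python API. The model file unet.onnx, the input name "data", and the input shape are placeholders (the thread doesn’t specify them), and the API names assume a recent TVM release where the graph runtime lives in tvm.contrib.graph_executor:

    import numpy as np
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    # "unet.onnx", the input name "data", and the shape are assumptions
    # for illustration; the exact model is not given in this thread.
    shape = (1, 3, 256, 256)
    onnx_model = onnx.load("unet.onnx")
    mod, params = relay.frontend.from_onnx(onnx_model, {"data": shape})

    def measure(target):
        # Compile for one target string and return the mean run time in ms.
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target=target, params=params)
        dev = tvm.cpu()
        rt = graph_executor.GraphModule(lib["default"](dev))
        rt.set_input("data", np.random.uniform(size=shape).astype("float32"))
        timer = rt.module.time_evaluator("run", dev, number=10)
        return timer().mean * 1000.0

    # The three targets compared in the post above.
    for target in ["llvm", "llvm -libs=cblas", "llvm -mcpu=skylake"]:
        print("%-24s %.1f ms" % (target, measure(target)))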

Currently USE_MKLDNN can only be ON or OFF. It doesn’t support a customized library path; it relies on CMake to find the MKLDNN library location. See here.

If MKLDNN is enabled, you should find the following line in the cmake output:

Use MKLDNN library /path/to/mkldnn
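As a complementary check, once TVM is built you can inspect the options it was actually compiled with from Python. This assumes a TVM version recent enough to provide tvm.support.libinfo, and the exact key names can vary between versions:

    import tvm

    # Print the build options TVM was compiled with; the MKL/MKLDNN/BLAS
    # entries show whether the libraries were picked up at build time.
    for key, value in sorted(tvm.support.libinfo().items()):
        if "MKL" in key or "BLAS" in key or "DNNL" in key:
            print(key, "=", value)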

@haichen Thanks for the info. My MKLDNN path is /home/abc/sdk/dnnl_lnx_1.1.1_cpu_gomp. When I turn on set(USE_MKLDNN ON), there is no "Use MKLDNN library /path/to/mkldnn" line in the cmake output, so it seems MKLDNN is not found. How can I set the MKLDNN path using CMake parameters?

@7oud Sorry about the late response. I pushed an update for this, and now you can specify a customized location for the MKLDNN library.

Any updates? I’m getting similar results here.