[SOLVED] Using OpenCL without LLVM



I’m trying to run the relay_quick_start.py tutorial example, but with a small twist. Instead of CUDA (which works perfectly fine), I wanted to target Intel graphics with OpenCL.

To make that happen I enabled the OpenCL build and set the target to tvm.target.intel_graphics. With LLVM enabled I’m facing the error described, e.g., here.
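For reference, enabling OpenCL in the build amounts to flipping the corresponding flag in the build config before running cmake — a sketch assuming the standard out-of-tree build layout from the install docs (paths are the usual ones, adjust to your checkout):

```shell
# From the TVM source root: copy the template config into the build dir,
# then switch USE_OPENCL from OFF to ON (USE_LLVM is what I tried to leave off).
cp cmake/config.cmake build/
sed -i 's/USE_OPENCL OFF/USE_OPENCL ON/' build/config.cmake
```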

The comments in the example’s source code say:

Notice that you need to build TVM with cuda and llvm enabled.

At the same time, the Install from Source guide says:

  • It is possible to build TVM without the LLVM dependency if you only want to use CUDA/OpenCL

The documentation for tvm.relay.build_module.build(func, target=None, target_host=None, params=None) also says:

  • target_host (str or tvm.target.Target, optional) – Host compilation target, if target is device. When TVM compiles a device-specific program such as CUDA, we also need host (CPU) side code to interact with the driver and set up the dimensions and parameters correctly. target_host is used to specify the host-side codegen target. By default, llvm is used if it is enabled; otherwise a stackvm interpreter is used.

So, taking all of this into account, I’d understand that if I don’t enable LLVM and enable STACKVM instead, I should be able to build the OpenCL example without LLVM at all.

However, when I execute it, I get an error saying that LLVM is not enabled.

Is there anything I’m missing? Or is this expected, and do I really need LLVM to build for an OpenCL target using Relay IR?


Yes, LLVM is required. It is used for constant evaluation during codegen.


OK, thanks for the clarification.


How can I build TVM with OpenCL?


You can find the official instructions on how to build TVM here. If that’s not enough, I suppose it’s better to open a new question.
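In short, the documented flow is clone, configure, and build — a sketch assuming the repository location and build layout from the official install guide:

```shell
# Clone TVM with its submodules (repository URL as per the install docs).
git clone --recursive https://github.com/apache/tvm tvm
cd tvm
mkdir -p build
cp cmake/config.cmake build/
# Edit build/config.cmake here: set(USE_OPENCL ON), and set(USE_LLVM ...) as needed.
cd build
cmake ..
make -j4
```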


I have built TVM with OpenCL, but I meet some errors when I reproduce the benchmark on a mobile GPU.
The new question is here: https://discuss.tvm.ai/t/how-to-build-tvm-with-opencl/2272