TVM issue with ROCm backend

Hi there,
I am able to build and run TVM with the CUDA backend without any problem. I am also able to compile and build the TVM shared libraries when I enable the ROCm backend in config.cmake. However, when ROCm is enabled I am no longer able to run TVM. Even importing tvm in a Python script fails:
[1] 29777 abort (core dumped) python

What could be the source of the problem?

Hmm, do you have the AMDGPU driver and ROCm installed correctly?

Can you try cloning the repo again and doing a fresh build?

OK. I realized that I cannot activate both the CUDA and ROCm backends in the config.cmake file; only one of them can be active. Since I have both AMD and NVIDIA cards in the same workstation, does this mean that I need two separate installations of TVM?

Interesting. Are you sure your ROCm installation is working correctly? I don’t know why enabling two GPU backends would cause issues. Can you try disabling CUDA and enabling only the ROCm backend?

I think my ROCm installation is working correctly, as I am able to run MIOpen without any problem.
I now have two build folders for TVM: one for the CUDA backend and one for the ROCm backend.
Since there is no explicit error message, I cannot tell what goes wrong when both backends are enabled at the same time.
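For reference, keeping the two builds separate comes down to flipping the backend flags in each build folder's config.cmake. A minimal sketch (the directory names are just examples; the USE_* flags follow TVM's config.cmake convention):

```cmake
# build-cuda/config.cmake — CUDA-only build
set(USE_LLVM ON)
set(USE_CUDA ON)
set(USE_ROCM OFF)

# build-rocm/config.cmake — ROCm-only build
set(USE_LLVM ON)
set(USE_CUDA OFF)
set(USE_ROCM ON)
```

Each folder then gets its own cmake/make run, and you point PYTHONPATH (or TVM_LIBRARY_PATH) at whichever build you want to use.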

I have exactly the same problem.

My machine has both NVIDIA and AMD GPUs installed. CUDA and MIOpen sample programs run on this machine. It is noted that:
i) TVM (v0.4) works when LLVM and CUDA are enabled
ii) It also works when LLVM and ROCm are enabled
iii) It crashes when LLVM + CUDA + ROCm are all enabled

For case iii, importing tvm or nnvm.compiler results in the reported “abort (core dumped)” crash under both Python 2.7 and 3.6.
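To get more detail than just “abort (core dumped)”, one option is to enable Python’s faulthandler before the crashing import, so the fatal signal (SIGABRT here) dumps a traceback showing where the interpreter was. A minimal sketch:

```python
import faulthandler

# Enable faulthandler first, so a fatal signal raised while libtvm.so is
# being loaded (SIGABRT, SIGSEGV, ...) prints a traceback instead of
# just "abort (core dumped)".
faulthandler.enable(all_threads=True)

try:
    import tvm  # the abort, if any, happens during this import
    print("tvm imported from:", tvm.__file__)
except ImportError:
    print("tvm is not importable in this environment")
```

Running the whole script as `python -X faulthandler script.py` achieves the same thing without code changes.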

If you enable both CUDA and ROCm, libtvm.so will be linked against both the CUDA and HCC runtimes. That’s the only problematic interaction I can think of. You can try linking the two runtimes into a plain C++ application and see what happens.
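A quick way to approximate that experiment without writing a C++ program is to load both runtime libraries into a single process from Python via ctypes; if the two runtimes cannot coexist, the abort should reproduce here too. A sketch, assuming the library names below match your install (libhip_hcc.so is the HCC-era HIP runtime; newer ROCm releases ship libamdhip64.so instead):

```python
import ctypes

def try_load(name):
    """Attempt to load a shared library into this process and report the result."""
    try:
        # RTLD_GLOBAL mimics how symbols would be shared in a linked binary.
        ctypes.CDLL(name, mode=ctypes.RTLD_GLOBAL)
    except OSError as e:
        print(f"failed to load {name}: {e}")
        return False
    print(f"loaded {name}")
    return True

# Library names are assumptions; adjust to your CUDA/ROCm installation.
try_load("libcudart.so")   # CUDA runtime
try_load("libhip_hcc.so")  # HCC-based HIP runtime
```

If this script also aborts, the conflict is between the runtimes themselves rather than anything TVM does; if it runs cleanly, the problem is more likely in how libtvm.so initializes the two backends.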