[Q] "TVMError: Cannot convert type int64x4 to CUDA type on a L32 platform" - test_ewise.py::test_add fails

Hello all,

I have built TVM with Python bindings on Windows. Now I'm testing it and have found that some tests fail.

I run python -m pytest -v tvm_source/tests/python/integration, and test_ewise.py::test_add fails with:

def test_add():
    def run(dtype):
    ....
    run("float32")
    run("int32")
    run("int64")
tvm\tests\python\integration\test_ewise.py:256:
...
E   File "...\tvm_source\src\target\source\codegen_cuda.cc", line 247
E   TVMError: Cannot convert type int64x4 to CUDA type on a L32 platform

(I omitted some details.)
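For reference, here is a reduced example of what I believe the failing path boils down to. This is my own sketch, not the actual test code, but it also builds an int64 elementwise add with a 4-lane vectorized inner axis for the cuda target, so it should exercise the same int64x4 path in codegen_cuda.cc:

import tvm
from tvm import te

# My own reduced sketch (hypothetical, not the actual test_add code):
# an int64 elementwise add with a 4-lane vectorized inner axis, built for CUDA.
n = te.size_var("n")
A = te.placeholder((n,), name="A", dtype="int64")
B = te.placeholder((n,), name="B", dtype="int64")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

s = te.create_schedule(C.op)
bx, x = s[C].split(C.op.axis[0], factor=64)
tx, x = s[C].split(x, nparts=16)
_, x = s[C].split(x, factor=4)              # inner extent 4 -> int64x4 during codegen
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))
s[C].vectorize(x)

fadd = tvm.build(s, [A, B, C], "cuda")      # should hit the same int64x4 code path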

CUDA itself seems to be working with the GPU correctly:

import tvm
print(tvm.gpu(0).exist)
print(tvm.gpu(0).compute_version)

gives:

True
5.0

Can you please help me make this test pass?
Can you please explain what the error message, and in particular the "L32 platform" part, means?

Studying codegen_cuda.cc did not help me; I'm quite a newbie with TVM. In fact, I just want to build a package from it on Windows. :slight_smile:

Details:

  • I built TVM from source, version 0.7dev1
  • Statically linked with LLVM 9.0.1
  • CUDA 10.2 and cuDNN 7.6.5.32
  • Windows 10, 64-bit
  • MSVC: cl.exe 19.16 (Visual Studio 2017)

Any help appreciated!

It seems that on Windows sizeof(long) = 4 (here), while it is typically 8 on other platforms. Since longlong3 and longlong4 are supported since CUDA 10, if you need int64 support, maybe you can try replacing L247 with something like this:

if (t.lanes() == 3) {
  os << "longlong3";  // 3 x 64-bit integers; "long" is only 32 bits on Windows
} else {
  os << "longlong4";  // 4 x 64-bit integers
}

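To see what the "L32 platform" in the message refers to, here is a quick check (nothing TVM-specific) of the C type sizes on your platform via ctypes; my assumption is that the message is simply about sizeof(long) being 4 bytes:

import ctypes

# On 64-bit Windows (LLP64 model) c_long is 4 bytes, which is presumably why the
# codegen refuses to print int64 vectors as long3/long4 and reports an "L32 platform".
# On 64-bit Linux/macOS (LP64) c_long is 8 bytes.
print(ctypes.sizeof(ctypes.c_long))      # 4 on 64-bit Windows, 8 on 64-bit Linux/macOS
print(ctypes.sizeof(ctypes.c_longlong))  # 8 on both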

Thank you very much for your comment!

However, it seems to me that the current code (and/or tests) simply does not fully support Windows, and primarily the test code. For example, judging from https://github.com/apache/incubator-tvm/pull/5853/checks, the tests do not appear to be run on Windows CI.