Fails to run tutorial on AWS V100 machine

I installed TVM according to the instructions on an Amazon AWS instance with a V100. Then I tried to run the AutoTVM tutorial tune_conv2d_cuda. The results are clearly wrong: the output shows Cannot find config for target=cuda, workload=('conv2d_no_batching', 1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1)). A fallback configuration is used, which may bring great performance regression.
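For reference, that warning comes from the step at the end of the tutorial that applies the best record from the tuning log before building. Roughly (a sketch; conv2d_no_batching and conv2d.log are the tutorial's own names, and the shapes are the ones from the warning above):

```python
# Rough sketch of the tutorial's final step; conv2d_no_batching is the
# @autotvm.template function defined earlier in the tutorial.
import tvm
from tvm import autotvm

# apply_history_best looks the workload up in the tuning log; the
# "Cannot find config ... fallback configuration is used" warning means
# no valid record was found there (for example because every trial failed).
with autotvm.apply_history_best("conv2d.log"):
    with tvm.target.create("cuda"):
        s, arg_bufs = conv2d_no_batching(1, 7, 7, 512, 512, 3, 3, (1, 1), (1, 1))
        func = tvm.build(s, arg_bufs)
```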

A snippet of other output:

No: 79 GFLOPS: 0.00/0.00 result: MeasureResult(costs=(ValueError('Module[stackvm]: can only be saved as stackvm format.did you build with LLVM enabled?'),), error_no=2, all_cost=0.42400193214416504, timestamp=1564183876.2266424) [('tile_f', [1, 2, 4, 64]), ('tile_y', [7, 1, 1, 1]), ('tile_x', [7, 1, 1, 1]), ('tile_rc', [32, 2, 8]), ('tile_ry', [3, 1, 1]), ('tile_rx', [3, 1, 1]), ('auto_unroll_max_step', 512), ('unroll_explicit', 0)],None,1841168
No: 80 GFLOPS: 0.00/0.00 result: MeasureResult(costs=(InstantiationError('Traceback (most recent call last):\n [bt] (1) /persist/.local/lib/python3.7/site-packages/tvm-0.6.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(TVMFuncCall+0x61) [0x7fa9b6226651]\n [bt] (0) /persist/.local/lib/python3.7/site-packages/tvm-0.6.dev0-py3.7-linux-x86_64.egg/tvm/libtvm.so(+0xaeeb4b) [0x7fa9b6221b4b]\n File "tvm/_ffi/_cython/./function.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback\n File "/persist/.local/lib/python3.7/site-packages/tvm-0.6.dev0-py3.7-linux-x86_64.egg/tvm/autotvm/measure/measure_methods.py", line 607, in verify_pass\n raise InstantiationError("Skipped because of invalid gpu kernel")\ntvm.autotvm.task.space.InstantiationError: Skipped because of invalid gpu kernel'),), error_no=1, all_cost=0.023105859756469727, timestamp=1564183876.2267394) [('tile_f', [1, 8, 8, 8]), ('tile_y', [1, 7, 1, 1]), ('tile_x', [7, 1, 1, 1]), ('tile_rc', [16, 1, 32]), ('tile_ry', [1, 3, 1]), ('tile_rx', [1, 3, 1]), ('auto_unroll_max_step', 0), ('unroll_explicit', 1)],None,6142777

I thought I didn't need to build with LLVM for CUDA?

Can somebody help? Thanks!

Have you tried building with set(USE_LLVM ON)?
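Even with target=cuda, TVM still needs LLVM to compile the host-side part of each module; without it the host code falls back to the stackvm format, which is what the "can only be saved as stackvm format" error in your log is complaining about. The error_no=1 "Skipped because of invalid gpu kernel" entries are a separate, expected thing: those configs were rejected by the GPU verification pass before being measured. The relevant flags in build/config.cmake are set(USE_CUDA ON) and set(USE_LLVM ON).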

Yes, I just tried, and I get the same output… But now I get some other interesting results for some of the explored schedules: No: 1 GFLOPS: 0.00/0.00 result: MeasureResult(costs=(ValueError('Direct host side access to device memory is detected in default_function. Did you forget to bind?'),), error_no=2, all_cost=0.009617805480957031, timestamp=1564779531.5478618) [('tile_y', [32, 16]), ('tile_x', [4, 128])],None,74…
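(For reference, the "Did you forget to bind?" message refers to binding schedule axes to GPU block and thread indices. A minimal sketch of what that looks like with the 0.6-era API; the names here are illustrative, not from the tutorial:)

```python
# Minimal sketch of axis binding for a CUDA target (TVM 0.6-era API;
# A, B, n are illustrative names, not from the tutorial).
import tvm

n = 1024
A = tvm.placeholder((n,), name="A")
B = tvm.compute((n,), lambda i: A[i] * 2.0, name="B")
s = tvm.create_schedule(B.op)

# Without these bind() calls the generated code would touch device
# memory from the host, which is what the error complains about.
bx, tx = s[B].split(B.op.axis[0], factor=64)
s[B].bind(bx, tvm.thread_axis("blockIdx.x"))
s[B].bind(tx, tvm.thread_axis("threadIdx.x"))

func = tvm.build(s, [A, B], target="cuda")
```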

I looked at the other posts discussing this problem, but there seems to be no official solution yet?

I am not familiar with this issue; is it V100-specific? Could you update your post title to include V100, since it may be GPU-specific?

Just updated title, thanks!

Has anybody else managed to run TVM on a V100 machine?

I managed to run AutoTVM on a V100 and didn't hit this error. It is normal to see this error in a small portion of the output, though.
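If you want to check how large that portion is, you can count the successful records in the tuning log; a quick sketch (assuming the tutorial's conv2d.log file name):

```python
# Count how many measured configs actually succeeded (error_no == 0).
from tvm import autotvm

records = list(autotvm.record.load_from_file("conv2d.log"))
ok = sum(1 for inp, res in records if res.error_no == 0)
print("%d/%d trials succeeded" % (ok, len(records)))
```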

From your output:

ValueError('Direct host side access to device memory is detected in default_function. Did you forget to bind?')

Is that because you are accessing device memory directly from the host side?

I am just running the autotuning tutorial…

OK. I recompiled everything and it seems to work now. No idea why the error was there in the first place. Thanks for all the help.