[SOLVED] Compile error related to autotvm

Hi,
I’m trying to go through the Quick Start Tutorial for Compiling Deep Learning Models. An error appears at this part:

opt_level = 3
target = tvm.target.cuda()
with nnvm.compiler.build_config(opt_level=opt_level):
    graph, lib, params = nnvm.compiler.build(
        net, target, shape={"data": data_shape}, params=params)

And the error message shows like this:

TVMError: TVMCall CFunc Error:
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/_ffi/_ctypes/function.py", line 55, in cfun
rv = local_pyfunc(*pyargs)
File "/home/ubuntu/.local/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/autotvm/measure/measure_methods.py", line 560, in tvm_callback_cuda_compile
ptx = nvcc.compile_cuda(code, target="ptx", arch=AutotvmGlobalScope.current.cuda_target_arch)
File "/home/ubuntu/.local/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/contrib/nvcc.py", line 56, in compile_cuda
raise ValueError("arch(sm_xy) is not passed, and we cannot detect it from env")
ValueError: arch(sm_xy) is not passed, and we cannot detect it from env

I have run nvcc --version to check my nvcc installation, and it looks fine:
nvcc: NVIDIA ® Cuda compiler driver
Copyright © 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176

It seems to be a problem related to autotvm. Does anyone have ideas about how to fix it? Thanks!


It is not related to autotvm. It means TVM cannot run a device query on your device.

Can you try this script?

import tvm
print(tvm.gpu(0).exsit)
print(tvm.gpu(0).compute_version)

If you see ‘False’ or ‘None’, then there is something wrong with your installation.

Hi, I tried this. The output is:
False
Traceback (most recent call last):
File "test.py", line 3, in <module>
print(tvm.gpu(0).compute_version)
File "/home/ubuntu/.local/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/_ffi/runtime_ctypes.py", line 168, in compute_version
self.device_type, self.device_id, 4)
File "/home/ubuntu/.local/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/_ffi/_ctypes/function.py", line 185, in __call__
ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
File "/home/ubuntu/.local/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/_ffi/base.py", line 66, in check_call
raise TVMError(py_str(_LIB.TVMGetLastError()))
tvm._ffi.base.TVMError: [17:17:39] /home/ubuntu/tvm/src/runtime/cuda/cuda_device_api.cc:48: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading CUDA: unknown error

Is this a problem with my tvm or cuda installation?
Thank you!

What is the output of nvidia-smi?

I figured out this is a problem related to my gpu installation. Thank you and @merrymercy!

(I want to add a comment, since this is the first link when searching for the error “ValueError: arch(sm_xy) is not passed”.)

Note the typo in the script above, s/exsit/exist/:

import tvm
print(tvm.gpu(0).exist)
print(tvm.gpu(0).compute_version)

Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading CUDA: unknown error

On Windows, while installing CUDA, make sure you have checked “Driver components” in the “Select driver components” section.

Hi,

Currently I don’t have access to an Nvidia GPU, but I want to try tvm and the first tutorial. I have installed tvm and cuda successfully, but I also get the same issue:

ValueError: arch(sm_xy) is not passed, and we cannot detect it from env

Is it possible to use tvm without an Nvidia GPU?

Thanks,
Nick


I’m having an error with the Quick Start Tutorial for Compiling Deep Learning Models as well:

TVMError Traceback (most recent call last)
in <module>
      3 with relay.build_config(opt_level=opt_level):
      4     graph, lib, params = relay.build_module.build(
----> 5         net, target, params=params)

~/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py in build(func, target, target_host, params)
    194     bld_mod = BuildModule()
    195     graph_json, mod, params = bld_mod.build(func, target, target_host,
--> 196                                             params)
    197     return graph_json, mod, params
    198

~/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py in build(self, func, target, target_host, params)
    105     self._set_params(params)
    106     # Build the function
--> 107     self._build(func, target, target_host)
    108     # Get artifacts
    109     graph_json = self.get_json()

~/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/_ffi/_ctypes/function.py in __call__(self, *args)
    207     self.handle, values, tcodes, ctypes.c_int(num_args),
    208     ctypes.byref(ret_val), ctypes.byref(ret_tcode)) != 0:
--> 209     raise get_last_ffi_error()
    210     _ = temp_args
    211     _ = args

File "/home/jameshill/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/contrib/nvcc.py", line 98, in compile_cuda
raise RuntimeError(msg)
RuntimeError: Compilation error:
nvcc fatal : Path to libdevice library not specified

When I run
import tvm
print(tvm.gpu(0).exist)
print(tvm.gpu(0).compute_version)

I get
True
6.1

Can you describe how you fixed it?


Make sure you can run nvidia-smi on your machine, and make sure you rebooted after installing the CUDA toolkit; that was what fixed it for me after installing on Ubuntu.


I had the exact same problem and after like 3 months, I finally found a solution:

I assume you have installed CUDA and also exported /usr/local/cuda/bin and /usr/local/cuda/lib64 (so that nvcc -V works).

ValueError: arch(sm_xy) is not passed, and we cannot detect it from env

This suggests that we can pass it somehow, right? Yes, we can, but it is hidden deep down in autoTVM:

import tvm
tvm.autotvm.measure.measure_methods.set_cuda_target_arch("sm_62")

"sm_62" means that the target GPU has compute capability 6.2 (see tvm.gpu(0).compute_version). Adapt this if necessary; e.g. compute capability 5.1 would be "sm_51".
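As a small sketch of that mapping (the helper name below is my own, not part of TVM's API): the arch string is just the compute-capability digits with the dot removed.

```python
# Sketch: map a compute-capability string (as reported by
# tvm.gpu(0).compute_version, e.g. "6.2") to the "sm_XY" arch string
# expected by set_cuda_target_arch. The helper name is mine, not TVM's.

def arch_from_compute_version(compute_version):
    """'6.2' -> 'sm_62', '5.1' -> 'sm_51'."""
    major, minor = compute_version.split(".")
    return "sm_" + major + minor

print(arch_from_compute_version("6.2"))  # -> sm_62
print(arch_from_compute_version("5.1"))  # -> sm_51
```

The result can then be passed straight to set_cuda_target_arch as shown above.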


I had the same error as you. After I added these two lines to my .bashrc:

export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}

export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}

it was solved!
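For anyone copying those lines: the `${VAR:+:${VAR}}` idiom appends the `:` separator only when the variable already had a value, so you don't end up with a stray trailing colon. A quick sketch with a dummy variable (not the real PATH):

```shell
# Demo of the ${VAR:+:${VAR}} prepend idiom, using a dummy variable.
P=""
P=/usr/local/cuda/bin${P:+:${P}}
echo "$P"   # -> /usr/local/cuda/bin  (no trailing colon)

P=/usr/bin
P=/usr/local/cuda/bin${P:+:${P}}
echo "$P"   # -> /usr/local/cuda/bin:/usr/bin
```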
