Arch(sm_xy) is not passed, and we cannot detect it from env

Hi all,

I am new to TVM.
I can run from_tensorflow.py successfully but fail to run from_mxnet.py,
so I would like to ask what is going wrong when I run the from_mxnet.py sample.
My environment is set as follows.

  1. Ubuntu 18.04
  2. CMake 3.15.4
  3. OpenCV 3.4.7
  4. Clang/LLVM 6.0.1
  5. Boost FileSystem 1.66.0 rc2
  6. root@d86f53208355:/src/tutorials_python/frontend# nvcc --version
     nvcc: NVIDIA (R) Cuda compiler driver
     Copyright (c) 2005-2017 NVIDIA Corporation
     Built on Fri_Nov__3_21:07:56_CDT_2017
     Cuda compilation tools, release 9.1, V9.1.85
  7. root@d86f53208355:/src/tutorials_python/frontend# nvidia-smi
    Fri Oct 25 08:38:54 2019
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 430.50       Driver Version: 430.50       CUDA Version: 10.1     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 108...  Off | 00000000:17:00.0 Off |                  N/A |
    |  0%   44C    P5    15W / 250W |      0MiB / 11178MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    |   1  GeForce GTX 108...  Off | 00000000:65:00.0 Off |                  N/A |
    | 16%   44C    P5    26W / 250W |      0MiB / 11176MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID   Type   Process name                             Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+

Issue: here is the error when running from_mxnet.py:
root@d86f53208355:/src/tutorials_python/frontend# python3.6 from_mxnet.py
File /root/.tvm_test_data/data/cat.png exists, skip.
exist file got corrupted, downloading /root/.tvm_test_data/data/imagenet1000_clsid_to_human.txt file freshly...
Downloading from url https://gist.githubusercontent.com/zhreshold/4d0b62f3d01426887599d4f7ede23ee5/raw/596b27d23537e5a1b5751d2b0481ef172f58b539/imagenet1000_clsid_to_human.txt to /root/.tvm_test_data/data/imagenet1000_clsid_to_human.txt
...100%, 0.03 MB, 147 KB/s, 0 seconds passed
x (1, 3, 224, 224)
Cannot find config for target=llvm, workload=('dense', (1, 512, 'float32'), (1000, 512, 'float32'), 0, 'float32'). A fallback configuration is used, which may bring great performance regression.
Traceback (most recent call last):

File "from_mxnet.py", line 103, in <module>
m = graph_runtime.create(graph, lib, ctx)

File "/src/tvm/python/tvm/contrib/graph_runtime.py", line 59, in create
return GraphModule(fcreate(graph_json_str, libmod, *device_type_id))

File "/src/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
raise get_last_ffi_error()

tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (7) /src/tvm/build/libtvm.so(TVMFuncCall+0x65) [0x7f9c42e1b6a5]
[bt] (6) /src/tvm/build/libtvm.so(+0x22c1444) [0x7f9c42e9f444]
[bt] (5) /src/tvm/build/libtvm.so(+0x22c126f) [0x7f9c42e9f26f]
[bt] (4) /src/tvm/build/libtvm.so(tvm::runtime::GraphRuntimeCreate(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::Module const&, std::vector<DLContext, std::allocator<DLContext> > const&)+0xf4) [0x7f9c42e9f044]
[bt] (3) /src/tvm/build/libtvm.so(tvm::runtime::GraphRuntime::Init(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, tvm::runtime::Module, std::vector<DLContext, std::allocator<DLContext> > const&)+0x25f) [0x7f9c42e9ec8f]
[bt] (2) /src/tvm/build/libtvm.so(tvm::runtime::GraphRuntime::SetupStorage()+0x51d) [0x7f9c42e9d36d]
[bt] (1) /src/tvm/build/libtvm.so(tvm::runtime::NDArray::Empty(std::vector<long, std::allocator<long> >, DLDataType, DLContext)+0x1e6) [0x7f9c42e38ee6]
[bt] (0) /src/tvm/build/libtvm.so(tvm::runtime::CUDADeviceAPI::AllocDataSpace(DLContext, unsigned long, unsigned long, DLDataType)+0x32d) [0x7f9c42eaaf8d]
File “/src/tvm/src/runtime/cuda/cuda_device_api.cc”, line 115
CUDA: Check failed: e == cudaSuccess || e == cudaErrorCudartUnloading: unknown error

root@d86f53208355:/src/tutorials_python/frontend# ls
build_gcn.py deploy_model_on_rasp.py from_caffe2.py from_darknet.py from_mxnet.py from_tensorflow.py using_external_lib.py
deploy_model_on_android.py deploy_ssd_gluoncv.py from_coreml.py from_keras.py from_onnx.py from_tflite.py
root@d86f53208355:/src/tutorials_python/frontend# vi from_mxnet.py
root@d86f53208355:/src/tutorials_python/frontend# git diff
Not a git repository
To compare two paths outside a working tree:
usage: git diff [--no-index]
root@d86f53208355:/src/tutorials_python/frontend# python from_mxnet.py
Traceback (most recent call last):
File "from_mxnet.py", line 39, in <module>
import mxnet as mx
ImportError: No module named mxnet
root@d86f53208355:/src/tutorials_python/frontend# python3.6 from_mxnet.py
File /root/.tvm_test_data/data/cat.png exists, skip.
exist file got corrupted, downloading /root/.tvm_test_data/data/imagenet1000_clsid_to_human.txt file freshly...
Downloading from url https://gist.githubusercontent.com/zhreshold/4d0b62f3d01426887599d4f7ede23ee5/raw/596b27d23537e5a1b5751d2b0481ef172f58b539/imagenet1000_clsid_to_human.txt to /root/.tvm_test_data/data/imagenet1000_clsid_to_human.txt
...100%, 0.03 MB, 69 KB/s, 0 seconds passed
x (1, 3, 224, 224)
Cannot find config for target=cuda, workload=('dense', (1, 512, 'float32'), (1000, 512, 'float32'), 0, 'float32'). A fallback configuration is used, which may bring great performance regression.
Traceback (most recent call last):

File "from_mxnet.py", line 94, in <module>
graph, lib, params = relay.build(func, target, params=params)

File "/src/tvm/python/tvm/relay/build_module.py", line 207, in build
graph_json, mod, params = bld_mod.build(func, target, target_host, params)

File "/src/tvm/python/tvm/relay/build_module.py", line 108, in build
self._build(func, target, target_host)

File "/src/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
raise get_last_ffi_error()

ValueError: Traceback (most recent call last):
[bt] (8) /src/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<tvm::runtime::ModuleNode> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x251) [0x7fc6fbb10cc1]
[bt] (7) /src/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::relay::Function, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)+0xcbf) [0x7fc6fbb1061f]
[bt] (6) /src/tvm/build/libtvm.so(tvm::build(tvm::Map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::Array<tvm::LoweredFunc, void>, void, void> const&, tvm::Target const&, tvm::BuildConfig const&)+0x409) [0x7fc6fb5d6679]
[bt] (5) /src/tvm/build/libtvm.so(tvm::build(tvm::Map<tvm::Target, tvm::Array<tvm::LoweredFunc, void>, void, void> const&, tvm::Target const&, tvm::BuildConfig const&)+0x2dc) [0x7fc6fb5d57cc]
[bt] (4) /src/tvm/build/libtvm.so(tvm::DeviceBuild(tvm::Array<tvm::LoweredFunc, void> const&, tvm::Target const&)+0x88) [0x7fc6fb5d0008]
[bt] (3) /src/tvm/build/libtvm.so(tvm::codegen::Build(tvm::Array<tvm::LoweredFunc, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x24d) [0x7fc6fb5dd2ed]
[bt] (2) /src/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::runtime::Module (tvm::Array<tvm::LoweredFunc, void>)>::AssignTypedLambda<tvm::runtime::Module (*)(tvm::Array<tvm::LoweredFunc, void>)>(tvm::runtime::Module (*)(tvm::Array<tvm::LoweredFunc, void>))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x54) [0x7fc6fb60ed14]
[bt] (1) /src/tvm/build/libtvm.so(tvm::codegen::BuildCUDA(tvm::Array<tvm::LoweredFunc, void>)+0x32c) [0x7fc6fbc60c2c]
[bt] (0) /src/tvm/build/libtvm.so(+0x223869b) [0x7fc6fbcc169b]
File "/src/tvm/python/tvm/_ffi/_ctypes/function.py", line 72, in cfun
rv = local_pyfunc(*pyargs)
File "/src/tvm/python/tvm/autotvm/measure/measure_methods.py", line 585, in tvm_callback_cuda_compile
ptx = nvcc.compile_cuda(code, target="ptx", arch=AutotvmGlobalScope.current.cuda_target_arch)
File "/src/tvm/python/tvm/contrib/nvcc.py", line 72, in compile_cuda
raise ValueError("arch(sm_xy) is not passed, and we cannot detect it from env")
ValueError: arch(sm_xy) is not passed, and we cannot detect it from env

Please help me check the environment and advise how to fix it.

Many thanks
Lucien

See [SOLVED] Compile error related to autotvm
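For context on the final error: when no `arch` is passed, TVM tries to derive the nvcc `-arch` flag from the GPU's compute capability, which fails here because the CUDA runtime cannot even be initialized (see the "unknown error" in the first traceback, likely the nvcc 9.1 vs. driver CUDA 10.1 mismatch). The sketch below is a hypothetical re-creation of that capability-to-arch mapping, not TVM's actual code; the helper name is mine, and I am assuming a GTX 1080 Ti, which reports compute capability 6.1.

```python
def compute_version_to_arch(compute_version):
    """Convert a CUDA compute capability string like '6.1' into the
    nvcc -arch value like 'sm_61' (illustrative helper, not TVM code)."""
    major, minor = compute_version.split(".")
    return "sm_" + major + minor

print(compute_version_to_arch("6.1"))  # sm_61
```

When auto-detection fails, TVM releases of this era also let you pin the architecture explicitly, e.g. by calling `tvm.autotvm.measure.measure_methods.set_cuda_target_arch("sm_61")` before `relay.build`, which is the kind of workaround the linked thread discusses.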