Compiled CUDA model cannot be invoked for inference on Jetson


#1

Hi All,

Host:
- Windows 8.1 64-bit
- CUDA 10.0
- cuDNN 7
- LLVM 8.0 (built from source successfully)
- TVM 0.6dev
- Python 3.5
- GPU: RTX 2070

Device:
- Jetson Nano, Ubuntu 18.04 (aarch64-linux-gnu)
- CUDA 10.0
- cuDNN 7
- TVM 0.6dev (TVM runtime built with CUDA)
- Python 3.6
- GPU: 128-core Maxwell
- CPU: quad-core ARM A57 @ 1.43 GHz

The CPU model compiles on the PC and runs inference successfully on the Jetson Nano.

The CUDA model also compiles successfully on the PC, but inference on the Jetson Nano aborts with the error below:

RuntimeError: Compilation error:
/tmp/tmpyrgrc0ue/lib.o: file not recognized: File format not recognized 

I have tried many things, but no luck so far. How can I track down the problem?
Any help is appreciated, thanks!