CUDA model compiled on PC cannot be invoked for inference on Jetson

Hi All,

Host:
- Windows 8.1 64-bit
- CUDA 10.0
- cuDNN 7
- LLVM 8.0 (built from source successfully)
- TVM 0.6dev
- Python 3.5
- GPU: RTX 2070

Device:
- Jetson Nano, Ubuntu 18.04 (aarch64-linux-gnu)
- CUDA 10.0
- cuDNN 7
- TVM 0.6dev (TVM runtime built with CUDA)
- Python 3.6
- GPU: 128-core Maxwell
- CPU: quad-core ARM A57 @ 1.43 GHz

The CPU model can be compiled on the PC and invoked for inference successfully on the Jetson Nano.

However, although the CUDA model also compiles successfully on the PC, inference aborts on the Jetson Nano with the error below.

```
RuntimeError: Compilation error:
/tmp/tmpyrgrc0ue/lib.o: file not recognized: File format not recognized
```
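
For reference, this is roughly how I load and run the model on the Nano. This is a minimal sketch against the TVM 0.6-era graph runtime API; the file names, the input name "data", and the input shape are placeholders for my actual values:

```python
import numpy as np
import tvm
from tvm.contrib import graph_runtime

# Load the artifacts produced on the PC (file names are placeholders).
# Loading a .tar module unpacks lib.o and relinks it with the local
# toolchain on the device.
lib = tvm.module.load("net.tar")
graph = open("net.json").read()
params = bytearray(open("net.params", "rb").read())

ctx = tvm.gpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.load_params(params)

# Run one inference on dummy input (input name/shape are placeholders).
module.set_input("data", tvm.nd.array(np.random.rand(1, 3, 224, 224).astype("float32")))
module.run()
out = module.get_output(0).asnumpy()
```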

I have tried a lot of things, but with no luck. How can I track down the problem?
Any help would be appreciated, thanks!

I had the same issue and resolved it by adding the missing target_host parameter to the build function. Without target_host, TVM emits the host-side object code for the PC's own x86 architecture, so the toolchain on the ARM board cannot link lib.o, hence "File format not recognized".

You can find the right value by running `gcc -v` on your board and looking at the "Target" field. For example, on a Raspberry Pi 4 instead of a Jetson Nano you would get "arm-linux-gnueabihf", and so would pass target_host="llvm -target=arm-linux-gnueabihf" to the build function. On the Jetson Nano the field reads "aarch64-linux-gnu".
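
Putting it together for the Jetson Nano, the build would look roughly like this. This is a minimal sketch against the TVM 0.6-era Relay API; `mod` and `params` stand for whatever your frontend importer (e.g. relay.frontend.from_onnx) returned, and the output file names are arbitrary:

```python
import tvm
from tvm import relay

target = "cuda"                                 # device code for the Nano's GPU
target_host = "llvm -target=aarch64-linux-gnu"  # host code for the Nano's ARM CPU

# `mod` and `params` come from a frontend importer (not shown here).
graph, lib, params = relay.build(mod, target=target,
                                 target_host=target_host, params=params)

# Export a .tar so the object files are linked on the device itself;
# no aarch64 cross-compiler is then needed on the Windows host.
lib.export_library("net.tar")
with open("net.json", "w") as f:
    f.write(graph)
with open("net.params", "wb") as f:
    f.write(relay.save_param_dict(params))
```

With the correct target_host, the lib.o inside the tarball is an aarch64 ELF object, so the relink step on the Nano succeeds.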
