Autotvm - A fallback configuration is used, which may bring great performance regression

Hi,

I am trying to run an online demo of a gesture recognition model on the NVIDIA Jetson TX2, and the demo uses TVM to auto-tune the model for the underlying ARM CPU.

I get the following warning multiple times: … WARNING:autotvm:Cannot find config for target=llvm -target=aarch64-linux-gnu, workload=('conv2d', (1, 3, 224, 224, 'float32'), (32, 3, 3, 3, 'float32'), (2, 2), (1, 1), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression. …

I also noticed that the performance is degraded as the warning suggests.

Some info about my setup: the model uses MobileNetV2 as its backbone, and I am trying to run the model on the CPU only. I tried modifying the -target argument after finding the target triple by running "gcc -v" on the device. Also, I do not use RPC from a host; I am running the demo directly on the device.

Can someone tell me how to fix this?

Did you tune the model by yourself before building it? And how did you build the model?

The online demo uses a pretrained torch model. I modified the target and ran the demo. The demo seems to download a torch model, convert it to ONNX, and then to a TVM module using the specified target. Hence I expect that the model should already be tuned to run on the TX2. Or am I wrong somewhere?

Apart from this, for target = 'cuda', there seems to be only one warning of this kind:

WARNING:autotvm:Cannot find config for target=cuda, workload=('dense', (1, 1280, 'float32'), (27, 1280, 'float32'), 0, 'float32'). A fallback configuration is used, which may bring great performance regression.

This is the code that converts the torch model to a TVM module using Relay:

So it's not doing auto-tuning. If you didn't tune the model yourself, TVM will try to use pre-tuned logs, but if a workload in your model doesn't appear in those logs, you will see the WARNING.

You could follow this tutorial to tune your model on ARM CPU or TX2 after converting it to Relay: https://docs.tvm.ai/tutorials/autotvm/tune_relay_arm.html#sphx-glr-tutorials-autotvm-tune-relay-arm-py

Is there any way to see these pre-tuned logs?

  • Managed to find the link to the log file that TVM downloads during compilation:

https://github.com/uwsampl/tvm-distro/tree/master/tophub
