What does this warning 'Cannot find config for target=cuda' mean

What does this warning mean? How can I update the config? I’m running a keras resnet50 model. Thanks.

WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 512, 9, 9, 'float32'), (512, 512, 3, 3, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
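For context, the workload tuple in the warning encodes the conv2d parameters: op name, input shape (NCHW), weight shape (OIHW), strides, padding, layout, and output dtype. A quick sketch in plain Python (no TVM required) that unpacks the tuple from the warning above and computes the conv output shape:

```python
# Decode the conv2d workload tuple from the AutoTVM warning and compute
# the output spatial size: out = (in + 2*pad - kernel) // stride + 1
workload = ('conv2d', (1, 512, 9, 9, 'float32'), (512, 512, 3, 3, 'float32'),
            (1, 1), (0, 0), 'NCHW', 'float32')

op, data, kernel, strides, padding, layout, out_dtype = workload
n, c_in, h, w = data[:4]        # input: batch, channels, height, width
c_out, _, kh, kw = kernel[:4]   # weight: out channels, in channels, kH, kW

out_h = (h + 2 * padding[0] - kh) // strides[0] + 1
out_w = (w + 2 * padding[1] - kw) // strides[1] + 1
print((n, c_out, out_h, out_w))  # output shape in NCHW: (1, 512, 7, 7)
```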

It means we don’t have a tuned config for this layer. You can generate a tuning log by following this tutorial: https://docs.tvm.ai/tutorials/autotvm/tune_nnvm_cuda.html#sphx-glr-tutorials-autotvm-tune-nnvm-cuda-py
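Once the tuner has written a log, applying it is just a matter of wrapping the build step in `autotvm.apply_history_best`. A rough sketch of the flow, assuming `sym` and `params` come from the keras frontend and `'resnet50.log'` is a placeholder for whatever log file your tuning run produced:

```python
import nnvm.compiler
from tvm import autotvm

# sym, params obtained earlier via nnvm.frontend.from_keras(model);
# 'resnet50.log' is a placeholder for the tuner's output log file.
with autotvm.apply_history_best('resnet50.log'):
    graph, lib, params = nnvm.compiler.build(
        sym, target='cuda',
        shape={'data': (1, 3, 224, 224)},
        params=params)
```

Inside that `with` block, AutoTVM looks up each workload in the log instead of falling back to the default config, so the warnings disappear for every layer you tuned.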

Note that we have tuned all layers of the official resnet-50. So some shapes in your keras resnet50 model seem to be wrong (possibly due to incorrect padding).

Thanks for your reply.

My resnet50 model is the official pretrained model, and it did not have such warnings in July or August. I just built a new docker image with the latest TVM code, and got such warnings.

I ran the example again: https://docs.tvm.ai/tutorials/nnvm/from_keras.html, and also got similar warnings:

WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 3, 230, 230, 'float32'), (64, 3, 7, 7, 'float32'), (2, 2), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 64, 55, 55, 'float32'), (64, 64, 1, 1, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 64, 57, 57, 'float32'), (64, 64, 3, 3, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 64, 55, 55, 'float32'), (256, 64, 1, 1, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 256, 55, 55, 'float32'), (64, 256, 1, 1, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 256, 55, 55, 'float32'), (128, 256, 1, 1, 'float32'), (2, 2), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 128, 30, 30, 'float32'), (128, 128, 3, 3, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 256, 55, 55, 'float32'), (512, 256, 1, 1, 'float32'), (2, 2), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 256, 16, 16, 'float32'), (256, 256, 3, 3, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=cuda, workload=('conv2d', (1, 512, 9, 9, 'float32'), (512, 512, 3, 3, 'float32'), (1, 1), (0, 0), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.

We recently switched to doing explicit, highly specialized tuning for every GPU target, instead of relying on prespecified handcrafted GPU schedules. This change requires more (automatic) tuning for different GPU targets, but you should see substantially improved performance if you use prepacked parameters or run the tuning tutorial that @merrymercy linked.

There is a problem in the keras model converter: it splits a conv2d with padding into two nnvm operators (a separate pad followed by an unpadded conv2d). This PR fixes it. After that patch, some of these warnings will disappear.

The official keras resnet-50 model is also weird: in the reference resnet-50, no layer has an input size of 55 x 55. It should be 56 x 56.
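The 55 x 55 is visible in the warnings themselves: Keras zero-pads the 224 x 224 input to 230 x 230 (the first workload above) and then runs both the 7x7/stride-2 conv and the 3x3/stride-2 max-pool with 'valid' (no) padding, whereas the reference network also pads the pool. A quick check of the arithmetic in plain Python (the helper function is just for illustration):

```python
def conv_out(size, kernel, stride, pad):
    # standard convolution/pooling output-size formula
    return (size + 2 * pad - kernel) // stride + 1

# Keras ResNet50: input zero-padded 224 -> 230, then 'valid' conv and pool
h = conv_out(230, kernel=7, stride=2, pad=0)  # 7x7/2 conv  -> 112
h = conv_out(h, kernel=3, stride=2, pad=0)    # 3x3/2 pool  -> 55
print(h)  # 55

# Reference ResNet-50: pad=3 on the conv and pad=1 on the max-pool
h = conv_out(224, kernel=7, stride=2, pad=3)  # 7x7/2 conv  -> 112
h = conv_out(h, kernel=3, stride=2, pad=1)    # 3x3/2 pool  -> 56
print(h)  # 56
```

The unpadded max-pool drops one row and column, which is why every subsequent stage shows 55 x 55 (and 30 x 30, 16 x 16, 9 x 9) instead of the 56 x 56 family of shapes that was tuned.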

If you care about the performance, you should tune it by yourself. If you don’t care about the performance, you can just ignore these warnings.


Hi, I have a question here: what does "fallback configuration" mean? Will this layer still run on the GPU?

Thank you.

Yes, it will still run on the GPU. "Fallback" means the tunable parameters aren’t tuned, so default values are used and performance won’t be optimal.