Quick start tutorial gives "Cannot find config for target=cuda -model=unknown, workload=('conv2d_nchw.cuda'…"

After installing TVM on an AWS EC2 instance, I ran the sample code to compile ResNet. I got the warnings below:

Cannot find config for target=cuda -model=unknown, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 64, 56, 56), 'float32'), ('TENSOR', (64, 64, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=cuda -model=unknown, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 128, 28, 28), 'float32'), ('TENSOR', (128, 128, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=cuda -model=unknown, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 256, 14, 14), 'float32'), ('TENSOR', (256, 256, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=cuda -model=unknown, workload=('conv2d_nchw.cuda', ('TENSOR', (1, 512, 7, 7), 'float32'), ('TENSOR', (512, 512, 3, 3), 'float32'), (1, 1), (1, 1, 1, 1), (1, 1), 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=cuda -model=unknown, workload=('dense_small_batch.cuda', ('TENSOR', (1, 512), 'float32'), ('TENSOR', (1000, 512), 'float32'), None, 'float32'). A fallback configuration is used, which may bring great performance regression.

And the script printed:

['deploy_lib.tar', 'deploy_param.params', 'deploy_graph.json']

but there is no compiled module file in the current directory.

"Cannot find config" means there is no tuning log for that configuration, so you need to autotune those workloads.
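For reference, once a tuning log exists it is applied at build time so the fallback configs are skipped. This is a sketch based on the standard Relay/AutoTVM flow from the TVM tutorials; the log filename `resnet_tuned.log` is a placeholder for a log produced by an earlier tuning run, and exact APIs vary slightly across TVM versions:

```python
import tvm
from tvm import relay, autotvm
from tvm.relay import testing

# Build a ResNet-18 test workload (same family as the quick-start tutorial).
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)

# Apply a previously produced tuning log (placeholder path) so the
# "Cannot find config" fallback is not used for the tuned workloads.
with autotvm.apply_history_best("resnet_tuned.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="cuda", params=params)
```

Without the `apply_history_best` context, `relay.build` falls back to default schedules and emits exactly the warnings above.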

And I think the reason there is no module is that the script uses the function below:

tvm.contrib.util.tempdir(custom_path=None)

This function creates a temporary directory whose contents are deleted on exit. Therefore, the module is not saved.
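The delete-on-exit behaviour is the same as Python's standard `tempfile.TemporaryDirectory`, which is why the exported files vanish; to keep the artifacts, export them to a path you choose instead. A minimal standard-library illustration (not TVM-specific):

```python
import os
import tempfile

# Files written into a temporary directory disappear when it is cleaned up,
# which is why the exported module is not found afterwards.
with tempfile.TemporaryDirectory() as tmp:
    lib_path = os.path.join(tmp, "deploy_lib.tar")
    open(lib_path, "wb").close()          # stand-in for lib.export_library(...)
    existed_inside = os.path.exists(lib_path)

exists_after = os.path.exists(lib_path)
print(existed_inside, exists_after)       # True False
```

Passing a persistent path such as `os.getcwd()` to `export_library` keeps the files after the script exits.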

Hi, I am also facing the same warnings while running ResNet-50 on a 'GeForce GTX 1070' with -model=1080ti, and inference takes 4.58 ms, which is nearly twice the expected value (w.r.t. the benchmark on the 1080 Ti). Do I need to fine-tune those workloads for my hardware, or is there a pre-tuned cache available for the 1070?

Please help me on this.

I am not sure, but the Titan, 2080 Ti, and 1080 Ti have tuned configurations in TVM. So if you want optimal performance on the 1070, tuning seems to be the right choice.

There is also a simple tuning template in the TVM tutorials, so you can use it to tune those workloads.
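The tutorial's tuning loop looks roughly like this. This is a sketch modeled on the "Auto-tuning a convolutional network for NVIDIA GPU" tutorial; the trial budget and the log filename are placeholders, and exact APIs vary slightly between TVM versions:

```python
from tvm import autotvm
from tvm.relay import testing

# Extract the tunable conv2d/dense tasks from the network.
mod, params = testing.resnet.get_workload(num_layers=18, batch_size=1)
tasks = autotvm.task.extract_from_program(
    mod["main"], target="cuda", params=params)

# Build and measure candidate schedules locally on the GPU.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.LocalRunner(number=10, repeat=1, timeout=4),
)

for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(
        n_trial=1000,                     # placeholder trial budget
        measure_option=measure_option,
        callbacks=[autotvm.callback.log_to_file("resnet_tune.log")],
    )
```

The resulting `resnet_tune.log` is then wrapped around the build with `autotvm.apply_history_best`, which silences the "Cannot find config" warnings for the tuned workloads.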

Thank you so much for your immediate input. Let me try tuning on my local machine.

I tried the same test on another machine with an 'RTX 2080 Ti' and got similar warnings. I couldn't find a pre-tuned log file for the 2080 Ti in the log repo.

  • Is the repo up to date, or am I looking at the wrong link?
  • Is there an option to clear the log cache and run the benchmark fresh, or is that not necessary?

Thanks in advance.

I've just checked the benchmarks, and the 2080 Ti doesn't seem to be there. The 2080 Ti probably also needs to be tuned.

Thank you. Looking forward to it.