"Auto-tuning for x86 CPU" not running correctly

I am running the "Compile ONNX Models" tutorial and the following warnings appear:

WARNING:autotvm:Cannot find config for target=llvm, workload=('conv2d', (1, 32, 224, 224, 'float32'), (9, 32, 3, 3, 'float32'), (1, 1), (1, 1), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm, workload=('conv2d', (1, 64, 224, 224, 'float32'), (32, 64, 3, 3, 'float32'), (1, 1), (1, 1), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
WARNING:autotvm:Cannot find config for target=llvm, workload=('conv2d', (1, 1, 224, 224, 'float32'), (64, 1, 5, 5, 'float32'), (1, 1), (2, 2), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.

I then ran the "Auto-tuning a convolutional network for x86 CPU" tutorial in my project, but there is no result output, only:

Process finished with exit code 0
The file does not seem to compile correctly. How can I solve this?

Could you share how you used AutoTVM?

I installed TVM according to the tutorial without any problems. When I run the sample, the following warnings appear while compiling the neural network:

Cannot find config for target=llvm, workload=('conv2d', (1, 3, 512, 512, 'float32'), (64, 3, 7, 7, 'float32'), (2, 2), (3, 3), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=llvm, workload=('conv2d', (1, 64, 128, 128, 'float32'), (64, 64, 1, 1, 'float32'), (1, 1), (0, 0), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.
Cannot find config for target=llvm, workload=('conv2d', (1, 64, 128, 128, 'float32'), (64, 64, 3, 3, 'float32'), (1, 1), (1, 1), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.

But in the end it does produce results. I searched the community for this question; it seems the workload='conv2d' configurations for the x86 platform are not defined. So I ran tune_relay_x86.py from the auto-tuning tutorial in my project, but it did not produce the expected output shown in the sample, such as:

Extract tasks...
Tuning...
[Task  1/12]  Current/Best:  598.05/2497.63 GFLOPS | Progress: (252/252) | 1357.95 s Done.
[Task  2/12]  Current/Best:  522.63/2279.24 GFLOPS | Progress: (784/784) | 3989.60 s Done.

There are no errors, only "Process finished with exit code 0".
I am not sure how to solve this problem.

What do you mean by result output? AutoTVM will produce a tuned implementation of a model, rather than the output of model inference.
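
For reference, here is a minimal sketch of what the tuning phase looks like, loosely following the tune_relay_x86.py tutorial. The names `mod`, `params`, and `tuning.log` are placeholders (assume `mod` and `params` come from the ONNX importer), and exact API names vary between TVM versions:

```python
from tvm import autotvm, relay
from tvm.autotvm.tuner import XGBTuner

log_file = "tuning.log"  # placeholder path; tuned configs are appended here

# Extract the tunable conv2d tasks from the Relay module
# (assuming mod/params were produced by relay.frontend.from_onnx).
tasks = autotvm.task.extract_from_program(
    mod["main"], target="llvm", params=params,
    ops=(relay.op.get("nn.conv2d"),),
)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=1),
)

for i, task in enumerate(tasks):
    tuner = XGBTuner(task, loss_type="rank")
    n_trial = min(200, len(task.config_space))
    tuner.tune(
        n_trial=n_trial,
        measure_option=measure_option,
        callbacks=[
            autotvm.callback.progress_bar(
                n_trial, prefix="[Task %2d/%2d] " % (i + 1, len(tasks))),
            # The log file written here is the "result" of tuning,
            # not an inference output.
            autotvm.callback.log_to_file(log_file),
        ],
    )
```

The output of this step is the log file of tuned configs, which is what the later build step consumes.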

The result output I am referring to is the image result in the deploy-ssd sample. Running tune_relay_x86.py does not seem to have any effect.

Tuning is not how you get the inference result for an image. It produces a fast version of the model, which you then run to get the inference result on an image.

I understand what you mean; I may not have been clear. I want to generate a fast version for x86, but tune_relay_x86.py has no effect after running: the LLVM warnings still appear.

In that case you need to wrap your model compilation in with autotvm.apply_history_best(log_file): so that the build uses the tuned configs, where log_file is the log generated by AutoTVM during tuning. See the sketch below.
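
A minimal sketch of that wrapper, again assuming `mod` and `params` come from the ONNX importer and `tuning.log` is the log written during tuning (note that some names, e.g. `graph_executor` vs. the older `graph_runtime`, and `.numpy()` vs. `.asnumpy()`, differ across TVM versions):

```python
import numpy as np
import tvm
from tvm import autotvm, relay
from tvm.contrib import graph_executor  # graph_runtime in older TVM releases

log_file = "tuning.log"  # the log produced by the tuning step

# Apply the best configs found during tuning. Without this context manager,
# relay.build falls back to default schedules and prints the
# "Cannot find config for target=llvm" warnings.
with autotvm.apply_history_best(log_file):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)

# Run inference with the tuned module; "data" and the input shape are
# placeholders that must match your model's input.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data", np.random.rand(1, 3, 512, 512).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```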
