Is autoTVM an option?

Hi, I would like to ask a simple question.
In the tutorial, there is optimize / opt_gemm.py.
In this sample, I cannot find an automated schedule optimizer (autoTVM).
Isn't the schedule in this sample already optimized?
For example, do I need to add the autotvm module to optimize this sample?
Thank you in advance.

The sample here was tuned for M, K, N = 1024 and an Intel i7-4770HQ CPU. In that sense, it is “already optimized.” However, if different hardware or data shapes are used, there are no performance guarantees. You should be able to run it as-is, but if you wish to change the data shapes or hardware, then autotvm can improve performance by finding a different schedule configuration.
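To make that concrete, here is a minimal sketch (not the exact tutorial code) of the blocked GEMM style used in opt_gemm, where the shapes and blocking factors are fixed constants; the blocking factor of 32 and split factor of 4 are just illustrative hand-picked values:

```python
import tvm
from tvm import te

# Hand-tuned blocked GEMM in the style of opt_gemm: the shapes and the
# blocking factors are fixed, so the result is "already optimized" only
# for this M, K, N and the CPU it was tuned on.
M = K = N = 1024
bn = 32  # hand-picked blocking factor (illustrative, not the tutorial's exact value)

A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
k = te.reduce_axis((0, K), name="k")
C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

s = te.create_schedule(C.op)
xo, yo, xi, yi = s[C].tile(C.op.axis[0], C.op.axis[1], bn, bn)
(kaxis,) = s[C].op.reduce_axis
ko, ki = s[C].split(kaxis, factor=4)
s[C].reorder(xo, yo, ko, ki, xi, yi)

func = tvm.build(s, [A, B, C], target="llvm", name="mmult")
```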

Thank you very much for your answer.

Basically, what is the final result of autoTVM?
If I change only the deep learning model but keep the same data shapes and hardware, can I still use the “already optimized” result?

The original approach of TVM scheduling was to handcraft a schedule with fixed tiling factors, loop reorderings, etc. that were hand-tuned for a given workload or a small number of workloads. If you look at the old schedules, you will see many hardcoded values that were handpicked, and branches written to manually handle popular data shapes.
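As a rough illustration (the function name, shapes, and factors here are hypothetical, not taken from any real schedule), that old style looked something like this:

```python
# Hypothetical sketch of the old hand-tuned style: schedule parameters are
# hardcoded and branched on popular data shapes instead of being searched.
def pick_blocking_factors(N):
    if N == 1024:
        return 32, 4   # hand-picked tile/unroll factors for a popular shape
    elif N == 512:
        return 16, 8   # another hand-picked point
    else:
        return 8, 1    # conservative default for everything else
```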

This approach quickly becomes tedious when we want to support many variations of hardware and data shapes, so the goal of AutoTVM is to pick each of the configurable properties of a schedule (tiling factors, loop reorderings, etc.) automatically and to specialize each configuration to a specific data shape. There is no requirement to use AutoTVM for every task, but it is the most efficient way we currently have to get the most performance for an operator.
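For example, the fixed factors from the sketch above become tunable knobs in an AutoTVM template. This is a minimal sketch modeled on the simple matmul tuning tutorial; the template name "matmul_example" is just a placeholder:

```python
from tvm import te, autotvm

@autotvm.template("matmul_example")  # placeholder name for this sketch
def matmul(M, K, N, dtype):
    A = te.placeholder((M, K), name="A", dtype=dtype)
    B = te.placeholder((K, N), name="B", dtype=dtype)
    k = te.reduce_axis((0, K), name="k")
    C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

    s = te.create_schedule(C.op)
    x, y = s[C].op.axis
    (kaxis,) = s[C].op.reduce_axis

    # Instead of hardcoding tiling factors, declare them as tunable knobs.
    cfg = autotvm.get_config()
    cfg.define_split("tile_x", x, num_outputs=2)
    cfg.define_split("tile_y", y, num_outputs=2)
    cfg.define_split("tile_k", kaxis, num_outputs=2)

    # Apply whatever factors the tuner picks for this configuration.
    xo, xi = cfg["tile_x"].apply(s, C, x)
    yo, yi = cfg["tile_y"].apply(s, C, y)
    ko, ki = cfg["tile_k"].apply(s, C, kaxis)
    s[C].reorder(xo, yo, ko, ki, xi, yi)

    return s, [A, B, C]
```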

So to answer your last question, it depends on what else is different about the models. Even if the input shapes of two models are the same, the data shapes of each layer may not be, since different models can have different properties (e.g., stride, depth, etc.) for each layer. To guarantee the best performance when reusing an already optimized configuration from AutoTVM, all operator shapes (we call this the workload) must match exactly. In practice, we find that similar shapes (especially those that share common factors) may be close in performance even when sharing a schedule configuration. Note that these considerations are only about performance: barring any brittle schedule transformations, you will likely be able to fall back to a default or another schedule configuration to run your task, even if the performance will not be great.
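To connect this back to the workload question: tuning records are keyed by the exact workload, and `apply_history_best` only matches identical shapes; anything else falls back to a default configuration that runs but is usually slow. A minimal sketch, assuming the `matmul_example` template above has been registered:

```python
import tvm
from tvm import autotvm

# Create a task for one concrete workload (these exact shapes are the key
# that a tuned configuration will later be matched against).
task = autotvm.task.create(
    "matmul_example", args=(1024, 1024, 1024, "float32"), target="llvm"
)

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=5),
)

# Run a short search and log every measured configuration.
tuner = autotvm.tuner.XGBTuner(task)
tuner.tune(
    n_trial=20,
    measure_option=measure_option,
    callbacks=[autotvm.callback.log_to_file("matmul.log")],
)

# Reapply the best record; only tasks whose workload matches the log exactly
# will pick it up, and others get the fallback configuration.
with autotvm.apply_history_best("matmul.log"):
    with tvm.target.Target("llvm"):
        s, args = matmul(1024, 1024, 1024, "float32")
        func = tvm.build(s, args)
```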

It’s interesting to me. Thank you very much for your clear answer. @eqy