Schedule step necessary when importing from TensorFlow?

I’ve noticed in many of the tutorials that a scheduling step is needed between building the computation and compiling it.
Is this same scheduling step required when importing a pre-existing TensorFlow model using nnvm.frontend.from_tensorflow() followed by nnvm.graph.create(), or is the scheduling either:

  • already implicit in the TensorFlow model, or
  • provided by the import performed by one, or both, of the above functions?

Scheduling is inherent to TVM compilation. First we describe the algorithm; we refer to this as the declaration step. Then we apply a schedule, which defines how that algorithm is executed. This includes techniques like tiling, threading, and vectorization. These scheduling optimizations should be tuned to the hardware target, and for that there’s AutoTVM, which will autotune a schedule to run fast on your target device.
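As a minimal sketch of the two steps (assuming a TVM version where the tensor expression API is available under tvm.te; the split factor and names are just illustrative):

```python
import tvm
from tvm import te

# Declaration step: describe *what* to compute.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute((n,), lambda i: A[i] + B[i], name="C")

# Scheduling step: describe *how* to execute it on the target.
s = te.create_schedule(C.op)
outer, inner = s[C].split(C.op.axis[0], factor=64)  # tile the loop
s[C].vectorize(inner)   # vectorize the inner loop
s[C].parallel(outer)    # thread across the outer loop

# Compile the scheduled computation for a CPU target.
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")
```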

TVM compilation operates on individual operators: scheduling is done at the layer level (e.g. a conv2d of a given shape). When you import from TensorFlow, what you get is a Relay program that describes the high-level computation. Each node in that graph calls into a specific operator implementation (like the conv2d mentioned above), and that is where scheduling plays its role.
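Concretely, on the Relay import path the graph-level build step selects an implementation (and schedule) for each operator, so you don’t write an explicit scheduling step yourself. A rough sketch, where the model path, input name, and shape are hypothetical placeholders for your own model:

```python
import tvm
from tvm import relay
import tensorflow as tf

# Load a frozen TensorFlow GraphDef ("model.pb" is a hypothetical path).
with tf.io.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# "input" and its shape are placeholders for the real model's input tensor.
shape_dict = {"input": (1, 224, 224, 3)}
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)

# relay.build picks an operator implementation and schedule (from TOPI, or
# from AutoTVM tuning logs if a tuning context is applied) for every node
# in the graph; no explicit scheduling code is written here.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)
```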