Explicit selection of TOPI schedules

I’m trying to learn more about the TOPI schedules and the boilerplate around them.

From what I understand so far, topi/python/topi/ contains different schedules for various operations, specialised for different platforms.

E.g. in the case of Conv2D, there are implementations for x86, Arm, CUDA, etc. These might use different data layouts or other approaches that produce better default code for the target architecture.

The functions are registered using a Python decorator, e.g. @generic.schedule_conv2d_nhwc.register("cpu").
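For context, that decorator is built on TVM’s generic-function dispatch. Below is a minimal sketch of the same mechanism using `tvm.target.generic_func` with made-up function names (not the actual TOPI code): a fallback implementation plus an override registered under the "cpu" target key, resolved by whichever target is active at call time.

```python
import tvm
from tvm import te

# Illustrative only: a generic function with a default body, plus an override
# registered under the "cpu" key -- the same mechanism that
# @generic.schedule_conv2d_nhwc.register("cpu") relies on.
@tvm.target.generic_func
def schedule_my_op(outs):
    # Default schedule, used when no target-specific override matches.
    return te.create_schedule([t.op for t in outs])

@schedule_my_op.register("cpu")
def _schedule_my_op_cpu(outs):
    # Chosen whenever the currently active target carries the "cpu" key
    # (e.g. an llvm target).
    s = te.create_schedule([t.op for t in outs])
    # ... CPU-specific scheduling primitives would go here ...
    return s

# The active target decides which registration runs:
A = te.placeholder((1024,), name="A")
B = te.compute((1024,), lambda i: A[i] * 2.0, name="B")
with tvm.target.Target("llvm"):   # "llvm" carries the "cpu" key
    s = schedule_my_op([B])       # dispatches to the cpu override
```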

However, I am not sure how to go about manually selecting these to use in my graphs.

Starting from an ONNX model loaded into TVM, how do I select which schedule to use for (for example) my Conv2D layers?

This could be because I’ve written my own version of a schedule, or because, say, I want to see what the GPU performance of the x86 schedule would look like.
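For reference, a hedged sketch of where the choice currently surfaces when starting from ONNX: the only knob exposed at build time is the target passed to `relay.build`; the model path, input name and shape below are assumptions.

```python
import onnx
import tvm
from tvm import relay

# Load the ONNX graph and convert it to a Relay module.
onnx_model = onnx.load("model.onnx")            # path is an assumption
shape_dict = {"data": (1, 3, 224, 224)}         # input name/shape are assumptions
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# The target string is the only place a schedule preference is expressed:
# it selects the whole dispatch path (codegen, layouts, TOPI schedules),
# not an individual schedule per layer.
target = "llvm"                                 # or "cuda", "llvm -device=arm_cpu", ...
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)
```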

I’ve looked around the docs, and it seems that the target influences the path of execution. However, this has side effects beyond choosing the TOPI schedule.
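Concretely, the target expands to a set of dispatch keys, and those keys are what the registration decorator matches against; so changing the target to re-route schedule selection also changes codegen and everything else keyed on it. A small sketch (the exact key contents may vary by TVM version):

```python
import tvm

# Each target carries a list of dispatch keys; TOPI registrations are
# looked up against these keys in order.
print(tvm.target.Target("llvm").keys)   # e.g. ('cpu',)
print(tvm.target.Target("cuda").keys)   # e.g. ('cuda', 'gpu')
```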

So my question is: given an ONNX model, how would I explicitly choose the TOPI schedule to use for an operation in the graph, while still keeping my preferred target platform? And where can I learn more about the semantics of the registration decorator?

The short answer is no. If your model is converted by the ONNX frontend, then there is no way to assign a specific TOPI schedule to each layer (op). See this post for how TOPI schedule dispatching works and what we are working on to improve it.

Hello,

If the ONNX model is transformed into a Relay representation, why wouldn’t it be possible to traverse the Relay AST and replace the schedules for the operations one wants to change?
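Traversing the Relay AST itself is straightforward; the sketch below (assuming the standard `ExprVisitor` from `tvm.relay.expr_functor`) collects the conv2d calls in a module. The missing piece the earlier answer refers to is that the IR node does not carry a schedule you could swap out here: the TOPI implementation is only chosen later, during lowering, based on the target.

```python
from tvm.relay.expr_functor import ExprVisitor

class Conv2dCollector(ExprVisitor):
    """Collect every nn.conv2d call encountered in a Relay function."""

    def __init__(self):
        super().__init__()
        self.convs = []

    def visit_call(self, call):
        # `call.op` is the callee; operator calls expose a `.name` attribute.
        if hasattr(call.op, "name") and call.op.name == "nn.conv2d":
            self.convs.append(call)
        super().visit_call(call)   # keep walking the rest of the expression

# Usage, given a Relay module `mod` (e.g. from relay.frontend.from_onnx):
# collector = Conv2dCollector()
# collector.visit(mod["main"])
# print(len(collector.convs), "conv2d calls found")
```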