How does one create a new target, or context?

As I was investigating the potential usefulness of TVM as a high-level behavioral simulator for a new AI accelerator architecture, the tvm.relay.backend.interpreter.Interpreter class caught my eye.
Its optimize() function appears to be the first (only?) step in moving a generic computation graph towards a particular compute architecture.

I think I know how to generate its mod argument from a TensorFlow design, using the relay.frontend.from_tensorflow() function.
But how do I create its ctx and target arguments?
I’m assuming that both of those should be customized to reflect the nature of my new architecture; is that correct?
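For context, here is roughly what I have working so far, using the stock llvm backend as a stand-in. This is only a sketch: graph_def, shape_dict, and output_names are placeholders for my model, and I assume a new backend would need its own target/device strings registered inside TVM.

```python
import tvm
from tvm import relay

# Import the TensorFlow GraphDef into a Relay module plus parameters.
# graph_def / shape_dict / output_names are model-specific placeholders.
mod, params = relay.frontend.from_tensorflow(graph_def,
                                             shape=shape_dict,
                                             outputs=output_names)

# For an existing backend, `target` selects the code generator and
# `ctx` selects the device to run on, e.g. the LLVM CPU backend:
target = tvm.target.create("llvm")
ctx = tvm.cpu(0)  # same as tvm.context("llvm", 0)

# The Relay interpreter / debug executor is then built against that pair:
executor = relay.create_executor("debug", mod=mod, ctx=ctx, target=target)
```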

Is there a tutorial for this available somewhere?

Thanks!
:slight_smile:

We don’t have a tutorial on this right now, but we’re working on a simple compiler + accelerator tutorial to make it easier for people to plug their own accelerator design into TVM. FCRC made it clear there is real demand for this, and we’re addressing it.

@thierry We have a custom accelerator that uses an OpenCL frontend. We want to include it as a custom device type, like aocl, within the TVM pipeline. How do we go about introducing a new target for it? (We aim to pass our custom accelerator as the target parameter for tvm.context.)
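For reference, this is how an existing OpenCL-flavoured target such as aocl is selected today; ideally our accelerator would plug in the same way. A rough sketch, assuming TVM’s built-in aocl support:

```python
import tvm
from tvm import relay

# The built-in Intel FPGA OpenCL target ("aocl") is picked up via a
# target string plus a matching device context:
target = tvm.target.create("aocl")
ctx = tvm.context("aocl", 0)

# Compilation and deployment then follow the usual Relay flow, e.g.:
# graph, lib, params = relay.build(mod, target=target, params=params)
# module = tvm.contrib.graph_runtime.create(graph, lib, ctx)
```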
