Adding a custom NNVM op from Python?


I have a custom op implemented in my MXNet fork and would like to deploy the model with NNVM. Currently, I am registering the custom operator in the C++ backend:

/* nnvm/src/top/tensor/ */
NNVM_REGISTER_OP(ring_buffer)
.describe(R"code(Implements a ring buffer, in which a set number of past
inputs are internally cached. The output of the buffer operator is the latest
[length_buffer] outputs.)code" NNVM_ADD_FILELINE)
.set_num_inputs(2)
.set_num_outputs(1)
.set_attr<FMutateInputs>("FMutateInputs", [](const NodeAttrs& attrs) {
    return std::vector<uint32_t>{1};
  })
.set_attr<FInferShape>("FInferShape", RingBufferShape)
.set_attr<FInferType>("FInferType", ElemwiseType<2, 1>)
.add_argument("data", "NDArray-or-Symbol", "Latest input")
.add_argument("buffer", "NDArray-or-Symbol",
              "Buffer storing latest [length_buffer] inputs")
.set_attr<FTVMCompute>("FTVMCompute", [](const NodeAttrs& attrs,
                                         const Array<Tensor>& inputs,
                                         const Array<Tensor>& out_info) {
    /* dummy; will be replaced by a call to */
    LOG(FATAL) << "Reached a dummy implementation; "
               << "must supply a TVM implementation with";
    return Array<Tensor>{ inputs[0] };
  });
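For intuition, the caching behavior the op's description refers to can be sketched in plain Python. This is a toy model of the operator's semantics only, not the actual NNVM/TVM code; the function name and list-based buffer are illustrative assumptions:

```python
from collections import deque

def ring_buffer_step(buffer, data, length_buffer):
    # Hypothetical model: cache the latest input and keep only the
    # most recent `length_buffer` entries, which form the output.
    d = deque(buffer, maxlen=length_buffer)
    d.append(data)
    return list(d)

buf = []
for x in [1, 2, 3, 4]:
    buf = ring_buffer_step(buf, x, length_buffer=3)
print(buf)  # the three most recent inputs: [2, 3, 4]
```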

and then inject the operator implementation (written in TVM) from the Python side:

import topi
import nnvm.top.registry as reg

def compute_ring_buffer(attrs, inputs, _):
    # this function returns the result of tvm.compute(...)
    return topi.nn.ring_buffer(inputs[0], inputs[1], axis=attrs.get_int("axis"))

def schedule_ring_buffer(_, outs, target):
    return topi.generic.schedule_injective(outs)

reg.register_compute('ring_buffer', compute_ring_buffer, level=100)
reg.register_schedule('ring_buffer', schedule_ring_buffer)

This is good enough for now, but I’m wondering whether I can add an op to nnvm.symbol entirely from the Python side.


Hi, did you succeed in doing this?


@ColdCodeCool I ended up implementing a custom op in Relay instead. It is a lot easier to add a new op in Relay, although I had to write shape inference logic in C++.


@ColdCodeCool Take a look at for an example.