Adding a custom NNVM op from Python?

I have a custom op implemented in my MXNet fork and would like to deploy the model with NNVM. Currently, I am adding the custom operator in the C++ backend:

/* nnvm/src/top/tensor/buffer_op.cc */
NNVM_REGISTER_OP(ring_buffer)
.describe(R"code(Implements a ring buffer, in which a set number of past
inputs are internally cached. The output of the buffer operator is the latest
[length_buffer] outputs.)code" NNVM_ADD_FILELINE)
.set_num_inputs(2)
.set_num_outputs(1)
.set_attr_parser(ParamParser<RingBufferParam>)
// Mark the second input (the buffer) as mutated in place.
.set_attr<FMutateInputs>(
  "FMutateInputs", [](const NodeAttrs& attrs) {
    return std::vector<uint32_t>{1};
  })
.set_attr<FInferShape>("FInferShape", RingBufferShape)
// Both inputs and the single output share one dtype.
.set_attr<FInferType>("FInferType", ElemwiseType<2, 1>)
.add_argument("data", "NDArray-or-Symbol", "Latest input")
.add_argument("buffer", "NDArray-or-Symbol",
              "Buffer storing latest [length_buffer] inputs")
.add_arguments(RingBufferParam::__FIELDS__())
.set_attr<FTVMCompute>(
  "FTVMCompute", [](const NodeAttrs& attrs,
                    const Array<Tensor>& inputs,
                    const Array<Tensor>& out_info) {
    /* dummy; will be replaced by a call to nnvm.top.register_compute() */
    LOG(FATAL) << "Reached a dummy implementation; "
               << "must supply a TVM implementation with nnvm.top.register_compute()";
    return Array<Tensor>{ inputs[0] };
})
.set_support_level(1);

and then inject the operator implementation (written with TVM) from the Python side:

import tvm
import topi
import nnvm.top

def compute_ring_buffer(attrs, inputs, _):
  # topi.nn.ring_buffer is defined in my fork and returns the result
  # of tvm.compute(...)
  return topi.nn.ring_buffer(inputs[0], inputs[1], axis=attrs.get_int("axis"))

def schedule_ring_buffer(_, outs, target):
  with tvm.target.create(target):
    return topi.generic.schedule_injective(outs)

nnvm.top.register_compute('ring_buffer', compute_ring_buffer, level=100)
nnvm.top.register_schedule('ring_buffer', schedule_ring_buffer)
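
For reference, once the C++ registration above is compiled into the fork, NNVM auto-generates a Python binding for the op, so it shows up under nnvm.symbol like any built-in operator. A minimal usage sketch follows; the shapes and the axis/length_buffer attribute names are assumptions (they would come from RingBufferParam, which is not shown above):

import nnvm
import nnvm.compiler

# Both inputs are ordinary symbols; 'buffer' is the state tensor
# that FMutateInputs marks as mutated in place.
data = nnvm.symbol.Variable("data")
buf = nnvm.symbol.Variable("buffer")
# Attribute names are hypothetical, assumed to match RingBufferParam.
out = nnvm.symbol.ring_buffer(data, buf, axis=0, length_buffer=4)

graph = nnvm.graph.create(out)
deploy_graph, lib, params = nnvm.compiler.build(
    graph, target="llvm",
    shape={"data": (1, 16), "buffer": (4, 16)})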

This is good enough for now, but I'm wondering if I can add the op to nnvm.symbol entirely from the Python side.

Hi, did you succeed in doing this?

@ColdCodeCool I ended up implementing the custom op in Relay instead. It is a lot easier to add a new op in Relay, although I still had to write the shape inference logic in C++.
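
For anyone following along, the Python half of a Relay custom op looks roughly like the sketch below (pre-0.7 Relay API). The register_compute/register_schedule helpers and schedule_injective are the ones Relay ships in tvm.relay.op.op; the ring_buffer op itself and its TOPI implementation are assumptions from my fork. The op declaration (RELAY_REGISTER_OP) and the type relation, i.e. the shape inference logic I mentioned, still live in C++:

import topi
from tvm.relay.op import op as _reg

def compute_ring_buffer(attrs, inputs, out_type, target):
  # Assumed TOPI implementation from my fork, as in the NNVM version.
  return [topi.nn.ring_buffer(inputs[0], inputs[1], axis=attrs.axis)]

_reg.register_compute("ring_buffer", compute_ring_buffer)
# Reuse the stock injective schedule, the same choice as in the NNVM version.
_reg.register_schedule("ring_buffer", _reg.schedule_injective)
_reg.register_pattern("ring_buffer", _reg.OpPattern.INJECTIVE)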

@ColdCodeCool Take a look at https://github.com/dmlc/tvm/pull/4039 for an example.


Thank you so much, I will try it out.