nnvm._base.NNVMError: Cannot find argument 'auto_pad'

The following error occurred when I tried to compile an ONNX model. Is this a problem with NNVM, or is the ONNX model I’m using somehow invalid? I noticed that auto_pad is deprecated (https://github.com/onnx/onnx/blob/master/docs/Operators.md).

But how can I fix the problem? Even a workaround would be fine, I guess.

Traceback (most recent call last):
  File "compile_model.py", line 54, in <module>
    sym, params = nnvm.frontend.from_onnx(onnx_model)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 967, in from_onnx
    sym, params = g.from_onnx(graph, opset)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 822, in from_onnx
    op = self._convert_operator(op_name, inputs, attr, opset)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 923, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 132, in _impl_v1
    custom_check=dimension_constraint())(inputs, attr, params)
  File "/usr/tvm/nnvm/python/nnvm/frontend/common.py", line 107, in __call__
    return get_nnvm_op(op_name)(*inputs, **new_attrs)
  File "/usr/tvm/nnvm/python/nnvm/_ctypes/symbol.py", line 181, in creator
    ctypes.byref(sym_handle)))
  File "/usr/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: Cannot find argument 'auto_pad', Possible Arguments:
----------------
channels : int, required
    The dimensionality of the output space, i.e. the number of output channels in the convolution.
kernel_size : , required
    Specifies the dimensions of the convolution window.
strides : , optional, default=[1,1]
    Specifies the strides of the convolution.
padding : , optional, default=[0,0]
    If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
dilation : , optional, default=[1,1]
    Specifies the dilation rate to use for dilated convolution.
groups : int, optional, default='1'
    Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two convolution layers side by side, each seeing half the input channels and producing half the output channels, both subsequently concatenated.
layout : string, optional, default='NCHW'
    Dimension ordering of input data. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the 'H' and 'W' dimensions.
out_layout : string, optional, default='__undef__'
    Dimension ordering of output. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. Default to be same as input layout.
kernel_layout : string, optional, default='OIHW'
    Dimension ordering of weight. Can be 'OIHW', 'OIHW16o16i', etc. 'O', 'I', 'H', 'W' stands for num_filter, input_channel, height, and width dimensions respectively.
out_dtype : {'float16', 'float32', 'float64', 'int16', 'int32', 'int64', 'int8', 'same', 'uint16', 'uint32', 'uint64', 'uint8'}, optional, default='same'
    Output data type, set to explicit type under mixed precision setting
use_bias : boolean, optional, default=1
    Whether the layer uses a bias vector.
, in operator conv2d(name="", auto_pad="b'VALID'", groups="1", kernel_size="(5, 5)", use_bias="True", strides="(1, 1)", dilation="(1, 1)", channels="12")
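
For reference, a quick way to confirm which nodes still carry auto_pad is to walk the graph with the onnx Python API; a sketch like this should do (the model path is a placeholder):

import onnx

# Report every node that still carries the deprecated auto_pad attribute
# ('model.onnx' stands in for the actual model path).
model = onnx.load('model.onnx')
for node in model.graph.node:
    for attr in node.attribute:
        if attr.name == 'auto_pad':
            print(node.op_type, attr.s)  # e.g. Conv b'VALID'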

Can you check if the patch below works for you?

diff --git a/nnvm/python/nnvm/frontend/onnx.py b/nnvm/python/nnvm/frontend/onnx.py
index ad0acc31..c984dff9 100644
--- a/nnvm/python/nnvm/frontend/onnx.py
+++ b/nnvm/python/nnvm/frontend/onnx.py
@@ -129,6 +129,7 @@ class Conv(OnnxOpConverter):
                 'group': ('groups', 1)
             },
             extras={'use_bias': len(inputs) == 3},
+            ignores=['auto_pad'],
             custom_check=dimension_constraint())(inputs, attr, params)

Otherwise, we need to implement the auto-padding scheme ourselves.
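
For reference, the auto_pad rules from the ONNX spec boil down to a per-axis computation like the sketch below (untested, the function name is mine): VALID means no padding, while SAME_UPPER/SAME_LOWER pad so that the output size equals ceil(input / stride).

import math

def resolve_auto_pad(auto_pad, in_size, kernel, stride, dilation=1):
    # NOTSET defers to the explicit 'pads' attribute (treated as 0 here);
    # VALID means no padding at all.
    if auto_pad in (b'NOTSET', b'VALID'):
        return (0, 0)
    # Pad so that out_size == ceil(in_size / stride), accounting for dilation.
    effective_kernel = (kernel - 1) * dilation + 1
    out_size = math.ceil(in_size / stride)
    total = max((out_size - 1) * stride + effective_kernel - in_size, 0)
    if auto_pad == b'SAME_UPPER':
        return (total // 2, total - total // 2)  # extra element at the end
    return (total - total // 2, total // 2)      # SAME_LOWER: extra at the start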

Unfortunately, it doesn’t work; the same error appears.

That’s a surprise; the same error shouldn’t appear once the attribute is ignored. Did you reinstall after making the change?
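
One quick sanity check is to print which copy Python actually imports:

import nnvm

# A path under site-packages (e.g. an installed egg) means the edited
# source tree is being shadowed and a reinstall is needed.
print(nnvm.__file__)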

Actually, I modified the source code, so I thought I didn’t have to reinstall.

I had to add the ignore to the ‘Pool’ operator converter as well.
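
The change mirrors the Conv patch; from memory, the AttrCvt call inside the Pool converter gets the same entry, roughly like this (a sketch, the exact transforms in your checkout may differ):

# Inside the Pool converter's _impl_v1 (sketch; details may differ):
return AttrCvt(
    op_name=dimension_picker(cls.name),
    transforms={
        'kernel_shape': 'pool_size',
        'pads': ('padding', (0, 0)),
    },
    ignores=['auto_pad'],  # drop the deprecated ONNX attribute here as well
    custom_check=dimension_constraint())(inputs, attr, params)

Unfortunately, now this error comes up: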

Traceback (most recent call last):
  File "/home/martin/Dev/s-tvm/compile_model.py", line 189, in <module>
    compile_model(model_name=model_name, model_path=model_path, output_dir=output_dir)
  File "/home/martin/Dev/s-tvm/compile_model.py", line 93, in compile_model
    dtype=input_dtype_dict)
  File "/home/martin/.local/lib/python3.6/site-packages/nnvm-0.8.0-py3.6.egg/nnvm/compiler/build_module.py", line 270, in build
    ishape, _ = graph_util.infer_shape(graph, **shape)
  File "/home/martin/.local/lib/python3.6/site-packages/nnvm-0.8.0-py3.6.egg/nnvm/compiler/graph_util.py", line 31, in infer_shape
    graph = graph.apply("InferShape")
  File "/home/martin/.local/lib/python3.6/site-packages/nnvm-0.8.0-py3.6.egg/nnvm/graph.py", line 234, in apply
    check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
  File "/home/martin/.local/lib/python3.6/site-packages/nnvm-0.8.0-py3.6.egg/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: Error in operator elemwise_add0: [07:37:27] /home/martin/Dev/tvm/nnvm/src/top/nn/../elemwise_op_common.h:38: Check failed: assign(&dattr, (*vec)[i]) Incompatible attr in node elemwise_add0 at 1-th input: expected [1,180], got [180]

This seems to be the same problem as in this thread: Incompatible attr in node elemwise_add0 at 1-th input: expected [1,256], got [256]

This indicates an issue with the frontend, as there is a difference in how TVM represents one-dimensional vectors (e.g. a bias of shape [180] rather than [1,180]) compared to other frameworks.
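
Concretely, the mismatch is between a rank-1 and a rank-2 tensor: elemwise_add in NNVM requires identical shapes, whereas a numpy-style broadcasting add would accept the pair. A small illustration:

import numpy as np

dense_out = np.zeros((1, 180))  # the shape NNVM infers for the layer output
bias = np.zeros((180,))         # the shape the ONNX initializer provides
# elemwise_add rejects [1,180] + [180] because the shapes are not identical;
# under numpy-style broadcasting rules the pair is fine:
print((dense_out + bias).shape)  # (1, 180)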

And by ‘frontend’ you mean the part I used to create/export the ONNX model?

No, the NNVM frontend for ONNX.

My GitHub issue was closed and I was told to start a discussion here. Should I reopen the issue, since it seems to be a bug in NNVM?

Yes, you can open a new issue with the steps to reproduce.
Please attach the ONNX model file if possible, to help developers reproduce it quickly.
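
A minimal, self-contained script usually makes triage much faster; something along these lines, where the model path, input name, and shape are placeholders for your setup:

import onnx
import nnvm
import nnvm.frontend
import nnvm.compiler

# Minimal reproduction sketch: load the ONNX model, convert it, and build.
# 'model.onnx', 'input_0', and the input shape are placeholders.
onnx_model = onnx.load('model.onnx')
sym, params = nnvm.frontend.from_onnx(onnx_model)
graph, lib, params = nnvm.compiler.build(
    sym, target='llvm', shape={'input_0': (1, 1, 28, 28)}, params=params)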