The following error occurred when I tried to compile an ONNX model. Is this a problem in NNVM, or is the ONNX model I'm using somehow invalid? I noticed that auto_pad is deprecated (https://github.com/onnx/onnx/blob/master/docs/Operators.md).
But how can I fix the problem? Even a workaround would be fine, I guess.
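One workaround I have been considering (a sketch only, not verified against the NNVM frontend): since auto_pad="VALID" in ONNX is equivalent to the default of explicit zero padding, the deprecated attribute could simply be stripped from every Conv node before conversion. The helper below assumes nodes shaped like onnx.NodeProto (an `op_type` string and an `attribute` list whose entries have `name` and `s` fields); with the real onnx package you would call it on `model.graph.node` after `onnx.load()` and before `nnvm.frontend.from_onnx(model)`.

```python
def strip_auto_pad(nodes):
    """Drop auto_pad="VALID" attributes from Conv nodes, in place."""
    for node in nodes:
        if node.op_type != "Conv":
            continue
        kept = [a for a in node.attribute
                if not (a.name == "auto_pad" and a.s in (b"VALID", "VALID"))]
        # Mutate the attribute container in place; this idiom also works
        # for protobuf repeated fields, not just plain Python lists.
        del node.attribute[:]
        node.attribute.extend(kept)
    return nodes
```

Note this only handles the "VALID" case; if a model used auto_pad="SAME_UPPER" or "SAME_LOWER", you would instead have to compute explicit `pads` values from the kernel size and strides, since those modes are not equivalent to zero padding.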
```
Traceback (most recent call last):
  File "compile_model.py", line 54, in <module>
    sym, params = nnvm.frontend.from_onnx(onnx_model)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 967, in from_onnx
    sym, params = g.from_onnx(graph, opset)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 822, in from_onnx
    op = self._convert_operator(op_name, inputs, attr, opset)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 923, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/usr/tvm/nnvm/python/nnvm/frontend/onnx.py", line 132, in _impl_v1
    custom_check=dimension_constraint())(inputs, attr, params)
  File "/usr/tvm/nnvm/python/nnvm/frontend/common.py", line 107, in __call__
    return get_nnvm_op(op_name)(*inputs, **new_attrs)
  File "/usr/tvm/nnvm/python/nnvm/_ctypes/symbol.py", line 181, in creator
    ctypes.byref(sym_handle)))
  File "/usr/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: Cannot find argument 'auto_pad', Possible Arguments:
----------------
channels : int, required
    The dimensionality of the output space, i.e. the number of output channels in the convolution.
kernel_size : , required
    Specifies the dimensions of the convolution window.
strides : , optional, default=[1,1]
    Specifies the strides of the convolution.
padding : , optional, default=[0,0]
    If padding is non-zero, then the input is implicitly zero-padded on both sides for padding number of points
dilation : , optional, default=[1,1]
    Specifies the dilation rate to use for dilated convolution.
groups : int, optional, default='1'
    Controls the connections between inputs and outputs. At groups=1, all inputs are convolved to all outputs. At groups=2, the operation becomes equivalent to having two convolution layers side by side, each seeing half the input channels, and producing half the output channels, and both subsequently concatenated.
layout : string, optional, default='NCHW'
    Dimension ordering of input data. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. Convolution is applied on the 'H' and 'W' dimensions.
out_layout : string, optional, default='__undef__'
    Dimension ordering of output. Can be 'NCHW', 'NHWC', etc. 'N', 'C', 'H', 'W' stands for batch, channel, height, and width dimensions respectively. Default to be same as input layout.
kernel_layout : string, optional, default='OIHW'
    Dimension ordering of weight. Can be 'OIHW', 'OIHW16o16i', etc. 'O', 'I', 'H', 'W' stands for num_filter, input_channel, height, and width dimensions respectively.
out_dtype : {'float16', 'float32', 'float64', 'int16', 'int32', 'int64', 'int8', 'same', 'uint16', 'uint32', 'uint64', 'uint8'}, optional, default='same'
    Output data type, set to explicit type under mixed precision setting
use_bias : boolean, optional, default=1
    Whether the layer uses a bias vector.
, in operator conv2d(name="", auto_pad="b'VALID'", groups="1", kernel_size="(5, 5)", use_bias="True", strides="(1, 1)", dilation="(1, 1)", channels="12")
```