[Error] [ONNX] Relay.frontend.from_onnx

Hi all,
I’m getting the following error while parsing an ONNX model. Do I have to implement those operators, or is there something I’m missing?

Hi @cs18mtech11033

Some of the operators you mention are not implemented currently; you can check the currently supported operators in frontend/onnx.py.

Hi @cchung100m
Can you suggest what I should do? Should I implement all the missing operators in frontend/onnx.py?

Hi @cs18mtech11033

You could read frontend/onnx.py and study how other contributors implemented their ops; after that, you can start implementing your own.
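For reference, the converters in frontend/onnx.py follow a dispatch-table pattern: each ONNX op name maps to a converter function. Here is a simplified, self-contained sketch of that pattern; all names here are illustrative, not the real TVM API (real converters return Relay expressions, while this sketch returns plain tuples):

```python
# Simplified sketch of the dispatch-table pattern used by frontend/onnx.py.
# Names are illustrative only; the real frontend builds Relay expressions.
def _not(inputs, attrs):
    # A real converter would emit the Relay logical_not of inputs[0];
    # here we just show the call shape with a tuple.
    return ("logical_not", inputs[0])

_convert_map = {
    "Not": _not,
}

def convert_op(op_name, inputs, attrs):
    """Look up the converter for op_name, or fail like the real frontend
    does when an operator is not yet supported."""
    if op_name not in _convert_map:
        raise NotImplementedError(
            "Operator {} is not supported.".format(op_name))
    return _convert_map[op_name](inputs, attrs)
```

Adding support for a new operator then amounts to writing one converter function and registering it in the map.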

@cs18mtech11033
For TopK, try my code: [Frontend][ONNX][TopK] I've added a TopK op

Hi @zacario-li

Great work! Would you send a PR for adding TopK to frontend/onnx.py?

Hi @zacario-li
I’ve added your code to the file and rebuilt TVM, but I’m still getting the same error.

Hi @cs18mtech11033

I added an implementation of the ‘Not’ operator to frontend/onnx.py; would you please help verify the patch? (It is still under review.)

many thanks,
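For what it’s worth, ONNX ‘Not’ is element-wise boolean negation, so a converter essentially maps it to the framework’s logical-not op. In NumPy terms:

```python
import numpy as np

# ONNX 'Not' performs element-wise boolean negation,
# equivalent to numpy.logical_not:
x = np.array([True, False, True])
result = np.logical_not(x)  # array([False, True, False])
```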

Hi @cchung100m, I’ve copied your code to frontend/onnx.py and built it again. I’m still getting the same error. Can you tell me how you tested it?

Hi @cs18mtech11033

It is normal that you still get the OpNotImplemented error, because my patch only covers the ‘Not’ operator.

Hi @cchung100m , @zacario-li ,
I just implemented a patch for the ConstantOfShape operator. How do we check whether it is working?

Refer to tests/python/frontend/onnx/test_forward.py.

@Sharath

Please add test cases for the ConstantOfShape operator in tests/python/frontend/onnx/test_forward.py and submit a PR, thanks.

@cchung100m @zacario-li, I tried implementing the ConstantOfShape operator, and the model looks like this:


and when I execute it using the test_forward.py code, I get the following error.
What should I do?

From your error, there may be a mistake in your input data ‘x’.

Hi, I was thinking the same thing. My test is:

def test_ConstantOfShape():
    x = np.array([4, 3, 2]).astype(np.int64)
    tensor_value = onnx.helper.make_tensor("value", onnx.TensorProto.FLOAT,
                                           [1], [1])
    y = np.ones(x, dtype=np.int64)
    ref_node = onnx.helper.make_node(
        'ConstantOfShape',
        inputs=['x'],
        outputs=['y'],
        value=tensor_value,
    )

    graph = helper.make_graph([ref_node],
                              "ConstantOfShape_test",
                              inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, list(x.shape))],
                              outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, list(y.shape))])

    model = helper.make_model(graph, producer_name='ConstantOfShape_test')
    for target, ctx in ctx_list():
        x = np.random.uniform(size=[4, 3, 2]).astype('float32')
        tvm_out = get_tvm_output(model, x, target, ctx, [4, 3, 2], 'float32')

    tvm.testing.assert_allclose(y.shape, tvm_out.shape)

Can you tell what the error in the input is?

Your first input is x = np.array([4, 3, 2]).astype(np.int64), which is array([4, 3, 2]); your second input is x = np.random.uniform(size=[4, 3, 2]).astype('float32'), whose x.shape is (4, 3, 2). You can check that.
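To make the distinction concrete, here is a NumPy-only sketch: the tensor fed to ConstantOfShape is the list of output dimensions (as data), and the output then has those dimensions, much like np.full:

```python
import numpy as np

# The input to ConstantOfShape is a 1-D int64 tensor holding the output
# dimensions; with value = 1.0 the op behaves like np.full:
shape_input = np.array([4, 3, 2], dtype=np.int64)  # data fed to the node
out = np.full(tuple(shape_input), 1.0, dtype=np.float32)
# out.shape is (4, 3, 2), and every element equals 1.0
```

So overwriting x with a (4, 3, 2) float32 array gives the graph data whose shape no longer matches the declared 1-D int64 input.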

I’m still getting the same error/warning. My updated code is:

x = np.array([4, 3, 2]).astype(np.int64)
tensor_value = onnx.helper.make_tensor("value", onnx.TensorProto.FLOAT,
                                       [1], [1])
y = np.ones(x, dtype=np.int64)
ref_node = onnx.helper.make_node(
    'ConstantOfShape',
    inputs=['x'],
    outputs=['y'],
    value=tensor_value,
)

graph = helper.make_graph([ref_node],
                          "ConstantOfShape_test",
                          inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, list(x.shape))],
                          outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, list(y.shape))])

model = helper.make_model(graph, producer_name='ConstantOfShape_test')
for target, ctx in ctx_list():
    tvm_out = get_tvm_output(model, x, target, ctx, [4, 3, 2], 'float32')

tvm.testing.assert_allclose(y.shape, tvm_out.shape)

Also, the ‘x’ in inputs=['x'] may need to change to ‘T1’; there is existing code like:

    graph = helper.make_graph([y],
                              'squeeze_test',
                              inputs = [helper.make_tensor_value_info("in",
                                            TensorProto.FLOAT, list(in_shape))],
                              outputs = [helper.make_tensor_value_info("out",
                                            TensorProto.FLOAT, list(out_shape))])

The ‘in’ and ‘out’ here must match the input and output names registered for your op.
You can give it a try.
Good luck!