[Error] [ONNX] Relay.frontend.from_onnx


#1

Hi all,
I’m getting the following error while parsing the onnx model. Do I have to implement those operators, or is there something I’m missing?


#2

Hi @cs18mtech11033

Some of the operators you mention are not implemented yet; you can check the currently supported operators in frontend/onnx.py.


#3

Hi @cchung100m
Can you suggest what I should do? Should I implement all the missing operators in frontend/onnx.py?


#4

Hi @cs18mtech11033

You could read frontend/onnx.py and study how the other contributors implement ops; after that, you can start implementing your own.
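At its core, the pattern in frontend/onnx.py is a map from ONNX op names to converter callables; an op missing from the map produces the "not implemented" error from the original post. Below is a self-contained miniature of that idea in plain NumPy; the names `CONVERT_MAP`, `convert`, and `_not` are illustrative, not TVM's API (the real file uses `OnnxOpConverter` subclasses and a convert map built per opset):

```python
import numpy as np

def _not(inputs, attr):
    # ONNX 'Not' is elementwise logical negation.
    return np.logical_not(inputs[0])

# Map from ONNX op name to converter function; any op absent from
# this map triggers the "not implemented" error discussed above.
CONVERT_MAP = {
    "Not": _not,
}

def convert(op_name, inputs, attr=None):
    if op_name not in CONVERT_MAP:
        raise NotImplementedError(
            "Operator {} is not supported.".format(op_name))
    return CONVERT_MAP[op_name](inputs, attr or {})

result = convert("Not", [np.array([True, False])])
# result is array([False, True])
```

Adding support for a new op then amounts to writing one converter and registering it in the map, which is why reading the existing entries first is good advice.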


#5

@cs18mtech11033
For topk, try my code: [Frontend][ONNX][TopK] I've added a TopK op


#6

Hi @zacario-li

Great work! Would you send a PR for adding TopK to frontend/onnx.py?


#7

Hi @zacario-li
I've added your code to the file and rebuilt TVM, but I'm still getting the same error.


#8

Hi @cs18mtech11033

I added an implementation of the 'Not' operator to frontend/onnx.py; would you please help verify the patch? (It is still under review.)

many thanks,


#9

Hi @cchung100m, I've copied your code into frontend/onnx.py and rebuilt it, but I'm still getting the same error. Can you tell me how you tested it?


#10

Hi @cs18mtech11033

It is expected that you will still get the OpNotImplemented error, because my patch only covers the 'Not' operator.


#11

Hi @cchung100m , @zacario-li ,
I just implemented a patch for the ConstantOfShape operator. How do we check whether it is working?


#12

Refer to this:
tests/python/frontend/onnx/test_forward.py
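For reference when writing the test case: ConstantOfShape takes a 1-D shape tensor and produces a tensor of that shape filled with a constant value. A NumPy sketch of the expected semantics, which the test should compare the TVM output against (the helper name `constant_of_shape_ref` is illustrative, not part of TVM or the test file):

```python
import numpy as np

def constant_of_shape_ref(shape_tensor, value=0.0, dtype=np.float32):
    # ONNX ConstantOfShape semantics: the input is a 1-D int64
    # tensor holding the output shape; every element of the
    # output equals `value` (default 0, dtype float32).
    return np.full(tuple(shape_tensor), value, dtype=dtype)

out = constant_of_shape_ref(np.array([4, 3, 2], dtype=np.int64), value=1.0)
# out.shape is (4, 3, 2), every element is 1.0
```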


#13

@Sharath

Please help add test cases for the ConstantOfShape operator in tests/python/frontend/onnx/test_forward.py and submit a PR, thanks.


#14

@cchung100m @zacario-li, I tried the ConstantOfShape operator and the model looks like this:


and when I execute it using the test_forward.py code, I get the following error:

What should I do?


#15

From your error, it looks like there may be a mistake in your input data 'x'.


#16

Hi, I was thinking the same thing:

    def test_ConstantOfShape():
        x = np.array([4, 3, 2]).astype(np.int64)
        tensor_value = onnx.helper.make_tensor("value", onnx.TensorProto.FLOAT,
                                               [1], [1])
        y = np.ones(x, dtype=np.int64)
        ref_node = onnx.helper.make_node(
            'ConstantOfShape',
            inputs=['x'],
            outputs=['y'],
            value=tensor_value,
        )

        graph = helper.make_graph([ref_node],
                                  "ConstantOfShape_test",
                                  inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, list(x.shape))],
                                  outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, list(y.shape))])

        model = helper.make_model(graph, producer_name='ConstantOfShape_test')
        for target, ctx in ctx_list():
            x = np.random.uniform(size=[4, 3, 2]).astype('float32')
            tvm_out = get_tvm_output(model, x, target, ctx, [4, 3, 2], 'float32')

        tvm.testing.assert_allclose(y.shape, tvm_out.shape)

Can you tell what the error in the input is?


#17

Your first input is x = np.array([4, 3, 2]).astype(np.int64), which is the 1-D array [4, 3, 2] (shape (3,)). Inside the loop you then rebind x = np.random.uniform(size=[4, 3, 2]).astype('float32'), whose shape is (4, 3, 2), while the graph declared input 'x' with the first array's shape. You can check that.
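To make the mismatch concrete, here is the difference in plain NumPy (illustration only; `x1` and `x2` stand for the two bindings of `x` in the test above):

```python
import numpy as np

# First binding: a 1-D array holding the *values* 4, 3, 2.
x1 = np.array([4, 3, 2]).astype(np.int64)        # x1.shape == (3,)

# Second binding: a random array whose *shape* is (4, 3, 2).
x2 = np.random.uniform(size=[4, 3, 2]).astype('float32')

# The graph declares its input with x1.shape, i.e. (3,), so
# feeding x2 (shape (4, 3, 2)) does not match the declaration.
```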


#18

I’m getting the same error.
Error/warning is:


#19
    x = np.array([4, 3, 2]).astype(np.int64)
    tensor_value = onnx.helper.make_tensor("value", onnx.TensorProto.FLOAT,
                                           [1], [1])
    y = np.ones(x, dtype=np.int64)
    ref_node = onnx.helper.make_node(
        'ConstantOfShape',
        inputs=['x'],
        outputs=['y'],
        value=tensor_value,
    )

    graph = helper.make_graph([ref_node],
                              "ConstantOfShape_test",
                              inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, list(x.shape))],
                              outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, list(y.shape))])

    model = helper.make_model(graph, producer_name='ConstantOfShape_test')
    for target, ctx in ctx_list():
        tvm_out = get_tvm_output(model, x, target, ctx, [4, 3, 2], 'float32')

    tvm.testing.assert_allclose(y.shape, tvm_out.shape)

Also, the 'x' in inputs=['x'] may need to change to the registered name (for example 'T1'); there is code like:

    graph = helper.make_graph([y],
                              'squeeze_test',
                              inputs = [helper.make_tensor_value_info("in",
                                            TensorProto.FLOAT, list(in_shape))],
                              outputs = [helper.make_tensor_value_info("out",
                                            TensorProto.FLOAT, list(out_shape))])

Here 'in' and 'out' must match the input and output names registered for your op.
You can give it a try.
Good luck!
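The name-matching point can be checked mechanically: every node input name should appear among the graph's declared input names (or be another node's output). A hypothetical helper to spot such a mismatch (not part of TVM or ONNX):

```python
def check_names(graph_inputs, node_inputs):
    # Hypothetical helper: report node input names that are not
    # declared as graph inputs, i.e. the kind of mismatch where
    # a node expects 'x' but the graph declares 'in'.
    return [n for n in node_inputs if n not in graph_inputs]

missing = check_names(["in"], ["x"])
# missing is ['x']: the node wants 'x' but the graph only declares 'in'
```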