TensorFlow compilation error

Traceback (most recent call last):
  File "x.py", line 142, in
    graph, lib, params = nnvm.compiler.build(sym, shape=shape_dict, target=target, target_host=target_host, dtype=dtype_dict, params=params)
  File "/homed/lidw/tvm/nnvm/python/nnvm/compiler/build_module.py", line 270, in build
    ishape, _ = graph_util.infer_shape(graph, **shape)
  File "/homed/lidw/tvm/nnvm/python/nnvm/compiler/graph_util.py", line 31, in infer_shape
    graph = graph.apply("InferShape")
  File "/homed/lidw/tvm/nnvm/python/nnvm/graph.py", line 234, in apply
    check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
  File "/homed/lidw/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: Error in operator model_with_buckets/embedding_attention_decoder_6/attention_decoder/attention_decoder/MatMul: [21:57:07] /homed/lidw/tvm-zx/nnvm/src/top/nn/nn.cc:58: Operator dense(use_bias=False, units=128, name=model_with_buckets/embedding_attention_decoder_6/attention_decoder/attention_decoder/MatMul) expects weight's shape to be [128,128], but got [128,384].

I got this shape error when running a TensorFlow OCR model. If anyone has ideas, please comment directly; much appreciated.

Importing completed successfully; this happened during the compilation step.
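
For context, the flow here is the usual two-step pipeline: import the frozen TensorFlow graph into NNVM, then compile it. A rough sketch of that flow (the model path, input name, and shapes below are placeholders, not from the actual script):

    import tensorflow as tf
    import nnvm
    import nnvm.compiler
    import nnvm.frontend

    # Step 1: import the frozen TensorFlow graph into an NNVM symbol (this succeeded).
    with tf.gfile.GFile("ocr_model.pb", "rb") as f:        # placeholder model path
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    sym, params = nnvm.frontend.from_tensorflow(graph_def)

    # Step 2: compile; shape inference runs inside build() and is where the
    # "expects weight's shape to be [128,128], but got [128,384]" error is raised.
    shape_dict = {"input": (1, 32, 100, 3)}                # placeholder input shape
    graph, lib, params = nnvm.compiler.build(
        sym, target="llvm", shape=shape_dict, params=params)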

This seems like the data layout is getting its wires crossed somewhere. Are you using an NHWC or NCHW data layout?

Thank you for the reply. I am using NHWC; does that have anything to do with it? Also, I added support for the old TensorFlow (1.4) 'gather' op, based on the built-in 'take', and this error occurs exactly after that operator. I hard-coded axis=0 to support 'gather'; could that be affecting it?
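
For background, tf.gather with its default axis does match a take along axis 0; a standalone snippet (not from the thread, TF 1.x style) illustrating the equivalence:

    import numpy as np
    import tensorflow as tf

    params_np = np.random.uniform(size=(4, 3)).astype("float32")
    indices_np = np.array([2, 0, 1], dtype="int32")

    with tf.Session() as sess:
        # tf.gather with no axis argument gathers along the first dimension
        tf_out = sess.run(tf.gather(tf.constant(params_np), tf.constant(indices_np)))

    # take with axis=0 computes the same result
    np_out = np.take(params_np, indices_np, axis=0)
    assert np.allclose(tf_out, np_out)

If the model ever calls gather with a non-default axis, however, hard-coding axis=0 would diverge from TensorFlow's behavior.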

Can you first make sure the gather implementation is correct by adding a test case that matches the usage in your model?

Hi, if it passes the checks during import, does that mean my gather implementation is good? Testing it with my full model would be very complicated. Here is my gather:
def _gather():
    def _impl(inputs, attr, params):
        new_input = []
        new_input.append(inputs.pop(0))
        new_input.append(inputs.pop(0))
        axis = 0
        return AttrCvt(op_name='take',
                       extras={'axis': axis},
                       ignores=['Tindices', 'Tparams', 'validate_indices',
                                'Taxis', '_class'])(new_input, attr)
    return _impl

No, import only maps the TensorFlow operator to an NNVM symbol.

What we need to do is write a forward test case for the Gather implementation and verify it (compare the TF output with the TVM output).

You may refer to ./nnvm/tests/python/frontend/tensorflow/test_forward.py.

Does no error mean it passed all the test cases? If so, what might be the reason for this problem? Thanks.

Not all the existing test cases. After adding the Gather implementation to the frontend, write a forward test case and compare the TF and TVM outputs. If this passes, then we can test the model.

Thanks, I did so using test_forward.py with TensorFlow 1.4, and it now passes all of the tests below:
def test_forward_gather():
    '''test gather layer'''
    _test_gather((4,), (1,), 1, 'int32')
    _test_gather((4,), (1,), 1, 'float32')
    _test_gather((1,4), (1,), [0], 'int32')
    _test_gather((4,), (1,2,2), [[[1,0],[0,1]]], 'float32')
    _test_gather((2,2), (1,2,2), [[[1,0],[0,1]]], 'int32')
    _test_gather((2,2), (1,2,2), [[[1,0],[0,1]]], 'int32')
    _test_gather((2,2), (1,2,2), [[[1,0],[0,1]]], 'float32')
    _test_gather((3,3,3), (1,1,2), [[[1,0]]], 'int32')
    _test_gather((3,3,3), (1,1,2), [[[1,0]]], 'int32')
    _test_gather((4,3,5,6), (1,4), [[2,1,0,0]], 'float32')
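
The _test_gather helper itself is not shown above; a minimal sketch of what such a helper could look like, assuming a compare_tf_with_tvm-style utility from test_forward.py (the body below is a reconstruction consistent with the calls above, not the actual implementation):

    import numpy as np
    import tensorflow as tf

    def _test_gather(ip_shape, indice_shape, indice_value, dtype):
        '''Build a small tf.gather graph and compare the TF and TVM outputs.'''
        tf.reset_default_graph()
        in_data = tf.placeholder(dtype, shape=ip_shape, name="in_data")
        indices = tf.constant(np.asarray(indice_value, dtype="int32").reshape(indice_shape))
        tf.gather(in_data, indices, name="gather")
        np_data = np.random.uniform(1, 10, size=ip_shape).astype(dtype)
        # compare_tf_with_tvm is assumed to be the comparison helper defined in
        # test_forward.py; it runs both TF and TVM and asserts the outputs match.
        compare_tf_with_tvm(np_data, "in_data:0", "gather:0")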

By the way, as we can see in the traceback, 'ishape, _ = graph_util.infer_shape(graph, **shape)' infers shapes from the input 'graph', which should be an NNVM graph. How can I inspect the information inside of it?

Not sure I understood your question, but infer_shape is a pass implemented in nnvm/src/pass/infer_shape_type.cc.
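
From the Python side, an NNVM graph can also be inspected directly; for example (a sketch reusing the sym and shape_dict from the build call in the traceback):

    import nnvm.graph
    from nnvm.compiler import graph_util

    g = nnvm.graph.create(sym)      # wrap the imported symbol in a graph
    print(g.ir())                   # human-readable IR listing of the nodes
    print(g.json())                 # full JSON form, including node attributes

    # Run shape inference by hand to see the shapes being propagated.
    in_shapes, out_shapes = graph_util.infer_shape(g, **shape_dict)
    print(in_shapes, out_shapes)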