[transform.ConvertLayout] transform inception_v1 model from from_tensorflow.py failed

CPU: Intel x86, TVM: 0.7 dev

After running the official from_tensorflow tutorial code, I tried auto-tuning. But auto-tuning only supports "NCHW", so I need to convert the InceptionV1 model from "NHWC" to "NCHW".

Error:

Traceback (most recent call last):
  File "from_tensorflow2.py", line 148, in <module>
    mod = relay.transform.ConvertLayout('NCHW')(mod)

  File "/tvm/relay/transform.py", line 194, in __call__
    return _transform.RunPass(self, mod)

  File "/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) 9   libtvm.dylib                        0x000000011adf2e61 tvm::NodeFunctor<tvm::RelayExpr (tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)>::operator()(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*) const + 305
  [bt] (7) 8   libtvm.dylib                        0x000000011adf4938 tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::'lambda4'(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)::__invoke(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*) + 24
  [bt] (6) 7   libtvm.dylib                        0x000000011adf1e4b tvm::relay::ForwardRewriter::VisitExpr_(tvm::relay::CallNode const*) + 1627
  [bt] (5) 6   libtvm.dylib                        0x000000011adf5394 tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void>&, tvm::runtime::ObjectRef>(tvm::relay::Call const&&&, tvm::Array<tvm::RelayExpr, void>&&&, tvm::runtime::ObjectRef&&) const + 260
  [bt] (4) 5   libtvm.dylib                        0x000000011ad9bd2d std::__1::__function::__func<void tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>(tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&))::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*), std::__1::allocator<void tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>(tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&))::'lambda'(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 109
  [bt] (3) 4   libtvm.dylib                        0x000000011ad9bdc2 void tvm::runtime::detail::unpack_call_dispatcher<tvm::RelayExpr, 0, 3, tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::run<tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue, tvm::runtime::TVMArgValue>(tvm::RelayExpr (* const&)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&), tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&, tvm::runtime::TVMArgValue&&) + 82
  [bt] (2) 3   libtvm.dylib                        0x000000011adaf54c tvm::RelayExpr tvm::relay::LayoutRewriter<tvm::relay::convert_op_layout::ConvertTransformMemorizer>(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&) + 2396
  [bt] (1) 2   libtvm.dylib                        0x000000011ad979ff tvm::TensorTypeNode const* tvm::RelayExprNode::type_as<tvm::TensorTypeNode>() const + 399
  [bt] (0) 1   libtvm.dylib                        0x000000011a712c89 dmlc::LogMessageFatal::~LogMessageFatal() + 57
  File "/Users/heliqi/learn/tvm/tvm/include/tvm/ir/expr.h", line 412
TVMError: Check failed: node != nullptr: Expected type to be relay.TensorType, but get relay.TypeCall

Transform code:

# Load model
with tf_compat_v1.gfile.GFile(model_path, 'rb') as f:
    graph_def = tf_compat_v1.GraphDef()
    graph_def.ParseFromString(f.read())
    graph = tf.import_graph_def(graph_def, name='')
    # Call the utility to import the graph definition into default graph.
    graph_def = tf_testing.ProcessGraphDefParam(graph_def)
    # Add shapes to the graph.
    with tf_compat_v1.Session() as sess:
        graph_def = tf_testing.AddShapesToGraphDef(sess, 'softmax')

shape_dict = {'DecodeJpeg/contents': shape_size}
dtype_dict = {'DecodeJpeg/contents': dtype}
mod, params = relay.frontend.from_tensorflow(graph_def,
                                             layout=layout,
                                             shape=shape_dict)
#transform
mod = relay.transform.ConvertLayout('NCHW')(mod)

Is this from the official Tutorial? I will take a look today.

The load-model code is from the official tutorial, from_tensorflow (https://docs.tvm.ai/tutorials/frontend/from_tensorflow.html#sphx-glr-tutorials-frontend-from-tensorflow-py).

I get the same error with both InceptionV1 (the model from the official tutorial) and InceptionV3 (https://github.com/dmlc/web-data/tree/master/tensorflow/models/InceptionV3).

Thanks for pointing out the issue. This is caused by the Tensor Array helper functions the TensorFlow importer emits: their relay.TypeCall types are not TensorTypes, which is exactly what ConvertLayout trips over. Running RemoveUnusedFunctions first drops them. I will send a PR to update the docs.

Here is the patch

diff --git a/tutorials/frontend/from_tensorflow.py b/tutorials/frontend/from_tensorflow.py
index 55eb3d0..f6b8f0e 100644
--- a/tutorials/frontend/from_tensorflow.py
+++ b/tutorials/frontend/from_tensorflow.py
@@ -132,6 +132,11 @@ mod, params = relay.frontend.from_tensorflow(graph_def,
                                              layout=layout,
                                              shape=shape_dict)

+seq = relay.transform.Sequential([relay.transform.RemoveUnusedFunctions(),
+                                  relay.transform.ConvertLayout('NCHW')])
+with relay.transform.PassContext(opt_level=3):
+    mod = seq(mod)
+
 print("Tensorflow protobuf imported to relay frontend.")
 ######################################################################
 # Relay Build

PR for doc fix - https://github.com/apache/incubator-tvm/pull/4834

I tried it and it works. Thank you very much.