TensorFlow model to TVM: outputs lost

I have a ResNet V2 SavedModel: http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v2_fp32_savedmodel_NCHW.tar.gz

I converted it to a frozen GraphDef, imported it into TVM Relay, and ran it, but ran into a problem.

ResNet V2 has two outputs, but after importing into TVM, I can only get one of them.

Here is the code, followed by the error log.

import tensorflow as tf
import tvm
from tvm import relay

savedmodel_path = "./resnet"
output_node_names = "ArgMax,softmax_tensor"
input_node = "input_tensor"

target = "llvm"
target_host = "llvm"
layout = None
ctx = tvm.cpu(0)

with tf.Session(graph=tf.Graph()) as sess:
  tf.saved_model.loader.load(sess, ["serve"], savedmodel_path)
  graph = tf.get_default_graph()
  output_graph_def = tf.graph_util.convert_variables_to_constants(
      sess,
      sess.graph_def,
      output_node_names.split(",")
  )

mod, params = relay.frontend.from_tensorflow(output_graph_def,
                                             layout=layout)
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod,
                                     target=target,
                                     target_host=target_host,
                                     params=params)
from tvm.contrib import graph_runtime
import numpy as np
m = graph_runtime.create(graph, lib, ctx)
# set inputs
m.set_input(input_node, tvm.nd.array(np.zeros(shape=(64, 224, 224, 3), dtype="float32")))
m.set_input(**params)

# execute
m.run()
# get outputs
tvm_output = m.get_output(0, tvm.nd.empty((64, 1001), "float32"))
print("finish")
print(tvm_output)
tvm_output_2 = m.get_output(1, tvm.nd.empty((64,), "int64"))

Here is the error log:

TVMError                                  Traceback (most recent call last)
<ipython-input> in <module>
     12 print("finish")
     13 print(tvm_output)
---> 14 tvm_output_2 = m.get_output(1, tvm.nd.empty((64,), 'int64'))

~/codes/github/tvm/python/tvm/contrib/graph_runtime.py in get_output(self, index, out)
    206         """
    207         if out:
--> 208             self._get_output(index, out)
    209         return out
    210

~/codes/github/tvm/python/tvm/_ffi/_ctypes/function.py in __call__(self, *args)
    205             self.handle, values, tcodes, ctypes.c_int(num_args),
    206             ctypes.byref(ret_val), ctypes.byref(ret_tcode)) != 0:
--> 207             raise get_last_ffi_error()
    208         _ = temp_args
    209         _ = args

TVMError: Traceback (most recent call last):
  [bt] (5) ??? 0x00007ffee8c6fbb0 0x0 + 140732803775408
  [bt] (4) _ctypes.cpython-37m-darwin.so ffi_call_unix64 + 79
  [bt] (3) libtvm.dylib TVMFuncCall + 70
  [bt] (2) libtvm.dylib std::__1::__function::__func<tvm::runtime::GraphRuntime::GetFunction(...)::$_6, ...>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 161
  [bt] (1) libtvm.dylib tvm::runtime::GraphRuntime::CopyOutputTo(int, DLTensor*) + 274
  [bt] (0) libtvm.dylib dmlc::LogMessageFatal::~LogMessageFatal() + 57
  File "/Users/yuweilong/codes/github/tvm/src/runtime/graph/graph_runtime.cc", line 171
TVMError: Check failed: static_cast<size_t>(index) < outputs_.size() (1 vs. 1) :

The log seems to say there is only one output, but I'm trying to get the second one.

But in TensorFlow itself, I get both outputs.

And here is the SavedModel signature:

signature_def['predict']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input'] tensor_info:
        dtype: DT_FLOAT
        shape: (64, 224, 224, 3)
        name: input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['classes'] tensor_info:
        dtype: DT_INT64
        shape: (64)
        name: ArgMax:0
    outputs['probabilities'] tensor_info:
        dtype: DT_FLOAT
        shape: (64, 1001)
        name: softmax_tensor:0
  Method name is: tensorflow/serving/predict

from_tensorflow takes an optional argument, outputs: a list of the names of your output tensors. That's how you let TVM know there is more than one output; otherwise it will just guess that the last node is the output.

By the way, this happens because a TensorFlow GraphDef doesn't actually store any information about which nodes are inputs or outputs. Other formats, such as ONNX, save this information within the graph. For the same reason, it's also safer to pass the shape argument (a dict mapping input names to shapes) to guarantee that TVM gets the inputs right too.
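Concretely, for this model the import could look like the sketch below. The node names and shapes come from the SavedModel signature above; note the signature prints tensor names with a ":0" output-slot suffix, which the frontend's node names do not use, so it is stripped first. The from_tensorflow call itself is left as a comment since it requires a TVM build; treat it as a sketch, not a tested invocation.

```python
# Tensor names as printed in the SavedModel signature; from_tensorflow wants
# the node names, i.e. without the ":0" output-slot suffix.
signature_outputs = ["ArgMax:0", "softmax_tensor:0"]
output_names = [name.split(":")[0] for name in signature_outputs]

# Input name and shape, also taken from the signature; passing the shape
# explicitly means TVM does not have to guess the input.
shape_dict = {"input_tensor": (64, 224, 224, 3)}

# Sketch of the import call (requires tvm):
# mod, params = relay.frontend.from_tensorflow(output_graph_def,
#                                              layout=layout,
#                                              shape=shape_dict,
#                                              outputs=output_names)
```

With both outputs declared, get_output(0) and get_output(1) should then correspond to the entries of outputs in order.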

Thank you! It works!

Hello! Another question: I'm using the get_output and set_input APIs in C++.

I noticed that I can call set_input(0, data) and also set_input("data", data).

What is the mapping from the index to the node?

And likewise, how do the output indices map to node names?

For example, if I have two outputs, which one is output 0 and which is output 1? Is there a rule? Thank you!