I don’t understand how to convert a TensorFlow graph from NHWC format to NCHW format for use in the TVM software stack.
- Following this tutorial, I use `nnvm.frontend.from_tensorflow(graph_def, layout='NCHW')` when importing the TensorFlow graph definition into NNVM. Does this mean that NNVM will generate the graph according to the `'NCHW'` layout?
- Then, when we call `nnvm.compiler.build`, the documentation isn’t quite clear about what `shape` is exactly. If the original input to the graph is `(1, 224, 224, 3)` in `NHWC` layout, which one should we pass to the function: `shape={'input': (1, 224, 224, 3)}` or `shape={'input': (1, 3, 224, 224)}`?
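To be explicit about how the two candidate shapes relate, they describe the same data reordered by a transpose (a plain NumPy sketch, not TVM-specific):

```python
import numpy as np

# The original NHWC input shape from the TensorFlow graph...
x_nhwc = np.zeros((1, 224, 224, 3), dtype='float32')
# ...and the same data reordered to NCHW (channels moved to axis 1):
x_nchw = np.transpose(x_nhwc, (0, 3, 1, 2))
print(x_nchw.shape)  # -> (1, 3, 224, 224)
```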
Example code:

```python
import tensorflow as tf
import nnvm
import tvm
from tvm.contrib import graph_runtime

x = tf.placeholder(dtype=tf.float32, shape=(None, 224, 224, 3), name='input')
w = tf.constant(kernel, dtype=tf.float32, shape=(3, 3, 3, 1), name='w')
y = tf.nn.conv2d(input=x, filter=w, strides=[1, 1, 1, 1],
                 padding='SAME',
                 data_format='NHWC',
                 name='output')

# Assume we train the graph and save it in '.pb' format.
# Load the graph from the '.pb' file.
...

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name='')

sym, params = nnvm.frontend.from_tensorflow(graph_def, layout='NCHW')
shape_dict = {'input': (1, 3, 224, 224)}
dtype_dict = {'input': 'float32'}
graph, lib, params = nnvm.compiler.build(
    graph=sym,
    shape=shape_dict,
    dtype=dtype_dict,
    target=target,
    params=params,
    target_host=target_host)
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**params)
```
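For completeness, this is how I would then run the module, assuming the NCHW shape in `shape_dict` is the right one to pass (the `tvm` calls are commented out here since they need the built module `m` from above; the input data is hypothetical random noise):

```python
import numpy as np

# Hypothetical input already in NCHW layout, matching shape_dict above.
data = np.random.uniform(size=(1, 3, 224, 224)).astype('float32')
print(data.shape)  # -> (1, 3, 224, 224)

# Continuing from the snippet above:
# m.set_input('input', tvm.nd.array(data))
# m.run()
# out = m.get_output(0).asnumpy()
```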