[Resolved] Problem with ResizeBilinear in TensorFlow

Hi,

I have a model in TensorFlow with a ResizeBilinear operator. When I try to generate the Relay IR, I get the following error:

  File "/home/user/tvm/python/tvm/relay/frontend/tensorflow.py", line 2473, in from_tensorflow
    mod, params = g.from_tensorflow(graph, layout, shape, outputs)
  File "/home/user/tvm/python/tvm/relay/frontend/tensorflow.py", line 2117, in from_tensorflow
    op = self._convert_operator(node.op, inputs, attr, graph)
  File "/home/user/tvm/python/tvm/relay/frontend/tensorflow.py", line 2436, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/home/user/tvm/python/tvm/relay/frontend/tensorflow.py", line 605, in _impl
    size = attr['_output_shapes'][0][1:3]
TypeError: 'NoneType' object is not subscriptable

It seems that the _output_shapes attribute has a value of None.

I also found the following comments in the tensorflow.py script:

-> _output_shapes : Graph should be frozen with add_shapes=True.
   Or user can pass input shape dictionary optionally.
-> DecodeJpeg, ResizeBilinear: These are dummy operators.
   Hence user should handle preprocessing outside.

Maybe this is related to what the comments say.
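If I understand the comment correctly, passing an input shape dictionary should also work; something like this is what I would try (the .pb path, input name, and shape below are just placeholders for my model):

```python
import tensorflow as tf
import tvm.relay as relay

# Load the frozen graph ("model.pb" is a placeholder path)
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# "input" and its shape are placeholders; they must match the real graph
shape_dict = {"input": (1, 224, 224, 3)}
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)
```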

@srkreddy1238 maybe you have an idea?

Thanks

Can you share the build command arguments?

Hi @srkreddy1238, adding add_shapes=True solved the issue. The input .pb file had not been generated with that option.

BTW, could you please look at the following post:

I would appreciate it if you could give me some hints on this other issue.

Thanks

Hi @tico, could you please tell me where to add add_shapes=True? Is it when creating the .pb file, or somewhere else? Thanks a lot.

Hi,

add_shapes=True should be used when you freeze the model. You can see how to do this in the following example:
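Roughly, with TF 1.x the relevant part is passing add_shapes=True to as_graph_def() when freezing; a minimal sketch (the toy graph, node names, and file path are placeholders):

```python
import tensorflow as tf

# Toy graph as a stand-in; in practice this is your trained model and
# "output" is the name of your real output node.
x = tf.placeholder(tf.float32, shape=(1, 224, 224, 3), name="input")
w = tf.Variable(tf.ones([3, 3, 3, 8]), name="w")
out = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME", name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # as_graph_def(add_shapes=True) attaches _output_shapes to every node
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(add_shapes=True), ["output"])

with tf.gfile.GFile("frozen_model.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```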

@tico, thanks very much! I also have a question: if we add add_shapes=True, will the result of the TVM-converted model be equal to the TF .pb file's result? Thanks~

@murdockhou I don't quite get your question about whether both results are equal and how that is correlated to add_shapes=True. In general, TVM of course preserves the functional correctness of the models.

What add_shapes=True adds are the shapes of the output tensors in the nodes of the model, and this information is needed to compile the model with TVM. That said, add_shapes=True by itself does not determine whether the results are equal or not.
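To make it concrete, you can inspect the attribute that add_shapes=True adds to each node of the frozen graph (the path is a placeholder):

```python
import tensorflow as tf

# "frozen_model.pb" is a placeholder path for your frozen graph
with tf.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    # With add_shapes=True every node carries an _output_shapes attribute;
    # without it the TVM frontend has no shape information to work with.
    shapes = node.attr["_output_shapes"] if "_output_shapes" in node.attr else None
    print(node.name, shapes)
```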

Sorry, my misunderstanding. Anyway, thank you very much!

@tico, hi, sorry for troubling you, but I have another question. I use TensorFlow 1.13.1 (CPU) and the newest version of TVM. I tried to convert a simple model with two regular conv layers and one resize_bilinear layer. I saved this model to a .pb file and converted it to the TVM format (three files: .json, .params, .so). I set layout='NCHW', but when I check the json file, the 'shape' is still in 'NHWC' format. What is wrong with it? Could you give me some advice on this? Thanks a lot.

@murdockhou Is your model's layout NHWC? Check the Relay IR: if you set the layout to NCHW and your model is NHWC, then there will be transpose operators all over the place. Personally, I don't think this is a good solution from TVM, since these transpose operators imply significant overhead.
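A quick way to check is to print the Relay module right after the import and look for transpose operators; a rough sketch (path, input name, and shape are placeholders):

```python
import tensorflow as tf
import tvm.relay as relay

with tf.gfile.GFile("frozen_model.pb", "rb") as f:  # placeholder path
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

mod, params = relay.frontend.from_tensorflow(
    graph_def, layout="NCHW", shape={"input": (1, 224, 224, 3)})

# If the original model is NHWC, the printed IR will likely show transpose /
# layout_transform operators inserted around the convolutions.
print(mod)
```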

The model is to be deployed on a CUDA device, so does that mean all operators will have a transposition from NHWC => NCHW?