Deploying a .pb model converted to .tflite using relay.frontend.from_tflite

I tried to convert my TensorFlow .pb model into the .tflite format using TensorFlow Lite, but the result would not compile on TVM. To debug this, I created a single-layer convolutional network, produced a .pb file from it, and converted that to a .tflite file with TF Lite. TVM was unable to compile even this single-layer network. I then looked into https://github.com/dmlc/tvm/blob/master/python/tvm/relay/frontend/tflite.py, found the operator conversion function, and tried to print the converted node name with the code below.
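For reference, here is roughly how such a single-layer test model can be produced (the layer names, shapes, and file name are illustrative; TF 1.13's tf.lite.TFLiteConverter API is assumed, matching the version used in this thread):

import tensorflow as tf  # assuming TF 1.13.x, as used in this thread

# Hypothetical single-layer network: a 1-channel (grey-scale) input feeding
# one Conv2D layer, mirroring the test case described above.
graph = tf.Graph()
with graph.as_default():
    inp = tf.placeholder(tf.float32, shape=(1, 28, 28, 1), name="input")
    out = tf.layers.conv2d(inp, filters=8, kernel_size=3, name="conv")
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # TF 1.x converter; produces the flatbuffer inspected below.
        converter = tf.lite.TFLiteConverter.from_session(sess, [inp], [out])
        with open("single.tflite", "wb") as f:
            f.write(converter.convert())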

from tflite.Model import Model

try:
    from tflite.BuiltinOperator import BuiltinOperator
except ImportError:
    raise ImportError("The tflite package must be installed")

tflite_model_buf = open("single.tflite", "rb").read()
tflite_model = Model.GetRootAsModel(tflite_model_buf, 0)

def build_str_map(obj):
    """Build string map of TFLite enum int value

    Parameters
    ----------
    obj:
        TFLite class which contains enum int value, such as BuiltinOptions

    Returns
    -------
    String representation map of TFLite class enum int value
    """
    ret = {}
    for field_name in dir(obj):
        if not field_name.startswith("_"):
            field_value = getattr(obj, field_name)
            if isinstance(field_value, int):
                ret[field_value] = field_name
    return ret

builtin_op_code = build_str_map(BuiltinOperator())
subgraph = tflite_model.Subgraphs(0)

# The single-layer model contains exactly one operator.
for op_idx in range(1):
    op = subgraph.Operators(op_idx)
    op_code_list_idx = op.OpcodeIndex()
    op_code_id = tflite_model.OperatorCodes(op_code_list_idx).BuiltinCode()
    op_code_str = builtin_op_code[op_code_id]
    print(op_code_str)

print(op_code_str) outputs "DEPTHWISE_CONV_2D", but I created only a Conv2D TensorFlow layer. I suspect this indicates a schema mismatch, yet I am using TensorFlow 1.13.1 and installed tflite-1.13.1 for TVM as described in https://docs.tvm.ai/tutorials/frontend/from_tflite.html#sphx-glr-tutorials-frontend-from-tflite-py. Can anyone comment on how to get this .pb to .tflite to TVM conversion working?
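For context, this is roughly the compile path that fails. The input name, shape, and target below are illustrative, and depending on the TVM version from_tflite may return a function rather than a module; this is a sketch, not a definitive recipe:

from tvm import relay

# Hypothetical input name/shape; adjust to match the actual model.
mod, params = relay.frontend.from_tflite(
    tflite_model,
    shape_dict={"input": (1, 28, 28, 1)},
    dtype_dict={"input": "float32"},
)
graph_json, lib, params = relay.build(mod, target="llvm", params=params)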

Please share the tflite file.

@FrozenGene Please find the tflite file in the link https://drive.google.com/file/d/1CRSniHJudUIw0nLl-gjvK9XMO0FvSUTO/view?usp=sharing.

This single-layer convolutional network has an input channel depth of 1. When I changed the channel depth to 2, the compilation on TVM worked. My guess is that TVM always treats convolutions with an input depth of 1 as "DEPTHWISE_CONV_2D".

I tested this model and cannot reproduce your issue. Your model is not "DEPTHWISE_CONV_2D"; it is a normal convolution.

I suspect your tflite package may not be correct. Consider installing the prebuilt wheel: https://docs.tvm.ai/tutorials/frontend/from_tflite.html#sphx-glr-tutorials-frontend-from-tflite-py

@FrozenGene By mistake I sent the link to the tflite file that works on TVM, since its convolutional layer has an input channel depth of 2. Try https://drive.google.com/open?id=1H_NPWpsQKRG-YQ2AZgjKXuULrJVkEAiI instead. This single-layer convolutional network has an input channel depth of 1, and for this one TVM doesn't compile.

Thanks for reporting this problem. I will fix it. However, I am curious: why does your model need a depthwise multiplier > 1?

I just looked at it. It seems our code restricts the depthwise multiplier to 1 in many places. For example:
https://github.com/dmlc/tvm/blob/master/python/tvm/relay/op/nn/_nn.py#L131 And our Conv2dRel also has a problem, so we would need to modify many places. Do you really want this feature supported? This is the first time I have actually seen a depthwise multiplier > 1; setting it to 0.25 / 0.35 is common (which will be converted into 1 by the training framework). If you really need this feature, I will try to implement it when I have spare time.
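To illustrate what the multiplier means at the Relay level (the shapes below are made up): a depthwise convolution is expressed as a grouped conv2d with groups equal to the input channel count, and a channel multiplier m makes the output channel count in_channels * m. A case like the following, with m = 2, is what the restriction mentioned above would reject:

from tvm import relay

# 8 input channels (NCHW); a channel multiplier of 2 gives 16 output channels.
data = relay.var("data", shape=(1, 8, 28, 28))
weight = relay.var("weight", shape=(16, 1, 3, 3))  # OIHW, groups = in_channels
out = relay.nn.conv2d(data, weight, groups=8, channels=16, kernel_size=(3, 3))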

I think there is some misunderstanding. I am using a single convolutional layer that takes a grey-scale image as input instead of a 3-channel RGB one. When I change the convolutional layer's input channels to 3 or 2, my model compiles on TVM, but it doesn't compile when the input has 1 channel (like a grey-scale input image).

It shouldn't be. If the model accepts NHWC input like 1x20x10x1 and the convolution layer is not the case I mentioned before (i.e. a depthwise convolution with depth multiplier > 1, like the model you shared earlier), it should work well.

Hi @FrozenGene ,

I also have a TFLite model with a DepthwiseConv2D with a depth_multiplier of 32. Do you know why the TFLite frontend is restricted to a depth multiplier of 1? What can be done in this case to fix it?

Thanks

Because whatever the depthwise multiplier is (0.35, 0.25, and so on), it always becomes 1 once TensorFlow transforms the model. It is very rare to see a multiplier greater than 1, and I could not even imagine this situation. However, we could support it, being careful with TFLite's representation. I looked at it before, and it seems there would be some bugs if we simply removed the restriction. I could consider supporting it if we really need this functionality.
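For reference, in TensorFlow the multiplier is the last dimension of the depthwise filter. A minimal sketch with made-up shapes (TF 1.x style, as elsewhere in this thread), using a multiplier of 32 like the model discussed above:

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, (1, 28, 28, 8))          # NHWC, 8 channels
# Depthwise filter layout: (kh, kw, in_channels, channel_multiplier).
w = tf.get_variable("dw_filter", shape=(3, 3, 8, 32))   # multiplier = 32
y = tf.nn.depthwise_conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
# y has 8 * 32 = 256 output channels: shape (1, 28, 28, 256).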

I see, but it does not seem that uncommon to have a depth_multiplier greater than 1, given that @akssieg at the beginning of this post also experienced the same issue. Please note that in both cases the depth_multiplier value is automatically set to a value > 1 during the tf-to-tflite conversion of a model quantized using the FakeQuantWithMinMaxVars operator.
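For anyone unfamiliar with that operator, a minimal sketch of how it appears in a TF 1.x graph (the shape and range below are made up):

import tensorflow as tf  # TF 1.x

x = tf.placeholder(tf.float32, (1, 28, 28, 1))
# Simulates 8-bit quantization during training; the tf-to-tflite converter
# folds these nodes into a quantized model.
xq = tf.quantization.fake_quant_with_min_max_vars(x, min=-1.0, max=1.0)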

What would you suggest to fix this issue? Should we change the model to enforce a depth_multiplier of 1, or fix this limitation in TVM? If the latter, how much effort would that require?

Thanks

It shouldn't be > 1 when using FakeQuantWithMinMaxVars. We support quantized models well internally.

The effort is not very much. Let us wait for this PR to be merged: https://github.com/dmlc/tvm/pull/3676/files. Once it is in, I can support this in TFLite. That PR fixes the issue I mentioned earlier in this post.

Sounds good! I will keep an eye on the PR. Please also let me know once you have added this support in TFLite. Thanks!

OK. You could also remind me, because I am very busy during this period and might miss the status updates.

PR to support it: https://github.com/dmlc/tvm/pull/3922
