Hi,
While converting the TFLite model to a Relay module using the following lines of code:
mod, params = relay.frontend.from_tflite(tflite_model,
                                         shape_dict={input_tensor: input_shape},
                                         dtype_dict={input_tensor: input_dtype})
I am getting this error: KeyError: 'InceptionResnetV1/Logits/Flatten/flatten/Reshape/shape/1'
The TFLite model (VGG FaceNet) is a quantized int8 model that I converted from a frozen .pb model using the TF Lite Python API (with the help of a representative dataset) in TensorFlow 2.3. These are the lines of code I used to convert to a quantized int8 TFLite model:
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    model,
    input_arrays=["input"],
    output_arrays=["embeddings"],
    input_shapes={"input": [160, 160, 3]})
converter.inference_type = tf.int8
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Enforce full-int8 quantization, including the input and output tensors
converter.representative_dataset = rep_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
quantized_model = converter.convert()
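For completeness, the rep_data_gen referenced above might look like the minimal sketch below. The sample count and the use of random noise are placeholders; real calibration should feed preprocessed face images with the model's input shape [1, 160, 160, 3]:

```python
import numpy as np

def rep_data_gen():
    # Hypothetical representative-dataset generator for calibration.
    # Yields a few batches shaped like the model input [1, 160, 160, 3].
    # Random noise is only a placeholder; use real preprocessed images.
    for _ in range(10):
        sample = np.random.rand(1, 160, 160, 3).astype(np.float32)
        yield [sample]
```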
However, with TensorFlow 2.2 the same conversion (using the TFLite Python API) produces a quantized float model instead of a quantized int8 one.
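As background on the int8-vs-float distinction: a fully int8-quantized model stores tensor values as int8 integers related to the float values by an affine mapping q = round(x / scale) + zero_point. A minimal numpy sketch (the scale and zero-point values here are purely illustrative):

```python
import numpy as np

def quantize_int8(x, scale, zero_point):
    # Affine int8 quantization: q = round(x / scale) + zero_point,
    # clipped to the int8 range [-128, 127].
    q = np.round(x / scale) + zero_point
    return np.clip(q, -128, 127).astype(np.int8)

def dequantize(q, scale, zero_point):
    # Inverse mapping back to float: x ~= (q - zero_point) * scale
    return (q.astype(np.float32) - zero_point) * scale

# Illustrative parameters covering roughly [-1, 1] with zero_point 0
x = np.array([-1.0, 0.0, 0.5, 1.0], dtype=np.float32)
scale, zero_point = 1.0 / 127, 0
q = quantize_int8(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
```

The round trip reconstructs the floats to within one quantization step, which is the approximation a full-int8 model makes everywhere.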
Please comment on this issue.
Thank you