Very long parse time when TVM parses a TensorFlow MaskRCNN model

I downloaded a TF model from “http://download.tensorflow.org/models/object_detection/mask_rcnn_resnet50_atrous_coco_2018_01_28.tar.gz” and used “relay.frontend.from_tensorflow” to parse it, but it takes very long, about one hour! How can I optimize this?
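For reference, a minimal sketch of how this import is typically done. The frozen-graph path and output node names below are assumptions based on the standard TF object-detection export layout, not taken from the post; the heavy imports are kept inside the function so the snippet loads without TVM/TF installed.

```python
# Hypothetical path inside the extracted tarball; adjust to your layout.
MODEL_PB = "mask_rcnn_resnet50_atrous_coco_2018_01_28/frozen_inference_graph.pb"
# Typical output nodes for TF object-detection models (assumption).
OUTPUT_NODES = ["detection_boxes", "detection_scores",
                "detection_classes", "num_detections"]

def import_maskrcnn(pb_path=MODEL_PB, outputs=OUTPUT_NODES):
    """Parse a frozen TF graph into a Relay module."""
    import tensorflow as tf   # kept local so the sketch loads without TF
    from tvm import relay

    # Read the frozen GraphDef from disk.
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())

    # This is the slow step the post describes (~1 hour on this model).
    mod, params = relay.frontend.from_tensorflow(graph_def, outputs=outputs)
    return mod, params
```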

Thanks very much!

This is due to some VM memory-related passes. There is ongoing work to improve them.

@LakeFeiLiu I am trying to compile the same model from the link you mentioned, but I am facing some issues. Did you face this issue? Are there any workarounds, or am I missing something here?

  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tvm-0.8.dev1859+g627e92e7c-py3.6-linux-x86_64.egg/tvm/relay/frontend/tensorflow.py", line 1025, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params, self._mod)
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tvm-0.8.dev1859+g627e92e7c-py3.6-linux-x86_64.egg/tvm/relay/frontend/tensorflow_ops.py", line 1075, in _impl
    size = _infer_value(inputs[1], params, mod).numpy().reshape([-1]).tolist()
  File "/opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tvm-0.8.dev1859+g627e92e7c-py3.6-linux-x86_64.egg/tvm/relay/frontend/common.py", line 529, in infer_value
    ), "All inputs to infer must be available in params."
AssertionError: All inputs to infer must be available in params.
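One workaround that is sometimes suggested for this assertion is to give the frontend a fully static input shape, so that `infer_value` can constant-fold the shape computation instead of hitting a free placeholder. This is a hedged sketch, not a confirmed fix for this model; the input tensor name and resolution below are assumptions.

```python
# Hypothetical static input shape; "image_tensor" is the usual placeholder
# name in TF object-detection exports, and the resolution is illustrative.
SHAPE_DICT = {"image_tensor": (1, 800, 1365, 3)}

def import_with_static_shape(graph_def, shape_dict=SHAPE_DICT):
    """Import a frozen graph with a pinned input shape (assumption: a
    static shape lets infer_value resolve the Slice size argument)."""
    from tvm import relay  # local import so the sketch loads without TVM
    mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)
    return mod, params
```

If the failure persists with a static shape, the operator conversion itself (here the `Slice`-related `_impl` in `tensorflow_ops.py`) may still depend on a value that is only known at runtime.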

I notice in [Tensorflow][CropAndResize] Potential bug or limitation -> AttributeError: '<class 'tvm.relay.expr.Call'>' object has no attribute 'name_hint' - #4 by tico that @yongwww fixed this issue a long time back. I believe I am using his fixes. Maybe they are broken?

(vitis-ai-tensorflow) Vitis-AI /workspace/my_workspace/MaskRCNN > pip show tvm
Name: tvm
Version: 0.8.dev1859+g627e92e7c
Summary: TVM: An End to End Tensor IR/DSL Stack for Deep Learning Systems
Home-page: https://github.com/apache/tvm
Author:
Author-email:
License: UNKNOWN
Location: /opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/tvm-0.8.dev1859+g627e92e7c-py3.6-linux-x86_64.egg
Requires: attrs, cloudpickle, decorator, numpy, psutil, scipy, synr, tornado
Required-by:
(vitis-ai-tensorflow) Vitis-AI /workspace/my_workspace/MaskRCNN > pip show pyxir
Name: pyxir
Version: 0.3.2
Summary: # PyXIR  PyXIR is an Neural Network Intermediate Representation (IR) for deep learning. It is designed to be an interface between deep learning frameworks and neural network hardware accelerators, specifically Xilinx Vitis-AI FPGA based accelerators like [DPU](https://www.xilinx.com/products/intellectual-property/dpu.html).   At the moment PyXIR integrates with following frameworks: * [ONNXRuntime](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/Vitis-AI-ExecutionProvider.md) * [TVM Open Deep Learning compiler stack](https://tvm.apache.org/docs/deploy/vitis_ai.html)  and with following Vitis-AI accelerators: * DPUCADX8G (formerly DPUv1) * DPUCZDX8G (formerly DPUv2)  Note that not all accelerators are enabled through all frameworks at the moment. For example, through the ONNXRuntime framework only the DPUCADX8G accelerator is supported for now.
Home-page: https://github.com/Xilinx/pyxir
Author: Xilinx Inc
Author-email: jornt@xilinx.com
License: UNKNOWN
Location: /opt/vitis_ai/conda/envs/vitis-ai-tensorflow/lib/python3.6/site-packages/pyxir-0.3.2-py3.6-linux-x86_64.egg
Requires: numpy, packaging, pydot, h5py
Required-by:
(vitis-ai-tensorflow) Vitis-AI /workspace/my_workspace/MaskRCNN >

@kevinthesun Any hints on the issues above?