Almost 30% of overall inference time is spent on RESIZE?

Hi All,

I have noticed (I am debugging using graph_runtime_debug.cc) that the RESIZE operation takes almost 30% of my inference time. I started digging into why, but I am a little lost and need some expert help.

  1. Which template is used, and how is that template selected and scheduled during inference?
    My understanding is that there are a number of pre-defined RESIZE "templates" (as there are for other kinds of operations). I believe I located the RESIZE templates here: https://docs.tvm.ai/doxygen/resize_8h_source.html. Please correct me if I am wrong. Since there appear to be several templates for RESIZE, how is one of them selected, and how can we use that selection to eventually accelerate RESIZE?

  2. How can I best debug this and figure out why the resize operation takes 30% of inference time?
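To convince myself where the time could be going, I wrote out the per-pixel work of a bilinear resize in plain Python. This is only an illustration of the arithmetic pattern (four gathers plus interpolation per output pixel), not TVM's actual topi implementation:

```python
# Sketch of bilinear resize: NOT TVM's implementation, just the
# per-output-pixel work a RESIZE template has to schedule.
def bilinear_resize(img, out_h, out_w):
    """img: 2D list of floats (H x W); returns an out_h x out_w 2D list."""
    in_h, in_w = len(img), len(img[0])
    scale_y = in_h / out_h
    scale_x = in_w / out_w
    out = [[0.0] * out_w for _ in range(out_h)]
    for oy in range(out_h):
        for ox in range(out_w):
            # Source coordinates and the four neighbours to gather.
            fy, fx = oy * scale_y, ox * scale_x
            y0, x0 = int(fy), int(fx)
            y1, x1 = min(y0 + 1, in_h - 1), min(x0 + 1, in_w - 1)
            wy, wx = fy - y0, fx - x0
            # Four loads plus interpolation arithmetic per output pixel.
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bot = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            out[oy][ox] = top * (1 - wy) + bot * wy
    return out

# Doubling a 2x2 image: every output pixel does 4 gathers plus ~8 flops.
print(bilinear_resize([[0.0, 1.0], [2.0, 3.0]], 4, 4))
```

If my reading is right, the memory gathers here are the expensive part, so I suspect the schedule (vectorization/parallelization/fusion) matters a lot for this op.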

Any suggestions regarding RESIZE? It takes almost 30% of inference time, and I'd like to understand why.
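For reference, this is roughly how I arrive at the 30% figure from the per-op timings the debug runtime reports. The record format and op names below are made-up placeholders, not TVM's actual trace format, but the aggregation is the same:

```python
# Sketch: summing per-op timings into a share of total inference time.
# The (op_name, time_us) records below are hypothetical examples.
from collections import defaultdict

def time_share_by_op(records):
    """records: iterable of (op_name, time_us); returns {op_name: fraction}."""
    totals = defaultdict(float)
    for op_name, time_us in records:
        totals[op_name] += time_us
    grand_total = sum(totals.values())
    return {op: t / grand_total for op, t in totals.items()}

# Numbers shaped like my measurements (placeholders, not real data):
records = [
    ("fused_nn_conv2d", 500.0),
    ("fused_nn_conv2d_1", 400.0),
    ("fused_image_resize", 390.0),  # the RESIZE op in question
]
shares = time_share_by_op(records)
print({op: round(s, 2) for op, s in shares.items()})
```

With numbers like these the resize op comes out at roughly 0.30 of total time, which matches what I am seeing.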

Thanks in advance.

Hi All,

Sorry for the repeated post. I was wondering if anyone has seen this problem, and whether any TVM experts have suggestions?

Thanks in advance.