Incremental Type Propagation

Hi,

I’m still looking a bit at types in the PyTorch frontend and wondered whether there already is, or whether it would be reasonable to create, a “type propagation” operator complementing the type inference pass with the following semantics:

  • You pass in a Relay node without type info,
  • it walks backward to nodes whose input types are already checked and then propagates types forward to the previously untyped nodes (a rough sketch follows below).
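
To make that concrete, here is a rough sketch of what I mean in Python. The helper name and the cache are hypothetical, not existing TVM API; it only handles Call nodes incrementally and assumes leaf vars carry type annotations:

```python
import tvm
from tvm import relay

_type_cache = {}  # hypothetical cache: relay.Expr -> tvm.ir.Type


def propagate_type(expr):
    """Return the checked type of `expr`, reusing cached input types."""
    if expr in _type_cache:
        return _type_cache[expr]
    to_check = expr
    if isinstance(expr, relay.Call):
        # Type the inputs first (walking backward), then stand in typed
        # placeholder vars for them so InferType only sees one new node.
        new_args = [
            relay.var("arg%d" % i, type_annotation=propagate_type(arg))
            for i, arg in enumerate(expr.args)
        ]
        to_check = relay.Call(expr.op, new_args, expr.attrs)
    # Run Relay's type inference on this (now shallow) expression.
    mod = relay.transform.InferType()(tvm.IRModule.from_expr(to_check))
    ty = mod["main"].body.checked_type
    _type_cache[expr] = ty
    return ty
```

The point is that each node gets checked once, against placeholder vars carrying its inputs’ cached types, instead of re-checking the whole graph every time.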

The reason I want something like this is that I’m thinking:

  • repeatedly running type inference while building the graph in the frontend is O(N^2) in the number of nodes (and I have the vague feeling it shows for large models, too; see the sketch after this list),
  • I’d like to rely more on Relay’s inference for type information at the nodes, which would amplify that cost.
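
To spell out the quadratic behavior with a toy example (a hypothetical stand-in for what the frontend effectively does through its infer helpers):

```python
import tvm
from tvm import relay

# Toy stand-in for frontend conversion: build a chain of N ops and run
# full type inference after every new node, as the frontend does today.
x = relay.var("x", shape=(4,), dtype="float32")
out = x
for i in range(100):  # N = 100 nodes
    out = relay.add(out, out)
    # InferType re-checks the entire graph built so far, so iteration i
    # does O(i) work and the whole build costs O(N^2).
    mod = relay.transform.InferType()(tvm.IRModule.from_expr(out))
    out = mod["main"].body  # the typed expression built so far
```

Iteration i re-checks all i nodes built so far, so the total work is 1 + 2 + ... + N = O(N^2).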

What do you think?

Best regards

Thomas


On a related front, this might tie into the recent effort around dynamic operator support, which allows us to defer the constant evaluation of shapes/types by running a pass later.

cc @jroesch @masahi @haichen

A similar problem of repeated calls to infer_value in the ONNX frontend was addressed in the PR below; it is now a bottom-up process that reuses the previous infer_value results.

I think a similar approach can be taken?
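
Roughly, the idea looks like this (a simplified sketch, not the actual PR code; the cache and helper name here are made up, and it builds on the existing infer_value from tvm.relay.frontend.common):

```python
from tvm import relay
from tvm.relay.frontend.common import infer_value

_value_cache = {}  # made-up cache: relay.Expr -> tvm.nd.NDArray


def infer_value_cached(expr, params):
    """Evaluate `expr` bottom up, reusing previously computed results."""
    if expr in _value_cache:
        return _value_cache[expr]
    to_eval = expr
    if isinstance(expr, relay.Call):
        # Evaluate the inputs first and splice the results back in as
        # constants, so infer_value only has to execute this one node.
        new_args = [
            relay.const(infer_value_cached(arg, params).asnumpy())
            for arg in expr.args
        ]
        to_eval = relay.Call(expr.op, new_args, expr.attrs)
    val = infer_value(to_eval, params)
    _value_cache[expr] = val
    return val
```

Because already-evaluated inputs are spliced back in as constants, each subexpression is executed at most once, instead of the whole subtree being re-evaluated on every infer_value call.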

Ha. Thank you @masahi, @tqchen. I think @mbrookhart’s patch is very much what I had in mind, except that I’m wondering whether we should make it more generally available, either in the common frontend bits or by moving it closer to the InferType pass in C++. What do you think?

We’re currently working on implementing operations in Relay that infer shapes dynamically, which I believe would address your issue. The goal of this effort is to mirror how dynamic shapes work in ONNX and enable us to import ONNX graphs properly. The first few PRs are up, but we don’t have coverage of all the Relay ops yet (PRs #6080, #6008 and #6007; I can’t put a third link because I’m a new contributor to this forum, sorry!). More dynamic ops should be coming soon – please let us know if there are any specific ops you would like to see become dynamic (I can’t guarantee they’ll be added, though).
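
To give a flavor of what this enables (a hedged sketch; the exact wrapper behavior depends on which of these PRs have landed in your build), passing a tensor instead of a constant list as the target shape turns reshape into a dynamic op whose output shape is resolved later:

```python
import tvm
from tvm import relay

# Input with a batch dimension that is unknown at build time.
x = relay.var("x", shape=(relay.Any(), 4), dtype="float32")
# The target shape is a runtime tensor rather than a constant list, so
# reshape becomes a dynamic op; its output shape is resolved by a later
# pass (or at runtime) instead of during graph construction.
y = relay.reshape(x, relay.shape_of(x))
mod = relay.transform.InferType()(tvm.IRModule.from_expr(y))
print(mod)  # the result type shows up with dynamic (?) dimensions
```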


Hi @electriclilies, thank you for the pointers. I think #5755 is closer to what I had in mind here. (Of course, it’ll be interesting to see how it lines up with dynamic shapes, but the problem I’m currently having calls for something incremental like #5755 rather than inference in one go. My use-case is just like #5755’s, too, only for the PyTorch frontend.)

A reusable bottom-up infer_type/value/shape pass sounds good. We can certainly have those in the common frontend code.

But implementing them in C++ sounds like a much bigger proposal. It is technically an interesting problem, but I’m not sure how useful it would be outside of the frontends.

OK, I’ll aim at generalizing this into the common frontend code for now. Thank you for your input!


Interesting! I think it can be useful for FoldConstant in C++.

I made this; in the end I stayed in the PyTorch frontend and solved it locally there, given that the in-place handling doesn’t look much like typical TVM.

See also this related issue: https://github.com/apache/tvm/issues/7008