[Relay] Stability?

How stable is each frontend in Relay?

By stability I’m assuming you mean maturity?

We should have harder data on this in the next few weeks, as we are looking to measure model importer coverage across a large number of models and framework formats in a continuously updated public dashboard. Please stay tuned!

CC @thierry.


Just polling interest, but would it help to have an up-to-date webpage that indicates Relay importer success for a collection of workloads expressed in different frameworks?

The page would also keep track of which versions of TF, ONNX, PyTorch, etc. work with the Relay importers.

+1

I think that’s a great idea. Along with supported ops and versions, we could also include some “best practices”: for example, graph changes that help the model run better on TVM, like replacing a dynamic RNN with a static RNN, or removing dropout and assert ops (see the rough sketch below). These docs will have to evolve over time as ops are added and best practices change.

If the community thinks this would be useful, I’d be willing to take a stab at some TensorFlow docs, since that’s what I’m most familiar with.
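
As a rough illustration of the kind of preprocessing I mean, here is a minimal sketch of cleaning up a frozen TensorFlow graph before handing it to the Relay importer. The model path, input name, shape, and layout are placeholders, not a definitive recipe:

```python
import tensorflow as tf
from tvm import relay

# Load a frozen GraphDef (placeholder path).
with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Strip nodes only needed for training (e.g. Identity, CheckNumerics).
# Removing dropout itself usually means re-exporting the graph in inference mode.
graph_def = tf.compat.v1.graph_util.remove_training_nodes(graph_def)

# Import into Relay. The shape dict keys are the graph's input node names.
mod, params = relay.frontend.from_tensorflow(
    graph_def, layout="NCHW", shape={"input": (1, 224, 224, 3)}
)
```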


@jonso that would be very useful!


I just encountered a TensorFlow model whose ops all seem to be supported by TVM, but the TVM TensorFlow frontend still fails to import it. If you are interested, let me know; I would like to share the frozen pb file to help improve the TVM TensorFlow frontend.
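
For reference, this is roughly what I am running (a minimal sketch; the path and the input name/shape are placeholders for my actual model):

```python
import tensorflow as tf
from tvm import relay

# Load the frozen GraphDef (placeholder path).
with tf.io.gfile.GFile("my_frozen_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

try:
    # The shape dict maps the graph's input node name to its shape.
    mod, params = relay.frontend.from_tensorflow(
        graph_def, shape={"input": (1, 224, 224, 3)}
    )
except Exception as err:
    # The frontend error usually names the op or attribute it could not
    # convert, which is the part I would like help diagnosing.
    print("Import failed:", err)
```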

Perfect, thanks! What license could you share the model under? We’re pretty flexible, but the more permissive the license, the quicker we can get to it.

I just sent out a PR to update the TensorFlow frontend docs here. Let me know what you think!