While working to support NLP models in TVM and Relay, I encountered some problems that probably require fundamental changes or a redesign in Relay, so I want to discuss them on the forum.

The problem is how to represent the following examples in Relay.

```
# Suppose data is a Tensor of shape L x N, where L is sequence length, and N is hidden size
length = data.shape_array()[0]
x = arange(length)
```
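For reference, here is what this snippet computes in eager mode, sketched with NumPy (the names and shapes are illustrative, not part of any Relay API):

```python
import numpy as np

# Suppose data has shape (L, N): L is sequence length, N is hidden size.
data = np.zeros((5, 8), dtype="float32")

# shape_array() would materialize the shape as a tensor at runtime;
# here np.shape plays that role.
length = np.shape(data)[0]   # a Python int here, a tensor value in Relay
x = np.arange(length)        # [0, 1, ..., L-1]
```

The key point is that `length` is a runtime value derived from a type-level quantity (the shape), and it then feeds back into computation via `arange`.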

Another example, which I found online:

```
inputs_ = tf.placeholder(tf.float32, shape=(None, None, None, None))
depth = tf.shape(inputs_)[-1]
with tf.control_dependencies([
    tf.Assert(
        tf.logical_or(tf.equal(depth, 3), tf.equal(depth, 1)), [depth])
]):
    inputs = tf.cond(
        tf.equal(tf.shape(inputs_)[-1], 3), lambda: inputs_,
        lambda: tf.image.grayscale_to_rgb(inputs_))
```
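The same control flow can be sketched in plain NumPy, with `tf.image.grayscale_to_rgb` approximated by repeating the single channel (function and variable names are illustrative):

```python
import numpy as np

def ensure_rgb(inputs_):
    # Runtime shape extraction: depth is only known once a concrete
    # array arrives, mirroring tf.shape(inputs_)[-1].
    depth = inputs_.shape[-1]
    assert depth in (1, 3), depth   # mirrors the tf.Assert on depth
    if depth == 3:
        return inputs_
    # Approximate tf.image.grayscale_to_rgb: repeat the channel 3 times.
    return np.repeat(inputs_, 3, axis=-1)

gray = np.ones((1, 4, 4, 1), dtype="float32")
rgb = ensure_rgb(gray)   # shape becomes (1, 4, 4, 3)
```

Note that both the assertion and the branch condition depend on a shape value that is unknown until runtime, which is exactly what makes this hard to express in a statically typed IR.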

Both examples need to extract the shape of a tensor as a value and use it in further computation. This is trivial when the input shape is static, since type inference plus constant folding can resolve it at compile time. The more interesting and common case is when the input shape is unknown at compile time. To represent these examples, a few things are missing in Relay:

- Converting a Relay type node to a value node, and potentially a value node back to a type node
- Allowing a Relay expr to be used in an attribute (needed for the first example)
- (minor) Extracting a single element of a tensor into a scalar
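The third point can be illustrated in NumPy, where `.item()` plays the role of the missing tensor-to-scalar extraction (illustrative only; Relay has no such operation today):

```python
import numpy as np

shape_tensor = np.array([5, 8])      # a runtime shape held as a 1-D tensor
length = shape_tensor[0].item()      # tensor element -> plain Python scalar
```

In Relay this conversion matters because attributes (e.g. the bounds of `arange`) currently expect scalars, not tensor expressions.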

I think these changes are necessary if we want to support more general RNN models and dynamic shapes in TVM and Relay. I'd like to hear what the community thinks about this.