How to support tf.TensorArray()

Document is at https://www.tensorflow.org/api_docs/python/tf/TensorArray

This code snippet generates several TensorFlow ops:
[TensorArrayV3, TensorArrayGatherV3, TensorArraySizeV3, TensorArrayWriteV3]

import tensorflow as tf

tensor_array = tf.TensorArray(dtype=tf.float32, size=1, dynamic_size=True)

# write() returns a new TensorArray carrying the updated flow
tensor_array = tensor_array.write(tf.constant(0), [2.0, 3.0])
tensor_array = tensor_array.write(tf.constant(1), [4.0, 5.0])

# stack() packs all elements into a single tensor of shape (2, 2)
out = tensor_array.stack()
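For reference, here is a minimal pure-Python model of the four ops the snippet emits (create, write, size, and gather/stack). This is an illustrative sketch, not TF or Relay internals; the function names simply mirror the op names, and tensors are modeled as nested Python lists. Any Relay lowering would need to reproduce these semantics:

```python
# Hypothetical pure-Python model of the TensorArray ops above.

def tensor_array_create(size):             # TensorArrayV3
    return [None] * size

def tensor_array_write(ta, index, value):  # TensorArrayWriteV3
    # dynamic_size=True: grow the array when writing past the end
    if index >= len(ta):
        ta = ta + [None] * (index + 1 - len(ta))
    ta = list(ta)
    ta[index] = value
    return ta                              # returns a new array, like TF's flow

def tensor_array_size(ta):                 # TensorArraySizeV3
    return len(ta)

def tensor_array_stack(ta):                # TensorArrayGatherV3 over all indices
    return [t for t in ta]

ta = tensor_array_create(1)
ta = tensor_array_write(ta, 0, [2.0, 3.0])
ta = tensor_array_write(ta, 1, [4.0, 5.0])
out = tensor_array_stack(ta)               # [[2.0, 3.0], [4.0, 5.0]]
```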

One possibility to support these is to enhance relay.Tuple. relay.Tuple can hold a variable number of items, but it lacks the ability to update items, and its size can't be queried at runtime.

Any suggestion about this issue?

I have the same problem. Can it be solved now?

If we can get Any(https://github.com/dmlc/tvm/issues/3042) merged, I think we can support TensorArray as follows:

type dynamic_tensor =
    Tensor0 of TensorType(shape=())
  | Tensor1 of TensorType(shape=(Any))
  | Tensor2 of TensorType(shape=(Any, Any))
  | Tensor3 of TensorType(shape=(Any, Any, Any))
  | Tensor4 of TensorType(shape=(Any, Any, Any, Any))
  | Tensor5 of TensorType(shape=(Any, Any, Any, Any, Any))
  | Tensor6 of TensorType(shape=(Any, Any, Any, Any, Any, Any))

type tensor_array = dynamic_tensor list
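To illustrate the list-based encoding, here is a hedged Python sketch (illustrative only, not the actual Relay prelude) where a tensor array is a cons list and write is a purely functional update, which is how a Relay function over a dynamic_tensor list would have to behave:

```python
# Cons lists modeled as ('nil',) / ('cons', head, tail), mirroring a Relay ADT list.
nil = ('nil',)

def cons(hd, tl):
    return ('cons', hd, tl)

def update(lst, index, value):
    """Functional write: returns a new list with element `index` replaced."""
    assert lst[0] == 'cons', "index out of range"
    _, hd, tl = lst
    if index == 0:
        return cons(value, tl)
    return cons(hd, update(tl, index - 1, value))

def size(lst):
    """Runtime size query, computed by walking the list."""
    return 0 if lst[0] == 'nil' else 1 + size(lst[2])

ta = cons('t0', cons('t1', nil))   # a two-element tensor array
ta2 = update(ta, 1, 'new')         # ('cons', 't0', ('cons', 'new', ('nil',)))
```

Note that `update` returns a fresh list rather than mutating in place, matching Relay's pure functional semantics; this is exactly the capability relay.Tuple lacks.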

We define a data type dynamic_tensor that supports tensors up to rank 6 (we can grow the rank of course, but that might not be necessary). A tensor array is then just a dynamic_tensor list.

Then we can implement the TensorArray ops as Relay functions. Most of them are trivial to implement. Some are tricky (but I think doable with expand_dims):

  • TensorArrayConcat
  • TensorArrayStack
  • TensorArrayUnstack
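As a sanity check of the expand_dims idea, here is a sketch with nested Python lists standing in for tensors (the function names are illustrative, not Relay ops): stack reduces to expand_dims on each element followed by concatenation along axis 0:

```python
def expand_dims(t):
    # Add a leading axis of length 1: shape (d0, ...) -> (1, d0, ...)
    return [t]

def concat(tensors):
    # Concatenate along axis 0: join the outer lists together
    out = []
    for t in tensors:
        out.extend(t)
    return out

def stack(tensor_array):
    # TensorArrayStack = concat of expand_dims'ed elements
    return concat([expand_dims(t) for t in tensor_array])

print(stack([[2.0, 3.0], [4.0, 5.0]]))  # [[2.0, 3.0], [4.0, 5.0]]
```

TensorArrayConcat is just the `concat` step without the expand_dims, and unstack is the reverse slicing; the tricky part in Relay is doing this over a list whose length is only known at runtime.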

We are close to supporting this in the IR, and then we just need to perform code generation.

Look for it in master in the coming weeks.

  • Jared

Sorry to bother you, but I wonder if there has been any notable progress on TensorArray. Thanks a lot. @jroesch @wweic

It looks like there is already a PR for Any. So what can I do to support TensorArray? @jroesch @wweic

@ydy Any is not complete yet. Right now we are able to represent models with dynamic shapes in Relay. We still need to finish the codegen and runtime changes in order to execute such models.

Is there any timeline for this?

I have sent a draft PR (https://github.com/dmlc/tvm/pull/3798) with some tensor array ops and will finish the remaining ones in the next couple of days. The Any codegen/runtime PR (https://github.com/dmlc/tvm/pull/3606) is also making progress.


Thank you very much for your work on this. I read the PR, and I think TensorArrayScatterV3 might also be a common operator.