Tensor arrays in TIR

Is there any effort to support tensor arrays in TIR? That would be a way to represent operations like stack or unstack from TF.

Let’s say we want to write an op that concatenates a variable number of tensors, but without actually copying any data. Instead, it would create some sort of representation of a sequence of tensors (buffers). Is this currently possible in TIR?
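
To illustrate what I mean, here is a rough analogy in NumPy (not TIR, which has no such construct today): the "tensor array" would just be a sequence of views into one buffer, and only an actual stack forces a copy.

import numpy as np

a = np.arange(10000, dtype='int32').reshape(100, 100)

# "unstack" without copying: every element is a view into a's buffer
views = [a[i] for i in range(a.shape[0])]
assert all(v.base is a for v in views)  # no data was copied

# "stack" in NumPy has to materialize a fresh buffer, i.e. it copies
s = np.stack(views)
assert s.base is None and np.array_equal(s, a)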

Would there be any interest in having this functionality?

It is necessary for many use cases (like AOT), and I believe @tqchen has some ideas on this too.


TensorArray is supported in Relay and TF TensorArray ops can be converted now. Did you mean something more than these?
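
For reference, a minimal sketch of that conversion path, assuming TF 1.x and a TVM build whose TF frontend handles the TensorArrayV3 ops; the exact frontend arguments may differ across TVM versions:

import tensorflow as tf
import tvm.relay

g = tf.Graph()
with g.as_default():
    # TF 1.x TensorArray; this emits TensorArrayV3/WriteV3/GatherV3 ops
    ta = tf.TensorArray(dtype=tf.float32, size=2)
    ta = ta.write(0, tf.constant([1.0, 2.0]))
    ta = ta.write(1, tf.constant([3.0, 4.0]))
    out = ta.stack(name='out')

# Convert the TensorArray graph to Relay
mod, params = tvm.relay.frontend.from_tensorflow(
    g.as_graph_def(add_shapes=True), outputs=['out'])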

What Relay expands these ops to is a memory copy. I want to avoid that; I want a copy-less representation in TIR.

The example below should really be a no-op, but it ends up copying everything.

import tensorflow as tf
import tvm
import tvm.relay

# Build a TF graph: unstack a 100x100 tensor into 100 rows, then stack
# them back. Semantically this round-trip is the identity.
g = tf.Graph()
with g.as_default():
  u = tf.unstack(tf.placeholder(dtype='int32', shape=(100, 100)))
  s = tf.stack(u)

# Import into Relay and compile; the unstack/stack pair is lowered to
# explicit memory copies instead of a no-op.
mod, params = tvm.relay.frontend.from_tensorflow(g.as_graph_def(add_shapes=True))
graph, lib, params = tvm.relay.build_module.build(mod, target='llvm')
lib.save('pack.ll')  # dump the generated LLVM IR for inspection
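
A quick, rough way to see the copies (depending on LLVM's optimization level they appear either as llvm.memcpy intrinsics or as plain load/store loops in pack.ll):

# Count memcpy intrinsics in the dumped IR; absence of memcpy does not
# by itself prove the round-trip was elided, since the copies can also
# be emitted as explicit load/store loops.
with open('pack.ll') as f:
    ir = f.read()
print('llvm.memcpy occurrences:', ir.count('llvm.memcpy'))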