Can TVM do reshape without any extra effort?

I found that “reshape” is implemented as a transform compute (a real tensor computation) in TVM. This brings extra overhead, because reshape is not a real “transform” operation: it should only change the shape attribute, not touch the data.
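
For comparison, NumPy treats reshape on a contiguous array as exactly this kind of metadata-only change: the result is a view over the same buffer, with no copy involved.

```python
import numpy as np

a = np.arange(6)
b = a.reshape(2, 3)   # no copy: b is a view over a's buffer
b[0, 0] = 42
print(a[0])           # 42 -- they share storage
print(b.base is a)    # True -- only the shape metadata changed
```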

AFAIK, although reshape is a tvm::compute, it’s also injective, which means it’s cheap to fuse with adjacent operators. If you’re not doing fusion, you could probably write a pass (if one doesn’t already exist) that turns reshapes into _nops. You do still need the operators somewhere, though, to hold the DLTensors, even if they point to the same storage.
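
Here’s a minimal sketch (not TVM’s actual topi implementation) of why reshape shows up as a compute: each output element is a straight copy of one input element, so the op is injective. The `reshape` helper and index names are illustrative, and the schedule call assumes a TVM version that still ships `te.create_schedule`.

```python
import tvm
from tvm import te

def reshape(src, dst_shape):
    src_shape = list(src.shape)

    def _copy(*dst_idx):
        # Row-major linear offset of the destination index...
        flat = dst_idx[0]
        for i, d in zip(dst_idx[1:], dst_shape[1:]):
            flat = flat * d + i
        # ...unflattened back into source coordinates.
        src_idx = []
        for d in reversed(src_shape):
            src_idx.append(tvm.tir.indexmod(flat, d))
            flat = tvm.tir.indexdiv(flat, d)
        return src(*reversed(src_idx))

    return te.compute(dst_shape, _copy, name="reshape")

A = te.placeholder((2, 6), name="A")
B = reshape(A, (3, 4))
s = te.create_schedule(B.op)
# The lowered IR is a plain element-by-element copy: injective, hence fusable.
print(tvm.lower(s, [A, B], simple_mode=True))
```

When the node does become a _nop, the memory planner can then point the output DLTensor at the input’s storage, so the copy disappears entirely at runtime.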