Zero copy DLManagedTensor for "set_input"

Any thoughts on the best way to do this?

I gave it a shot here: https://gist.github.com/bwasti/a9d3b82d06f0de7df34a04e0c1ae4764
but didn't make much headway. I hit a hard-to-decipher error in the Bind logic:

```
RuntimeError: [23:10:17] /home/bwasti/local/pytorch_tvm/tvm/src/runtime/module_util.cc:54: Check failed: ret == 0 (-1 vs. 0) : Assert fail: (int32(arg0.shape[0]) == 33554432), Argument arg0.shape[0] has an unsatisfied constraint
```
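
For reference, the zero-copy path I'm attempting looks roughly like this (a sketch of the approach, not the exact gist code; it assumes at::toDLPack from ATen and the graph runtime's set_input packed function):

```cpp
#include <string>
#include <ATen/ATen.h>
#include <ATen/DLConvertor.h>     // at::toDLPack
#include <tvm/runtime/module.h>   // tvm::runtime::Module, PackedFunc
#include <tvm/runtime/ndarray.h>  // tvm::runtime::NDArray

// Hand a PyTorch tensor to a TVM graph runtime module without copying the
// underlying buffer.
void SetInputFromTorch(tvm::runtime::Module graph_mod,
                       const std::string& name,
                       const at::Tensor& src) {
  // Export the ATen tensor as a DLManagedTensor; no data copy happens here.
  DLManagedTensor* dlm = at::toDLPack(src);
  // Wrap it as a TVM NDArray. FromDLPack takes ownership of `dlm`.
  tvm::runtime::NDArray arr = tvm::runtime::NDArray::FromDLPack(dlm);
  // Bind it as a graph input; this is where I'd like the zero-copy handoff.
  graph_mod.GetFunction("set_input")(name, arr);
}
```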

This is one of the checks baked into the code TVM generates through its LLVM backend. It looks like you are most likely corrupting the DLTensor pointer in some way.

The int32(arg0.shape[0]) == 33554432 part is complaining that the first dimension of the first argument's shape is not what the compiled operator expects.

What are the dimensions of the tensor before you pass it in, and what is the expected shape of the operator?

Those are the expected dimensions: a flat vector of size 2**25. How can I print out the value stored at int32(arg0.shape[0])?
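
As a sanity check on my side, this is roughly how I'm dumping what the DLPack wrapper reports before it ever reaches TVM (assuming the standard dlpack.h struct layout):

```cpp
#include <cstdio>
#include <dlpack/dlpack.h>

// Print the shape the wrapper carries; `dlm` is the DLManagedTensor that
// gets handed to NDArray::FromDLPack.
void DumpDLTensorShape(const DLManagedTensor* dlm) {
  const DLTensor& t = dlm->dl_tensor;
  std::printf("ndim = %d\n", t.ndim);
  for (int i = 0; i < t.ndim; ++i) {
    // arg0.shape[0] in the assert corresponds to t.shape[0] here.
    std::printf("shape[%d] = %lld\n", i, static_cast<long long>(t.shape[i]));
  }
}
```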

As a debug point, the deleters are not being called before the exception is raised, so the corruption might be coming from a misuse of NDArray::FromDLPack.
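
For reference, the intended lifetime with FromDLPack is roughly the following (a sketch, assuming the ATen toDLPack entry point); if the DLManagedTensor is freed or reused after the handoff, the pointer the compiled code sees can end up pointing at garbage, which would match the bad shape value above:

```cpp
#include <ATen/ATen.h>
#include <ATen/DLConvertor.h>
#include <tvm/runtime/ndarray.h>

// Ownership sketch: FromDLPack transfers ownership of the DLManagedTensor to
// the NDArray, and the deleter should fire exactly once, when the last
// NDArray reference is destroyed.
void LifetimeSketch(const at::Tensor& src) {
  DLManagedTensor* dlm = at::toDLPack(src);  // keeps src's storage alive
  {
    tvm::runtime::NDArray arr = tvm::runtime::NDArray::FromDLPack(dlm);
    // Use `arr` here. After this point `dlm` belongs to `arr`; freeing it
    // or reusing the raw pointer elsewhere can corrupt what TVM sees.
  }
  // `arr` goes out of scope -> dlm->deleter(dlm) runs -> ATen storage released.
}
```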