I am interested in using PyTorch’s PixelShuffle layer in my neural network design. I’ve been trying to run this layer (on its own) through the TVM compiler stack.
The model in PyTorch is defined as
model = nn.Sequential(nn.PixelShuffle(2),)
It is exported via ONNX and imported into NNVM using the ONNX frontend, then compiled on an x86_64-linux-gnu host machine targeting the CPU of a Jetson TX2 (target="llvm -target=aarch64-linux-gnu"). I am using the latest TVM source code and llvm-4.0.
If I use an input data shape that has a batch size of one, for example, (1, 32, 14, 14), I get the following compile error:
nnvm._base.NNVMError: Error in operator strided_slice1: [23:02:18] /home/dwofk/tvm/nnvm/src/top/tensor/transform.cc:919: Check failed: stride_vec[i] < 0 ? (end < begin) : (begin < end) : Input [Begin=1, End=2] is invalid for axis=0
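For reference, the behavior I expect from PixelShuffle can be sketched in NumPy, independent of the torch/TVM code path (the function name pixel_shuffle below is my own reference reimplementation, assuming only NumPy):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Reference depth-to-space (PixelShuffle) with upscale factor r.

    Rearranges (N, C*r*r, H, W) -> (N, C, H*r, W*r) via reshape/transpose,
    the standard decomposition of this layer.
    """
    n, c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(n, c, r, r, h, w)
    x = x.transpose(0, 1, 4, 2, 5, 3)   # -> (N, C, H, r, W, r)
    return x.reshape(n, c, h * r, w * r)

x = np.random.rand(1, 32, 14, 14).astype("float32")
y = pixel_shuffle(x, 2)
print(y.shape)  # expected: (1, 8, 28, 28)
```

So for the batch-one input above, the compiled module should produce an output of shape (1, 8, 28, 28); the batch dimension is untouched, which makes the axis=0 check failure in the error surprising.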
If I increase the batch size to 16 and feed in an input data shape of (16, 32, 14, 14), I get this compile error:
nnvm._base.NNVMError: Error in operator concatenate0: [23:05:12] /home/dwofk/tvm/nnvm/src/top/tensor/transform.cc:120: Operator concatenate(axis=0, name=concatenate0) expects data1's shape to be [0,32,14,14], but got [1].
Are there any specific modifications that need to be made to the TVM source code to resolve these compiler errors for PixelShuffle?
I can provide the exported ONNX graph if that would be helpful.