Sorry for the late reply.
I made a script that runs conv2d_transpose in both PyTorch and Relay and compares the results:
import numpy as np
strides = (2, 1)
padding = (1, 1)
dilation = (1, 1)
groups = 1
output_padding = (1, 0)
kernel_size = (2, 2)
in_channels = 1
grad_shp = (1, 1, 2, 3)
w_shp = (1, 1, 2, 2)
expected_out_shp = (1, 1, 3, 2)
grad_val = np.random.rand(*grad_shp)
w_val = np.random.rand(*w_shp)
## TORCH ##
import torch
grad_t = torch.tensor(grad_val)
w_t = torch.tensor(w_val)
torch_out = torch.nn.functional.conv_transpose2d(
    grad_t, w_t,
    stride=strides,
    padding=padding,
    dilation=dilation,
    groups=groups,
    output_padding=output_padding).numpy()
print("Torch result:\n", torch_out)
## RELAY ##
import tvm
from tvm import relay
grad_c = relay.const(grad_val)
w_c = relay.const(w_val)
out_node = relay.nn.conv2d_transpose(
    grad_c, w_c,
    strides=strides,
    padding=padding,
    dilation=dilation,
    groups=groups,
    output_padding=output_padding,
    kernel_size=kernel_size,
    channels=in_channels)
ctx = tvm.context('cpu', 0)
mod = relay.Module({})
intrp = relay.create_executor(mod=mod, ctx=ctx, target='llvm')  # avoid shadowing the builtin `exec`
relay_out = intrp.evaluate(out_node)
print("Relay result:\n", relay_out)
Running this script produces the following output:
Torch result:
 [[[[0.10655919 0.26356774]
   [0.3575391  0.16530708]
   [0.98032029 0.42839234]]]]
... Some tvm debug output ...
Relay result:
 [[[[0.10655919 0.26356772]
   [0.35753912 0.16530707]
   [0.         0.        ]]]]
I would expect the last row of the Relay matrix not to be all zeros but to match the Torch output. This is the padding I was referring to.
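For reference, the expected output shape above can be checked against the standard transposed-convolution size formula documented for torch.nn.ConvTranspose2d; a minimal sketch with the same parameters as the script (the helper name out_dim is mine, not part of either API):

```python
# Sketch: verify expected_out_shp = (1, 1, 3, 2) against the
# transposed-convolution size formula used by torch.nn.ConvTranspose2d:
#   out = (in - 1) * stride - 2 * padding
#         + dilation * (kernel - 1) + output_padding + 1
grad_shp = (1, 1, 2, 3)      # (N, C, H, W) of the input
strides = (2, 1)
padding = (1, 1)
dilation = (1, 1)
output_padding = (1, 0)
kernel_size = (2, 2)

def out_dim(in_dim, s, p, d, k, op):
    # per-dimension output size of a transposed convolution
    return (in_dim - 1) * s - 2 * p + d * (k - 1) + op + 1

h = out_dim(grad_shp[2], strides[0], padding[0], dilation[0],
            kernel_size[0], output_padding[0])
w = out_dim(grad_shp[3], strides[1], padding[1], dilation[1],
            kernel_size[1], output_padding[1])
print((h, w))  # (3, 2): output_padding[0] = 1 accounts for the third row
```

The third output row only exists because of output_padding[0] = 1, which is why filling it with zeros instead of computed values is what I am questioning.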