Why is padding calculated differently in the TOPI tests conv2d_nhwc_python.py and depthwise_conv2d_python.py?

The padding calculation for SAME in conv2d_nhwc_python.py is:

else: # 'SAME'
    pad_h = kernel_h - 1
    pad_w = kernel_w - 1
pad_top = int(np.ceil(float(pad_h) / 2))
pad_bottom = pad_h - pad_top
pad_left = int(np.ceil(float(pad_w) / 2))
pad_right = pad_w - pad_left
# compute the output shape
out_channel = num_filter
out_height = (in_height - kernel_h + pad_h) // stride_h + 1
out_width = (in_width - kernel_w + pad_w) // stride_w + 1
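Plugging in a concrete case makes the behavior of this scheme clear: the padding depends only on the kernel size, never on the stride. The following is a minimal standalone sketch (not part of the test file) using input 3×3, kernel 5×5, stride 3:

```python
import numpy as np

# Standalone sketch of the SAME scheme in conv2d_nhwc_python.py,
# illustrated for input 3x3, kernel 5x5, stride 3.
in_height, kernel_h, stride_h = 3, 5, 3

pad_h = kernel_h - 1                      # 4: depends only on the kernel size
pad_top = int(np.ceil(float(pad_h) / 2))  # 2
pad_bottom = pad_h - pad_top              # 2
padded_height = in_height + pad_h         # 7

out_height = (in_height - kernel_h + pad_h) // stride_h + 1  # (3-5+4)//3 + 1 = 1

print(pad_top, pad_bottom, padded_height, out_height)
```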

And the padding calculation for SAME in depthwise_conv2d_python.py is:

if padding == 'SAME':
    out_channel = in_channel * channel_multiplier
    # TensorFlow-style SAME: fix the output size first, from input size and stride
    out_height = int(np.ceil(float(in_height) / float(stride_h)))
    out_width = int(np.ceil(float(in_width) / float(stride_w)))
    output_np = np.zeros((batch, out_height, out_width, out_channel))
    # then add just enough padding to realize that output size
    pad_along_height = max((out_height - 1) * stride_h + filter_height - in_height, 0)
    pad_along_width = max((out_width - 1) * stride_w + filter_width - in_width, 0)
    pad_top_tvm = int(np.ceil(float(pad_along_height) / 2))
    pad_left_tvm = int(np.ceil(float(pad_along_width) / 2))
    pad_top_scipy = int(np.ceil(float(filter_height - 1) / 2))
    pad_left_scipy = int(np.ceil(float(filter_width - 1) / 2))
    index_h = pad_top_scipy - pad_top_tvm
    index_w = pad_left_scipy - pad_left_tvm
    for i in range(batch):
        for j in range(out_channel):
            output_np[i, :, :, j] = signal.convolve2d(
                input_np[i, :, :, j // channel_multiplier],
                np.rot90(filter_np[:, :, j // channel_multiplier, j % channel_multiplier], 2),
                mode='same')[index_h:in_height:stride_h, index_w:in_width:stride_w]
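For comparison, the TensorFlow-style SAME rule used above can be checked in isolation with the same example numbers. This is a sketch, not the test file itself:

```python
import numpy as np

# Standalone sketch of the TensorFlow-style SAME rule used in
# depthwise_conv2d_python.py, for input 3x3, filter 5x5, stride 3.
in_height, filter_height, stride_h = 3, 5, 3

# The output size is fixed first, from the input size and stride alone
out_height = int(np.ceil(float(in_height) / float(stride_h)))  # 1

# Then just enough padding is added to realize that output size
pad_along_height = max((out_height - 1) * stride_h + filter_height - in_height, 0)  # 2
pad_top = int(np.ceil(float(pad_along_height) / 2))  # 1
pad_bottom = pad_along_height - pad_top              # 1
padded_height = in_height + pad_along_height         # 5, not 7

print(out_height, pad_along_height, pad_top, padded_height)
```

Because the stride enters the calculation of `pad_along_height`, this rule can produce less padding than the kernel-only rule whenever the stride is larger than 1.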

Now suppose the input height and width are 3, the filter/kernel height and width are 5, and the stride is 3. For conv2d_nhwc_python.py, pad_h = 4 and pad_top = 2, so the padded tensor has height and width 7. But for depthwise_conv2d_python.py, pad_along_height = max((1 - 1) * 3 + 5 - 3, 0) = 2 and pad_top_tvm = 1, which gives me the intuition that the padded tensor is interpreted to be of height and width 5.

Since the only difference between conv2d and depthwise conv2d is how the computation is carried out across channels, I don't understand why the padding is computed differently in these two cases. Can anyone explain?