[vta] Cannot pass tests for conv2d_transpose

The conv2d_transpose operator has been added before by

However, if we change the shape or the VTA LOG_BLOCK setting, the program cannot find its config and fails with the error below:

    AttributeError: IfThenElse object has no attribute args

    During handling of the above exception, another exception occurred:

    AttributeError: <class 'tvm.stmt.IfThenElse'> has no attribute args

    During handling of the above exception, another exception occurred:

    AttributeError: IfThenElse object has no attribute args

    During handling of the above exception, another exception occurred:

    AttributeError: <class 'tvm.stmt.IfThenElse'> has no attribute args

    Error during compile function
The relevant code in ir_pass.py:
    def _do_fold(op):
        if _match_pragma(op, "conv2d_transpose_gemm"):
            is_init = ".init" in str(op)
            tvm.ir_pass.PostOrderVisit(op, _find_basics)

            if is_init:
                # create inner most block
                irb = tvm.ir_builder.create()
                dev = env.dev
                irb.scope_attr(dev.vta_axis, "coproc_scope", dev.get_task_qid(dev.QID_COMPUTE))
                irb.scope_attr(dev.vta_axis, "coproc_uop_scope", dev.vta_push_uop)
                irb.emit(tvm.call_extern("int32", "VTAUopPush",
                                         0, 1,
                                         dout.access_ptr("rw", "int32"),
                                         0, 0,
                                         0, 0, 0))
                inner = irb.get()
                # The next two lines raise the AttributeError above when
                # op.body.body is an IfThenElse instead of the expected node.
                args = op.body.body.args
                res_tensor = op.body.body.func.output(0)
                tpl = (args[0], 1, args[1], 1, args[2], 1, args[3], 1, 0, 1, 0, env.BLOCK_OUT)
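The crash comes from the pass assuming that op.body.body is the Provide node of the GEMM body. With an untuned shape or VTA config, the lowered schedule apparently wraps that body in boundary checks, so op.body.body is a tvm.stmt.IfThenElse, which has no args field. A minimal sketch of a guard, not a tested fix (the helper name _unwrap_if is hypothetical, not from ir_pass.py):

    import tvm

    def _unwrap_if(stmt):
        # Peel IfThenElse nodes (boundary checks inserted by an untuned
        # schedule) until reaching the node that carries .args/.func.
        # Hypothetical helper; whether the guard conditions can simply
        # be skipped here is part of what the real fix has to decide.
        while isinstance(stmt, tvm.stmt.IfThenElse):
            stmt = stmt.then_case
        return stmt

    body = _unwrap_if(op.body.body)
    args = body.args
    res_tensor = body.func.output(0)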

This should be fixed to give better support for the conv2d_transpose operator.
Thank you.

Thanks for reporting the issue @xwang186

Can you share your VTA config for reference?

Original config:

    {
        "TARGET" : "sim",
        "HW_VER" : "0.0.1",
        "LOG_INP_WIDTH" : 3,
        "LOG_WGT_WIDTH" : 3,
        "LOG_ACC_WIDTH" : 5,
        "LOG_BATCH" : 0,
        "LOG_BLOCK" : 4,
        "LOG_UOP_BUFF_SIZE" : 15,
        "LOG_INP_BUFF_SIZE" : 15,
        "LOG_WGT_BUFF_SIZE" : 18,
        "LOG_ACC_BUFF_SIZE" : 17
    }
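As a sanity check (a minimal sketch; the derived values follow from the config itself, since 2^0 = 1 and 2^4 = 16), you can confirm the GEMM tile shape this config resolves to:

    import vta

    env = vta.get_env()  # reads the active VTA config
    # LOG_BATCH = 0 and LOG_BLOCK = 4 give a 1 x 16 x 16 GEMM tile.
    print(env.BATCH, env.BLOCK_IN, env.BLOCK_OUT)  # expected: 1 16 16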

To reproduce, you can keep this config and just comment out the autotune line in vta/tests/python/integration/test_benchmark_topi_conv2d_transpose.py, like this:

def test_conv2d_transpose(device="vta"):
    def _run(env, remote):
        if device == "vta":
            target = env.target
            if env.TARGET not in ["sim", "tsim"]:
                assert tvm.module.enabled("rpc")
                program_fpga(remote, bitstream=None)
                reconfig_runtime(remote)
        elif device == "arm_cpu":
            target = env.target_vta_cpu
        #with autotvm.tophub.context(target): # load pre-tuned schedule parameters
        for _, wl in dcgan_wklds:
            print(wl)
            run_conv2d_transpose(env, remote, wl, target)
    vta.testing.run(_run)
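With the tophub context commented out, autotvm falls back to a default schedule and logs a warning along these lines (the exact target and workload fields depend on your setup):

    WARNING:autotvm:Cannot find config for target=..., workload=(...). A fallback
    configuration is used, which may bring great performance regression.

That fallback warning is the "Can not find configure" message mentioned next.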

When you see the "Can not find configure"… warning, I think you will see the same error message. Thank you very much!

I think the issue comes from the fact that the schedule is illegal, but after tuning it can be made legal. This is an issue with the way TOPHUB defaults to a given schedule. If you have a chance to run schedule autotuning, it should find a valid schedule…
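For example (a minimal sketch; conv2d_transpose.log stands for a hypothetical log file produced by an autotvm tuning run over these workloads), applying tuned records instead of the tophub/fallback default makes the compiler pick a measured, valid schedule:

    from tvm import autotvm

    # conv2d_transpose.log is a hypothetical tuning log; any records
    # measured for these workloads override the fallback schedule.
    with autotvm.apply_history_best("conv2d_transpose.log"):
        for _, wl in dcgan_wklds:
            run_conv2d_transpose(env, remote, wl, target)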