How to get the schedule in vta.build in the tutorial "deploy_resnet_on_vta.py"?

In the demo “vta_get_started.py”, the schedule can be obtained via tvm.lower as follows:

Let’s take a look at the generated schedule
s = tvm.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))

// attr [A_buf] storage_scope = "global"
allocate A_buf[int32 * 1024]
// attr [B_buf] storage_scope = "global"
allocate B_buf[int32 * 1024]
produce A_buf {
  for (i1, 0, 64) {
    for (i3, 0, 16) {
      A_buf[((i1*16) + i3)] = A[((i1*16) + i3)]
    }
  }
}
produce B_buf {
  for (i1, 0, 64) {
    for (i3, 0, 16) {
      B_buf[((i1*16) + i3)] = B[((i1*16) + i3)]
    }
  }
}
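For reference, here is a minimal sketch of the declarations that lead up to that printout, following the vector-add example in vta_get_started.py (the shapes and variable names are my own reconstruction, so treat them as approximate):

import tvm
import vta

env = vta.get_env()

# Vector-add workload laid out for VTA: (outer, inner, BATCH, BLOCK_OUT)
o, m = 1, 64
shape = (o, m, env.BATCH, env.BLOCK_OUT)
A = tvm.placeholder(shape, name="A", dtype=env.acc_dtype)
B = tvm.placeholder(shape, name="B", dtype=env.acc_dtype)

# Staging copies that will later be mapped to on-chip buffers
A_buf = tvm.compute(shape, lambda *i: A(*i), "A_buf")
B_buf = tvm.compute(shape, lambda *i: B(*i), "B_buf")
C_buf = tvm.compute(shape, lambda *i: A_buf(*i) + B_buf(*i), "C_buf")
C = tvm.compute(shape, lambda *i: C_buf(*i).astype(env.inp_dtype), "C")

# Default schedule: all buffers still live in "global" scope, as printed above
s = tvm.create_schedule(C.op)
print(tvm.lower(s, [A, B, C], simple_mode=True))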

But in this tutorial “deploy_resnet_on_vta.py”, how can I get the schedule via vta.build?

# Compile Relay program with AlterOpLayout disabled
with relay.build_config(opt_level=3, disabled_pass={"AlterOpLayout"}):
    if target.device_name != "vta":
        graph, lib, params = relay.build(
            relay_prog, target=target,
            params=params, target_host=env.target_host)
    else:
        with vta.build_config():
            graph, lib, params = relay.build(
                relay_prog, target=target,
                params=params, target_host=env.target_host)

There are multiple operators in ResNet-18 (10 different conv2d shapes), each with its own schedule. I recommend starting here: https://github.com/dmlc/tvm/blob/master/vta/tests/python/integration/test_benchmark_topi_conv2d.py to get a layer-by-layer schedule breakdown.

The schedules are cached in TOPHUB, which can be found here: https://github.com/uwsampl/tvm-distro/tree/master/tophub
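For what it's worth, those cached schedules get picked up at compile time by wrapping the build in a TOPHUB context. A rough sketch (assuming the autotvm.tophub.context wrapper used by the VTA tutorials, and showing only the VTA branch of the build):

from tvm import autotvm

# Load the pre-tuned VTA schedules from TOPHUB, then build as before
with autotvm.tophub.context(target):
    with relay.build_config(opt_level=3, disabled_pass={"AlterOpLayout"}):
        with vta.build_config():
            graph, lib, params = relay.build(
                relay_prog, target=target,
                params=params, target_host=env.target_host)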

Thanks. I will try it!


Thank you for the useful information @thierry, I have the same problem.
After some exploration I found that, although tvm.build() and relay.build() have very similar functionality, Relay doesn't seem to have a debug method equivalent to tvm.lower() or vta.lower() yet.

So, I've tried $TVMROOT/vta/python/vta/top/vta_conv2d.py as a starting point.
The TOPI schedule function for conv2d has a very similar code structure to the matmul example (or the code thierry showed above), and it is called on each compilation of a conv2d layer.

@autotvm.register_topi_schedule(topi.generic.schedule_conv2d_nchw, 'vta', 'direct')
def _schedule_conv2d(cfg, outs):
    output = outs[0]
    _traverse(output.op)        # walks the op graph to find the conv2d stage
    s = tvm.create_schedule(output.op)
    ...
    data, kernel = conv2d_stage.op.input_tensors
    ...
    # cdata / ckernel are defined in the elided code above

    ######
    import vta
    vta.lower(s, [cdata, ckernel, output], simple_mode=True)
    ######

    return s

So I thought the marked lines (######) were the right place to add vta.lower() to debug the schedule, but I got the following error:

TVMError: Check failed: it != buf_map_.end(): Cannot find allocated buffer for placeholder(placeholder, 0x113bdf50)

@thierry, could you give me some guidance?

Before calling vta.lower you have to massage the schedule to map down to VTA hardware intrinsics. If you don’t massage the code right, the IR passes will most likely error out.
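To see what that massaging looks like on the simpler vector-add example from vta_get_started.py, here is a sketch (continuing the declarations sketched earlier in the thread; a real conv2d schedule additionally needs splitting and tensorizing onto env.gemm):

# Move the staging buffers into VTA's on-chip accumulator scope
s[A_buf].set_scope(env.acc_scope)
s[B_buf].set_scope(env.acc_scope)
s[C_buf].set_scope(env.acc_scope)

# Tag the copy-in / copy-out loops as DMA transfers
s[A_buf].pragma(s[A_buf].op.axis[0], env.dma_copy)
s[B_buf].pragma(s[B_buf].op.axis[0], env.dma_copy)
s[C].pragma(s[C].op.axis[0], env.dma_copy)

# Map the element-wise add onto VTA's vector ALU
s[C_buf].pragma(s[C_buf].op.axis[0], env.alu)

# Only now can vta.lower map the IR down to VTA intrinsics
print(vta.lower(s, [A, B, C], simple_mode=True))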

While it would be nice to have a more automated scheduling template that works out of the box, or at least better error reporting, this is what we have to deal with for now.

In order to understand how to get there, you can try the following tutorial: https://docs.tvm.ai/vta/tutorials/optimize/convolution_opt.html#sphx-glr-vta-tutorials-optimize-convolution-opt-py

Hope this helps

Thierry