"Cannot find config for target" after changing vta_config.json

Hi,

I am trying to run test_benchmark_topi_conv2d.py with a larger VTA configuration, but got the following message:

Cannot find config for target=ext_dev -device=vta -keys=cpu -model=sim_2x32_i8w8a32_15_15_18_17

Here’s my vta_config.json:

{
  "TARGET" : "sim",
  "HW_VER" : "0.0.1",
  "LOG_INP_WIDTH" : 3,
  "LOG_WGT_WIDTH" : 3,
  "LOG_ACC_WIDTH" : 5,
  "LOG_BATCH" : 1,
  "LOG_BLOCK" : 5,
  "LOG_UOP_BUFF_SIZE" : 15,
  "LOG_INP_BUFF_SIZE" : 15,
  "LOG_WGT_BUFF_SIZE" : 18,
  "LOG_ACC_BUFF_SIZE" : 17
}

Can anyone tell me how to fix this issue?

Thanks very much!

Kevin

Hi @kevinyuan, this seems to indicate that we’re missing the AutoTVM schedule for the conv2d operator on that specific hardware configuration (2x32_i8w8a32_15_15_18_17). This is common when running TVM on a new operator shape or a new device. The warning means that there is no optimized schedule to run this efficiently, so the benchmark might run slowly. In the case of VTA, it can sometimes fall back to a default schedule that is invalid and causes a crash. Have you encountered such cases?
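For context, the model string in the warning is derived directly from the parameters in your vta_config.json. Here’s a rough sketch of the mapping; the format string below is inferred from the warning text rather than taken from the VTA source:

import json

# Reconstruct the model string from vta_config.json (format inferred
# from the warning, not from the VTA source).
with open("vta_config.json") as f:
    cfg = json.load(f)

model = "{}_{}x{}_i{}w{}a{}_{}_{}_{}_{}".format(
    cfg["TARGET"],
    1 << cfg["LOG_BATCH"],      # 2^1 = 2 (batch)
    1 << cfg["LOG_BLOCK"],      # 2^5 = 32 (block)
    1 << cfg["LOG_INP_WIDTH"],  # 2^3 = 8-bit inputs
    1 << cfg["LOG_WGT_WIDTH"],  # 2^3 = 8-bit weights
    1 << cfg["LOG_ACC_WIDTH"],  # 2^5 = 32-bit accumulator
    cfg["LOG_UOP_BUFF_SIZE"],
    cfg["LOG_INP_BUFF_SIZE"],
    cfg["LOG_WGT_BUFF_SIZE"],
    cfg["LOG_ACC_BUFF_SIZE"],
)
print(model)  # -> sim_2x32_i8w8a32_15_15_18_17

So the warning simply means no schedule has been tuned for this particular configuration yet.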

As a solution, I recommend running the autotuner on VTA to produce an optimized schedule and work around the issue. To get it to run faster, I’d recommend going with the pynq target if you have an FPGA.
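For reference, here is a minimal sketch of such a tuning loop, loosely following tune_relay_vta.py. To keep it self-contained it uses a plain llvm target and a LocalRunner; the real tutorial instead sets up the VTA environment, an ext_dev target, and an RPC runner to the board or simulator. The shapes, trial count, and log file name are placeholders.

import tvm
from tvm import autotvm, relay

# Stand-in conv2d workload (placeholder shapes, int8 like VTA).
data = relay.var("data", shape=(1, 16, 14, 14), dtype="int8")
weight = relay.var("weight", shape=(16, 16, 3, 3), dtype="int8")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1), out_dtype="int32")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# Extract the tunable conv2d tasks from the module.
tasks = autotvm.task.extract_from_program(
    mod["main"], params={}, target="llvm",
    ops=(relay.op.get("nn.conv2d"),),
)

# Tune each task with XGBoost and append the results to a log file.
for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task, loss_type="rank")
    tuner.tune(
        n_trial=min(100, len(task.config_space)),
        measure_option=autotvm.measure_option(
            builder=autotvm.LocalBuilder(),
            runner=autotvm.LocalRunner(number=5, timeout=10),
        ),
        callbacks=[autotvm.callback.log_to_file("vta_conv2d.log")],
    )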

Hi @thierry ,

Fortunately the test case didn’t crash.

However, I have a few other questions:

  1. In order to tune the new VTA configuration, can I use tvm/vta/tutorials/autotvm/tune_relay_vta.py? Is any modification needed?

  2. Currently I don’t have a pynq FPGA, but I have a GTX 1080 GPU in a local PCIe slot. Can I use this device to accelerate the tuning, and if so, how? Or must I tune on an RPC device running exactly the same VTA bitstream?

  3. How are the tuning results stored on the file system, and in which file(s)? Is it a lib*.so, so that when I run test_benchmark_topi_conv2d.py again it will find the updated VTA model?

Thank you very much :slight_smile:

Best regards.

Kevin

  1. The autotune script should work on new VTA configurations.
  2. The GTX 1080 won’t really accelerate tuning, since the bottlenecks are compiling the TVM kernels (CPU-bound), measuring the kernels on the FPGA (FPGA-bound), and updating the performance model if using XGBoost (CPU-bound).
  3. The tuning results are stored in a local .log file that contains the best schedule settings for your operator and the flavor of hardware you’re using (see the sketch below). You can then submit a PR to update the TOPHUB entries at https://github.com/uwsampl/tvm-distro/blob/master/tophub/vta_v0.07.log with your new schedule settings.
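To make item 3 concrete, here is a hedged sketch of how the .log is consumed at build time; the module, shapes, and log file name are the same illustrative placeholders as in the tuning sketch above, not anything taken from the benchmark script itself.

import tvm
from tvm import autotvm, relay

# Stand-in conv2d module (placeholder shapes).
data = relay.var("data", shape=(1, 16, 14, 14), dtype="int8")
weight = relay.var("weight", shape=(16, 16, 3, 3), dtype="int8")
conv = relay.nn.conv2d(data, weight, kernel_size=(3, 3), padding=(1, 1), out_dtype="int32")
mod = tvm.IRModule.from_expr(relay.Function([data, weight], conv))

# apply_history_best reads the plain-text tuning log and, for each
# matching (target, workload) pair, overrides the fallback schedule,
# which is what silences the "Cannot find config" warning.
with autotvm.apply_history_best("vta_conv2d.log"):
    lib = relay.build(mod, target="llvm")

Note that tuning itself does not write a lib*.so; a shared library only appears if you later call export_library on the compiled module. The schedules are found through the log (or the updated TOPHUB entry), not through any compiled artifact.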

Thanks @thierry for answering my questions :slight_smile: