When auto-tuning “dense” operators by replacing “conv2d” in the tutorial “Auto-tuning a convolutional network for x86 CPU” (https://docs.tvm.ai/tutorials/autotvm/tune_relay_x86.html), the tuning fails with ValueError “Cannot find infer layout for task”.
Steps to reproduce the issue
- Prepare hardware and environment that meet the requirements for TVM auto-tuning on an x86 CPU
- Replace “conv2d” with “dense” in the code of the x86 auto-tuning tutorial (https://docs.tvm.ai/tutorials/autotvm/tune_relay_x86.html)
- Execute the code as modified in step 2, in the environment prepared in step 1
What’s the expected result?
- Auto-tuning of “dense” operators according to the x86 tutorial succeeds without errors
What’s the actual result?
- Auto-tuning of “dense” operators according to the x86 tutorial fails with ValueError “Cannot find infer layout for task”
- Replacing “conv2d” with “dense” in the tutorial for NVIDIA GPU produces no errors and the tuning succeeds
- The tutorial for x86 uses the graph tuner
- The tutorials for ARM CPU and NVIDIA GPU use autotvm.record.pick_best() instead of the graph tuner (ARM CPU: https://tvm.apache.org/docs/tutorials/autotvm/tune_relay_arm.html#begin-tuning; the NVIDIA GPU tutorial has an analogous section)
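Conceptually, pick_best() just filters a tuning log down to the single lowest-cost record per workload, with no layout inference involved, which is why it works for any operator. A minimal standalone model of that idea (a sketch of the concept, not the actual TVM implementation — real autotvm logs are JSON lines, modeled here as dicts):

```python
def pick_best(records):
    """Keep only the lowest-cost record per workload.

    `records` is a list of dicts with "workload" and "cost" keys,
    a simplified stand-in for autotvm's JSON log entries."""
    best = {}
    for rec in records:
        wkl = rec["workload"]
        if wkl not in best or rec["cost"] < best[wkl]["cost"]:
            best[wkl] = rec
    return list(best.values())

log = [
    {"workload": "dense", "cost": 2.0},
    {"workload": "dense", "cost": 1.5},   # better dense schedule
    {"workload": "conv2d", "cost": 3.0},
]
print(pick_best(log))  # one record per workload, dense cost 1.5
```

Because this selection is purely per-workload, it has no notion of data layout and therefore no operator-specific requirements, unlike the graph tuner.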
- Comments in the source code of graph tuner appear to imply that it supports dense operators (https://github.com/apache/incubator-tvm/blob/afc239aeb870d5c0a25a3e3e8e8c838f7122d9cf/python/tvm/autotvm/graph_tuner/base_graph_tuner.py#L161)
- However, get_infer_layout() throws the aforementioned ValueError for every task except conv2d variants
Any or all of the following:
- Amend the x86 auto-tuning tutorial to include the pick_best() option and comments about tuning operators other than conv2d
- Amend the source code comment in the graph tuner to make clear which operators are actually supported
- Add dense operator support to graph tuner
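For the last option, one possible direction is to make the layout-inference dispatch extensible, so operators other than conv2d can supply their own infer-layout function. The sketch below is hypothetical — the registry, decorator name, and function signatures are illustrative, not the actual TVM API:

```python
# Hypothetical extensible registry for infer-layout functions.
_INFER_LAYOUT_REGISTRY = {}

def register_infer_layout(task_prefix):
    """Decorator registering an infer-layout function for a task-name prefix."""
    def _register(fn):
        _INFER_LAYOUT_REGISTRY[task_prefix] = fn
        return fn
    return _register

def get_infer_layout(task_name):
    # Dispatch by prefix instead of hard-coding conv2d variants.
    for prefix, fn in _INFER_LAYOUT_REGISTRY.items():
        if task_name.startswith(prefix):
            return fn
    raise ValueError("Cannot find infer layout for task %s" % task_name)

@register_infer_layout("dense")
def infer_dense_layout(workload, cfg):
    # In this toy model, dense is treated as layout-agnostic.
    return None
```

With a hook like this, the graph tuner could look up an infer-layout function for dense tasks instead of raising, while unregistered operators would still fail with a clear error.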