KeyError: 'tile_oh' when tuning ResNet-50

Hi,

I'm trying to tune ResNet-50 on x86, following the tuning tutorials on the TVM site.
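
My setup is essentially the tutorial code; a minimal sketch of what I run (assuming the 0.6-era tutorial API, with "resnet50.log" as a placeholder log name):

import tvm
from tvm import autotvm, relay
from tvm.relay import testing

# Build ResNet-50 and extract conv2d tuning tasks, as in the x86 tutorial
target = tvm.target.create("llvm -mcpu=core-avx2")
mod, params = testing.resnet.get_workload(num_layers=50, batch_size=1)
tasks = autotvm.task.extract_from_program(mod["main"], target=target,
                                          params=params, ops=(relay.op.nn.conv2d,))

# Convert the conv2d tasks to the NCHWc template, as the tutorial does
for i in range(len(tasks)):
    tsk = autotvm.task.create("topi_x86_conv2d_NCHWc", args=tasks[i].args,
                              target=target, template_key="direct")
    tsk.workload = tasks[i].workload
    tasks[i] = tsk

measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10, repeat=1))

for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task, loss_type="rank")  # GATuner(task) fails the same way
    tuner.tune(n_trial=min(2000, len(task.config_space)),
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("resnet50.log")])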

The random tuner works fine, but with GATuner or XGBTuner I get the following error after tuning:

tvm.ffi.base.TVMError: Traceback (most recent call last):
[bt] (8) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr
(tvm::relay::CallNode const*)+0xb18) [0x7f51321$
[bt] (7) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::Expr const&)+0x566) [0x7f5132117d16]
[bt] (6) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode const*)+0xb18) [0x7f51321$
[bt] (5) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr(tvm::relay::Expr const&)+0x566) [0x7f5132117d16]
[bt] (4) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode const*)+0x6a9) [0x7f51321$
[bt] (3) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(+0xa5690c) [0x7f513214190c]
[bt] (2) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&)+0x469) [0x7f513214ec29]
[bt] (1) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function const&)+0x10fe) [0x7f513214e05e]
[bt] (0) /root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/libtvm.so(+0xbc300b) [0x7f51322ae00b]
File "tvm/_ffi/_cython/./function.pxi", line 56, in tvm._ffi._cy3.core.tvm_callback
File "/root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/relay/op/nn/_nn.py", line 543, in schedule_contrib_conv2d_NCHWc
return topi.generic.schedule_conv2d_NCHWc(outs)
File "</usr/local/lib/python3.5/dist-packages/decorator-4.4.0-py3.5.egg/decorator.py:decorator-gen-63>", line 2, in schedule_conv2d_NCHWc
File "/root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/target.py", line 372, in dispatch_func
return dispatch_dict[k](*args, **kwargs)
File "</usr/local/lib/python3.5/dist-packages/decorator-4.4.0-py3.5.egg/decorator.py:decorator-gen-108>", line 2, in config_dispatcher
File "/root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/autotvm/task/dispatcher.py", line 220, in dispatch_func
return dispatch_dict[cfg.template_key](cfg, *args, **kwargs)
File "/root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/autotvm/task/topi_integration.py", line 469, in template_call
return f(cfg, outs, *args, **kwargs)
File "/root/.local/lib/python3.5/site-packages/topi-0.6.dev0-py3.5.egg/topi/x86/conv2d.py", line 683, in _schedule_conv2d_NCHWc
traverse(outs[0].op)
File "/root/.local/lib/python3.5/site-packages/topi-0.6.dev0-py3.5.egg/topi/x86/conv2d.py", line 652, in traverse
traverse(outs[0].op)
File "/root/.local/lib/python3.5/site-packages/topi-0.6.dev0-py3.5.egg/topi/x86/conv2d.py", line 652, in traverse
traverse(tensor.op)
File "/root/.local/lib/python3.5/site-packages/topi-0.6.dev0-py3.5.egg/topi/x86/conv2d.py", line 652, in traverse
traverse(tensor.op)
File "/root/.local/lib/python3.5/site-packages/topi-0.6.dev0-py3.5.egg/topi/x86/conv2d.py", line 677, in traverse
conv2d_avx_1x1._schedule_conv_NCHWc(*args)
File "/root/.local/lib/python3.5/site-packages/topi-0.6.dev0-py3.5.egg/topi/x86/conv2d_avx_1x1.py", line 129, in _schedule_conv_NCHWc
oh_factor, ow_factor = cfg["tile_oh"].val, cfg["tile_ow"].size[-1]
File "/root/.local/lib/python3.5/site-packages/tvm-0.6.dev0-py3.5-linux-x86_64.egg/tvm/autotvm/task/space.py", line 773, in __getitem__
return self._entity_map[name]
KeyError: 'tile_oh'

The resulting log file looks like this, in case it helps. None of the entries include a value for tile_oh; they only record the tile_ic, tile_oc, tile_ow, and unroll_kw knobs, even though the failing line in conv2d_avx_1x1.py looks up tile_oh (the snippet after the log shows a quick way to check this):

{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 227, "e": [["tile_ic", "sp", [64, 4]], ["tile_oc", "sp", [128, 8]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.004771567], 0, 0.4046483039855957, 1568272975.4405458], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 5, "e": [["tile_ic", "sp", [8, 32]], ["tile_oc", "sp", [1024, 1]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.031062679], 0, 0.4681541919708252, 1568272975.814941], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 284, "e": [["tile_ic", "sp", [8, 32]], ["tile_oc", "sp", [2, 512]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.00666422], 0, 0.7278034687042236, 1568272976.1375163], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 50, "e": [["tile_ic", "sp", [8, 32]], ["tile_oc", "sp", [32, 32]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.002382864], 0, 0.47310304641723633, 1568272977.3384428], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 72, "e": [["tile_ic", "sp", [256, 1]], ["tile_oc", "sp", [4, 256]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.008011008], 0, 0.39221906661987305, 1568272977.660917], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 211, "e": [["tile_ic", "sp", [16, 16]], ["tile_oc", "sp", [512, 2]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.015606046], 0, 0.4685971736907959, 1568272977.9968486], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 95, "e": [["tile_ic", "sp", [8, 32]], ["tile_oc", "sp", [1, 1024]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.00862297], 0, 0.8996334075927734, 1568272978.2833753], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 305, "e": [["tile_ic", "sp", [1, 256]], ["tile_oc", "sp", [1024, 1]], ["tile_ow", "sp", [1, 7]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.006390476], 0, 0.4513721466064453, 1568272978.5855134], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 228, "e": [["tile_ic", "sp", [32, 8]], ["tile_oc", "sp", [128, 8]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.004838229], 0, 0.4091486930847168, 1568272978.900541], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 288, "e": [["tile_ic", "sp", [256, 1]], ["tile_oc", "sp", [1, 1024]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.006035349], 0, 0.6254701614379883, 1568272979.2173824], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 380, "e": [["tile_ic", "sp", [64, 4]], ["tile_oc", "sp", [2, 512]], ["tile_ow", "sp", [1, 7]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.002877239], 0, 0.9664630889892578, 1568272979.5351274], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 4, "e": [["tile_ic", "sp", [16, 16]], ["tile_oc", "sp", [1024, 1]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.0309987], 0, 0.45800018310546875, 1568272980.3875842], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 14, "e": [["tile_ic", "sp", [8, 32]], ["tile_oc", "sp", [512, 2]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.015629871], 0, 0.39853954315185547, 1568272980.7101276], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 213, "e": [["tile_ic", "sp", [4, 64]], ["tile_oc", "sp", [512, 2]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", false]], "t": "direct", "c": null}], "r": [[0.015604099], 0, 0.47773122787475586, 1568272981.0501344], "v": 0.1}
{"i": ["llvm -mcpu=core-avx2", "topi_x86_conv2d_NCHWc", [["TENSOR", [1, 256, 7, 7], "float32"], ["TENSOR", [1024, 256, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 7, 7, "float32"], [1024, 256, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "NCHW", "float32"], {"i": 82, "e": [["tile_ic", "sp", [128, 2]], ["tile_oc", "sp", [2, 512]], ["tile_ow", "sp", [7, 1]], ["unroll_kw", "ot", true]], "t": "direct", "c": null}], "r": [[0.006380492], 0, 0.5086348056793213, 1568272981.3720565], "v": 0.1}
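
To check which knobs the log actually records, here is a minimal sketch (it assumes the log was written to resnet50.log; adjust the file name):

import json

# List the knob names stored in each autotvm log record. For the records
# above this prints ['tile_ic', 'tile_oc', 'tile_ow', 'unroll_kw'];
# no conv2d_NCHWc entry carries a "tile_oh" knob.
with open("resnet50.log") as f:
    for line in f:
        record = json.loads(line)
        task_name = record["i"][1]                         # e.g. "topi_x86_conv2d_NCHWc"
        knobs = [knob[0] for knob in record["i"][5]["e"]]  # knob names in the config
        print(task_name, knobs)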

Does anyone have an idea how to solve this error?


I'm facing the same issue with ResNet-50 in AutoTVM when I load a .pb file in NHWC format using from_tensorflow(…, layout='NCHW'). Even the random tuner gives the same error.
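
For reference, the conversion is roughly this (a minimal sketch using the TF 1.x API; the file name, input name, and shape are placeholders for the actual model):

import tensorflow as tf
from tvm import relay

# Load the frozen NHWC graph ("model.pb" is a placeholder file name)
with tf.gfile.GFile("model.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Ask the TF frontend to convert the graph to NCHW internally
shape_dict = {"input": (1, 224, 224, 3)}  # placeholder NHWC input shape
mod, params = relay.frontend.from_tensorflow(graph_def, layout="NCHW",
                                             shape=shape_dict)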

I'm facing the same issue with the EAST model (based on VGG and upsampling).