[SOLVED] KeyError: 'tile_ic' when tuning MobileNet

Hello,

I’m trying to auto-tune MobileNet with the following program, which is mostly copied from this tutorial.

import os

import numpy as np

import tvm
import tvm.relay.testing
from tvm import autotvm
from tvm import relay
from tvm.autotvm.tuner import XGBTuner

target = 'llvm'
ctx = tvm.context(str(target), 0)

# Tuning parameters
# n_trial = 2000
n_trial = 1
early_stopping = 600
logfile = "mobilenet.log"
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(timeout=10),
    runner=autotvm.LocalRunner(number=20, repeat=3, timeout=4, min_repeat_ms=150))

# Model parameters
dtype = 'float32'
batch_size = 1
input_shape = (batch_size, 3, 224, 224)
output_shape = (batch_size, 1000)

mod, params = relay.testing.mobilenet.get_workload(batch_size=batch_size, dtype=dtype)
tasks = autotvm.task.extract_from_program(mod["main"], target=target, params=params,
                                          ops=(relay.op.nn.conv2d, relay.op.nn.dense))

if os.path.exists(logfile):
    os.remove(logfile)

for i, tsk in enumerate(reversed(tasks)):
    prefix = "[Task %2d/%2d] " % (i + 1, len(tasks))
    tuner = XGBTuner(tsk, loss_type='rank')

    tuner.tune(n_trial=min(n_trial, len(tsk.config_space)),
               early_stopping=early_stopping,
               measure_option=measure_option,
               callbacks=[
                   autotvm.callback.progress_bar(n_trial, prefix=prefix),
                   autotvm.callback.log_to_file(logfile)
               ])

with autotvm.apply_history_best(logfile):
    with relay.build_config(opt_level=3):
        executor = relay.build_module.create_executor('graph', mod, ctx, target)
        evaluate = executor.evaluate()
        image = np.ones(input_shape, np.float32)
        image = tvm.nd.array(image.astype(dtype))
        evaluate(image, **params)

The tuning itself seems to work, but when it’s finished and I call executor.evaluate() I get the following error:

[Task  1/20]  Current/Best:   11.37/  11.37 GFLOPS | Progress: (1/1) | 2.16 s Done.
[Task  2/20]  Current/Best:   17.00/  17.00 GFLOPS | Progress: (1/1) | 1.86 s Done.
[Task  3/20]  Current/Best:   31.33/  31.33 GFLOPS | Progress: (1/1) | 1.91 s Done.
[Task  4/20]  Current/Best:   11.17/  11.17 GFLOPS | Progress: (1/1) | 1.82 s Done.
[Task  5/20]  Current/Best:   13.52/  13.52 GFLOPS | Progress: (1/1) | 1.89 s Done.
[Task  6/20]  Current/Best:   17.69/  17.69 GFLOPS | Progress: (1/1) | 1.74 s Done.
[Task  7/20]  Current/Best:   29.99/  29.99 GFLOPS | Progress: (1/1) | 2.14 s Done.
[Task  8/20]  Current/Best:   10.63/  10.63 GFLOPS | Progress: (1/1) | 1.83 s Done.
[Task  9/20]  Current/Best:   29.27/  29.27 GFLOPS | Progress: (1/1) | 2.81 s Done.
[Task 10/20]  Current/Best:   26.26/  26.26 GFLOPS | Progress: (1/1) | 1.64 s Done.
[Task 11/20]  Current/Best:   28.11/  28.11 GFLOPS | Progress: (1/1) | 3.41 s Done.
[Task 12/20]  Current/Best:   12.99/  12.99 GFLOPS | Progress: (1/1) | 1.84 s Done.
[Task 13/20]  Current/Best:    0.00/   0.00 GFLOPS | Progress: (1/1) | 10.65 s Done.
[Task 14/20]  Current/Best:    8.03/   8.03 GFLOPS | Progress: (1/1) | 1.82 s Done.
[Task 15/20]  Current/Best:   18.30/  18.30 GFLOPS | Progress: (1/1) | 1.83 s Done.
[Task 16/20]  Current/Best:   13.78/  13.78 GFLOPS | Progress: (1/1) | 1.63 s Done.
[Task 17/20]  Current/Best:   25.38/  25.38 GFLOPS | Progress: (1/1) | 1.98 s Done.
[Task 18/20]  Current/Best:    8.63/   8.63 GFLOPS | Progress: (1/1) | 1.68 s Done.
[Task 19/20]  Current/Best:   12.88/  12.88 GFLOPS | Progress: (1/1) | 1.45 s Done.
[Task 20/20]  Current/Best:    9.12/   9.12 GFLOPS | Progress: (1/1) | 1.95 s Done.
Traceback (most recent call last):

  File ".../tvm_mobilenet_tuning.py", line 49, in <module>
    evaluate = executor.evaluate()

  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/backend/interpreter.py", line 240, in evaluate
    return self._make_executor()

  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 239, in _make_executor
    graph_json, mod, params = build(self.mod, target=self.target)

  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 207, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)

  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/build_module.py", line 108, in build
    self._build(func, target, target_host)

  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/_ffi/_ctypes/function.py", line 210, in __call__
    raise get_last_ffi_error()

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ForwardRewriter::GetTempExpr(tvm::relay::Expr const&)+0x14d) [0x7ff2a02f342d]
  [bt] (7) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::relay::Expr const&)+0x9e) [0x7ff2a0129c1e]
  [bt] (6) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>::VisitExpr(tvm::relay::Expr const&)+0xc5) [0x7ff2a0130075]
  [bt] (5) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(std::_Function_handler<tvm::relay::Expr (tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>*), tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>::InitVTable()::{lambda(tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>*)#6}>::_M_invoke(std::_Any_data const&, tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::relay::Expr (tvm::relay::Expr const&)>*&&)+0x2f) [0x7ff2a012baff]
  [bt] (4) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::ForwardRewriter::VisitExpr_(tvm::relay::CallNode const*)+0x5d8) [0x7ff2a02f4468]
  [bt] (3) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::relay::Expr (tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&)>::AssignTypedLambda<tvm::relay::Expr (*)(tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&)>(tvm::relay::Expr (*)(tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xb0) [0x7ff2a02be980]
  [bt] (2) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::alter_op_layout::AlterOpLayoutRewrite(tvm::relay::Call const&, tvm::Array<tvm::relay::Expr, void> const&, tvm::NodeRef const&)+0x1000) [0x7ff2a02b9a50]
  [bt] (1) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(tvm::relay::alter_op_layout::CallAlter(tvm::relay::Call const&, std::vector<tvm::relay::Expr, std::allocator<tvm::relay::Expr> > const&)+0x864) [0x7ff2a02b8874]
  [bt] (0) /home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/libtvm.so(+0xb44e6b) [0x7ff2a040de6b]
  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/_ffi/_ctypes/function.py", line 72, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/relay/op/nn/_nn.py", line 205, in alter_op_layout_conv2d
    return topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, op)
  File "</home/arne/.local/lib/python3.6/site-packages/decorator-4.4.0-py3.6.egg/decorator.py:decorator-gen-24>", line 2, in conv2d_alter_layout
  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/target.py", line 372, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/arne/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/x86/conv2d.py", line 464, in _alter_conv2d_layout
    ic_bn, oc_bn = cfg["tile_ic"].size[-1], cfg["tile_oc"].size[-1]
  File "/home/arne/.local/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-linux-x86_64.egg/tvm/autotvm/task/space.py", line 773, in __getitem__
    return self._entity_map[name]
KeyError: 'tile_ic'

I set n_trial=1 to reproduce the error quickly, but it happens with n_trial=2000 as well.

The resulting logfile looks like this, in case that’s of any help. It does include values for tile_ic.

{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 3, 224, 224], "float32"], ["TENSOR", [32, 3, 3, 3], "float32"], [2, 2], [1, 1], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 3, 224, 224, "float32"], [32, 3, 3, 3, "float32"], [2, 2], [1, 1], [1, 1], "NCHW", "float32"], {"i": 204, "t": "direct", "c": null, "e": [["tile_ic", "sp", [3, 1]], ["tile_oc", "sp", [32, 1]], ["tile_ow", "sp", [2, 56]], ["unroll_kw", "ot", false]]}], "r": [[0.0020006725163934425], 0, 1.3565866947174072, 1566210100.143439], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 32, 112, 112], "float32"], ["TENSOR", [32, 1, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 32, 112, 112, "float32"], [32, 1, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.0005962612872340425], 0, 1.0861918926239014, 1566210102.0591521], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 32, 112, 112], "float32"], ["TENSOR", [64, 32, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 32, 112, 112, "float32"], [64, 32, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 138, "t": "direct", "c": null, "e": [["tile_ic", "sp", [32, 1]], ["tile_oc", "sp", [16, 4]], ["tile_ow", "sp", [16, 7]], ["tile_oh", "ot", 1]]}], "r": [[0.0016398973741496598], 0, 1.1583552360534668, 1566210104.131863], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 64, 112, 112], "float32"], ["TENSOR", [64, 1, 3, 3], "float32"], [2, 2], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 64, 112, 112, "float32"], [64, 1, 3, 3, "float32"], [2, 2], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.0008446549763779528], 0, 1.0396904945373535, 1566210106.075094], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 64, 56, 56], "float32"], ["TENSOR", [128, 64, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 64, 56, 56, "float32"], [128, 64, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 457, "t": "direct", "c": null, "e": [["tile_ic", "sp", [16, 4]], ["tile_oc", "sp", [64, 2]], ["tile_ow", "sp", [56, 1]], ["tile_oh", "ot", 2]]}], "r": [[0.0038014127936507936], 0, 1.1446666717529297, 1566210108.0896251], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 128, 56, 56], "float32"], ["TENSOR", [128, 1, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 128, 56, 56, "float32"], [128, 1, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.0005787340282485875], 0, 0.9943633079528809, 1566210109.9836464], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 128, 56, 56], "float32"], ["TENSOR", [128, 128, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 128, 56, 56, "float32"], [128, 128, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 921, "t": "direct", "c": null, "e": [["tile_ic", "sp", [64, 2]], ["tile_oc", "sp", [16, 8]], ["tile_ow", "sp", [2, 28]], ["tile_oh", "ot", 2]]}], "r": [[0.003426708086956522], 0, 1.3976550102233887, 1566210112.203216], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 128, 56, 56], "float32"], ["TENSOR", [128, 1, 3, 3], "float32"], [2, 2], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 128, 56, 56, "float32"], [128, 1, 3, 3, "float32"], [2, 2], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.00045367376822429904], 0, 1.0605945587158203, 1566210114.1285105], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 128, 28, 28], "float32"], ["TENSOR", [256, 128, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 128, 28, 28, "float32"], [256, 128, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 627, "t": "direct", "c": null, "e": [["tile_ic", "sp", [16, 8]], ["tile_oc", "sp", [4, 64]], ["tile_ow", "sp", [7, 4]], ["tile_oh", "ot", 2]]}], "r": [[0.001755604953488372], 0, 2.067134141921997, 1566210117.046407], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 256, 28, 28], "float32"], ["TENSOR", [256, 1, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 256, 28, 28, "float32"], [256, 1, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.00019901406954689148], 0, 0.8972055912017822, 1566210118.7935557], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 256, 28, 28], "float32"], ["TENSOR", [256, 256, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 28, 28, "float32"], [256, 256, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 946, "t": "direct", "c": null, "e": [["tile_ic", "sp", [128, 2]], ["tile_oc", "sp", [4, 64]], ["tile_ow", "sp", [1, 28]], ["tile_oh", "ot", 2]]}], "r": [[0.0036556534696969697], 0, 2.6669557094573975, 1566210122.2847552], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 256, 28, 28], "float32"], ["TENSOR", [256, 1, 3, 3], "float32"], [2, 2], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 256, 28, 28, "float32"], [256, 1, 3, 3, "float32"], [2, 2], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.00019364018270944742], 0, 1.0722415447235107, 1566210124.2563193], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 256, 14, 14], "float32"], ["TENSOR", [512, 256, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 256, 14, 14, "float32"], [512, 256, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 714, "t": "direct", "c": null, "e": [["tile_ic", "sp", [32, 8]], ["tile_oc", "sp", [1, 512]], ["tile_ow", "sp", [1, 14]], ["tile_oh", "ot", 2]]}], "r": [[1000000000.0], 6, 10, 1566210135.108564], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 512, 14, 14], "float32"], ["TENSOR", [512, 1, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 512, 14, 14, "float32"], [512, 1, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.00033902526347305394], 0, 1.0559625625610352, 1566210136.8925824], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 512, 14, 14], "float32"], ["TENSOR", [512, 512, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 512, 14, 14, "float32"], [512, 512, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 603, "t": "direct", "c": null, "e": [["tile_ic", "sp", [64, 8]], ["tile_oc", "sp", [512, 1]], ["tile_ow", "sp", [2, 7]], ["tile_oh", "ot", 2]]}], "r": [[0.0056157913125], 0, 1.065453052520752, 1566210138.8708098], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 512, 14, 14], "float32"], ["TENSOR", [512, 1, 3, 3], "float32"], [2, 2], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 512, 14, 14, "float32"], [512, 1, 3, 3, "float32"], [2, 2], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[9.936914162162162e-05], 0, 0.8887343406677246, 1566210140.5857086], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 512, 7, 7], "float32"], ["TENSOR", [1024, 512, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 512, 7, 7, "float32"], [1024, 512, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 32, "t": "direct", "c": null, "e": [["tile_ic", "sp", [128, 4]], ["tile_oc", "sp", [128, 8]], ["tile_ow", "sp", [7, 1]], ["tile_oh", "ot", 1]]}], "r": [[0.0020242687844827588], 0, 1.2413175106048584, 1566210142.6582541], "v": 0.1}
{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [["TENSOR", [1, 1024, 7, 7], "float32"], ["TENSOR", [1024, 1, 3, 3], "float32"], [1, 1], [1, 1], [1, 1], "float32"], {}, ["depthwise_conv2d_nchw", [1, 1024, 7, 7, "float32"], [1024, 1, 3, 3, "float32"], [1, 1], [1, 1], [1, 1], "float32"], {"i": 0, "t": "direct", "c": null, "e": []}], "r": [[0.0001718663723887375], 0, 0.9298725128173828, 1566210144.474663], "v": 0.1}
{"i": ["llvm", "topi_nn_conv2d", [["TENSOR", [1, 1024, 7, 7], "float32"], ["TENSOR", [1024, 1024, 1, 1], "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {}, ["conv2d", [1, 1024, 7, 7, "float32"], [1024, 1024, 1, 1, "float32"], [1, 1], [0, 0], [1, 1], "NCHW", "float32"], {"i": 32, "t": "direct", "c": null, "e": [["tile_ic", "sp", [1, 1024]], ["tile_oc", "sp", [256, 4]], ["tile_ow", "sp", [7, 1]], ["tile_oh", "ot", 1]]}], "r": [[0.0079785429], 0, 0.7092256546020508, 1566210146.0702164], "v": 0.1}
{"i": ["llvm", "topi_nn_dense", [["TENSOR", [1, 1024], "float32"], ["TENSOR", [1000, 1024], "float32"], null, "float32"], {}, ["dense", [1, 1024, "float32"], [1000, 1024, "float32"], 0, "float32"], {"i": 43, "t": "direct", "c": null, "e": [["tile_x", "sp", [8, 125]], ["tile_y", "sp", [1, 1]], ["tile_k", "sp", [256, 4]]]}], "r": [[0.00022471214302325582], 0, 1.2047827243804932, 1566210148.0866144], "v": 0.1}

I ran the exact same code with ResNet-50 and VGG-16 without any errors, so this seems to be specific to MobileNet.

Does anyone have an idea how to fix this?

+1

I have the same problem.

It seems depthwise_conv2d doesn’t have any tuning config; I think someone has broken the depthwise convolution schedule on x86.
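That matches the log above: every depthwise record ends with an empty entity list ("e": []), while the regular conv2d records carry tile_ic/tile_oc splits. A quick sanity check (a single depthwise record from the log, with the tensor arguments and timing fields elided here for brevity; in each record, index 1 of "i" is the task name and index 5 is the config the tuner logged):

```python
import json

# One depthwise record from the log above, simplified: args/workload/"r" elided.
record = json.loads(
    '{"i": ["llvm", "topi_nn_depthwise_conv2d_nchw", [], {}, [],'
    ' {"i": 0, "t": "direct", "c": null, "e": []}]}'
)

task_name = record["i"][1]  # the tuning task name
config = record["i"][5]     # the config entry logged for this task
print(task_name, config["e"])  # -> topi_nn_depthwise_conv2d_nchw []
```

Scanning the full logfile the same way shows that only the depthwise_conv2d tasks have no tuning entities, which is why cfg["tile_ic"] later raises a KeyError in _alter_conv2d_layout.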

Maybe @kevinthesun could help look into this issue; the latest change to the x86 depthwise convolution was made by @kevinthesun.

You need to copy this part:

# converting conv2d tasks to conv2d_NCHWc tasks
op_name = tsk.workload[0]
if op_name == 'conv2d':
    func_create = 'topi_x86_conv2d_NCHWc'
elif op_name == 'depthwise_conv2d_nchw':
    func_create = 'topi_x86_depthwise_conv2d_NCHWc_from_nchw'
else:
    raise ValueError("Tuning {} is not supported on x86".format(op_name))
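For reference, here is a sketch of how that conversion slots into the tuning loop from the original program. It is based on the x86 tuning tutorial for the same TVM 0.6 version; the template names and the `autotvm.task.create(..., template_key='direct')` call are taken from that tutorial and may differ in later TVM releases:

```python
def nchwc_template_name(workload):
    """Pick the x86 NCHWc tuning template for an extracted task's workload."""
    op_name = workload[0]
    if op_name == 'conv2d':
        return 'topi_x86_conv2d_NCHWc'
    elif op_name == 'depthwise_conv2d_nchw':
        return 'topi_x86_depthwise_conv2d_NCHWc_from_nchw'
    raise ValueError("Tuning {} is not supported on x86".format(op_name))

# In the tuning loop, create the NCHWc task before constructing the tuner
# (sketch; requires TVM, so shown as comments here):
#
#     func_create = nchwc_template_name(tsk.workload)
#     task = autotvm.task.create(func_create, args=tsk.args,
#                                target=target, template_key='direct')
#     task.workload = tsk.workload
#     tuner = XGBTuner(task, loss_type='rank')
```

This way the depthwise tasks are tuned with the NCHWc template, which does define tile_ic/tile_oc knobs, so the log entries match what _alter_conv2d_layout expects at build time.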

Thanks! It’s working now.