[AutoTVM] Safety of transformations, KeyError: 'tile_x'

Hi,
I’m running AutoTVM on a network following the tune_relay_x86.py tutorial. My target back-end is “llvm -mcpu=skylake-avx512”. I’m getting the following error, which seems to result from an invalid or incorrect tiling transformation. Can anyone please shed some light on a probable cause for this behavior?

Thanks,

Extract tasks...
Tuning...
[Task  1/21]  Current/Best:  982.14/1519.88 GFLOPS | Progress: (224/1792) | 129.88 s Done.
[Task  2/21]  Current/Best: 2184.79/2413.36 GFLOPS | Progress: (336/1568) | 146.16 s Done.
[Task  3/21]  Current/Best:  581.62/1080.43 GFLOPS | Progress: (336/1344) | 186.35 s Done.
[Task  4/21]  Current/Best:  545.57/1938.11 GFLOPS | Progress: (112/1344) | 49.13 s Done.
[Task  5/21]  Current/Best:  833.79/3069.22 GFLOPS | Progress: (112/672) | 119.20 s Done.
[Task  6/21]  Current/Best:  846.99/1612.44 GFLOPS | Progress: (224/864) | 111.47 s Done.
[Task  7/21]  Current/Best:  173.00/2916.56 GFLOPS | Progress: (560/1080) | 531.64 s Done.
[Task  8/21]  Current/Best:  847.68/2119.03 GFLOPS | Progress: (224/540) | 107.89 s Done.
[Task  9/21]  Current/Best: 1194.33/2394.96 GFLOPS | Progress: (112/288) | 81.02 s Done.
[Task 10/21]  Current/Best:  231.97/2179.11 GFLOPS | Progress: (224/384) | 101.28 s Done.
[Task 11/21]  Current/Best:  968.78/2045.20 GFLOPS | Progress: (448/480) | 327.42 s Done.
[Task 12/21]  Current/Best: 1142.98/1670.84 GFLOPS | Progress: (224/400) | 102.94 s Done.
[Task 13/21]  Current/Best:  875.82/1905.13 GFLOPS | Progress: (336/480) | 236.61 s Done.
[Task 14/21]  Current/Best:  257.13/1761.48 GFLOPS | Progress: (336/672) | 149.36 s Done.
[Task 15/21]  Current/Best:  498.38/4570.76 GFLOPS | Progress: (112/896) | 159.49 s Done.
[Task 16/21]  Current/Best: 1015.61/2383.85 GFLOPS | Progress: (224/512) | 113.09 s Done.
[Task 17/21]  Current/Best:  463.29/3492.22 GFLOPS | Progress: (224/768) | 176.59 s Done.
[Task 18/21]  Current/Best:  146.75/2446.38 GFLOPS | Progress: (224/1344) | 129.53 s Done.
[Task 19/21]  Current/Best: 2379.27/5144.13 GFLOPS | Progress: (224/784) | 564.19 s Done.
[Task 20/21]  Current/Best:  964.36/1240.02 GFLOPS | Progress: (336/392) | 187.03 s Done.
[Task 21/21]  Current/Best:  841.42/2109.51 GFLOPS | Progress: (252/252) | 276.23 s Done.
Compile...
Traceback (most recent call last):

  File "tune_relay_x86.py", line 235, in <module>
    tune_and_evaluate(tuning_option)

  File "tune_relay_x86.py", line 213, in tune_and_evaluate
    mod, target=target, params=params)

  File "/homes/tharindu/tvm-master/python/tvm/relay/build_module.py", line 207, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)

  File "/homes/tharindu/tvm-master/python/tvm/relay/build_module.py", line 108, in build
    self._build(func, target, target_host)

  File "tvm/_ffi/_cython/./function.pxi", line 310, in core.FunctionBase.__call__

  File "tvm/_ffi/_cython/./function.pxi", line 245, in core.FuncCall

  File "tvm/_ffi/_cython/./function.pxi", line 234, in core.FuncCall3

  File "tvm/_ffi/_cython/./base.pxi", line 171, in core.CALL

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /homes/tharindu/tvm-master/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr(tvm::relay::Expr const&)+0xae) [0x2b1f690d626e]
  [bt] (7) /homes/tharindu/tvm-master/build/libtvm.so(tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>::VisitExpr(tvm::relay::Expr const&)+0xa8) [0x2b1f690d5fd8]
  [bt] (6) /homes/tharindu/tvm-master/build/libtvm.so(std::_Function_handler<tvm::Array<tvm::Tensor, void> (tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>*), tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>::InitVTable()::{lambda(tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>*)#6}>::_M_invoke(std::_Any_data const&, tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>*)+0x1e) [0x2b1f690c9fce]
  [bt] (5) /homes/tharindu/tvm-master/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x161) [0x2b1f690d0f11]
  [bt] (4) /homes/tharindu/tvm-master/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr(tvm::relay::Expr const&)+0xae) [0x2b1f690d626e]
  [bt] (3) /homes/tharindu/tvm-master/build/libtvm.so(tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>::VisitExpr(tvm::relay::Expr const&)+0xa8) [0x2b1f690d5fd8]
  [bt] (2) /homes/tharindu/tvm-master/build/libtvm.so(std::_Function_handler<tvm::Array<tvm::Tensor, void> (tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>*), tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>::InitVTable()::{lambda(tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>*)#6}>::_M_invoke(std::_Any_data const&, tvm::NodeRef const&, tvm::relay::ExprFunctor<tvm::Array<tvm::Tensor, void> (tvm::relay::Expr const&)>*)+0x1e) [0x2b1f690c9fce]
  [bt] (1) /homes/tharindu/tvm-master/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode const*)+0x77c) [0x2b1f690d152c]
  [bt] (0) /homes/tharindu/tvm-master/build/libtvm.so(+0x11620a3) [0x2b1f694470a3]
  File "tvm/_ffi/_cython/./function.pxi", line 56, in core.tvm_callback
  File "/homes/tharindu/tvm-master/python/tvm/relay/op/nn/_nn.py", line 59, in compute_dense
    return [topi.nn.dense(inputs[0], inputs[1], None, out_dtype)]
  File "<decorator-gen-42>", line 2, in dense
  File "/homes/tharindu/tvm-master/python/tvm/target.py", line 289, in dispatch_func
    return generic_func_node(*args)
  File "/homes/tharindu/tvm-master/python/tvm/target.py", line 151, in __call__
    return _api_internal._GenericFuncCallFunc(self, *args)
  File "tvm/_ffi/_cython/./function.pxi", line 310, in core.FunctionBase.__call__
  File "tvm/_ffi/_cython/./function.pxi", line 255, in core.FuncCall
  File "tvm/_ffi/_cython/./base.pxi", line 171, in core.CALL
  [bt] (3) /homes/tharindu/tvm-master/build/libtvm.so(TVMFuncCall+0x46) [0x2b1f6944d0c6]
  [bt] (2) /homes/tharindu/tvm-master/build/libtvm.so(+0xbc48ed) [0x2b1f68ea98ed]
  [bt] (1) /homes/tharindu/tvm-master/build/libtvm.so(tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1a2) [0x2b1f68ea9702]
  [bt] (0) /homes/tharindu/tvm-master/build/libtvm.so(+0x11620a3) [0x2b1f694470a3]
  File "tvm/_ffi/_cython/./function.pxi", line 56, in core.tvm_callback
  File "<decorator-gen-115>", line 2, in config_dispatcher
  File "/homes/tharindu/tvm-master/python/tvm/autotvm/task/dispatcher.py", line 220, in dispatch_func
    return dispatch_dict[cfg.template_key](cfg, *args, **kwargs)
  File "/homes/tharindu/tvm-master/python/tvm/autotvm/task/topi_integration.py", line 367, in template_call
    node = f(cfg, *args, **kwargs)
  File "/homes/tharindu/tvm-master/topi/python/topi/x86/dense.py", line 37, in _declaration_dense
    return _declaration_dense_pack(cfg, data, weight, bias, out_dtype)
  File "/homes/tharindu/tvm-master/topi/python/topi/x86/dense.py", line 54, in _declaration_dense_pack
    packw_bn = cfg["tile_x"].size[-1]
  File "/homes/tharindu/tvm-master/python/tvm/autotvm/task/space.py", line 771, in __getitem__
    return self._entity_map[name]
KeyError: 'tile_x'

Did you use graph tuning? If so, this is usually caused by writing to the same log file across multiple runs. You can remove the original graph_opt_log_file and run graph tuning again.
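For reference, a minimal guard along these lines (the file name is hypothetical; use whatever path your script passes to graph tuning):

```python
import os

def fresh_log(path):
    """Remove a stale AutoTVM log before rerunning graph tuning,
    so records from a previous run aren't mixed with the new ones
    (mixed records can trigger KeyError: 'tile_x' at compile time)."""
    if os.path.exists(path):
        os.remove(path)
    return path

# "graph_opt.log" is a placeholder name for this example
graph_opt_log_file = fresh_log("graph_opt.log")
```

Call this before the tuning run so each graph-tuning pass starts from an empty log instead of appending to old entries.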


That seems to fix the issue.
Thank you!

It does fix the problem.
Cheers!