[AUTOTVM] Error building SSD

root@ubuntu:/tvm/tutorials/autotvm# python tune_nnvm_arm.py
Extract tasks...
Traceback (most recent call last):
  File "tune_nnvm_arm.py", line 347, in <module>
    tune_and_evaluate()
  File "tune_nnvm_arm.py", line 299, in tune_and_evaluate
    symbols=(nnvm.sym.conv2d,))
  File "/tvm/python/tvm/autotvm/task/nnvm_integration.py", line 180, in extract_from_graph
    nnvm.compiler.build(graph, target=dummy_target, shape=shape, dtype=dtype)
  File "/tvm/nnvm/python/nnvm/compiler/build_module.py", line 304, in build
    graph = graph.apply("GraphCompile")
  File "/tvm/nnvm/python/nnvm/graph.py", line 234, in apply
    check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
  File "/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: TVMCall CFunc Error:
Traceback (most recent call last):
  File "/tvm/python/tvm/_ffi/_ctypes/function.py", line 54, in cfun
    rv = local_pyfunc(*pyargs)
  File "/tvm/nnvm/python/nnvm/compiler/build_module.py", line 115, in _lower
    raise RuntimeError(msg)
RuntimeError: Traceback (most recent call last):
  File "/tvm/nnvm/python/nnvm/compiler/build_module.py", line 107, in _lower
    f = tvm.lower(sch, inputs, name=func_name)
  File "/tvm/python/tvm/build_module.py", line 340, in lower
    bounds = schedule.InferBound(sch)
  File "/tvm/python/tvm/_ffi/function.py", line 280, in my_api_func
    return flocal(*args)
  File "/tvm/python/tvm/_ffi/_ctypes/function.py", line 184, in __call__
    ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
  File "/tvm/python/tvm/_ffi/base.py", line 66, in check_call
    raise TVMError(py_str(_LIB.TVMGetLastError()))
TVMError: [20:45:13] /tvm/src/schedule/message_passing.cc:36: Check failed: match iter_var(threadIdx.x, Range(min=0, extent=5), threadIdx.x) domain already inferred, cannot prove their extents are the same 4 vs 5

Stack trace returned 10 entries:
[bt] (0) /tvm/build/libtvm.so(dmlc::StackTrace[abi:cxx11]()+0x5b) [0x7f499c71f82b]
[bt] (1) /tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x28) [0x7f499c720078]
[bt] (2) /tvm/build/libtvm.so(tvm::schedule::Update(std::unordered_map<tvm::IterVar, tvm::Range, std::hash<tvm::IterVar>, std::equal_to<tvm::IterVar>, std::allocator<std::pair<tvm::IterVar const, tvm::Range> > >*, tvm::IterVar const&, tvm::Range)+0x330) [0x7f499c941e50]
[bt] (3) /tvm/build/libtvm.so(tvm::schedule::PassDownDomain(tvm::Stage const&, std::unordered_map<tvm::IterVar, tvm::Range,std::hash<tvm::IterVar>, std::equal_to<tvm::IterVar>, std::allocator<std::pair<tvm::IterVar const, tvm::Range> > >*, bool)+0x47a) [0x7f499c9423ba]
[bt] (4) /tvm/build/libtvm.so(tvm::schedule::InferBound(tvm::Schedule const&)+0xed8) [0x7f499c965a18]
[bt] (5) /tvm/build/libtvm.so(+0x221748) [0x7f499c74c748]
[bt] (6) /tvm/build/libtvm.so(TVMFuncCall+0x5e) [0x7f499cb32ede]
[bt] (7) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call_unix64+0x4c) [0x7f4a226dbe40]
[bt] (8) /usr/lib/x86_64-linux-gnu/libffi.so.6(ffi_call+0x2eb) [0x7f4a226db8ab]
[bt] (9) /usr/lib/python2.7/lib-dynload/_ctypes.x86_64-linux-gnu.so(_ctypes_callproc+0x48f) [0x7f4a228eb3df]


Error during compile graph
--------------------------
Graph(%input0,
      %input1,
      %input2,
      %input3,
      %input4) {
  %input0, shape=[1,128,20,20]
  %input1, shape=[128,128,1,1]
  %input2, shape=[128]
  %input3, shape=[128]
  %input4, shape=[128]
  %2 = conv2d(%input0, %input1, kernel_size='[1L, 1L]', use_bias='False', dilation='(1, 1)', channels='128', kernel_layout='OIHW', groups='1', padding='(0, 0)', layout='NCHW', strides='(1, 1)'), shape=[1,128,20,20]
  %4 = expand_dims(%input2, num_newaxis='2', axis='1'), shape=[128,1,1]
  %5 = broadcast_mul(%2, %4), shape=[1,128,20,20]
  %7 = negative(%input3), shape=[128]
  %8 = elemwise_mul(%7, %input2), shape=[128]
  %10 = elemwise_add(%8, %input4), shape=[128]
  %11 = expand_dims(%10, num_newaxis='2', axis='1'), shape=[128,1,1]
  %12 = broadcast_add(%5, %11), shape=[1,128,20,20]
  %13 = relu(%12), shape=[1,128,20,20]
  ret %13
}
graph_attr_keys = [shape, shape_num_unknown_nodes, dtype, dtype_num_unknown_nodes]

It seems you used a custom graph. Can you share the script?

The errors above appear when I run the tutorial script.

It seems your model is incompatible with the NNVM compiler.
I don’t think you can compile this model to any other backend either.

If you can share the model, maybe we can take a deeper look.

Thanks! It is also the first error that appears. @merrymercy @Laurawly

This is a bug in the OpenCL backend. @Laurawly

AutoTVM extracts tasks by compiling the model with the OpenCL backend and tracing the calls to TOPI, so the task-extraction step triggers this error.
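The tracing idea can be illustrated with a small self-contained sketch (the names `trace_topi_call`, `collected_tasks`, and the stand-in `conv2d` are hypothetical, not TVM's actual API): each wrapped operator records its name and arguments into a task list as a side effect of the fake compile, roughly what `TaskExtractEnv` does around the `_target.create(...)` call.

```python
# Hypothetical sketch (not TVM's real API): how AutoTVM-style task
# extraction can trace operator calls during a fake compile.
import functools

# List of (op_name, args) pairs, analogous to TaskExtractEnv.task_collection.
collected_tasks = []

def trace_topi_call(func):
    """Wrap an operator so every call is recorded before running it."""
    @functools.wraps(func)
    def wrapper(*args):
        task = (func.__name__, args)
        if task not in collected_tasks:
            collected_tasks.append(task)
        return func(*args)
    return wrapper

@trace_topi_call
def conv2d(data_shape, kernel_shape):
    # Stand-in for a TOPI operator; the fake compile only needs the call trace.
    return ("conv2d-output", data_shape, kernel_shape)

# "Fake compile": walking the graph calls the wrapped operators,
# which populates collected_tasks as a side effect.
conv2d((1, 128, 20, 20), (128, 128, 1, 1))
# collected_tasks is now [("conv2d", ((1, 128, 20, 20), (128, 128, 1, 1)))]
```

Because only the call trace matters, the dummy target in `extract_from_graph` never needs to produce working code, which is why swapping the backend string is a safe workaround.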

You can fix this by replacing all `"opencl"` with `"llvm"` in `tvm/python/tvm/autotvm/task/nnvm_integration.py`:

--- a/python/tvm/autotvm/task/nnvm_integration.py
+++ b/python/tvm/autotvm/task/nnvm_integration.py
@@ -86,7 +86,7 @@ class TaskExtractEnv:
                             not in self.task_collection:
                         self.task_collection.append((self.topi_to_task[local_func],
                                                      serialize_args(args)))
-                    with _target.create("opencl"):
+                    with _target.create("llvm"):
                         return local_func(*args)
 
             _local_scope(func)
@@ -186,7 +186,7 @@ def extract_from_graph(graph, shape, dtype, target, symbols, target_host=None):
     logger.disabled = True
 
     # use a dummy target to do a fake compile for collecting topi calls
-    dummy_target = _target.create("opencl -device=dummy")
+    dummy_target = _target.create("llvm -device=dummy")
     with ApplyHistoryBest([], allow_fallback=True):
         nnvm.compiler.build(graph, target=dummy_target, shape=shape, dtype=dtype)

@merrymercy if we use AutoTVM to tune for x86, `llvm -device=dummy` will be invalid. I think it would be better to add a target named `dummy` in the topi folder, used only for AutoTVM task extraction.

Yes, I will send the patch!

This is fixed by https://github.com/dmlc/tvm/pull/1615

Thanks! It runs well now.