Tutorial SSD model inference error when setting target to Mali



I followed the instructions in "Deploy Single Shot Multibox Detector (SSD) model" on an RK3399 (I built the full stack, including the NNVM compiler and TVM, on the RK3399 and run the code on it).

I get a successful result when I set compiler.build( … target=tvm.target.arm_cpu() … ), but when I change the target to tvm.target.mali(), it raises the following error message:

Traceback (most recent call last):
  File "deploy_ssd.py", line 118, in <module>
    graph, lib, params = compiler.build(net, tvm.target.mali(), {"data": dshape}, params=params, target_host=target)
  File "/home/tvm/nnvm/python/nnvm/compiler/build_module.py", line 305, in build
    graph = graph.apply("GraphCompile")
  File "/home/tvm/nnvm/python/nnvm/graph.py", line 234, in apply
    check_call(_LIB.NNGraphApplyPasses(self.handle, npass, cpass, ctypes.byref(ghandle)))
  File "/home/tvm/nnvm/python/nnvm/_base.py", line 75, in check_call
    raise NNVMError(py_str(_LIB.NNGetLastError()))
nnvm._base.NNVMError: TVMCall CFunc Error:
Traceback (most recent call last):
  File "/home/tvm/python/tvm/_ffi/_ctypes/function.py", line 55, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/tvm/nnvm/python/nnvm/top/vision.py", line 77, in compute_multibox_transform_loc
    clip, threshold, variance)
  File "", line 2, in multibox_transform_loc
  File "/home/tvm/python/tvm/target.py", line 356, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/tvm/topi/python/topi/cuda/ssd/multibox.py", line 394, in multibox_transform_loc_gpu
  File "/home/tvm/python/tvm/api.py", line 467, in extern
    body = fcompute(input_placeholders, output_placeholders)
  File "/home/tvm/topi/python/topi/cuda/ssd/multibox.py", line 391, in <lambda>
    variances, batch_size, num_classes, num_anchors),
  File "/home/tvm/topi/python/topi/cuda/ssd/multibox.py", line 316, in transform_loc_ir
    variances[1], variances[2], variances[3])
  File "/home/tvm/topi/python/topi/cuda/ssd/multibox.py", line 281, in transform_loc
    return tvm.select(clip, tvm.make.Max(0, tvm.make.Min(1, ox - ow)), ox - ow),
  File "/home/tvm/python/tvm/_ffi/_ctypes/function.py", line 185, in __call__
    ctypes.byref(ret_val), ctypes.byref(ret_tcode)))
  File "/home/tvm/python/tvm/_ffi/base.py", line 68, in check_call
    raise TVMError(py_str(_LIB.TVMGetLastError()))
tvm._ffi.base.TVMError: [07:30:10] /home/tvm/3rdparty/HalideIR/src/ir/./IR.h:111: Check failed: a.type() == b.t
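The failing check comes from HalideIR's binary-operator constructor, which requires both operands of Max/Min to share a dtype. In the quoted transform_loc line, the integer literals 0 and 1 meet the float expression ox - ow, which is exactly the kind of mismatch that would trip this assertion. A minimal, illustrative Python sketch of the check (the real one is C++ in 3rdparty/HalideIR/src/ir/IR.h; the class and function names here are invented for illustration):

```python
# Illustrative sketch of HalideIR's binary-op dtype check; the real
# check is C++ (Check failed: a.type() == b.type() in src/ir/IR.h).
class Expr:
    """Stand-in for a typed IR expression node."""
    def __init__(self, dtype):
        self.dtype = dtype

def make_max(a, b):
    # Mirrors the assertion in the error message: both operands of
    # Max must have identical dtypes, or construction fails.
    assert a.dtype == b.dtype, (
        "Check failed: a.type() == b.type(): %s vs %s" % (a.dtype, b.dtype))
    return Expr(a.dtype)

ox_minus_ow = Expr("float32")   # ox - ow is a float expression
zero_float = Expr("float32")    # a float literal 0.0 matches it
zero_int = Expr("int32")        # the bare literal 0 defaults to int32

make_max(zero_float, ox_minus_ow)      # OK: dtypes agree
try:
    make_max(zero_int, ox_minus_ow)    # raises: int32 vs float32
except AssertionError as e:
    print(e)
```

Under this reading, casting the literals to the float dtype of ox - ow (e.g. 0.0 and 1.0) would satisfy the check, though the underlying issue here is that the Mali target dispatches to the CUDA multibox schedule at all.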


I am hitting the same problem. Waiting for a fix…


Currently the multibox operators only support CPU. One solution for GPU targets is to fall back the multibox operators to the CPU. This requires graph annotation, and the community is working on it.