My code is as follows:
import tvm
from tvm import autotvm

# batchsize, N, L, M and dtype are defined earlier in my script
A = tvm.placeholder((batchsize, N, L), name='A', dtype=dtype)
B = tvm.placeholder((L, M), name='B', dtype=dtype)
k = tvm.reduce_axis((0, L), name='k')
C = tvm.compute((batchsize, N, M), lambda b, i, j: tvm.sum(A[b, i, k] * B[k, j], axis=k), name='C')
s = tvm.create_schedule(C.op)
# schedule
bb, yy, x = s[C].op.axis
y = s[C].fuse(bb, yy)
k = s[C].op.reduce_axis[0]
##### define space begin #####
cfg = autotvm.get_config()
cfg.define_split("tile_y", y, num_outputs=2)
cfg.define_split("tile_x", x, num_outputs=2)
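For reference, the compute above is a batched matmul in which every batch shares the same B. A NumPy equivalent of what C should contain (the concrete shapes below are made up for illustration, since the post does not fix numeric values for batchsize, N, L, M):

```python
import numpy as np

batchsize, N, L, M = 2, 4, 8, 16  # example shapes only
A = np.random.rand(batchsize, N, L).astype("float32")
B = np.random.rand(L, M).astype("float32")

# C[b, i, j] = sum_k A[b, i, k] * B[k, j], matching the tvm.compute above
C = np.einsum("bik,kj->bij", A, B)
assert C.shape == (batchsize, N, M)
```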
Here y is a fused axis, and the following error occurs:
    return self._add_new_transform(SplitSpace, name, axes, policy, **kwargs)
  File "/home/work/venv/tf1.6_newop/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/autotvm/task/space.py", line 721, in _add_new_transform
    space = space_class(axes, policy, **kwargs)
  File "/home/work/venv/tf1.6_newop/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/autotvm/task/space.py", line 170, in __init__
    factors = get_factors(length)
  File "/home/work/venv/tf1.6_newop/lib/python2.7/site-packages/tvm-0.5.dev0-py2.7-linux-x86_64.egg/tvm/autotvm/task/space.py", line 148, in get_factors
    list.__add__, ([i, n//i] for i in range(1, int(math.sqrt(n)) + 1, step)
ValueError: math domain error
y is of type iter_var(b.i.fused, ), not iter_var(j, Range(min=0, extent=4096)): the fused IterVar carries no Range, so its extent is unknown when the split space is built. It seems that a fused axis can't be auto-tuned directly.
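A minimal sketch of why the traceback ends in "math domain error": get_factors enumerates factor pairs up to math.sqrt(length), and math.sqrt raises exactly this error for a negative argument. Since the fused IterVar has no Range, I assume the length autotvm derives for it is not a valid positive integer (the -1 below is only an illustration of that assumption, not the actual value TVM computes):

```python
import math

def get_factors(n, step=1):
    # enumerate factor pairs (i, n//i) up to sqrt(n), in the spirit of
    # tvm/autotvm/task/space.py; not the exact upstream implementation
    return sorted(set(
        f
        for i in range(1, int(math.sqrt(n)) + 1, step)
        if n % i == 0
        for f in (i, n // i)))

print(get_factors(4096))   # a concrete axis extent works fine
try:
    get_factors(-1)        # an invalid/unknown extent does not
except ValueError as e:
    print(e)               # "math domain error" from math.sqrt
```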