[Relay] Error when using tvm.compute together with importing relay

This issue comes from this ci failure: http://ci.tvm.ai:8080/job/tvm/job/PR-2184/16/display/redirect

I added two unit tests that import relay under the tvm python unit tests, which caused test_codegen_llvm to fail. Importing relay and calling tvm.compute at the same time produces an error:

```python
import tvm
# Commenting out this line resolves the problem.
from tvm import relay

n = 10
A = tvm.placeholder((n, ), name='A')
scale = tvm.placeholder((), name='scale')  # zero-rank tensor
k = tvm.reduce_axis((0, n), name="k")
C = tvm.compute((), lambda: tvm.sum(A[k] * scale, axis=k), name="C")
```

Error:

```
  File "test_codegen_llvm.py", line 479, in <module>
    test_rank_zero()
  File "test_codegen_llvm.py", line 368, in test_rank_zero
    check_llvm(64)
  File "test_codegen_llvm.py", line 354, in check_llvm
    C = tvm.compute((), lambda : tvm.sum(A[k] * scale, axis=k), name="C")
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-macosx-10.11-x86_64.egg/tvm/api.py", line 309, in compute
    body = fcompute(*[v.var for v in dim_var])
  File "test_codegen_llvm.py", line 354, in <lambda>
    C = tvm.compute((), lambda : tvm.sum(A[k] * scale, axis=k), name="C")
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-macosx-10.11-x86_64.egg/tvm/api.py", line 819, in reducer
    return _make_reduce(expr, axis, where)
  File "/usr/local/Cellar/python3/3.6.1/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/tvm-0.6.dev0-py3.6-macosx-10.11-x86_64.egg/tvm/api.py", line 795, in _make_reduce
    assert isinstance(expr, _expr.Expr)
```

@tqchen @yzhliu

It looks like it fails only when a scalar (zero-rank tensor) is involved.

The root cause is https://github.com/dmlc/tvm/blob/master/topi/python/topi/generic_op_impl.py#L89. Importing relay pulls in topi, whose operator overloading makes `A[k] * scale` dispatch to the topi broadcast op and return a tensor, whereas tvm.compute requires an expr (hence the `assert isinstance(expr, _expr.Expr)` failure above). This is a problem only when a zero-rank tensor is involved in fcompute.
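To make the dispatch issue concrete, here is a minimal, self-contained sketch (not actual TVM code; the class and function names are illustrative stand-ins) of how registering a broadcast `__rmul__` on a tensor class changes the result type of `Expr * Tensor` from "unsupported" to a tensor, which then trips an `isinstance(expr, Expr)` check like the one in `_make_reduce`:

```python
class Expr:
    """Stand-in for tvm.expr.Expr; only knows how to multiply with other Exprs."""
    def __mul__(self, other):
        if isinstance(other, Expr):
            return Expr()
        return NotImplemented  # let Python try other.__rmul__


class Tensor:
    """Stand-in for a zero-rank tvm.tensor.Tensor."""
    pass


def register_broadcast_overload():
    # Mimics what the topi import does in generic_op_impl.py: it patches
    # arithmetic overloads so Expr * Tensor goes through a broadcast op
    # that returns a Tensor instead of an Expr.
    def broadcast_rmul(self, other):
        return Tensor()  # broadcast result is a Tensor
    Tensor.__rmul__ = broadcast_rmul


# Before the overload is registered, Expr * Tensor is simply a TypeError.
try:
    Expr() * Tensor()
    before = "ok"
except TypeError:
    before = "TypeError"

register_broadcast_overload()
result = Expr() * Tensor()
after = type(result).__name__

print(before)  # TypeError
print(after)   # Tensor

# This is the shape of the failing assertion in tvm/api.py _make_reduce:
# the reducer receives a Tensor where it asserts an Expr.
assert not isinstance(result, Expr)
```

In the real library the patching happens as a side effect of `from tvm import relay` (which imports topi), which is why merely adding the import changes the behavior of `A[k] * scale`.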
