Thank you for the response @srkreddy1238.
I’ll try that and let you know.
In the meanwhile, does `opt_level` in Relay have any significance similar to NNVM? Even after passing `opt_level=10`, the code compiles fine and generates the library.
Code below:

```python
with relay.build_config(opt_level=10):
    graph, lib, params = relay.build(sym, target=target, target_host=target_host, params=params)
```
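As a rough illustration of why a very high `opt_level` can be accepted silently: assuming Relay gates each optimization pass behind a registered minimum level (as NNVM did), any level above the highest registered pass simply enables nothing extra rather than raising an error. The pass names and levels below are hypothetical, not Relay's actual table:

```python
# Toy sketch of level-gated optimization passes. A pass runs only when
# the configured opt_level is >= its registered minimum level, so any
# level above the highest registered one (here 3) changes nothing.
# Pass names/levels are illustrative, not TVM's real registration.
PASS_LEVELS = {
    "SimplifyInference": 0,
    "OpFusion": 1,
    "FoldConstant": 2,
    "FoldScaleAxis": 3,
    "AlterOpLayout": 3,
}

def enabled_passes(opt_level):
    """Return the passes that would run at a given opt_level."""
    return sorted(name for name, lvl in PASS_LEVELS.items() if lvl <= opt_level)

print(enabled_passes(1))                          # level-0 and level-1 passes only
print(enabled_passes(10) == enabled_passes(3))    # True: nothing new beyond level 3
```

Under this model, `opt_level=10` is effectively the same as the maximum registered level, which would explain why compilation succeeds without complaint.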
Issue 2:
It also fails at `opt_level=1` for the CUDA target. Below is the error:
```
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 132, in schedule_reduce
    traverse_after_reduce(outs[0].op)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 115, in traverse_after_reduce
    traverse_after_reduce(tensor.op)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 120, in traverse_after_reduce
    traverse_before_reduce(tensor.op)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 103, in traverse_before_reduce
    traverse_before_reduce(tensor.op)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 103, in traverse_before_reduce
    traverse_before_reduce(tensor.op)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 103, in traverse_before_reduce
    traverse_before_reduce(tensor.op)
  [Previous line repeated 4 more times]
  File "/home/ubuntu/.local/lib/python3.6/site-packages/topi-0.6.dev0-py3.6.egg/topi/cuda/reduction.py", line 105, in traverse_before_reduce
    raise RuntimeError("Unsupported operator: %s" % operator.tag)
RuntimeError: Unsupported operator:
Error during compile func
v0.0.1
%22 = fn (%p0: Tensor[(1, 8, 8, 1024), float32], %p1: Tensor[(1024,), float32], %p2: Tensor[(1024,), float32], %p3: Tensor[(1024,), float32], %p4: Tensor[(1024,), float32], %p5: Tensor[(1, 1, 1, 1024), float32], %p6: Tensor[(1, 8, 8, 1024), float32], %p7: Tensor[(1024,), float32], %p8: Tensor[(1024,), float32], %p9: Tensor[(1024,), float32], %p10: Tensor[(1024,), float32], __dict__=meta[StrMap][0]) -> Tensor[(1, 1024), float32] {
  %0 = add(%p1, 0.001f) // ty=Tensor[(1024,), float32]
  %1 = sqrt(%0) // ty=Tensor[(1024,), float32]
  %2 = divide(1f, %1) // ty=Tensor[(1024,), float32]
  %3 = multiply(%2, %p2) // ty=Tensor[(1024,), float32]
  %4 = multiply(%p0, %3) // ty=Tensor[(1, 8, 8, 1024), float32]
  %5 = negative(%p3) // ty=Tensor[(1024,), float32]
  %6 = multiply(%5, %3) // ty=Tensor[(1024,), float32]
  %7 = add(%6, %p4) // ty=Tensor[(1024,), float32]
  %8 = add(%4, %7) // ty=Tensor[(1, 8, 8, 1024), float32]
  %9 = multiply(%8, %p5) // ty=Tensor[(1, 8, 8, 1024), float32]
  %10 = add(%p7, 0.001f) // ty=Tensor[(1024,), float32]
  %11 = sqrt(%10) // ty=Tensor[(1024,), float32]
  %12 = divide(1f, %11) // ty=Tensor[(1024,), float32]
  %13 = multiply(%12, %p8) // ty=Tensor[(1024,), float32]
  %14 = multiply(%p6, %13) // ty=Tensor[(1, 8, 8, 1024), float32]
  %15 = negative(%p9) // ty=Tensor[(1024,), float32]
  %16 = multiply(%15, %13) // ty=Tensor[(1024,), float32]
  %17 = add(%16, %p10) // ty=Tensor[(1024,), float32]
  %18 = add(%14, %17) // ty=Tensor[(1, 8, 8, 1024), float32]
  %19 = add(%9, %18) // ty=Tensor[(1, 8, 8, 1024), float32]
  %20 = nn.relu(%19) // ty=Tensor[(1, 8, 8, 1024), float32]
  %21 = mean(%20, axis=[1, 2]) // ty=Tensor[(1, 1024), float32]
  %21
}
%22
// meta data omitted. you can use show_meta_data=True to include meta data
```
Any help here?