Can I use TOPI operators in hybrid script?

As the title says.
Besides, I also want to know whether a Python list can be used in hybrid script.


CC: @were if you have bandwidth

Generally speaking, in TVM we want small ops instead of large ones, because the compiler can then do optimizations for you automatically (e.g. fuse operators where possible).

In your particular case (using TOPI operators in hybrid script), I would suggest building up a Relay IR instead; see the sketch below.
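As a minimal sketch of what composing small ops at the Relay level could look like (assuming the pre-0.7 API used elsewhere in this thread; the shapes and the multiply are just illustrative):

import tvm
from tvm import relay

# Build a small graph out of graph-level ops; the compiler can fuse them later.
x = relay.var("x", shape=(1, 2, 2, 1), dtype="float32")
padded = relay.nn.pad(x, pad_width=((0, 0), (2, 2), (2, 2), (0, 0)))
doubled = relay.multiply(padded, relay.const(2.0))
func = relay.Function([x], doubled)
print(func)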

Python tuples should be supported.

Yes, you can use TOPI as an upstream op, and its result can be passed in a hybrid script’s argument list.

Refer to the test_upstream function in tests/python/unittest/test_hybrid_script.py for more details; a minimal sketch of the pattern follows.
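For illustration, a hedged sketch of that pattern (assuming the pre-0.7 flat tvm/topi namespaces used elsewhere in this thread; add_one and the relu upstream op are stand-ins, not the actual test):

import tvm
import topi

@tvm.hybrid.script
def add_one(a):
    b = output_tensor(a.shape, a.dtype)
    for i in range(a.shape[0]):
        b[i] = a[i] + 1.0
    return b

x = tvm.placeholder((8,), name="x", dtype="float32")
y = topi.nn.relu(x)   # upstream TOPI op
z = add_one(y)        # its output is an ordinary tensor argument to the hybrid function
s = tvm.create_schedule(z.op)
mod = tvm.build(s, [x, z], target="llvm")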

For the full set of language features implemented by hybrid script, refer to the language manual.

Generally speaking, yes and no.
If you want to use a Python list for software emulation, yes. You can pass it in, as long as your indexing does not go out of bounds.
If you want to use a Python list as a constant tensor in compilation, no. You need tvm.convert to change it into a tvm.container.Array, as sketched below.
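A minimal sketch of that conversion, assuming the pre-0.7 top-level tvm.convert used elsewhere in this thread:

import tvm

block_shape = tvm.convert([2, 2])  # Python list -> tvm.container.Array
print(type(block_shape))           # tvm.container.Array
print(block_shape[0])              # elements become TVM expressions (e.g. IntImm)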

I use the hybrid function as an inner function of a Python function: a Python list is a parameter of the outer Python function and is used directly in the hybrid function through the closure, without being passed in explicitly.
The function implements the space_to_batch operator. The code is here; I hope it helps.

import numpy as np
import tvm
import topi

def hybrid_space_to_batch3(data, pad_before, pad_after, block_shape):
    # Pad with an upstream TOPI op, then rearrange with an inner hybrid function.
    data = topi.nn.pad(data, pad_before, pad_after)
    out_batch = data.shape[0] * block_shape[0] * block_shape[1]
    out_height = data.shape[1] // block_shape[0]
    out_width = data.shape[2] // block_shape[1]
    out_channel = data.shape[3]
    out_shape = (out_batch, out_height, out_width, out_channel)

    @tvm.hybrid.script
    def _space_to_batch(data):
        out_tensor = output_tensor(out_shape, data.dtype)
        # space to batch
        for i in range(out_tensor.shape[0]):
            for j in range(out_tensor.shape[1]):
                for k in range(out_tensor.shape[2]):
                    for l in range(out_tensor.shape[3]):
                        # map the output tensor index back to the raw data index
                        ibatch = int32(i % data.shape[0])
                        coef = int32(i // data.shape[0])
                        iwidth = int32(coef % block_shape[1] + k * block_shape[1])
                        coef = int32(coef // block_shape[1])
                        iheight = int32(coef % block_shape[0] + j * block_shape[0])
                        ichannel = l  # the last dim of data matches the output's last dim
                        out_tensor[i, j, k, l] = data[ibatch, iheight, iwidth, ichannel]
        return out_tensor

    res = _space_to_batch(data)
    return res

def test_hybrid_space_to_batch():
    target = 'llvm'
    ctx = tvm.context(target, 0)
    dtype = 'float32'
    data = tvm.placeholder((1,2,2,1), dtype=dtype)
    pad_before = [0,2,2,0]
    pad_after = [0,2,2,0]
    block_shape = [2, 2]
    Result = hybrid_space_to_batch3(data, pad_before, pad_after, block_shape)
    sch = tvm.create_schedule(Result.op)
    module = tvm.build(sch, [data, Result], target=target)
    print(tvm.lower(sch, [data, Result], simple_mode=True))
    np.random.seed(12306)
    data = tvm.nd.array(np.random.randint(1,20,size=[1,2,2,1]).astype(dtype), ctx)
    out = tvm.nd.array(np.zeros((4,3,3,1), dtype=dtype), ctx)
    module(data, out)
    print(out.asnumpy())

if __name__ == '__main__':
    test_hybrid_space_to_batch()