How to merge several ops/schedules?

Hello!

I want to merge all ops/schedules when using TOPI directly, not Relay.

For example, suppose I have a simple network like the one below.

import tvm
import topi
import numpy as np

# data, param_1, param_2 are tvm.placeholder tensors; batch is the batch size
with tvm.target.create("cuda"):
    Layer_1 = topi.nn.conv2d(data, param_1, strides=[1, 1], padding=[0, 0], dilation=[1, 1])
    Layer_2 = tvm_bias_add(Layer_1, param_2)  # custom bias_add function
    Layer_3 = topi.nn.relu(Layer_2)
    Layer_4 = topi.nn.pool(Layer_3, kernel=[2, 2], stride=[2, 2], padding=[0, 0, 0, 0], pool_type="avg")  # kernel is the pool size

And to schedule each op, I write functions like the ones below (each returns the schedule together with its output tensor, so the tensors can be passed to tvm.build).

def conv_bias_relu(input_data, param_1, param_2):
    with tvm.target.create("cuda"):
        L1 = topi.nn.conv2d(input_data, param_1, strides=[1, 1], padding=[0, 0], dilation=[1, 1])
        L2 = tvm_bias_add(L1, param_2)  # custom bias_add function
        L3 = topi.nn.relu(L2)
        # schedule from the last op so that bias_add and relu are fused into the conv
        sch1 = topi.generic.schedule_conv2d_nchw(L3)
    return sch1, L3

def pool(input_data):
    with tvm.target.create("cuda"):
        L1 = topi.nn.pool(input_data, kernel=[2, 2], stride=[2, 2], padding=[0, 0, 0, 0], pool_type="avg")
        sch2 = topi.generic.schedule_pool(L1, layout="NCHW")
    return sch2, L1

tgt = "cuda"
data2 = tvm.placeholder((batch, 6, 26, 26), name="data2")  # input of the pool module

sch1, cbr_out = conv_bias_relu(data, param_1, param_2)
sch2, pool_out = pool(data2)
## any solution to fuse sch1 and sch2 ???
Conv_Bias_Relu = tvm.build(sch1, [data, param_1, param_2, cbr_out], tgt, name="CBR")
AvgPool = tvm.build(sch2, [data2, pool_out], tgt, name="AvgPool")

ctx = tvm.context(tgt, 0)  # tgt is 'cuda'
test_data = tvm.nd.array(np.zeros((batch, 1, 28, 28)).astype("float32"), ctx)
test_p1 = tvm.nd.array(np.zeros((6, 1, 3, 3)).astype("float32"), ctx)
test_p2 = tvm.nd.array(np.zeros((6,)).astype("float32"), ctx)
test_data2 = tvm.nd.array(np.zeros((batch, 6, 26, 26)).astype("float32"), ctx)  # 28x28 conv 3x3 -> 26x26
test_out = tvm.nd.array(np.zeros((batch, 6, 13, 13)).astype("float32"), ctx)  # 26x26 avg-pooled 2x2 -> 13x13
## inference: test_data2 holds the CBR output and is the input of the pool
Conv_Bias_Relu(test_data, test_p1, test_p2, test_data2)
AvgPool(test_data2, test_out)

I want to merge "Conv_Bias_Relu" and "AvgPool" into one Module, but I did not find a way to merge or fuse ops/schedules/Modules.

Is there any solution to fuse all ops/schedules/Modules?

Thank you.

TOPI automatically fuses a convolution with any number of injective operations that follow it, which is what happens in your conv_bias_relu function. I don't think fusion with pool operations is implemented.
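
For illustration, here is a sketch of what happens if you try to cover the whole chain with a single schedule rooted at the pool output (assuming the same TVM 0.x-era TOPI API used above; the shapes are just for the example). The generic pool schedule only knows how to handle the pool itself and injective producers, so it is expected to reject the conv2d stage:

import tvm
import topi

data = tvm.placeholder((1, 1, 28, 28), name="data")
kern = tvm.placeholder((6, 1, 3, 3), name="kern")

with tvm.target.create("cuda"):
    conv = topi.nn.conv2d(data, kern, strides=[1, 1], padding=[0, 0], dilation=[1, 1])
    relu = topi.nn.relu(conv)
    out = topi.nn.pool(relu, kernel=[2, 2], stride=[2, 2], padding=[0, 0, 0, 0], pool_type="avg")
    try:
        # one schedule for the whole graph, rooted at the pool output
        sch = topi.generic.schedule_pool(out, layout="NCHW")
    except Exception as err:
        # the CUDA pool schedule has no rule for a conv2d producer,
        # which is why a custom combined schedule is needed
        print("cannot schedule the conv stage under schedule_pool:", err)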

You could try writing your own schedule function, similar to schedule_conv2d_nchw_cuda found in tvm/topi/python/topi/cuda/conv2d.py.
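
For reference, this is roughly the shape such a custom schedule function could take (only a sketch, assuming the 0.x API: the graph traversal and inlining follow the pattern of the existing CUDA schedules, while the actual tiling/thread-binding for the conv and pool stages is left as TODO):

import tvm
import topi

def schedule_conv_bias_relu_pool(outs):
    # Walk the graph from the outputs, inline injective ops, and
    # schedule the conv and pool stages explicitly.
    outs = [outs] if isinstance(outs, tvm.tensor.Tensor) else list(outs)
    s = tvm.create_schedule([x.op for x in outs])
    visited = set()

    def traverse(op):
        if op in visited or not isinstance(op, tvm.tensor.ComputeOp):
            return
        visited.add(op)
        if topi.tag.is_injective(op.tag):
            if op not in s.outputs:
                s[op].compute_inline()  # fuse bias_add / relu into their consumer
        elif "conv2d" in op.tag:
            pass  # TODO: tile, bind threads, cache reads as in cuda/conv2d.py
        elif op.tag.startswith("pool"):
            pass  # TODO: schedule the pooling stage (e.g. bind to threads)
        for t in op.input_tensors:
            traverse(t.op)

    for out in outs:
        traverse(out.op)
    return s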


Thank you for the reply!

I will look into what you mentioned.

Thanks!