Batch Size is too Large

Are all the models obtained via the from_mxnet function in nnvm.frontend scheduled for single-image inference? I am running my model with the target ‘cuda’.

When I try to input batches larger than 1, the program crashes with the error:
RuntimeError("Batch size: %d is too large for this schedule" % batch_size)

Could someone point me towards how I can carry out inference over multiple images?


The convolution schedules currently only support inference with one input at a time (N=1).
You should run your images one at a time.
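The one-at-a-time workaround can be sketched as a plain loop over the batch. The `run_single` callable below is a hypothetical stand-in for the compiled module's per-image inference call (in the NNVM-era graph runtime this would typically be a set_input/run/get_output sequence); the doubling lambda is just a dummy model for illustration.

```python
def infer_batch(run_single, batch):
    """Run single-image (N=1) inference over each item in a batch.

    run_single: callable that takes one input and returns one output,
        standing in for the compiled module's inference call, which
        only accepts a single input at a time.
    batch: iterable of per-image inputs.
    Returns the list of per-image outputs, in the original order.
    """
    outputs = []
    for image in batch:
        # Each call sees exactly one input, so the N=1 schedule applies.
        outputs.append(run_single(image))
    return outputs

# Example with a dummy "model" that doubles its input.
results = infer_batch(lambda x: 2 * x, [1, 2, 3])
```

This trades throughput for compatibility: the GPU is launched once per image, so it will be slower than a true batched schedule, but it avoids the batch-size check entirely.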

Oh, all right. Thank you for taking the time to answer the query.

Could someone give me pointers on how I could contribute support for multi-input inference? Maybe I could take it up as a project and submit a PR.

Have you solved your problem? I have encountered the same problem.

Has this problem been solved? Is there a better way?

Could the convolution schedules support inference with more than one input at a time? Is there any progress on this?

Could we have a discussion on how to support this widely used feature? I saw some earlier posts saying that the templates in the master branch now support minibatches, but I don’t know more about that.