Are all models obtained via the from_mxnet function in nnvm.frontend scheduled for single-image inference? I am running my model with the target 'cuda'. When I try to input batches larger than 1, the program crashes with the error:
RuntimeError("Batch size: %d is too large for this schedule" % batch_size)
Could someone point me toward how to carry out inference over multiple images at once?
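For reference, here is a minimal sketch of my pipeline. The model choice, input shape, and batch size are placeholders; I am on the old NNVM API (nnvm.frontend / nnvm.compiler), so the exact calls may differ slightly from newer TVM releases.

```python
# Minimal repro sketch -- model, shape, and batch size are placeholders.
import mxnet as mx
import nnvm
import nnvm.compiler
import tvm
from tvm.contrib import graph_runtime

batch_size = 8  # batch_size = 1 works; anything larger triggers the RuntimeError

# Load a pretrained Gluon model (arbitrary example choice).
block = mx.gluon.model_zoo.vision.resnet18_v1(pretrained=True)
net, params = nnvm.frontend.from_mxnet(block)

# Declare the batched input shape at compile time.
shape_dict = {'data': (batch_size, 3, 224, 224)}
with nnvm.compiler.build_config(opt_level=3):
    graph, lib, params = nnvm.compiler.build(
        net, target='cuda', shape=shape_dict, params=params)

# Create the runtime module and feed a batch of images.
ctx = tvm.gpu(0)
module = graph_runtime.create(graph, lib, ctx)
module.set_input(**params)
# module.set_input('data', batch_array); module.run()
# -> RuntimeError("Batch size: %d is too large for this schedule" % batch_size)
```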