I’ve just moved from v0.6 to the development branch (HEAD at the time of writing).
I’m trying to autotune one of my benchmarks, a CNN. It autotunes fine in v0.6; however, on the dev branch it crashes after only a couple of iterations with this error:
[XX:XX:XX] ../src/printer/doc.cc:55: text node: ' an internal invariant was violated while typechecking your program
[XX:XX:XX] ../src/relay/op/nn/convolution.cc:561: Check failed: param->kernel_size.defined() && param->channels.defined(): The kernel size and channels of a Conv must be set or infered by previous pass
along with other possibly relevant output:
%15 = nn.contrib_conv2d_winograd_without_weight_transform(%11, %14, tile_size=4, padding=[1, 1, 1, 1], kernel_size=[3, 3]) an internal invariant was violated while typechecking your program
[XX:XX:XX] ../src/relay/op/nn/convolution.cc:561: Check failed: param->kernel_size.defined() && param->channels.defined(): The kernel size and channels of a Conv must be set or infered by previous pass
The model runs fine when not autotuning, and a couple of other models I’ve tried seem to autotune without issue. The debug output doesn’t mention any other layers failing, but it could be that this is just the first one to fail.
My uninformed guess from the message is that the kernel size or channels information is lost somewhere along the way with nn.contrib_conv2d_winograd_without_weight_transform. However, I’m not yet familiar enough with the C++ backend of TVM to know a good way to start tracing this.
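One starting point I’m considering (a sketch only): grep the C++ source for the error text, since the log already points at src/relay/op/nn/convolution.cc:561. The tiny stand-in tree below is fabricated just so the command is self-contained; in practice you would run the grep from the root of a TVM checkout.

```shell
# Fabricated stand-in for a TVM checkout -- only for illustration.
d=$(mktemp -d)
mkdir -p "$d/src/relay/op/nn"
cat > "$d/src/relay/op/nn/convolution.cc" <<'EOF'
CHECK(param->kernel_size.defined() && param->channels.defined())
    << "The kernel size and channels of a Conv must be set or infered by previous pass";
EOF

# The actual technique: search the source tree for the failing check's message
# to find where to put a breakpoint or extra logging.
hit=$(grep -rn "must be set or infered" "$d/src" | head -n 1)
echo "$hit"
```

From there one could add logging around the CHECK, or break on it in gdb, to see which pass dropped the attributes.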
I can provide more information about my setup if needed.
I have not yet jumped backwards through the commit history to see if there’s an earlier commit of this branch that works.
(Off-topic, but does anyone have a recommended automated way of doing this beyond bash scripting? I guess something like a binary search over a range of commits?)