Pull request created https://github.com/dmlc/tvm/pull/3322
From my personal experience, if there is an error in a DAG (or a Relay program), I would prefer to see the exact position of the first error instead of all potential errors across the graph.
For example, here is the current error trace:
fn (%gpu_0/data_0: Tensor[(1, 1, 224, 224), float32], %gpu_0/conv1_w_0: Tensor[(64, 3, 7, 7), float32], %gpu_0/res_conv1_bn_s_0: Tensor[(64,), float32], %gpu_0/res_conv1_bn_b_0: Tensor[(64,), float32], %gpu_0/res_conv1_bn_rm_0: Tensor[(64,), float32], %gpu_0/res_conv1_bn_riv_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2a_w_0: Tensor[(64, 64, 1, 1), float32], %gpu_0/res2_0_branch2a_bn_s_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2a_bn_b_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2a_bn_rm_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2a_bn_riv_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2b_w_0: Tensor[(64, 64, 3, 3), float32], %gpu_0/res2_0_branch2b_bn_s_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2b_bn_b_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2b_bn_rm_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2b_bn_riv_0: Tensor[(64,), float32], %gpu_0/res2_0_branch2c_w_0: Tensor[(256, 64, 1, 1), float32], %gpu_0/res2_0_branch2c_bn_s_0: Tensor[(256,), float32], %gpu_0/res2_0_branch2c_bn_b_0: Tensor[(256,), float32], %gpu_0/res2_0_branch2c_bn_rm_0: Tensor[(256,), float32], %gpu_0/res2_0_branch2c_bn_riv_0: Tensor[(256,), float32], %gpu_0/res2_0_branch1_w_0: Tensor[(256, 64, 1, 1), float32], %gpu_0/res2_0_branch1_bn_s_0: Tensor[(256,), float32], %gpu_0/res2_0_branch1_bn_b_0: Tensor[(256,), float32], %gpu_0/res2_0_branch1_bn_rm_0: Tensor[(256,), float32], %gpu_0/res2_0_branch1_bn_riv_0: Tensor[(256,), float32], %gpu_0/res2_1_branch2a_w_0: Tensor[(64, 256, 1, 1), float32], %gpu_0/res2_1_branch2a_bn_s_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2a_bn_b_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2a_bn_rm_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2a_bn_riv_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2b_w_0: Tensor[(64, 64, 3, 3), float32], %gpu_0/res2_1_branch2b_bn_s_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2b_bn_b_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2b_bn_rm_0: Tensor[(64,), float32], %gpu_0/res2_1_branch2b_bn_riv_0: 
Tensor[(64,), float32], %gpu_0/res2_1_branch2c_w_0: Tensor[(256, 64, 1, 1), float32], %gpu_0/res2_1_branch2c_bn_s_0: Tensor[(256,), float32], %gpu_0/res2_1_branch2c_bn_b_0: Tensor[(256,), float32], %gpu_0/res2_1_branch2c_bn_rm_0: Tensor[(256,), float32], %gpu_0/res2_1_branch2c_bn_riv_0: Tensor[(256,), float32], %gpu_0/res2_2_branch2a_w_0: Tensor[(64, 256, 1, 1), float32], %gpu_0/res2_2_branch2a_bn_s_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2a_bn_b_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2a_bn_rm_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2a_bn_riv_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2b_w_0: Tensor[(64, 64, 3, 3), float32], %gpu_0/res2_2_branch2b_bn_s_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2b_bn_b_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2b_bn_rm_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2b_bn_riv_0: Tensor[(64,), float32], %gpu_0/res2_2_branch2c_w_0: Tensor[(256, 64, 1, 1), float32], %gpu_0/res2_2_branch2c_bn_s_0: Tensor[(256,), float32], %gpu_0/res2_2_branch2c_bn_b_0: Tensor[(256,), float32], %gpu_0/res2_2_branch2c_bn_rm_0: Tensor[(256,), float32], %gpu_0/res2_2_branch2c_bn_riv_0: Tensor[(256,), float32], %gpu_0/res3_0_branch2a_w_0: Tensor[(128, 256, 1, 1), float32], %gpu_0/res3_0_branch2a_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2a_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2a_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2a_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2b_w_0: Tensor[(128, 128, 3, 3), float32], %gpu_0/res3_0_branch2b_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2b_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2b_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2b_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_0_branch2c_w_0: Tensor[(512, 128, 1, 1), float32], %gpu_0/res3_0_branch2c_bn_s_0: Tensor[(512,), float32], %gpu_0/res3_0_branch2c_bn_b_0: Tensor[(512,), float32], %gpu_0/res3_0_branch2c_bn_rm_0: Tensor[(512,), float32], 
%gpu_0/res3_0_branch2c_bn_riv_0: Tensor[(512,), float32], %gpu_0/res3_0_branch1_w_0: Tensor[(512, 256, 1, 1), float32], %gpu_0/res3_0_branch1_bn_s_0: Tensor[(512,), float32], %gpu_0/res3_0_branch1_bn_b_0: Tensor[(512,), float32], %gpu_0/res3_0_branch1_bn_rm_0: Tensor[(512,), float32], %gpu_0/res3_0_branch1_bn_riv_0: Tensor[(512,), float32], %gpu_0/res3_1_branch2a_w_0: Tensor[(128, 512, 1, 1), float32], %gpu_0/res3_1_branch2a_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2a_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2a_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2a_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2b_w_0: Tensor[(128, 128, 3, 3), float32], %gpu_0/res3_1_branch2b_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2b_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2b_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2b_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_1_branch2c_w_0: Tensor[(512, 128, 1, 1), float32], %gpu_0/res3_1_branch2c_bn_s_0: Tensor[(512,), float32], %gpu_0/res3_1_branch2c_bn_b_0: Tensor[(512,), float32], %gpu_0/res3_1_branch2c_bn_rm_0: Tensor[(512,), float32], %gpu_0/res3_1_branch2c_bn_riv_0: Tensor[(512,), float32], %gpu_0/res3_2_branch2a_w_0: Tensor[(128, 512, 1, 1), float32], %gpu_0/res3_2_branch2a_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2a_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2a_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2a_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2b_w_0: Tensor[(128, 128, 3, 3), float32], %gpu_0/res3_2_branch2b_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2b_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2b_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2b_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_2_branch2c_w_0: Tensor[(512, 128, 1, 1), float32], %gpu_0/res3_2_branch2c_bn_s_0: Tensor[(512,), float32], %gpu_0/res3_2_branch2c_bn_b_0: Tensor[(512,), float32], %gpu_0/res3_2_branch2c_bn_rm_0: 
Tensor[(512,), float32], %gpu_0/res3_2_branch2c_bn_riv_0: Tensor[(512,), float32], %gpu_0/res3_3_branch2a_w_0: Tensor[(128, 512, 1, 1), float32], %gpu_0/res3_3_branch2a_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2a_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2a_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2a_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2b_w_0: Tensor[(128, 128, 3, 3), float32], %gpu_0/res3_3_branch2b_bn_s_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2b_bn_b_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2b_bn_rm_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2b_bn_riv_0: Tensor[(128,), float32], %gpu_0/res3_3_branch2c_w_0: Tensor[(512, 128, 1, 1), float32], %gpu_0/res3_3_branch2c_bn_s_0: Tensor[(512,), float32], %gpu_0/res3_3_branch2c_bn_b_0: Tensor[(512,), float32], %gpu_0/res3_3_branch2c_bn_rm_0: Tensor[(512,), float32], %gpu_0/res3_3_branch2c_bn_riv_0: Tensor[(512,), float32], %gpu_0/res4_0_branch2a_w_0: Tensor[(256, 512, 1, 1), float32], %gpu_0/res4_0_branch2a_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2a_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2a_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2a_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2b_w_0: Tensor[(256, 256, 3, 3), float32], %gpu_0/res4_0_branch2b_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2b_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2b_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2b_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_0_branch2c_w_0: Tensor[(1024, 256, 1, 1), float32], %gpu_0/res4_0_branch2c_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_0_branch2c_bn_b_0: Tensor[(1024,), float32], %gpu_0/res4_0_branch2c_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_0_branch2c_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res4_0_branch1_w_0: Tensor[(1024, 512, 1, 1), float32], %gpu_0/res4_0_branch1_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_0_branch1_bn_b_0: Tensor[(1024,), 
float32], %gpu_0/res4_0_branch1_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_0_branch1_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res4_1_branch2a_w_0: Tensor[(256, 1024, 1, 1), float32], %gpu_0/res4_1_branch2a_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2a_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2a_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2a_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2b_w_0: Tensor[(256, 256, 3, 3), float32], %gpu_0/res4_1_branch2b_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2b_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2b_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2b_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_1_branch2c_w_0: Tensor[(1024, 256, 1, 1), float32], %gpu_0/res4_1_branch2c_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_1_branch2c_bn_b_0: Tensor[(1024,), float32], %gpu_0/res4_1_branch2c_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_1_branch2c_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res4_2_branch2a_w_0: Tensor[(256, 1024, 1, 1), float32], %gpu_0/res4_2_branch2a_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2a_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2a_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2a_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2b_w_0: Tensor[(256, 256, 3, 3), float32], %gpu_0/res4_2_branch2b_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2b_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2b_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2b_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_2_branch2c_w_0: Tensor[(1024, 256, 1, 1), float32], %gpu_0/res4_2_branch2c_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_2_branch2c_bn_b_0: Tensor[(1024,), float32], %gpu_0/res4_2_branch2c_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_2_branch2c_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res4_3_branch2a_w_0: Tensor[(256, 1024, 1, 1), float32], %gpu_0/res4_3_branch2a_bn_s_0: Tensor[(256,), float32], 
%gpu_0/res4_3_branch2a_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2a_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2a_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2b_w_0: Tensor[(256, 256, 3, 3), float32], %gpu_0/res4_3_branch2b_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2b_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2b_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2b_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_3_branch2c_w_0: Tensor[(1024, 256, 1, 1), float32], %gpu_0/res4_3_branch2c_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_3_branch2c_bn_b_0: Tensor[(1024,), float32], %gpu_0/res4_3_branch2c_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_3_branch2c_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res4_4_branch2a_w_0: Tensor[(256, 1024, 1, 1), float32], %gpu_0/res4_4_branch2a_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2a_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2a_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2a_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2b_w_0: Tensor[(256, 256, 3, 3), float32], %gpu_0/res4_4_branch2b_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2b_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2b_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2b_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_4_branch2c_w_0: Tensor[(1024, 256, 1, 1), float32], %gpu_0/res4_4_branch2c_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_4_branch2c_bn_b_0: Tensor[(1024,), float32], %gpu_0/res4_4_branch2c_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_4_branch2c_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res4_5_branch2a_w_0: Tensor[(256, 1024, 1, 1), float32], %gpu_0/res4_5_branch2a_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2a_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2a_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2a_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2b_w_0: Tensor[(256, 256, 3, 3), float32], 
%gpu_0/res4_5_branch2b_bn_s_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2b_bn_b_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2b_bn_rm_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2b_bn_riv_0: Tensor[(256,), float32], %gpu_0/res4_5_branch2c_w_0: Tensor[(1024, 256, 1, 1), float32], %gpu_0/res4_5_branch2c_bn_s_0: Tensor[(1024,), float32], %gpu_0/res4_5_branch2c_bn_b_0: Tensor[(1024,), float32], %gpu_0/res4_5_branch2c_bn_rm_0: Tensor[(1024,), float32], %gpu_0/res4_5_branch2c_bn_riv_0: Tensor[(1024,), float32], %gpu_0/res5_0_branch2a_w_0: Tensor[(512, 1024, 1, 1), float32], %gpu_0/res5_0_branch2a_bn_s_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2a_bn_b_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2a_bn_rm_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2a_bn_riv_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2b_w_0: Tensor[(512, 512, 3, 3), float32], %gpu_0/res5_0_branch2b_bn_s_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2b_bn_b_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2b_bn_rm_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2b_bn_riv_0: Tensor[(512,), float32], %gpu_0/res5_0_branch2c_w_0: Tensor[(2048, 512, 1, 1), float32], %gpu_0/res5_0_branch2c_bn_s_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch2c_bn_b_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch2c_bn_rm_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch2c_bn_riv_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch1_w_0: Tensor[(2048, 1024, 1, 1), float32], %gpu_0/res5_0_branch1_bn_s_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch1_bn_b_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch1_bn_rm_0: Tensor[(2048,), float32], %gpu_0/res5_0_branch1_bn_riv_0: Tensor[(2048,), float32], %gpu_0/res5_1_branch2a_w_0: Tensor[(512, 2048, 1, 1), float32], %gpu_0/res5_1_branch2a_bn_s_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2a_bn_b_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2a_bn_rm_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2a_bn_riv_0: Tensor[(512,), float32], 
%gpu_0/res5_1_branch2b_w_0: Tensor[(512, 512, 3, 3), float32], %gpu_0/res5_1_branch2b_bn_s_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2b_bn_b_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2b_bn_rm_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2b_bn_riv_0: Tensor[(512,), float32], %gpu_0/res5_1_branch2c_w_0: Tensor[(2048, 512, 1, 1), float32], %gpu_0/res5_1_branch2c_bn_s_0: Tensor[(2048,), float32], %gpu_0/res5_1_branch2c_bn_b_0: Tensor[(2048,), float32], %gpu_0/res5_1_branch2c_bn_rm_0: Tensor[(2048,), float32], %gpu_0/res5_1_branch2c_bn_riv_0: Tensor[(2048,), float32], %gpu_0/res5_2_branch2a_w_0: Tensor[(512, 2048, 1, 1), float32], %gpu_0/res5_2_branch2a_bn_s_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2a_bn_b_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2a_bn_rm_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2a_bn_riv_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2b_w_0: Tensor[(512, 512, 3, 3), float32], %gpu_0/res5_2_branch2b_bn_s_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2b_bn_b_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2b_bn_rm_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2b_bn_riv_0: Tensor[(512,), float32], %gpu_0/res5_2_branch2c_w_0: Tensor[(2048, 512, 1, 1), float32], %gpu_0/res5_2_branch2c_bn_s_0: Tensor[(2048,), float32], %gpu_0/res5_2_branch2c_bn_b_0: Tensor[(2048,), float32], %gpu_0/res5_2_branch2c_bn_rm_0: Tensor[(2048,), float32], %gpu_0/res5_2_branch2c_bn_riv_0: Tensor[(2048,), float32], %gpu_0/pred_w_0: Tensor[(1000, 2048), float32], %gpu_0/pred_b_0: Tensor[(1000,), float32]) {
%0 = nn.conv2d(%gpu_0/data_0, %gpu_0/conv1_w_0, strides=[2, 2], padding=[3, 3], kernel_size=[7, 7])an internal invariant was violated while typechecking your program [21:01:01] /Users/ligeng/Workspace/tvm/src/relay/op/nn/convolution.cc:107: Check failed: reporter->AssertEQ(dshape_nchw[1] / param->groups, wshape[1]):
;
%1 = nn.batch_norm(%0, %gpu_0/res_conv1_bn_s_0, %gpu_0/res_conv1_bn_b_0, %gpu_0/res_conv1_bn_rm_0, %gpu_0/res_conv1_bn_riv_0, epsilon=1e-05)
%2 = %1.0
%3 = nn.relu(%2)
%4 = nn.max_pool2d(%3, pool_size=[3, 3], strides=[2, 2], padding=[1, 1])an internal invariant was violated while typechecking your program [21:01:01] /Users/ligeng/Workspace/tvm/src/relay/op/nn/pooling.cc:73: Check failed: data != nullptr:
...... # > 200 lines
......
......
%230 = multiply(1f, %gpu_0/pred_b_0)
%231 = nn.bias_add(%229, %230)
nn.softmax(%231, axis=1)
}
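For what it's worth, the first Check failure can be re-derived by hand from the signature above: %gpu_0/data_0 has 1 input channel while %gpu_0/conv1_w_0 expects 3. A minimal sketch of that arithmetic (assuming NCHW data layout, OIHW weight layout, and groups=1, the nn.conv2d defaults):

```python
# Re-derive the failed invariant from convolution.cc:107 by hand.
# NCHW data layout:   (batch, in_channels, height, width)
# OIHW weight layout: (out_channels, in_channels / groups, kH, kW)
dshape_nchw = (1, 1, 224, 224)   # shape of %gpu_0/data_0
wshape = (64, 3, 7, 7)           # shape of %gpu_0/conv1_w_0
groups = 1                       # nn.conv2d default

# Check failed: dshape_nchw[1] / groups == wshape[1]  ->  1 != 3
print(dshape_nchw[1] // groups == wshape[1])  # False
```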
Even with the errors highlighted in color, this is too long to read. I would prefer an error message like:
An error occurred while executing the following layer:
%0 = nn.conv2d(%gpu_0/data_0, %gpu_0/conv1_w_0, strides=[2, 2], padding=[3, 3], kernel_size=[7, 7])
INFO: an internal invariant was violated while typechecking your program [21:01:01] /Users/ligeng/Workspace/tvm/src/relay/op/nn/convolution.cc:107: Check failed: reporter->AssertEQ(dshape_nchw[1] / param->groups, wshape[1]): ;
This makes locating the problem easier and avoids potential ambiguity (in my case, the max_pool2d error is just a cascade of the first conv2d failure).
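As an illustration only (this is not TVM's diagnostic machinery, and all names here are hypothetical), a reporter with the proposed behavior might print just the first failing expression together with its message, suppressing cascaded errors:

```python
def report_first_error(ir_lines, diagnostics):
    """ir_lines: printed IR, one expression per line.
    diagnostics: {line_index: error message}; type checking may
    collect several per run. Report only the earliest failure."""
    if not diagnostics:
        return None
    first = min(diagnostics)  # earliest failure in program order
    return ("An error occurred while type-checking the following expression:\n"
            "  " + ir_lines[first] + "\n"
            "INFO: " + diagnostics[first])

ir = ["%0 = nn.conv2d(%gpu_0/data_0, %gpu_0/conv1_w_0, ...)",
      "%1 = nn.batch_norm(%0, ...)",
      "%4 = nn.max_pool2d(%3, ...)"]
errors = {0: "Check failed: reporter->AssertEQ(...)",
          2: "Check failed: data != nullptr"}  # cascaded, not reported
print(report_first_error(ir, errors))
```

The key design choice is simply ordering the collected diagnostics by program position and stopping after the first, so downstream failures caused by an earlier one never reach the user.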