Unable to unify: static_tensor_float32_1_64_t and static_tensor_float32_scalar_t

I converted a Keras LSTM to TVM and hit this error; running the LSTM in TensorFlow does not raise it. I cannot understand what it is saying. My guess is that something is wrong with the while_loop and tensor_array ops.
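
Roughly, the conversion path looks like this (a simplified sketch; the frozen-graph path, input name, and input shape are placeholders for my actual model):

    import tensorflow as tf
    import tvm
    from tvm import relay

    # Load the frozen graph exported from the Keras model (path is a placeholder).
    with tf.io.gfile.GFile("frozen_lstm.pb", "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())

    # Input name and shape are placeholders matching my model's signature.
    shape_dict = {"input_1": (1, 250, 1000)}
    mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)

    # The "unable to unify" error comes out of type inference during build.
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target="llvm", params=params)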

logs:

    %133 = strided_slice(%tf_bi_lstm_layer_1/tf_bi_lstm_layer_1/rnn_cell_f/Shape_1, begin=[0], end=[1], strides=[1]);
    %134 = squeeze(%133);
    %135 = @tensor_array_float32_scalar(%134);
    %136 = full(0f, shape=[1, 64], dtype="float32");
    %137 = strided_slice(%tf_bi_lstm_layer_1/kernel, begin=[0, 0], end=[1000, 64], strides=[1, 1]);
    %138 = reshape(%137, meta[relay.Constant][0], newshape=[1000, 64]);
    %139 = strided_slice(%tf_bi_lstm_layer_1/recurrent_kernel, begin=[0, 192], end=[64, 256], strides=[1, 1]);
    %140 = reshape(%139, meta[relay.Constant][1], newshape=[64, 64]);
    %141 = strided_slice(%tf_bi_lstm_layer_1/bias, begin=[192], end=[256], strides=[1]);
    %142 = reshape(%141, meta[relay.Constant][2], newshape=[64]);
    %143 = strided_slice(%tf_bi_lstm_layer_1/recurrent_kernel, begin=[0, 64], end=[64, 128], strides=[1, 1]);
    %144 = reshape(%143, meta[relay.Constant][3], newshape=[64, 64]);
    %145 = strided_slice(%tf_bi_lstm_layer_1/bias, begin=[128], end=[192], strides=[1]);
    %146 = reshape(%145, meta[relay.Constant][4], newshape=[64]);
    %147 = strided_slice(%tf_bi_lstm_layer_1/recurrent_kernel, begin=[0, 128], end=[64, 192], strides=[1, 1]);
    %148 = reshape(%147, meta[relay.Constant][5], newshape=[64, 64]);
    %149 = strided_slice(%tf_bi_lstm_layer_1/kernel, begin=[0, 192], end=[1000, 256], strides=[1, 1]);
    %150 = reshape(%149, meta[relay.Constant][6], newshape=[1000, 64]);
    %151 = strided_slice(%tf_bi_lstm_layer_1/kernel, begin=[0, 128], end=[1000, 192], strides=[1, 1]);
    %152 = reshape(%151, meta[relay.Constant][7], newshape=[1000, 64]);
    %153 = strided_slice(%tf_bi_lstm_layer_1/recurrent_kernel, begin=[0, 0], end=[64, 64], strides=[1, 1]);
    %154 = reshape(%153, meta[relay.Constant][8], newshape=[64, 64]);
    %155 = strided_slice(%tf_bi_lstm_layer_1/kernel, begin=[0, 64], end=[1000, 128], strides=[1, 1]);
    %156 = reshape(%155, meta[relay.Constant][9], newshape=[1000, 64]);
    %157 = @tensor_array_float32_1_1000(%134);
    %158 = arange(0, 250, 1, start=meta[relay.Constant][10], stop=meta[relay.Constant][11], step=meta[relay.Constant][12], dtype="int32");
    %159 = transpose(%input_1, axes=[1, 0, 2]);
    %160 = @tensor_array_unstack_float32_250_1_1000(%159);
    %161 = @tensor_array_scatter_float32_1_1000(%157, %158, %160);
    %162 = strided_slice(%tf_bi_lstm_layer_1/bias, begin=[64], end=[128], strides=[1]);
    %163 = reshape(%162, meta[relay.Constant][13], newshape=[64]);
    %164 = strided_slice(%tf_bi_lstm_layer_1/bias, begin=[0], end=[64], strides=[1]);
    %165 = reshape(%164, meta[relay.Constant][14], newshape=[64]);
    %while_loop(0, 0, %135, %136, %136, %134, %138, %140, %142, %144, %146, %148, %150, %152, %154, %156, %161, %163, %165)
    unable to unify: static_tensor_float32_scalar_t and static_tensor_float32_1_64_t;
    unable to unify: static_tensor_float32_1_64_t and static_tensor_float32_scalar_t;

So what should I do? Git clone the latest code and rebuild TVM again?

This PR is mostly done but not 100% complete. After it is merged, your issue should be resolved.

When I use the SSD model ssd_mobilenet_v1_coco, I get this error:

    fn (%Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3_loop_var: List[tensor_float32_t[]], %Preprocessor/map/Const_loop_var: int32) {
      %0 = take(%Preprocessor/map/Const_loop_var, 0);
      %1 = @tensor_array_read_float32(%Preprocessor/map/TensorArrayUnstack/TensorArrayScatter/TensorArrayScatterV3_loop_var, %0);
      %2 = expand_dims(%1, axis=0)
      an internal invariant was violated while typechecking your program
      [15:56:45] /home/wangxihui/tvm_nms/src/relay/op/tensor/transform.cc:181: Check failed: types[0].as(): expand_dims: expect input type to be TensorType but get TypeCallNode(GlobalTypeVar(tensor_float32_t, 5), []);
      %3 = image.resize(%2, size=[300, 300], layout="NHWC", coordinate_transformation_mode="asymmetric");
      squeeze(%3, axis=[0])
    }

In your code, for the TF op TensorArray, you use attr["shape"] to get a static_tensor_array, but in the above model TensorArrayV3 does not have a shape attr, so TVM creates a GlobalTypeVar. Can this SSD model run correctly?
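
As a quick check, the graph's TensorArrayV3 nodes can be inspected for a usable shape hint (a diagnostic sketch; the file path is a placeholder, and "element_shape" is the attr name TF uses for this op):

    import tensorflow as tf

    # Placeholder path to the frozen SSD graph.
    with tf.io.gfile.GFile("ssd_mobilenet_v1_coco.pb", "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())

    # TensorArrayV3 keeps its optional shape hint in the "element_shape"
    # attr; if it is absent or has unknown rank, the frontend has no
    # static shape to work with.
    for node in graph_def.node:
        if node.op == "TensorArrayV3":
            if "element_shape" in node.attr:
                print(node.name, node.attr["element_shape"].shape)
            else:
                print(node.name, "-> no element_shape attr")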

In this case it is not a 'static' tensor array, which means the tensor array carries no 'shape' information. When we retrieve a tensor from the tensor array (a Relay List), there is no way to construct a read function for it, because Relay doesn't support dynamic rank yet.

This is the function that gets a tensor out of a static tensor array; as you can see, it needs data_shape to be constructed.
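
For illustration, here is a minimal sketch of how the static tensor array ops are specialized per shape in Relay's prelude (API names follow a recent TVM checkout and may differ between versions, so treat this as an assumption rather than the exact code behind the link):

    import tvm
    from tvm.relay.prelude import Prelude, StaticTensorArrayOps

    mod = tvm.IRModule()
    prelude = Prelude(mod)

    # Static tensor array ops are generated per (dtype, shape); without a
    # concrete data_shape there is nothing to specialize the read on.
    dtype, shape = "float32", (1, 64)
    static_ops = StaticTensorArrayOps(prelude, dtype, shape)
    static_ops.register()

    # Look up the generated read function for this (dtype, shape) pair.
    read = prelude.get_global_var_static("tensor_array_read", dtype, shape)
    print(mod[read])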

It looks like this model was saved with an older version of TF. I tried to compile this model and a lot of expected attributes are missing. Maybe you can try a newer pre-trained SSD.
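
One quick way to check how old the saved graph is (a small sketch; the path is a placeholder, and the producer field is only a rough proxy for the TF release that wrote it):

    import tensorflow as tf

    # Placeholder path to the frozen SSD graph.
    with tf.io.gfile.GFile("ssd_mobilenet_v1_coco.pb", "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())

    # GraphDef records the graph version it was written with; a low
    # producer value indicates an old TensorFlow saved the model.
    print("producer version:", graph_def.versions.producer)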

This model should be fine.

When I run the SSD test at https://github.com/apache/incubator-tvm/blob/3e72be58f362a21dbcc1de36f9dbed216e854baf/tests/python/frontend/tensorflow/test_forward.py#L3828, I hit Segmentation fault (core dumped), and the same error at https://github.com/apache/incubator-tvm/blob/3e72be58f362a21dbcc1de36f9dbed216e854baf/tests/python/frontend/tflite/test_forward.py#L2691.

I used gdb with bt to debug; the error is:

    Thread 1 "python3" received signal SIGSEGV, Segmentation fault.
    0x00007ff22169acd4 in llvm::EVT::getExtendedVectorNumElements() const () from /home/tvm/build/libtvm.so

I also tested with LLVM 9.0 and 10.0 and hit the same problem. How do I solve this segmentation fault?

I cloned the latest code; the old error is gone, but there is a new one:

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /root/tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6b) [0x7fa68cb7e86b]
  [bt] (7) /root/tvm/build/libtvm.so(tvm::relay::fold_scale_axis::ForwardPrep::VisitExpr_(tvm::relay::CallNode const*)+0x11) [0x7fa68ca59521]
  [bt] (6) /root/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr_(tvm::relay::CallNode const*)+0x13c) [0x7fa68cbbeeec]
  [bt] (5) /root/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x7b) [0x7fa68cbc1afb]
  [bt] (4) /root/tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6b) [0x7fa68cb7e86b]
  [bt] (3) /root/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::RelayExpr const&)+0x7b) [0x7fa68cbc1afb]
  [bt] (2) /root/tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x6b) [0x7fa68cb7e86b]
  [bt] (1) /root/tvm/build/libtvm.so(tvm::relay::fold_scale_axis::ForwardPrep::VisitExpr_(tvm::relay::LetNode const*)+0x3b) [0x7fa68ca55b4b]
  [bt] (0) /root/tvm/build/libtvm.so(+0x294f350) [0x7fa68ca4c350]
  File "/root/tvm/src/relay/transforms/fold_scale_axis.cc", line 246
TVMError: FoldScaleAxis only accept dataflow-form

Disable the FoldScaleAxis pass. You can refer to the TF test_forward_ssd test.

How can I disable the FoldScaleAxis pass? If possible, please add a link regarding this.

I found a link regarding this, but it seems to be for an older version of TVM.
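
For reference, on recent TVM the pass can be disabled through the PassContext used for building, which matches what the SSD test does; a minimal sketch, assuming mod and params already came out of the frontend:

    import tvm
    from tvm import relay

    # mod/params as returned by relay.frontend.from_tensorflow(...).
    with tvm.transform.PassContext(opt_level=3, disabled_pass=["FoldScaleAxis"]):
        lib = relay.build(mod, target="llvm", params=params)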