How to use dynamic-shape ops like argwhere with relay.build instead of relay.create_executor

I saw that there are some ops that support dynamic input or output tensor shapes, such as argwhere in test_any.py.

But all of these examples use relay.create_executor to get a runtime-optimized model. When I use relay.build to get a serialized graph and lib, I get the following error:

TVMError: Traceback (most recent call last):
[bt] (8) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::StorageAllocator::Plan(tvm::relay::Function const&)+0x6a4) [0x7fe042f8f8f4]
[bt] (7) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::StorageAllocaBaseVisitor::GetToken(tvm::relay::Expr const&)+0x28) [0x7fe042f8b538]
[bt] (6) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::ExprVisitor::VisitExpr(tvm::relay::Expr const&)+0x83) [0x7fe04303d763]
[bt] (5) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::ExprFunctor<void (tvm::relay::Expr const&)>::VisitExpr(tvm::relay::Expr const&)+0x67) [0x7fe042e7d407]
[bt] (4) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::NodeFunctor<void (tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<void (tvm::relay::Expr const&)>)>::operator()(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<void (tvm::relay::Expr const&)>) const+0x57) [0x7fe042e73237]
[bt] (3) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::StorageAllocator::VisitExpr_(tvm::relay::CallNode const*)+0x196) [0x7fe042f8cc36]
[bt] (2) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::StorageAllocator::CreateToken(tvm::relay::ExprNode const*, bool)+0x14d) [0x7fe042f8be0d]
[bt] (1) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(tvm::relay::StorageAllocator::GetMemorySize(tvm::relay::StorageToken*)+0xa36) [0x7fe042f8aa96]
[bt] (0) /home/liulingzhi1/notespace/tvm/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x32) [0x7fe042930b92]
File "/home/liulingzhi1/notespace/tvm/tvm/src/relay/backend/graph_plan_memory.cc", line 298
TVMError: Check failed: pval != nullptr: Cannot allocate memory symbolic tensor shape [?, 2]
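Roughly what I am running is sketched below (the input name and shape are just placeholders, and the module/build_config APIs are the ones from the TVM version I am on; newer releases spell them differently):

import tvm
from tvm import relay

x = relay.var("x", shape=(3, 4), dtype="float32")
y = relay.argwhere(x)                       # output shape is (?, 2): dynamic in the first dim
mod = tvm.IRModule.from_expr(relay.Function([x], y))

# The memory-planning error above is raised here:
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(mod, target="llvm")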

So how can I use relay.argwhere with relay.build?

relay.build is only for static models. The memory planning pass expects static shape information for every tensor, and GraphRuntime needs to set up/reserve memory space before execution; no dynamic behavior is allowed in either of them. So you need to use either the Relay interpreter or the VM for execution.
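A minimal sketch of what that looks like for argwhere (the input name and shape are made up, and this assumes the create_executor signature that takes ctx=; newer TVM releases take device= instead):

import numpy as np
import tvm
from tvm import relay

x = relay.var("x", shape=(3, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.argwhere(x)))

# "vm" (or "debug" for the interpreter) can handle the dynamic (?, 2) output.
ex = relay.create_executor("vm", mod=mod, ctx=tvm.cpu(), target="llvm")
data = np.random.randint(0, 2, size=(3, 4)).astype("float32")
out = ex.evaluate()(data)
print(out.shape)   # (number_of_nonzero_entries, 2)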

Thanks for your reply, and sorry that it took me some time to understand your explanation, but I still have some questions.

In compile_engine.cc, in MakeShapeFunc, I found:

Array<Tensor> inputs;
int count_tuple = 0;
for (Expr arg : call_node->args) {
  if (arg->checked_type().as<TupleTypeNode>()) {
    ++count_tuple;
  }
  for (Tensor tensor : VisitExpr(arg)) {
    inputs.push_back(tensor);
  }
}
if (count_tuple) {
  CHECK_EQ(call_node->args.size(), 1U)
    << "Only allow function with a single tuple input";
}
// Get output ndims
auto ret_type = call_node->checked_type();
Array<IndexExpr> out_ndims;
if (const auto* ttype = ret_type.as<TensorTypeNode>()) {
  out_ndims.push_back(IntImm::make(Int(32), ttype->shape.size()));
} else {
  auto rtype = ret_type.as<TupleTypeNode>();
  // TODO(@icemelon): Allow recursive tuple
  CHECK(rtype);
  for (size_t i = 0; i < rtype->fields.size(); ++i) {
    auto ttype = rtype->fields[i].as<TensorTypeNode>();
    CHECK(ttype);
    out_ndims.push_back(IntImm::make(Int(32), ttype->shape.size()));
  }
}
// Call shape function
auto outputs = fshape_func[op](call_node->attrs, inputs, out_ndims);

At the bottom of the code above, it calls fshape_func, and the argwhere op's fshape_func is defined in python/tvm/relay/op/_transform.py:

@script
def _argwhere_shape_func_5d(condition):
    out = output_tensor((2, ), "int64")
    out[0] = int64(0)
    out[1] = int64(5)
    for i1 in range(condition.shape[0]):
        for i2 in range(condition.shape[1]):
            for i3 in range(condition.shape[2]):
                for i4 in range(condition.shape[3]):
                    for i5 in range(condition.shape[4]):
                        if condition[i1, i2, i3, i4, i5] != 0:
                            out[0] += int64(1)
    return out

@_reg.register_shape_func("argwhere", True)
def argwhere_shape_func(attrs, inputs, out_ndims):
    if len(inputs[0].shape) == 1:
        return [_argwhere_shape_func_1d(inputs[0])]
    elif len(inputs[0].shape) == 2:
        return [_argwhere_shape_func_2d(inputs[0])]
    elif len(inputs[0].shape) == 3:
        return [_argwhere_shape_func_3d(inputs[0])]
    elif len(inputs[0].shape) == 4:
        return [_argwhere_shape_func_4d(inputs[0])]
    elif len(inputs[0].shape) == 5:
        return [_argwhere_shape_func_5d(inputs[0])]
    raise ValueError("Does not support rank higher than 5 in argwhere")

The shape_func computes the output tensor's shape from the input, so the output tensor has a dynamic shape (a small illustration is below). Given this, how can TVM build a static model?
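For example, here is a plain-NumPy illustration of what the 2-D shape function computes (the real one runs as TVM hybrid script; this only shows that the output shape depends on the data):

import numpy as np

condition = np.array([[0, 1, 0],
                      [2, 0, 3]], dtype="float32")

# The shape function counts the nonzero entries: output shape is (count, ndim).
out_shape = (int(np.count_nonzero(condition)), condition.ndim)
print(out_shape)                     # (3, 2)
print(np.argwhere(condition).shape)  # (3, 2) -- matches the runtime output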

I have the same issue here. May I ask whether you solved this problem? Thank you!