[PyTorch] Returning ListConstruct is not handled in PyTorch parser

I have a graph which returns the output of a ListConstruct. I get the following error when trying to convert it to Relay:

Traceback (most recent call last):

....

  File "pytorch_to_relay.py", line 125, in compile_
    mod, params = relay.frontend.from_pytorch(trace_model, inp_shape)

  File "/tvm/python/tvm/relay/frontend/pytorch.py", line 2190, in from_pytorch
    outputs, ret_name, convert_map, prelude)

  File "/tvm/python/tvm/relay/frontend/pytorch.py", line 2078, in convert_operators
    elif operator == "prim::ListConstruct" and _should_construct_dynamic_list(op_node):

  File "/tvm/python/tvm/relay/frontend/pytorch.py", line 112, in _should_construct_dynamic_list
    if is_used_by_list_add(filter(lambda use: use.user.kind() != "prim::Loop", uses)):

  File "/tvm/python/tvm/relay/frontend/pytorch.py", line 85, in is_used_by_list_add
    output_type = _get_node_type(use.user)

  File "/tvm/python/tvm/relay/frontend/pytorch.py", line 1652, in _get_node_type
    assert node.outputsSize() == 1

AssertionError

The node is a “prim::Return” node, whose outputsSize() is always 0. Is this expected behavior when you return a List[Tensor]?

Handling ListConstruct is tricky. In particular, returning ListConstruct as the output is not supported.

Since Relay cannot return a Python list, you shouldn’t expect to be able to get a Python list as output. If the output list is truly a variable-length, dynamic list, we can return a Relay List VM object. This requires using the VM runtime instead of the graph runtime; see the sketch below. I haven’t met this use case, so it is not implemented.
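For reference, a minimal sketch of going through the VM runtime (not tested against this model; `mod`, `params`, and `inp` are placeholders, and the exact compile/run API differs a bit between TVM versions):

```python
import tvm
from tvm import relay
from tvm.runtime import vm as vm_rt

# Assumed: `mod` and `params` come from relay.frontend.from_pytorch as above,
# and `inp` is a numpy array matching the model input shape.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    vm_exec = relay.vm.compile(mod, target=target, params=params)

ctx = tvm.cpu()
vm = vm_rt.VirtualMachine(vm_exec, ctx)
result = vm.invoke("main", tvm.nd.array(inp))
# If the graph returned a dynamic list, `result` would be a VM ADT object
# that can be iterated / indexed to get the individual tensors.
```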

@masahi: Thanks for the response. So the only supported return type when you have multiple outputs is a Tuple?

Actually, our test cases don’t cover the multiple-output case at all. For TupleConstruct we always return a Relay Tuple (see the sketch below), so I hope it would work.
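Roughly, the idea is that the converted inputs of a prim::TupleConstruct node get wrapped in a Relay Tuple. The snippet below is a paraphrase of that idea, not the exact code from pytorch.py:

```python
from tvm import relay

# Paraphrased sketch: when a prim::TupleConstruct node is hit during
# conversion, its already-converted inputs are wrapped in a Relay Tuple,
# so a model with multiple outputs comes back as a single Tuple value.
def convert_tuple_construct(converted_inputs):
    return relay.Tuple(converted_inputs)
```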

If you have a good use case for multiple output models, we can add tests for them.

I think huggingface bert-base-uncased returns multiple outputs? Like a Tuple()?

OK, if the number of outputs is fixed, a Tuple should be used.

The reason handling ListConstruct is tricky is that it is used both for creating a “static” list like padding=[1, 1] and a “dynamic” list to which a variable number of items can be appended. For the former, we should pass the static list to Relay ops directly, but the latter requires creating a Relay List ADT. The tricky part is how to distinguish the two cases; a rough illustration is below.
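As an illustration (hypothetical modules, just to show the two flavors):

```python
import torch
import torch.nn.functional as F

# Hypothetical illustration of the two kinds of ListConstruct discussed above.
class UsesStaticList(torch.nn.Module):
    def forward(self, x):
        # "static" list: the [1, 1] padding becomes a ListConstruct whose
        # elements are constants, so it can be passed to the Relay op directly.
        return F.pad(x, [1, 1])

class UsesDynamicList(torch.nn.Module):
    def forward(self, x):
        # "dynamic" list: items are appended in a loop, so the list length is
        # not known statically and a Relay List ADT would be needed.
        outputs = []
        for i in range(x.size(0)):
            outputs.append(x[i] * 2)
        return torch.stack(outputs)
```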

Cool, thanks :slight_smile:. I will try Tuples, since the number of outputs is defined at graph parsing time. Just out of curiosity, do you have an example of how to use the Relay VM object (for the dynamic list case in PyTorch)?

Our LSTM tests cover dynamic models.

I realized that the models there return a tuple (like return torch.stack(outputs), state), so the multiple-output case is actually tested.

Below is an example of how you can retrieve output tensors from a Relay Tuple object.
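A minimal sketch of the idea, assuming the graph runtime and a model whose Relay function returns a two-element Tuple (the input name, shape, and exact build API are assumptions and vary across TVM versions):

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_runtime

# Assumed: `mod` and `params` come from relay.frontend.from_pytorch on a
# model that returns a tuple of two tensors; "input0" and the shape below
# are placeholders for this sketch.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    graph, lib, build_params = relay.build(mod, target=target, params=params)

ctx = tvm.cpu()
runtime = graph_runtime.create(graph, lib, ctx)
runtime.set_input(**build_params)
runtime.set_input("input0", np.random.randn(1, 3, 224, 224).astype("float32"))
runtime.run()

# Each element of the returned Relay Tuple is exposed as a separate output.
out0 = runtime.get_output(0).asnumpy()
out1 = runtime.get_output(1).asnumpy()
```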

Also, if you want to support returning a list, you can modify the _should_construct_dynamic_list function (from which you got the error above); see below.

For example, if this ListConstruct node is consumed by prim::Return, we should return True from this function. Then you’d get a List VM object as output.
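A rough sketch of what that tweak could look like (paraphrased, not the actual frontend code; the helper name is made up):

```python
# Hypothetical helper: treat a ListConstruct that is consumed by prim::Return
# as dynamic, so the frontend builds a Relay List ADT for it and the VM
# returns a List object instead of tripping the assert above.
def _is_returned(list_construct_node):
    return any(use.user.kind() == "prim::Return"
               for use in list_construct_node.output().uses())

# Then, near the top of _should_construct_dynamic_list:
#     if _is_returned(op_node):
#         return True
```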

This is helpful :slight_smile:. In the meantime, I tried a Tuple with two inputs. I get the following error. Just curious to know whether this is expected?

  File "tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 216, in __call__
    raise get_last_ffi_error()
  [bt] (8) 9   libtvm.dylib                        0x000000012f92fc44 std::__1::__function::__func<tvm::$_5, std::__1::allocator<tvm::$_5>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 84
  [bt] (7) 8   libtvm.dylib                        0x000000012f92fda7 void std::__1::__invoke_void_return_wrapper<void>::__call<tvm::$_5&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*>(tvm::$_5&&&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 167
  [bt] (6) 7   libtvm.dylib                        0x000000012f92fe62 tvm::$_5::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 178
  [bt] (5) 6   libtvm.dylib                        0x000000012f921c68 tvm::GenericFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 1944
  [bt] (4) 5   libtvm.dylib                        0x000000012f0c0dd5 tvm::runtime::PackedFunc::CallPacked(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 85
  [bt] (3) 4   libtvm.dylib                        0x000000012f0c1cfb std::__1::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 155
  [bt] (2) 3   libtvm.dylib                        0x0000000130743149 std::__1::__function::__func<TVMFuncCreateFromCFunc::$_2, std::__1::allocator<TVMFuncCreateFromCFunc::$_2>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 73
  [bt] (1) 2   libtvm.dylib                        0x00000001307434b7 void std::__1::__invoke_void_return_wrapper<void>::__call<TVMFuncCreateFromCFunc::$_2&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*>(TVMFuncCreateFromCFunc::$_2&&&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 167
  [bt] (0) 1   libtvm.dylib                        0x0000000130743588 TVMFuncCreateFromCFunc::$_2::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const + 200
  File "tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 78, in cfun
    rv = local_pyfunc(*pyargs)
  File "tvm/python/tvm/relay/op/strategy/x86.py", line 236, in dense_strategy_cpu
    m, _ = inputs[0].shape
ValueError: too many values to unpack (expected 2)

The error seems to come from relay.build, which means the PyTorch -> Relay conversion itself didn’t raise any error. So yeah, the conversion seems to be working.
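For what it’s worth, the `m, _ = inputs[0].shape` unpack in dense_strategy_cpu assumes a 2D input to the dense op, so the ValueError suggests an nn.Linear is receiving a tensor with more than two dimensions. A possible PyTorch-side workaround, assuming that is indeed the cause here, is to flatten the extra dimensions first:

```python
import torch

class FlattenThenLinear(torch.nn.Module):
    # Hypothetical workaround sketch: collapse leading dimensions so the
    # Relay dense op sees a 2D (batch, features) input.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.fc = torch.nn.Linear(in_features, out_features)

    def forward(self, x):
        x = x.reshape(-1, x.shape[-1])
        return self.fc(x)
```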