Runtime GetFunction(...): How to differentiate tensors in TVMArgs?

I’m attempting to implement my own runtime module. My understanding of GetFunction is as follows:

  1. The module is queried for the code corresponding to “func_name”.
  2. A PackedFunc that uses this code is created and returned.
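
Roughly, I picture something like the sketch below (MyModuleNode, its `code_` map, and `Run` are placeholders of mine, and the exact GetFunction signature may differ across TVM versions):

```cpp
#include <tvm/runtime/module.h>
#include <tvm/runtime/packed_func.h>

#include <string>
#include <unordered_map>

namespace tvm {
namespace runtime {

// Placeholder custom module; only the GetFunction flow is sketched.
class MyModuleNode : public ModuleNode {
 public:
  const char* type_key() const final { return "my_backend"; }

  // 1. The module is queried with "name".
  // 2. If we have code for it, wrap that code in a PackedFunc and return it.
  PackedFunc GetFunction(const std::string& name,
                         const ObjectPtr<Object>& sptr_to_self) final {
    if (code_.count(name) == 0) return PackedFunc(nullptr);
    return PackedFunc([this, name](TVMArgs args, TVMRetValue* rv) {
      this->Run(name, args);  // backend-specific execution
    });
  }

 private:
  void Run(const std::string& name, TVMArgs args);
  std::unordered_map<std::string, std::string> code_;  // "func_name" -> generated code
};

}  // namespace runtime
}  // namespace tvm
```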

The PackedFunc has access to, at least, TVMArgs and TVMRetValue. I see that TVMArgs contains pointers to both the input and output DLTensors, with the leading entries being inputs and the trailing entries being outputs.

The implementation will run the code associated with “func_name” on the input tensors and store the results at the data pointers of the output DLTensors.
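
Continuing the sketch, this is how I understand the PackedFunc body would access the tensors: TVMArgs entries convert directly to DLTensor*. The (2 inputs + 1 output) layout, contiguous float32 data, and zero byte_offset are assumptions of mine purely for illustration:

```cpp
#include <tvm/runtime/packed_func.h>

// Hypothetical body of the PackedFunc returned above.
void RunAdd(tvm::runtime::TVMArgs args) {
  DLTensor* in0 = args[0];
  DLTensor* in1 = args[1];
  DLTensor* out = args[2];  // result goes to this tensor's data pointer

  // Total element count, assuming all tensors share in0's shape.
  int64_t n = 1;
  for (int i = 0; i < in0->ndim; ++i) n *= in0->shape[i];

  const float* a = static_cast<const float*>(in0->data);
  const float* b = static_cast<const float*>(in1->data);
  float* c = static_cast<float*>(out->data);
  for (int64_t i = 0; i < n; ++i) c[i] = a[i] + b[i];  // e.g. elementwise add
}
```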

My questions:

  1. What is the mechanism for differentiating the input tensors from the output tensors (consider that an op may have more than one output)? That is, how would my backend know that it received (2 inputs + 1 output) rather than (1 input + 2 outputs), or some other split? My idea is to codegen some metadata alongside the backend code returned by the query (see the sketch after this list).
  2. What is the mechanism for differentiating input tensors that hold variable data from input tensors that hold static (weight) data?
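
To make question 1 concrete, this is the kind of per-function metadata I am thinking of codegen-ing next to the backend code. None of these names are existing TVM structures; they just carry the argument layout:

```cpp
#include <vector>

// Hypothetical metadata emitted by my codegen for each "func_name".
struct FuncMeta {
  int num_inputs;                 // leading TVMArgs entries are inputs
  int num_outputs;                // trailing TVMArgs entries are outputs
  std::vector<bool> is_constant;  // per input: true if static (weight) data
};

// In the PackedFunc body the split could then be recovered as:
//   inputs  = args[0 .. num_inputs)
//   outputs = args[num_inputs .. num_inputs + num_outputs)
```

Is this the intended approach, or does TVM already provide this information to the runtime module?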