# Semantics of Provide / Realize nodes

#1

I don’t entirely understand the semantics of the Provide / Realize IR nodes, and can’t find the docs for them. Is there an explanation somewhere I could read?

I’m also confused about the distinction between a ComputeOp and a ProducerConsumer node / For node. Is the For node the thing that computes the ComputeOp?

#2

They are part of HalideIR and originate from Halide. Basically, Provide means “store into an N-dimensional array”, and Realize roughly means “allocate the tensor over that region” (the bound inferencer calculates the necessary region of the array).
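A rough way to picture those two nodes (this is an illustrative plain-Python model of the semantics, not TVM code or its actual data structures): Realize makes a bounded region of an N-dimensional tensor available, and Provide stores a value at a multi-dimensional index inside that region.

```python
# Illustrative model only: Realize ~ "make this region of the tensor
# available", Provide ~ "store a value at an N-d index". Bounds are
# (min, extent) pairs, one per dimension, like a Region of Ranges.

def realize(bounds):
    """Allocate nested storage covering `bounds`, a list of (min, extent)."""
    (mn, extent), *rest = bounds  # mn kept for symmetry with provide()
    if not rest:
        return [None] * extent
    return [realize(rest) for _ in range(extent)]

def provide(tensor, bounds, index, value):
    """Store `value` at N-d `index`, offsetting by each dimension's min."""
    cell = tensor
    for (mn, _), i in zip(bounds[:-1], index[:-1]):
        cell = cell[i - mn]
    cell[index[-1] - bounds[-1][0]] = value

bounds = [(0, 2), (0, 3)]        # "realize" a 2x3 region
t = realize(bounds)
provide(t, bounds, (1, 2), 42.0)
print(t[1][2])                   # 42.0
```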

#3

I cannot understand the point of having these pairs of classes: Realize/Allocate, Provide/Store, Call::Halide/Load… I think the latter in each pair is just a one-dimensional special case of the former.

#4

Hi Cherry,
Can you help to explain “ProducerConsumer” a little more?
Thank you

#5

Hi Qiu,
From my point of view, ProducerConsumer is a feature first used in Halide. In TVM, each tensor (or buffer) goes through exactly three stages (except for the output): Allocate -> Produce -> Consume. It’s straightforward: the Allocate phase allocates the memory used to store that buffer, the Produce phase computes the value of each element in that buffer, and the Consume phase then uses those values. In the Produce phase the buffer is WRITE ONLY, and in the Consume phase it is READ ONLY.
For example, for the following TVM code

```python
a = tvm.placeholder([10])
b = tvm.compute((10,), lambda i: a[i], name='b')
c = tvm.compute((10,), lambda i: b[i], name='c')
```

and it will generate the following stmt:

```
// attr [b] storage_scope = "global"
allocate b[float32 * 10]
produce b {
  for (i, 0, 10) {
    b[i] = placeholder[i]
  }
}
produce c {
  for (i, 0, 10) {
    c[i] = b[i]
  }
}
```

The `produce b` block produces tensor (buffer) b, and `produce c` produces tensor c while also consuming tensor b.
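The Allocate -> Produce -> Consume discipline in that stmt can be mirrored in plain Python (an illustrative sketch, not TVM code):

```python
# Mirror of the generated stmt above: allocate b, produce b (b is
# write-only here), then produce c / consume b (b is read-only here).

placeholder = [float(i) for i in range(10)]

# Allocate: reserve storage for b.
b = [None] * 10

# Produce b: write every element of b.
for i in range(10):
    b[i] = placeholder[i]

# Produce c (also "consume b"): read b, write c.
c = [None] * 10
for i in range(10):
    c[i] = b[i]

print(c[7])  # 7.0
```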

#6

Hi,
For Realize/Allocate and Provide/Store, I’m not clear about their relation either, but I cannot agree that the latter is only a special case of the former.
Take Realize/Allocate for example: the definitions are different.
```cpp
class Realize : public StmtNode {
 public:
  /*! \brief The function to be realized. */
  FunctionRef func;
  /*! \brief The output value index if func's value is a tuple. */
  int value_index;
  /*! \brief The data type of the array. */
  DataType type;
  /*! \brief Bounds to be realized. */
  Region bounds;
  /*! \brief Only realize if condition holds. */
  Expr condition;
  /*! \brief The body of realization. */
  Stmt body;
};
```

```cpp
class Allocate : public StmtNode {
 public:
  /*! \brief The buffer variable. */
  Var buffer_var;
  /*! \brief The type of the buffer. */
  DataType type;
  /*! \brief The extents of the buffer. */
  Array<Expr> extents;
  /*! \brief Only allocate buffer when condition is satisfied. */
  Expr condition;
  /*! \brief The body to be executed. */
  Stmt body;
  // The following two fields are deprecated,
  // kept for backward compatibility, and will be refactored later.
  Expr new_expr;
  std::string free_function;
};
```
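One concrete structural difference visible in the two definitions: Realize carries `Region bounds`, a `(min, extent)` Range per dimension of the tensor, while Allocate only carries flat `extents` for the buffer. A toy sketch of that correspondence (my own illustration, not the actual lowering pass):

```python
from functools import reduce

# Hypothetical illustration: a Realize's per-dimension (min, extent)
# bounds supply the extents of the Allocate it lowers to; the mins get
# folded into index arithmetic rather than stored on the buffer.

realize_bounds = [(2, 4), (0, 8), (1, 16)]   # three Range(min, extent) pairs

allocate_extents = [extent for (_, extent) in realize_bounds]
num_elements = reduce(lambda a, b: a * b, allocate_extents, 1)

print(allocate_extents)  # [4, 8, 16]
print(num_elements)      # 512
```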

And are they related to the function BuildRealize()?

```cpp
/*!
 * \brief Build the Realize statement that realizes
 *  the op's output tensors.
 */
virtual Stmt BuildRealize()
```

The description of BuildRealize is hard to understand.

Can you help me to clarify these relations?
Thanks a lot

#7

Hi,
Yes, I believe you are right! Now I think Realize and Provide operate at the level of Tensor (which is only a concept and has no exact relationship with real memory on the device), while Allocate and Store work on real memory, at the level of Buffer.
There is a really simple case which can help us understand the difference between a tensor and memory.
Suppose I have a tensor in PyTorch and I would like to set all its elements to zero. How?
In the naive TVM use case, if I write

```python
a = tvm.placeholder((n,))
b = tvm.compute(a.shape, lambda i: 0)
```

it will create a new buffer, because by default each tensor corresponds to a different buffer. But if I bind tensors a and b to the same buffer, I can do an in-place operation.
So in the above case, a and b are tensor-level concepts, and their memory is a buffer-level concept.
TVM first represents the program at the tensor level and then lowers it to the memory level. So after some lowering passes, Realize and Provide are replaced by Allocate and Store.
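The heart of that lowering (in TVM it is done by the StorageFlatten pass) is turning a multi-dimensional Provide index into a flat Store offset. A minimal sketch of the row-major linearization, assuming contiguous row-major layout:

```python
# Illustrative sketch of flattening a Provide's N-d index into the flat
# offset a Store uses on the underlying buffer (row-major layout assumed).

def flatten_index(index, extents):
    """Row-major linearization: offset = ((i0*e1 + i1)*e2 + i2) ..."""
    offset = 0
    for i, extent in zip(index, extents):
        offset = offset * extent + i
    return offset

extents = [4, 8, 16]
print(flatten_index((1, 2, 3), extents))  # 1*8*16 + 2*16 + 3 = 163
```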