Comparison between Tiramisu and TVM (and Halide)


#1

I have recently read this paper: Tiramisu: A Code Optimization Framework for High Performance Systems. The paper has a table summarizing a comparison of Tiramisu with other similar compiler frameworks such as Halide.

[Screenshot: comparison table from the Tiramisu paper]

My question is whether the comparison results stay the same (at least on the criteria discussed in the paper) if we compare Tiramisu with TVM instead of with Halide.


#2

From the paper:

Halide is an image processing DSL that has a scheduling language; however, it uses intervals to represent iteration spaces instead of the polyhedral model. This limits the expressiveness of Halide. For example, unlike Tiramisu, Halide cannot naturally represent non-rectangular iteration spaces. This is the reason why Halide distributed over-approximates the amount of data to communicate (send and receive) when generating distributed code. This also makes certain Halide passes over-approximate non-rectangular iteration spaces, potentially leading to less efficient code (for example, it prevents Halide from performing precise bounds inference for non-rectangular iteration spaces). It also prevents Halide from performing many complex affine transformations, such as iteration space skewing.

Halide does not have dependence analysis and thus it relies on conservative rules to determine whether a schedule is legal; for example, Halide does not allow the fusion of two loops (using the compute_with command) if the second loop reads a value produced by the first loop. While this rule avoids illegal fusion, it prevents fusing many legal common cases, which may lead to suboptimal performance. Halide also assumes the program has an acyclic dataflow graph in order to simplify checking the legality of a schedule. This prevents users from expressing many programs with cyclic dataflow. It is possible in some cases to work around the above restrictions, but such methods are not general. Tiramisu avoids over-conservative constraints by relying on dependence analysis to check for the correctness of code transformations, enabling more possible schedules.
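To make the iteration-space point concrete, here is a small self-contained Python sketch (not code from Halide or Tiramisu; the size N is just an illustrative value). A polyhedral representation can describe a triangular loop nest exactly, while a per-loop interval can only describe the enclosing rectangle, which is the kind of over-approximation the paper refers to.

```python
# Non-rectangular (triangular) iteration space: the inner bound depends on the
# outer index. The polyhedral model captures {(i, j) : 0 <= i < N, 0 <= j <= i}
# exactly; an interval per loop can only describe the bounding box 0 <= i, j < N.

N = 8  # illustrative size only

triangle = [(i, j) for i in range(N) for j in range(i + 1)]
box = [(i, j) for i in range(N) for j in range(N)]

print(len(triangle))  # 36 points actually executed (N * (N + 1) / 2)
print(len(box))       # 64 points in the interval over-approximation (N * N)
```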

It is great to see this space heating up. I think the first question is valid, but a broader discussion about having TVM be less tightly bound to Halide would also be wise.


#3

It is interesting. Please note that while TVM uses HalideIR, which is derived from Halide, most of the code generation and optimization passes are done independently (with deep learning workloads in mind), while reusing the sensible ones from Halide. So in terms of low-level code generation, we are not necessarily bound to some of the limitations listed.

In particular, we take a pragmatic approach and focus on what is useful for deep learning workloads, so you can find unique things like more GPU optimization, accelerator support, and recurrence (scan). If there are optimizations in Tiramisu that are useful for reaching state-of-the-art performance on deep learning workloads, we are all for bringing them into TVM.
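As a rough illustration of the recurrence (scan) support mentioned above, here is a minimal sketch along the lines of the public TVM scan tutorial (a cumulative sum over the rows of a matrix); exact API details may differ between TVM versions.

```python
import tvm
from tvm import te

# Cumulative sum over the first axis, expressed as a recurrence with te.scan.
m = te.var("m")
n = te.var("n")
X = te.placeholder((m, n), name="X")

s_state = te.placeholder((m, n), name="s_state")   # carried state
s_init = te.compute((1, n), lambda _, i: X[0, i])  # first row initializes the scan
s_update = te.compute((m, n),
                      lambda t, i: s_state[t - 1, i] + X[t, i])
cumsum = te.scan(s_init, s_update, s_state, inputs=[X], name="cumsum")

sched = te.create_schedule(cumsum.op)
print(tvm.lower(sched, [X, cumsum], simple_mode=True))
```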

I also want to emphasize that TVM is more than low-level tensor code generation; it is trying to solve the end-to-end deep learning compilation problem, and many of its pieces go beyond tensor code generation.
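To give a feel for what "end to end" means here, this is a minimal, hedged sketch of compiling and running a tiny function through the Relay API and the graph executor; the variable names and shapes are placeholders, and the exact API varies across TVM releases.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

# A tiny Relay "model": y = relu(x + 1), compiled end to end for the CPU.
x = relay.var("x", shape=(1, 4), dtype="float32")
y = relay.nn.relu(x + relay.const(1.0))
mod = tvm.IRModule.from_expr(relay.Function([x], y))

with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm")

dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("x", np.array([[-1.0, 0.0, 1.0, 2.0]], dtype="float32"))
rt.run()
print(rt.get_output(0).numpy())  # [[0. 1. 2. 3.]]
```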