Halide-like IR and codegen in development at one of FB's PyTorch forks

Hi, I just wanted to share what I found today, in case somebody else is also interested:

There is a very active PyTorch fork at https://github.com/bertmaher/pytorch which seems to be adding support for a lower-level tensor expression IR alongside the existing Graph IR.

Most of the development is happening under the jit/tensorexpr directory, where you can find the usual suspects: Expr and Stmt classes, ScheduleNode, etc. For backends they have an interpreter, LLVM, and CUDA. Some of the components are already being upstreamed under the [TensorExpr] tag; see for example https://github.com/pytorch/pytorch/pull/33218
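For readers unfamiliar with Halide-style IRs, here is a minimal, hypothetical sketch of the general idea (all class and function names below are made up for illustration and are not the fork's actual API): a computation is written as an expression tree over index variables, which is then lowered into explicit loop statements that a backend (interpreter, LLVM, CUDA) can compile.

```python
# Hypothetical sketch of a Halide-style tensor expression IR.
# Not the fork's actual API; names are illustrative only.
from dataclasses import dataclass

@dataclass
class Var:        # index variable, e.g. a loop counter
    name: str

@dataclass
class Load:       # expression: read buf[idx]
    buf: str
    idx: Var

@dataclass
class Add:        # expression: pointwise addition
    lhs: object
    rhs: object

@dataclass
class Store:      # statement: buf[idx] = value
    buf: str
    idx: Var
    value: object

@dataclass
class For:        # statement: loop over var in [0, extent)
    var: Var
    extent: int
    body: object

def lower_pointwise_add(n: int) -> For:
    """Lower C[i] = A[i] + B[i] into an explicit loop nest."""
    i = Var("i")
    return For(i, n, Store("C", i, Add(Load("A", i), Load("B", i))))

print(lower_pointwise_add(16))
```

The point of splitting expressions from statements this way is that scheduling decisions (tiling, vectorization, fusion) can rewrite the loop structure without touching the math being computed.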

I think this effort shares a goal with their TVM integration last year, namely better support for operator fusion. Personally I find Torch IR very interesting and I am excited to see this new development.


This is an interesting development. It might also bring more chances for interoperation and for taking advantage of what the TVM stack can offer.

It is great to see more momentum behind compilation approaches, and there are a lot more exciting things to be explored. This year we are moving beyond the tensor expression IR towards a unified IR and runtime infrastructure; hopefully these efforts will bring more value to the overall deep learning OSS ecosystem.


Bringing compilation techniques into frameworks is clearly the future for all deep learning stacks, both software and hardware. Glad to see that PyTorch is working towards this goal as well.