In the paper, are the JIT-compiled micro-kernels computed in the PS, while GEMM and the ALU are in the PL?

I’m a student and new to this.
I read the paper TVM: An Automated End-to-End Optimizing Compiler for Deep Learning, and it mentions that TVM uses an FPGA for acceleration. My understanding is that the GEMM and ALU units are on the PL side, but the micro-kernels compiled by JIT for flexibility run on the PS, because they are generated by software rather than being part of the bitstream. Is that right?
If the above is correct, I didn’t find any *.bit file in the GitHub project under tvm/vta/. Do we need to generate the bitstream ourselves with Vivado?
Thanks a lot~

Yes, and it is called VTA. You can find more information here:

https://docs.tvm.ai/vta/index.html

Thank you for your reply.
I read the documentation, but two things still confuse me.
1. (New users can’t post images here, so I’ll link the page instead.)
In this demo, search for “bitstream”: https://docs.tvm.ai/vta/tutorials/frontend/deploy_resnet_on_vta.html#sphx-glr-vta-tutorials-frontend-deploy-resnet-on-vta-py
Why does the demo call program_fpga with bitstream set to None? The LOAD, COMPUTE, and STORE modules are in the FPGA, so how does it get the hardware information?
2. In this paper, search for “The runtime can readily make use of schedules to”: https://arxiv.org/pdf/1807.04188.pdf

How do the micro-kernels compiled by JIT work? My understanding is that they turn new operators into a mix of GEMM and ALU operations for computation. Is that right?

Hey,

Can you elaborate a little bit more on (1)?

Regarding (2), you can find more information about the VTA runtime and how it is used in the VTA pass.

(1) If you set it to None, it will download the default bitstream for the VTA parameterization. You can answer your question by looking at the definition of that function.
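
For concreteness, here is a minimal sketch of that flow, adapted from the deploy-on-VTA tutorial; the board address and port are hypothetical placeholders, and the tutorial itself obtains remote from an RPC tracker instead:

import vta
from tvm import rpc

# Hypothetical Pynq board address/port; the tutorial gets this from a tracker.
remote = rpc.connect("192.168.2.99", 9091)

# Rebuild the VTA runtime on the board for the current VTA configuration.
vta.reconfig_runtime(remote)

# bitstream=None: resolve the default bitstream path for the current VTA
# parameterization, download the matching prebuilt *.bit file, then upload
# and flash it over RPC, so no local Vivado build is needed.
vta.program_fpga(remote, bitstream=None)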

(2) Microcode generation is handled by the runtime; if you want to see how the micro-kernels are generated, you can add with vta.build_config(debug_flag=0x6): to the simpler tutorial examples before calling tvm.build().

For instance, in the simple matrix multiply tutorial, you can use:

with vta.build_config(debug_flag = 0x6):
    my_gemm = tvm.build(s, [A, B, C], "ext_dev",
                        env.target_host, name="my_gemm")

and that will get the runtime to print out the microcoded kernels that are generated.
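
One caveat, as a sketch (names like remote and temp follow the matrix multiply tutorial and may differ across TVM versions): the micro-kernels are JIT-generated when the module actually executes, so you need to run my_gemm on the target to see the dump:

from tvm.contrib import utils  # named util in older TVM releases

# Save the compiled module, ship it to the board, and load it over RPC.
temp = utils.tempdir()
my_gemm.save(temp.relpath("gemm.o"))
remote.upload(temp.relpath("gemm.o"))
f = remote.load_module("gemm.o")

# Calling f(...) with the input/output tvm.nd arrays (as in the tutorial)
# runs the host-side code that JIT-generates the micro-kernels on the
# device, which is when the debug dump is printed.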
