VTA released as TVM's newest Hardware Accelerator Backend


We are excited to announce the Versatile Tensor Accelerator (VTA), an extension of the TVM framework designed to advance deep learning and hardware innovation. VTA is a programmable accelerator that exposes a RISC-like programming abstraction to describe compute and memory operations at the tensor level. We designed VTA to expose the most salient and common characteristics of mainstream deep learning accelerators, such as tensor operations, DMA load/stores, and explicit compute/memory arbitration.
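To make the "compute at the tensor level" idea concrete, here is an illustrative sketch (in plain Python, not VTA's actual ISA or API) of the style of operation VTA's GEMM core performs: a whole-tile matrix-multiply step with 8-bit inputs accumulating into a 32-bit register file.

```python
def gemm_accumulate(acc, a, b):
    """Hypothetical tensor-level op: acc[i][j] += sum_k a[i][k] * b[k][j].

    a, b hold int8 values; acc holds int32 accumulators, mirroring the
    low-precision-input / wide-accumulator pattern common to DL accelerators.
    """
    rows, inner, cols = len(a), len(b), len(b[0])
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                acc[i][j] += a[i][k] * b[k][j]
    return acc
```

On real hardware one such instruction would consume an entire input tile per cycle rather than looping over scalars; the loop nest here only spells out the semantics.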

VTA is more than a standalone accelerator design: it’s an end-to-end solution that includes drivers, a JIT runtime, and an optimizing compiler stack based on TVM. The current release includes a behavioral hardware simulator, as well as the infrastructure to deploy VTA on low-cost FPGA hardware for fast prototyping. By extending the TVM stack with a customizable, open-source deep learning hardware accelerator design, we are exposing a transparent stack that spans from the high-level deep learning framework down to the actual hardware design and implementation. This forms a truly end-to-end, hardware-software open-source stack for deep learning systems.

See our complete blog post: https://tvm.ai/2018/07/12/vta-release-announcement.html



Hi @thierry, can you share how you quantized the ResNet-18 model here? I tried MXNet’s quantization, but many operators have different names compared to your network.


Dear @titikid, we’ve indeed applied our own fine-tuning approach to an MXNet model to obtain the 8-bit model. We’ll release support for model conversion very soon. I’ll notify you when it’s available.
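For anyone wanting to experiment in the meantime, here is a minimal sketch of symmetric per-tensor 8-bit quantization. This is an assumption for illustration only, not the fine-tuning flow used for the released ResNet-18 model:

```python
def quantize_int8(values):
    """Map floats to int8 with a symmetric per-tensor scale (hypothetical sketch)."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0  # reserve the full signed int8 range
    q = [max(-128, min(127, round(v / scale))) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from int8 codes."""
    return [v * scale for v in q]
```

A calibration-based flow would additionally pick `scale` from activation statistics rather than the raw min/max, and fine-tuning (as used here) adjusts the weights themselves to tolerate the reduced precision.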


Great! I’m looking forward to hearing from you.


Generic 8-bit model support is one of the main focuses of this release cycle; hopefully we can get something useful together in the community.


@tqchen is right: in order to have a reusable flow, support for 8-bit model conversion for VTA will be released in the next cycle.

We’ll let you know when it’s available.


Dear @thierry, do you have any updates on the quantization tool?


@titikid it’s still a work in progress as Relay (along with 8-bit support) is being rolled out. @tqchen do we have a timeline for this?


Any update on this? :slight_smile:


@nhynes it’s almost ready. The necessary 8-bit quantization support is WIP: https://github.com/dmlc/tvm/pull/2116

Most of Relay has been rolled out, graph conversion for VTA with graph packing is WIP, and the goal is to land it within the next week or so.