(1) I see — it may be a bug that is independent of the Python version and instead depends on the specific implementation of PIL. I'll need to drill down on this a little more to understand what's causing it.
(2) Actually, if you wait for the VTA update that we'll upstream soon, we'll release a collection of VTA designs generated from different vta_config.json files. The idea is that we can explore the space of VTA designs for a given workload by changing the tensor shapes and memory layout directly from the vta_config.json file. We can also tweak the bit widths of the different data types. The challenge here is to make sure we don't exceed FPGA resources or cause timing violations.
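To make the idea concrete, here is a rough sketch of what such a config file could look like. The field names and values below are illustrative only (the log-2 convention for widths and buffer sizes is an assumption for this sketch, not a reference for the released format):

```json
{
    "TARGET": "pynq",
    "LOG_INP_WIDTH": 3,
    "LOG_WGT_WIDTH": 3,
    "LOG_ACC_WIDTH": 5,
    "LOG_BATCH": 0,
    "LOG_BLOCK": 4,
    "LOG_UOP_BUFF_SIZE": 15,
    "LOG_INP_BUFF_SIZE": 15,
    "LOG_WGT_BUFF_SIZE": 18,
    "LOG_ACC_BUFF_SIZE": 17
}
```

Bumping, say, LOG_BLOCK from 4 to 5 would double the tensor intrinsic dimension, which is exactly the kind of knob that can blow past FPGA resource budgets if you're not careful.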
(3) VTA is indeed fixed-point hardware, but we could also extend it to support floats. Supporting different numerical representations like unums would require more footwork. If there is interest in a floating-point version of VTA, we could start a GitHub issue and look for contributors — this would actually be quite easy to do. Any volunteers?
(4) This is currently hard to do for two reasons. First, MobileNet is not an accelerator-friendly network, since its depthwise convolutions are really tuned for CPU inference. You can, however, use group convolutions — which I will also provide support for in the next week or two — and that would let us support a variant of MobileNet on VTA. Second, to have a working graph translator, we'll need to wait on Relay support for VTA, which @tqchen is driving.
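For reference, here's a minimal NumPy sketch of what a grouped 2D convolution computes (stride 1, no padding). This is just to show the semantics — each group of output channels only reads its own slice of input channels — not how VTA would implement it:

```python
import numpy as np

def grouped_conv2d(x, w, groups):
    """x: (N, C_in, H, W); w: (C_out, C_in // groups, KH, KW)."""
    N, C_in, H, W = x.shape
    C_out, _, KH, KW = w.shape
    assert C_in % groups == 0 and C_out % groups == 0
    cin_g, cout_g = C_in // groups, C_out // groups
    out_h, out_w = H - KH + 1, W - KW + 1
    out = np.zeros((N, C_out, out_h, out_w))
    for g in range(groups):
        # Each group sees only its own channel slices.
        xg = x[:, g * cin_g:(g + 1) * cin_g]
        wg = w[g * cout_g:(g + 1) * cout_g]
        for i in range(out_h):
            for j in range(out_w):
                patch = xg[:, :, i:i + KH, j:j + KW]  # (N, cin_g, KH, KW)
                out[:, g * cout_g:(g + 1) * cout_g, i, j] = np.tensordot(
                    patch, wg, axes=([1, 2, 3], [1, 2, 3]))
    return out
```

With groups == C_in this degenerates to a depthwise convolution, which is the MobileNet case that's awkward for accelerators.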
(5) The Zedboard is fairly old, so we don't have much demand for it. We are releasing support for the Ultra-96 and ZCU102 in the 0.5 release (by the end of the year). The challenge with the ZCU102 is that it's not officially supported as a Pynq board, so you'd have to build your own image. You can get started here: https://groups.google.com/forum/#!topic/pynq_project/z4zdtEovD9k