TVM quantization broken?

The MXNet version is 1.6.0 and TVM is 0.6.0. Both libraries were compiled from source on Ubuntu 16.04.

I'm not quite sure what the problem is or why it's segfaulting, as I'm able to use both TVM and MXNet from C++ with the same model, just obviously not quantized. Any direction would be appreciated.

Thanks! Matt

import numpy as np
import mxnet as mx
from mxnet import ndarray as nd
from matplotlib import pyplot as plt
import tvm
from tvm import relay
from tvm.contrib import graph_runtime
from tvm.contrib.download import download_testdata
from tvm.relay.testing.config import ctx_list
from gluoncv import model_zoo, data, utils

block = model_zoo.get_model('yolo3_mobilenet1.0_coco', pretrained=True)
dshape = (1, 3, 512, 512)
dtype = 'float32'
target = tvm.target.cuda()

mod, params = relay.frontend.from_mxnet(block, shape={'data': dshape})

with relay.quantize.qconfig():
    test = relay.quantize.quantize(mod, params)
Segmentation fault: 11

Stack trace:
  [bt] (0) /home/mkrzus/github/incubator-mxnet/python/mxnet/../../build/libmxnet.so(+0x43bc6e9) [0x7f54d3f056e9]
  [bt] (1) /lib/x86_64-linux-gnu/libc.so.6(+0x354b0) [0x7f553dfcb4b0]
  [bt] (2) /home/mkrzus/github/incubator-tvm/build/libtvm.so(tvm::relay::GetValidCountRel(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)+0x149) [0x7f54f9354f19]
  [bt] (3) /home/mkrzus/github/incubator-tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<bool (tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>::AssignTypedLambda<bool (*)(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&)>(bool (*)(tvm::Array<tvm::relay::Type, void> const&, int, tvm::Attrs const&, tvm::relay::TypeReporter const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0xd4) [0x7f54f91f7c84]
  [bt] (4) /home/mkrzus/github/incubator-tvm/build/libtvm.so(tvm::relay::TypeSolver::Solve()+0x3b0) [0x7f54f946a090]
  [bt] (5) /home/mkrzus/github/incubator-tvm/build/libtvm.so(tvm::relay::TypeInferencer::Infer(tvm::relay::Expr)+0x55) [0x7f54f94545e5]
  [bt] (6) /home/mkrzus/github/incubator-tvm/build/libtvm.so(tvm::relay::InferType(tvm::relay::Function const&, tvm::relay::Module const&, tvm::relay::GlobalVar const&)+0x1d7) [0x7f54f9454d97]
  [bt] (7) /home/mkrzus/github/incubator-tvm/build/libtvm.so(tvm::relay::ModuleNode::Add(tvm::relay::GlobalVar const&, tvm::relay::Function const&, bool)+0x28c) [0x7f54f95316bc]
  [bt] (8) /home/mkrzus/github/incubator-tvm/build/libtvm.so(tvm::relay::transform::FunctionPassNode::operator()(tvm::relay::Module const&, tvm::relay::transform::PassContext const&) const+0x591) [0x7f54f941ae51]

Hi Matt, I think I had a similar problem, and I raised a similar topic here:

My conclusion was that MXNet has been using ::mxnet::TShape to define operator attributes instead of nnvm::TShape, and the temporary fix, IMHO, was to revert to MXNet v1.3.1.
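If you want to fail fast instead of hitting the segfault, a small guard like the one below can reject the problematic pairing up front. This is only a sketch based on this thread: it assumes MXNet 1.4+ combined with TVM 0.6 is the bad combination, which is an inference from the reported versions, not something the thread confirms precisely.

```python
def check_versions(mxnet_version: str, tvm_version: str) -> None:
    """Raise early if the environment pairs MXNet >= 1.4 with TVM 0.6.x,
    the combination this thread suggests segfaults during quantization.
    The version boundary (1.4) is an assumption drawn from the reply above."""
    major, minor = (int(x) for x in mxnet_version.split(".")[:2])
    if (major, minor) >= (1, 4) and tvm_version.startswith("0.6"):
        raise RuntimeError(
            f"MXNet {mxnet_version} with TVM {tvm_version} may segfault during "
            "relay.quantize.quantize; consider reverting MXNet to v1.3.1."
        )

# Example usage with the versions from this thread:
check_versions("1.3.1", "0.6.0")  # passes silently
```

In a real script you would pass `mx.__version__` and `tvm.__version__` rather than literals.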


Thanks @liangfu! I'll give it a shot and see if that works.