Quantize Segmentation Fault

Hi:

I am trying to use quantization to accelerate inference of a TensorFlow graph.

Compiling with AutoTVM alone (no quantization) works fine, but when relay.quantize.quantize is applied, a segmentation fault occurs.
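A minimal sketch of the flow that hits the crash (the model file, input name/shape, and target below are placeholders, not my exact setup):

# Placeholder frozen graph, input name/shape, and target.
import tensorflow as tf
import tvm
from tvm import relay

with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

shape_dict = {"input": (1, 224, 224, 3)}
mod, params = relay.frontend.from_tensorflow(graph_def, shape=shape_dict)

# Building without the next two lines works; with them, the process
# dies with a segmentation fault inside relay.quantize.quantize.
with relay.quantize.qconfig():
    mod = relay.quantize.quantize(mod, params)

lib = relay.build(mod, target="llvm", params=params)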

I also tried extracting particular sub-graphs of the TensorFlow graph and quantizing those; some succeed and some fail. There is one specific node: the sub-graph that stops just before it quantizes fine, but as soon as that node is included in the sub-graph, quantization segfaults.
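A sketch of this bisection, assuming the sub-graph is cut by truncating the TensorFlow graph at a chosen node with the frontend's outputs argument (the node name below is a placeholder; graph_def and shape_dict are the same as above):

# Cut the TensorFlow graph at a chosen node and quantize the resulting module.
sub_mod, sub_params = relay.frontend.from_tensorflow(
    graph_def,
    shape=shape_dict,
    outputs=["some/node/name"])  # graph is truncated at this node

# Succeeds when the cut is placed before the problematic node,
# segfaults once that node is included.
sub_mod = relay.quantize.quantize(sub_mod, sub_params)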

Any suggestions on how to track down what is going wrong and make this work?

The segmentation fault occurs in the FoldConstant pass, the one added by quant_passes.append(_transform.FoldConstant()), inside the following function:

def quantize(mod, params=None, dataset=None):
    mod = prerequisite_optimize(mod, params)

    calibrate_pass = tvm.transform.module_pass(
        calibrate(dataset), opt_level=1,
        name="QuantizeCalibrate")
    quant_passes = [partition(),
                    annotate(),
                    calibrate_pass]
    if not current_qconfig().do_simulation:
        quant_passes.append(realize())
    quant_passes.append(_transform.FoldConstant())  # <- the segfault happens in this pass
    quantize_seq = tvm.transform.Sequential(quant_passes)
    with tvm.transform.PassContext(opt_level=3,
                                   required_pass=["QuantizeAnnotate",
                                                  "QuantizeCalibrate",
                                                  "QuantizeRealize"]):
        with quantize_context():
            mod = quantize_seq(mod)

    return mod
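
For reference, the crashing pass can be confirmed by applying the passes one at a time instead of through the Sequential. A rough sketch, reusing mod/params from the frontend and the names from the function above (dataset is None and do_simulation is left at its default of False):

# Apply the quantize passes individually to pinpoint which one segfaults.
import tvm
from tvm.relay import transform as _transform
from tvm.relay.quantize.quantize import (
    prerequisite_optimize, partition, annotate, calibrate, realize,
    quantize_context)

mod = prerequisite_optimize(mod, params)
calibrate_pass = tvm.transform.module_pass(
    calibrate(None), opt_level=1, name="QuantizeCalibrate")
named_passes = [("partition", partition()),
                ("annotate", annotate()),
                ("calibrate", calibrate_pass),
                ("realize", realize()),
                ("fold_constant", _transform.FoldConstant())]

with tvm.transform.PassContext(opt_level=3,
                               required_pass=["QuantizeAnnotate",
                                              "QuantizeCalibrate",
                                              "QuantizeRealize"]):
    with quantize_context():
        for name, single_pass in named_passes:
            print("running", name)  # "fold_constant" is the last name printed
            mod = single_pass(mod)  # before the segmentation fault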