Build with opt_level=0 fails with "Attribute TOpPattern has not been registered for Operator qnn.requantize"

  File "", line 171, in run_test_person_detection
    tvm_output = run_tvm_graph(tflite_model_buf, img_data, 'input')

  File "", line 63, in run_tvm_graph
    graph, lib, params =, target, params=params)

  File "/home/siju/workspace/tvm/python/tvm/relay/", line 251, in build
    graph_json, mod, params =, target, target_host, params)

  File "/home/siju/workspace/tvm/python/tvm/relay/", line 120, in build
    self._build(mod, target, target_host)

  File "/home/siju/workspace/tvm/python/tvm/_ffi/_ctypes/", line 219, in __call__
    raise get_last_ffi_error()

tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (8) /home/siju/workspace/tvm/build/*, tvm::RelayExpr const&)+0xfe) [0x7eff4c25f17e]
  [bt] (7) /home/siju/workspace/tvm/build/ const&)+0x7b) [0x7eff4c38dd7b]
  [bt] (6) /home/siju/workspace/tvm/build/<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x92) [0x7eff4c17b242]
  [bt] (5) /home/siju/workspace/tvm/build/ const*)+0x2bf) [0x7eff4c265c6f]
  [bt] (4) /home/siju/workspace/tvm/build/ const*)+0xe3) [0x7eff4c38a303]
  [bt] (3) /home/siju/workspace/tvm/build/ const&)+0x7b) [0x7eff4c38dd7b]
  [bt] (2) /home/siju/workspace/tvm/build/<void (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x92) [0x7eff4c17b242]
  [bt] (1) /home/siju/workspace/tvm/build/ const*)+0x461) [0x7eff4c262fe1]
  [bt] (0) /home/siju/workspace/tvm/build/ [0x7eff4bae5abc]
  File "/home/siju/workspace/tvm/include/tvm/ir/op.h", line 574
TVMError: Check failed: idx < data_.size() && data_[idx].second != 0: Attribute TOpPattern has not been registered for Operator qnn.requantize

@anijain2305 Can you please help me with this issue? I want to keep opt_level=0 because the tflite conv weights are int8 and the bias is int32, but after op fusion most of the TVM param weights are int16 and int32. Because of this, the param size in TVM is much higher than in tflite.
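For context, a rough sketch (using NumPy, with a hypothetical conv layer shape, since the exact model dimensions aren't given) of why the int16 upcast roughly doubles the weight footprint compared to the int8 tflite weights:

```python
import numpy as np

# Hypothetical conv layer shape; the real numbers depend on the model.
weights_int8 = np.zeros((64, 3, 3, 3), dtype=np.int8)   # as stored in tflite
weights_int16 = weights_int8.astype(np.int16)           # after TVM's ARM upcast
bias_int32 = np.zeros((64,), dtype=np.int32)            # bias stays int32 in both

print(weights_int8.nbytes)   # 1728 bytes
print(weights_int16.nbytes)  # 3456 bytes: the upcast doubles the weight size
```

On a microcontroller where flash is measured in kilobytes, that factor of two across every conv layer is exactly the kind of gap described above.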

I want to load the model on an Arduino with limited RAM and flash memory. TFLite is able to run it, but with TVM I'm hitting memory issues.

  • We will have to run at least opt_level = 1, because the Legalize pass, which is necessary for QNN, runs at level 1.

  • The right way to solve this problem is to disable the upcasting. Currently, the weights are upcast to int16 for ARM CPUs because that gives better performance on Raspberry Pi. We can selectively disable it.

For now, you can set is_fast_int8_on_arm to True and start from there.


@anijain2305 Thanks a lot for your quick reply. I was able to resolve it. Now my model size is the same as tflite.