[Solved] Getting the same output after deploying TFLite on arm_cpu

Hi, I used (https://github.com/dmlc/nnvm/blob/master/docs/how_to/deploy.md) to build an application that runs on an Android device. I allocate random values to the TVM array x and call set_input(), but every time I get the same output, i.e. the same index for the max value out of 1000 values, and the same value at each index. The same thing happens with both the tuned and the untuned graph. I also get a continuous warning: /home/yovan/tvm_all/run_tvm_on_device/tvm/src/runtime/graph/graph_runtime.cc:65: Warning: cannot find "data" among input. The output looks like this (for the TFLite mobilenet_v1_224 model):
0.000127
0.000527
0.000987
0.000252
0.000207
0.000938
0.000505
0.000070
0.000158
0.000119
0.000649
0.000668
0.000274
0.000112
0.000165
0.000156
0.000298
0.000191
0.000221
0.000192
0.000126
0.000267
0.001076
0.001035
0.000715
0.000360…
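
For reference, the deployment code follows this rough pattern, modeled on the TVM graph-runtime deploy example; the file names, shapes, and the "data" input name here are placeholders rather than the exact code, and the set_input("data", x) call is what triggers the graph_runtime.cc:65 warning when the compiled graph has no input named "data":

    #include <dlpack/dlpack.h>
    #include <tvm/runtime/c_runtime_api.h>
    #include <tvm/runtime/module.h>
    #include <tvm/runtime/packed_func.h>
    #include <tvm/runtime/registry.h>

    #include <cstdlib>
    #include <fstream>
    #include <iterator>
    #include <string>

    int main() {
      // Compiled operator library, graph JSON, and serialized params produced
      // on the host (file names are placeholders).
      tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("net.so");
      std::ifstream json_in("net.json");
      std::string json_data((std::istreambuf_iterator<char>(json_in)),
                            std::istreambuf_iterator<char>());
      std::ifstream params_in("net.params", std::ios::binary);
      std::string params_data((std::istreambuf_iterator<char>(params_in)),
                              std::istreambuf_iterator<char>());
      TVMByteArray params_arr;
      params_arr.data = params_data.c_str();
      params_arr.size = params_data.length();

      int device_type = kDLCPU;
      int device_id = 0;
      tvm::runtime::Module mod =
          (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(
              json_data, mod_syslib, device_type, device_id);

      // Random input, NHWC 1x224x224x3 as TFLite MobileNet expects.
      DLTensor* x;
      int in_ndim = 4;
      int64_t in_shape[4] = {1, 224, 224, 3};
      TVMArrayAlloc(in_shape, in_ndim, kDLFloat, 32, 1, device_type, device_id, &x);
      float* in_data = static_cast<float*>(x->data);
      for (int i = 0; i < 1 * 224 * 224 * 3; ++i) {
        in_data[i] = static_cast<float>(std::rand()) / RAND_MAX;
      }

      tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
      set_input("data", x);  // "data" is not the input name of this TFLite graph
      tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
      load_params(params_arr);
      tvm::runtime::PackedFunc run = mod.GetFunction("run");
      run();

      // Output buffer as in the original deploy example (the reply below
      // corrects out_ndim and out_shape for this model).
      DLTensor* y;
      int out_ndim = 1;
      int64_t out_shape[1] = {1000};
      TVMArrayAlloc(out_shape, out_ndim, kDLFloat, 32, 1, device_type, device_id, &y);
      tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
      get_output(0, y);

      TVMArrayFree(x);
      TVMArrayFree(y);
      return 0;
    }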

Note: there should be two changes in the above code:

  1. int out_ndim = 2;
  2. int64_t out_shape[] = {1, 1001};

The input tensor is not "data", it is "input".
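
Applied to a sketch like the one above, that means (assuming the TFLite MobileNet V1 graph, whose input tensor is named "input" and whose output is 1 x 1001, the extra class being TFLite's background class):

    // Corrected input name and output buffer for the TFLite MobileNet V1 graph.
    set_input("input", x);            // matches the model's input tensor name

    int out_ndim = 2;
    int64_t out_shape[] = {1, 1001};  // 1000 ImageNet classes + background class
    TVMArrayAlloc(out_shape, out_ndim, kDLFloat, 32, 1, device_type, device_id, &y);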

Worked, thanks! But isn't it just a naming convention? That is, are we free to give the tensor a name like 'data' or 'input', or does it have to match the name we provided for the graph in the Python file?

This name is the input name of the model. I think this is the second time I am answering the same question for you. You should follow the link I mentioned in my first answer to your question: Problem in tuning and deploying tflite model on arm target via rpc
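
In other words, the string passed to set_input() has to match the input tensor name stored in the compiled graph; it is not an arbitrary label chosen on the C++ side. As a side note (my suggestion, not from the linked thread), the mainline graph runtime's set_input packed function also accepts an integer index, which sidesteps the name entirely:

    // Assuming a graph-runtime build whose "set_input" packed function also
    // accepts an integer position (as in mainline TVM): set the first graph
    // input without naming it.
    tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
    set_input(0, x);  // input 0, whatever its name happens to be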

Yeah, sorry about that, but how come some inference output was still produced after the application was called? Does it have some default inputs?

Do you mean the image data for the input, for example cat.png or something else?

Yes, the image data for the input. When I assigned it using set_input("data", x), with x being the array from TVMArrayAlloc, it couldn't find the name of the input tensor defined in the library, right?
So how could any inference output be produced?
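
(For what it's worth, the warning itself points at the answer: when the name lookup fails, the runtime only warns and skips the copy, so the graph runs on whatever the input buffer already contained, and the output never changes. Roughly, paraphrasing the lookup rather than quoting the actual TVM source:)

    #include <iostream>
    #include <string>
    #include <vector>

    // Rough paraphrase of the graph runtime's input lookup: if the requested
    // name is not among the graph's input nodes, it only prints the warning
    // and returns -1, and set_input then skips the copy entirely.
    int GetInputIndex(const std::vector<std::string>& input_names,
                      const std::string& name) {
      for (size_t i = 0; i < input_names.size(); ++i) {
        if (input_names[i] == name) return static_cast<int>(i);
      }
      std::cerr << "Warning: cannot find \"" << name << "\" among input\n";
      return -1;  // caller (set_input) does nothing when this is negative
    }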