[SOLVED] C++ inference test app doesn't work correctly

Hello,

I’m testing inference based on MXNet and ResNet using a C++ application.
For this, I wrote the C++ code [1] and built it successfully.
The original example code I referred to used a “cat.bin” file as input data.
So I converted my cat.png to cat.bin.raw, resizing it from 256x256 to 224x224 with the IrfanView app [2].

However, I ran into an error [3] while running it.

Is there something I missed? Could you guide me on how I can test it using the C++ app?

Thanks,
Inki Dae

  • [1] c++ application

#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>

#include <fstream>
#include <iostream>
#include <algorithm>

int main(void)
{
// tvm module for compiled functions
tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("./mxnet_resnet18_v1.so");

// json graph
std::ifstream json_in("./mxnet_resnet18_v1.json", std::ios::in);
std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
json_in.close();

// parameters in binary
std::ifstream params_in("./mxnet_resnet18_v1.params", std::ios::binary);
std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
params_in.close();

// parameters need to be TVMByteArray type to indicate the binary data
TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();

std::cout << "param size : " << params_arr.size << std::endl;

int dtype_code = kDLFloat;
int dtype_bits = 32;
int dtype_lanes = 1;
int device_type = kDLCPU;
int device_id = 0;

// get global function module for graph runtime
tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);

DLTensor* x;
int in_ndim = 4;
int64_t in_shape[4] = {1, 3, 224, 224};
TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
// load image data saved in binary
std::ifstream data_fin("./cat.bin.raw", std::ios::binary);
data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);

// get the function from the module(set input data)
tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
set_input("data", x);

// get the function from the module(load parameters)
tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
load_params(params_arr);

// get the function from the module(run it)
tvm::runtime::PackedFunc run = mod.GetFunction("run");
run();

DLTensor* y;
int out_ndim = 1;
int64_t out_shape[1] = {1000, };
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);

// get the function from the module(get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);

// get the maximum position in output vector
auto y_iter = static_cast<float*>(y->data);
auto max_iter = std::max_element(y_iter, y_iter + 1000);
auto max_index = std::distance(y_iter, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;

TVMArrayFree(x);
TVMArrayFree(y);

return 0;

}


daeinki@daeinki-linux:~/project/working/tvm_pc/tvm/build/test/app_test_pc$ ./mxnet_resnet
param size : 46765317
terminate called after throwing an instance of ‘dmlc::Error’
what(): [12:40:27] /home/daeinki/project/working/test/tvm/src/runtime/graph/graph_runtime.cc:151: Check failed: data->ndim == data_out->ndim (2 vs. 1)

Stack trace returned 9 entries:
[bt] (0) ./mxnet_resnet(dmlc::StackTraceabi:cxx11+0x54) [0x409424]
[bt] (1) ./mxnet_resnet(dmlc::LogMessageFatal::~LogMessageFatal()+0x2a) [0x4096f0]
[bt] (2) /home/daeinki/project/s5pc210/public/tizen_5.0/working/tvm_pc/tvm/build/test/app_test_pc/…/…/…/build/libtvm.so(tvm::runtime::GraphRuntime::CopyOutputTo(int, DLTensor*)+0x1ac) [0x7f20f5b99bfc]
[bt] (3) /home/daeinki/project/s5pc210/public/tizen_5.0/working/tvm_pc/tvm/build/test/app_test_pc/…/…/…/build/libtvm.so(+0x71def5) [0x7f20f5b94ef5]
[bt] (4) ./mxnet_resnet(std::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x5a) [0x40a892]
[bt] (5) ./mxnet_resnet(tvm::runtime::TVMRetValue tvm::runtime::PackedFunc::operator()<int, DLTensor*&>(int&&, DLTensor*&) const+0xd0) [0x40b09a]
[bt] (6) ./mxnet_resnet(main+0x6cb) [0x4088c9]
[bt] (7) /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7f20f4714830]
[bt] (8) ./mxnet_resnet(_start+0x29) [0x408019]

The output ndim should be set to 2:

int out_ndim = 2;
int64_t out_shape[1] = {1,1000, };

Thanks for the reply.

A build error occurred with the code you mentioned, so I modified it like below:
int out_ndim = 2;
int64_t out_shape[2] = {1,1000, };

But the result says,
daeinki@daeinki-linux:~/project/s5pc210/public/working/tvm_pc/tvm/build/test/app_test_pc$ ./mxnet_resnet
param size : 46765317
The maximum position in output vector is: 0

Is 0 correct? How can I check which label corresponds to index 0?

I guess the code below needs to be modified accordingly, since out_ndim and out_shape have changed.

// get the function from the module(get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);

// get the maximum position in output vector
auto y_iter = static_cast<float*>(y->data);
auto max_iter = std::max_element(y_iter, y_iter + 1000);
auto max_index = std::distance(y_iter, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;

Thanks,
Inki Dae

You can use TVMArrayCopyToBytes(y, data_y, 1000 * sizeof(float)) instead of static_cast<float*>(y->data) to check the result.

Thanks for the reply,

I modified my code like below, but the result is still the same:
The maximum position in output vector is: 0

Is there something I’m missing?


DLTensor* y;
int out_ndim = 2;
int64_t out_shape[2] = {1, 1000, };
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);

// get the function from the module(get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);

float data[1000];

TVMArrayCopyToBytes(y, data, 1000 * sizeof(float));
auto max_iter = std::max_element(data, data + 1000);
auto max_index = std::distance(data, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;

It works well now. The problem was due to an incorrect cat.bin file.

Thanks,
Inki Dae

@daeinki Can you please send a patch to fix the corresponding doc?

I’ve met the same problem. How do you convert cat.png to cat.bin? What’s the difference between cat.bin and cat.bin.raw? I generated a cat.bin, but when I run the program, “Segmentation fault (core dumped)” occurs at:

data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);

Is it because of an incorrect cat.bin?

my app code:

#include <dlpack/dlpack.h>
#include <tvm/runtime/module.h>
#include <tvm/runtime/registry.h>
#include <tvm/runtime/packed_func.h>

#include <fstream>
#include <iostream>
#include <algorithm>

int main()
{
// tvm module for compiled functions
tvm::runtime::Module mod_syslib = tvm::runtime::Module::LoadFromFile("net.so");

// json graph
std::ifstream json_in("net.json", std::ios::in);
std::string json_data((std::istreambuf_iterator<char>(json_in)), std::istreambuf_iterator<char>());
json_in.close();

// parameters in binary
std::ifstream params_in("net.params", std::ios::binary);
std::string params_data((std::istreambuf_iterator<char>(params_in)), std::istreambuf_iterator<char>());
params_in.close();

// parameters need to be TVMByteArray type to indicate the binary data
TVMByteArray params_arr;
params_arr.data = params_data.c_str();
params_arr.size = params_data.length();

int dtype_code = kDLFloat;
int dtype_bits = 32;
int dtype_lanes = 1;
int device_type = kDLOpenCL;//kDLCPU;
int device_id = 0; 

// get global function module for graph runtime
tvm::runtime::Module mod = (*tvm::runtime::Registry::Get("tvm.graph_runtime.create"))(json_data, mod_syslib, device_type, device_id);

DLTensor* x;
int in_ndim = 4; 
int64_t in_shape[4] = {1, 3, 224, 224};
TVMArrayAlloc(in_shape, in_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &x);
printf("00000000000000\n");
// load image data saved in binary
std::ifstream data_fin("cat.bin", std::ios::binary);
data_fin.read(static_cast<char*>(x->data), 3 * 224 * 224 * 4);


// get the function from the module(set input data)
tvm::runtime::PackedFunc set_input = mod.GetFunction("set_input");
printf("111111111\n");
set_input("data", x);
printf("222222222222222\n");
// get the function from the module(load parameters)
tvm::runtime::PackedFunc load_params = mod.GetFunction("load_params");
load_params(params_arr);
printf("load params finish \n");
// get the function from the module(run it)
tvm::runtime::PackedFunc run = mod.GetFunction("run");
run();
printf("run  finish \n");
DLTensor* y;
int out_ndim = 2;
int64_t out_shape[2] = {1, 1000, };
TVMArrayAlloc(out_shape, out_ndim, dtype_code, dtype_bits, dtype_lanes, device_type, device_id, &y);

// get the function from the module(get output data)
tvm::runtime::PackedFunc get_output = mod.GetFunction("get_output");
get_output(0, y);

// get the maximum position in output vector
auto y_iter = static_cast<float*>(y->data);
auto max_iter = std::max_element(y_iter, y_iter + 1000);
auto max_index = std::distance(y_iter, max_iter);
std::cout << "The maximum position in output vector is: " << max_index << std::endl;

TVMArrayFree(x);
TVMArrayFree(y);

return 0;

}

What we need to pass to set_input, in the end, is a decoded 224x224x3 (Width x Height x Channels) buffer.
You can use the OpenCV Python library to decode the PNG and dump it to a file.

(3 * 224 * 224 * 4) -> (3 * 224 * 224)

int device_type = kDLOpenCL -> int device_type = kDLCPU;
In TVMArrayAlloc, the device type should always be kDLCPU.

Thanks for your reply, first of all. I changed the two places you mentioned, but there is an error about device_type. See below:

terminate called after throwing an instance of ‘dmlc::Error’
what(): [09:48:40] /home/firefly/Documents/tvm/src/runtime/module_util.cc:53: Check failed: ret == 0 (-1 vs. 0) Assert fail: (dev_type == 4), device_type need to be 4

OK, I’ll try to write an OpenCV Python conversion program.

I converted cat.png to a 224*224*3 .bin file, but it errors too:

terminate called after throwing an instance of ‘dmlc::Error’
what(): [10:24:20] /home/firefly/Documents/tvm/src/runtime/module_util.cc:53: Check failed: ret == 0 (-1 vs. 0) Assert fail: (dev_type == 4), device_type need to be 4
Stack trace returned 6 entries:
[bt] (0) ./cpptest(dmlc::StackTraceabi:cxx11+0x118) [0x4038f8]
[bt] (1) /home/firefly/Documents/tvm/build/libtvm_runtime.so(+0x29934) [0x7fa0863934]
[bt] (2) /home/firefly/Documents/tvm/build/libtvm_runtime.so(+0x83fec) [0x7fa08bdfec]
[bt] (3) /home/firefly/Documents/tvm/build/libtvm_runtime.so(tvm::runtime::GraphRuntime::Run()+0x3c) [0x7fa08bc4fc]
[bt] (4) ./cpptest() [0x402d1c]
[bt] (5) /lib/aarch64-linux-gnu/libc.so.6(__libc_start_main+0xe0) [0x7fa05418a0]
Aborted (core dumped)

Why should I change the device type to kDLCPU? The error tells me it needs to be 4 (kDLOpenCL).
But when I change it back, it errors too; the message is the same “Segmentation fault” as above.

The device type used when creating the module should be OpenCL, but when allocating the input tensor it should be CPU. The reason is that we can’t copy our data directly into OpenCL device buffers: we first create the tensor on the CPU, and calling set_input copies it to the actual target.

Hi, I’m writing a C++ test application to test inference using MXNet + ResNet18.
Reference: https://docs.tvm.ai/deploy/nnvm.html
The build works, but it seems to show a wrong result:
The maximum position in output vector is: 0
The data transformation is as follows:
cat.png -> resize(224,224,3)(RGB) -> cat.bin

How could I fix it?

My solution is:
1. OpenCV is needed.
2. Change the code in the main() function:

...

cv::Mat A = cv::imread("data/cat.jpg");
A = cv_img_pre(A);
cv::Mat tensor = cv::dnn::blobFromImage(A, 1.0, cv::Size(224, 224), cv::Scalar(0,0,0), true);
TVMArrayCopyFromBytes(x, tensor.data, 3 * 224 * 224 * 4);

...

int out_ndim = 2;
int64_t out_shape[2] = {1, 1000, };

...

3. Add a cv_img_pre() function:

cv::Mat cv_img_pre(cv::Mat A)
{
  cv::Size new_size = cv::Size(224, 224);

  cv::Mat B0(new_size, CV_32FC3);
  resize(A, B0, new_size);

  cv::Mat B(new_size, CV_32FC3);
  B0.convertTo(B, CV_32FC3);

  cv::Mat C(new_size, CV_32FC3, cv::Scalar(123.0, 117.0, 104.0));
  cv::Mat D(new_size, CV_32FC3, cv::Scalar(58.395, 57.12, 57.375));
  cv::Mat temp;

  temp = B - C;
  B = temp / D;

  return B;
}

Then you can use a .jpg or .png file directly.
I successfully tested it on a Raspberry Pi 3B+.
But the result of inference is Egyptian cat (285). I guess this could be a cross-platform issue.
Looking forward to further research and discussion.

By the way, where is the file “cat.bin”?

When I use cv::dnn::blobFromImage, I get the error “cv::dnn has not been declared”. How do I solve this?

Hello, I encountered the same problem (“cv::dnn has not been declared”). Have you found a solution?