Core dump at TensorFlow model run time

I followed the tutorial to compile a TensorFlow model in Python. Instead of running it in Python, I implemented a C++ runtime. I got very similar results to the Python runtime; I guess the predicted probabilities differ slightly because the image is read and resized with OpenCV in C++ versus PIL in Python. However, I get an intermittent core dump.

Here is the backtrace:

```
Thread 13 "my-test" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffde7fc700 (LWP 38638)]
0x000000000e2ddb70 in ?? ()
(gdb) bt
#0  0x000000000e2ddb70 in ?? ()
#1  0x00007ffff6c776f0 in tvm::runtime::WrapPackedFunc(int (*)(void*, int*, int), tvm::runtime::ObjectPtr<tvm::runtime::Object> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const [clone .isra.117] () from /home/r/
#2  0x00007ffff6ce5277 in std::_Function_handler<void (), tvm::runtime::GraphRuntime::CreateTVMOp(tvm::runtime::TVMOpParam const&, std::vector<DLTensor, std::allocator<DLTensor> > const&, unsigned long)::{lambda()#3}>::_M_invoke(std::_Any_data const&) () from /home/r/
#3  0x00007ffff6ce52f7 in tvm::runtime::GraphRuntime::Run() () from /home/r/
#4  0x0000000002ac91c6 in std::function<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const (this=0x7aaab20, __args#0=…, __args#1=0x7fffde7fb740) at /usr/include/c++/7/bits/std_function.h:706
#5  0x0000000002ac9b9e in tvm::runtime::PackedFunc::operator()<>() const (this=0x7aaab20) at /home/r/include/tvm/runtime/packed_func.h:1224
```

Finally, I found the problem: I am in a multi-threaded environment. The main thread terminated and ran the destructor that cleans up the TVM runtime while another thread was still inside Run().

After making the main thread wait for the other thread to finish before tearing down the TVM environment, there is no seg fault anymore.