@weberlo, I tried to modify the uTVM demo Python code from the Colab notebook, something like the snippet below. `USE_MICRO` is turned on when building the library.
```python
def get_resnet():
    block = get_model('resnet18_v1', pretrained=True)
    module, params = relay.frontend.from_mxnet(
        block, shape={'data': RESNET_INPUT_IMG_SHAPE})
    func = module['main']
    return func, params

# ...

resnet, params = get_resnet()

with tvm.target.build_config(disable_vectorize=True):
    graph, resnet_c_mod, params = relay.build(resnet,
                                              target='c',
                                              params=params)

device_config = tvm.micro.device.host.default_config()
with micro.Session(device_config):
    micro_mod = micro.create_micro_mod(resnet_c_mod, device_config)  # <= fails here
    ctx = tvm.micro_dev(0)
    module = graph_runtime.create(graph, micro_mod, ctx)
```
It fails with the output below:
```
MissionSession::LoadBinary()
AllocateInSection(kText, 1080)
AllocateInSection(kRodata, 0)
AllocateInSection(kData, 0)
AllocateInSection(kBss, 88)
MissionSession::LoadBinary()
AllocateInSection(kText, 174704)
Traceback (most recent call last):
  File "utvm.py", line 308, in <module>
    micro_mod = micro.create_micro_mod(resnet_c_mod, device_config)
  File "/work/git_repo/tvm/python/tvm/micro/base.py", line 162, in create_micro_mod
    micro_mod = tvm.runtime.load_module(lib_obj_path)
  File "/work/git_repo/tvm/python/tvm/runtime/module.py", line 404, in load_module
    return _ffi_api.ModuleLoadFromFile(path, fmt)
  File "/work/git_repo/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
  [bt] (7) /work/git_repo/tvm/build/libtvm.so(TVMFuncCall+0x65) [0x7f9de4bd4fd5]
  [bt] (6) /work/git_repo/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::runtime::TypedPackedFunc<tvm::runtime::Module (std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>::AssignTypedLambda<tvm::runtime::Module (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)>(tvm::runtime::Module (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x86) [0x7f9de4bf1ec6]
  [bt] (5) /work/git_repo/tvm/build/libtvm.so(tvm::runtime::Module::LoadFromFile(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x4fe) [0x7f9de4bef77e]
  [bt] (4) /work/git_repo/tvm/build/libtvm.so(+0xc7b82f) [0x7f9de4c8482f]
  [bt] (3) /work/git_repo/tvm/build/libtvm.so(tvm::runtime::MicroSession::LoadBinary(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, bool)+0x1a1) [0x7f9de4c89be1]
  [bt] (2) /work/git_repo/tvm/build/libtvm.so(tvm::runtime::MicroSession::AllocateInSection(tvm::runtime::SectionKind, unsigned long)+0x4d) [0x7f9de4c8998d]
  [bt] (1) /work/git_repo/tvm/build/libtvm.so(tvm::runtime::MicroSectionAllocator::Allocate(unsigned long)+0xe4) [0x7f9de4c90784]
  [bt] (0) /work/git_repo/tvm/build/libtvm.so(dmlc::LogMessageFatal::~LogMessageFatal()+0x43) [0x7f9de4398a93]
  File "/work/git_repo/tvm/src/runtime/micro/micro_section_allocator.h", line 64
TVMError: Check failed: size_ + size < capacity_: cannot alloc 174704 bytes in section with start_addr 0x7f9d8d71c000
```
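For context, the check that fires (`size_ + size < capacity_` in `micro_section_allocator.h`) is just a bump-allocator capacity test. Here is a toy Python sketch of the same logic (class and field names are mine, not TVM's) showing why a 174704-byte text section cannot fit in a small fixed-size section:

```python
class BumpAllocator:
    """Toy model of TVM's MicroSectionAllocator capacity check.
    Illustrative only; names and details are simplified assumptions."""

    def __init__(self, capacity):
        self.capacity = capacity  # total bytes available in the section
        self.size = 0             # bytes already handed out

    def allocate(self, nbytes):
        # Mirrors the failing check: size_ + size < capacity_
        if not (self.size + nbytes < self.capacity):
            raise MemoryError(
                f"cannot alloc {nbytes} bytes (used {self.size} of {self.capacity})")
        offset = self.size
        self.size += nbytes
        return offset

alloc = BumpAllocator(capacity=64 * 1024)   # e.g. a 64 KiB text section
alloc.allocate(1080)                        # the small runtime stub fits
try:
    alloc.allocate(174704)                  # the ResNet text section does not
except MemoryError as e:
    print(e)
```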
Apparently it tries to allocate space for the ResNet code but exceeds the `capacity_` limit. I tried enlarging the `text` section size in the dict returned by `tvm.micro.device.host.default_config()`. It doesn't help: I get the same failure, just with a different byte count in the "cannot alloc xxx bytes" message.
How can I change the memory layout mapping appropriately? Can anyone advise? Thanks.
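For reference, this is roughly how I resized the sections. The exact shape of the dict returned by `tvm.micro.device.host.default_config()` depends on the TVM revision, so the `{'section': {'start': ..., 'size': ...}}` layout and the helper below are my own assumptions, not TVM's API:

```python
def scale_mem_layout(mem_layout, factors, base_addr=0):
    """Return a new layout with each section's size multiplied by its factor,
    re-packing the sections contiguously from base_addr.
    (Hypothetical helper; mimics a uTVM-style mem_layout dict.)"""
    new_layout = {}
    addr = base_addr
    for name, section in mem_layout.items():
        size = int(section["size"] * factors.get(name, 1.0))
        new_layout[name] = {"start": addr, "size": size}
        addr += size
    return new_layout

# A made-up default layout whose text section is too small for ResNet.
default_layout = {
    "text":   {"start": 0x00000, "size": 64 * 1024},
    "rodata": {"start": 0x10000, "size": 64 * 1024},
    "data":   {"start": 0x20000, "size": 64 * 1024},
    "bss":    {"start": 0x30000, "size": 64 * 1024},
}

# Grow text 4x (it needs more than 174704 bytes); rodata likewise for weights.
bigger = scale_mem_layout(default_layout, {"text": 4.0, "rodata": 4.0})
```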