Hi, TVM newbie here. I was following this tutorial: https://docs.tvm.ai/tutorials/frontend/from_pytorch.html. Specifically, I am interested in making repeated inference requests against an exported PyTorch model.
So I followed the recommendation in the "How could I save the graph, lib, and params" thread and exported the trained artifacts:
# save the graph, lib and params into separate files
from tvm.contrib import util
temp = util.tempdir()
path_lib = temp.relpath("deploy_lib.tar")
lib.export_library(path_lib)
with open(temp.relpath("deploy_graph.json"), "w") as fo:
    fo.write(graph)
with open(temp.relpath("deploy_param.params"), "wb") as fo:
    fo.write(relay.save_param_dict(params))
Now I try to create a tvm::runtime::Module so I can load the exported model using this API:
tvm::runtime::Module module = tvm::runtime::Module::LoadFromFile("/path/to/exported/lib/above");
At some point this tries to dlopen() the library, and it fails because the exported file above is a relocatable ELF object (*.o), not a shared object. I get the following error from TVM's runtime module loader:
"only ET_DYN and ET_EXEC can be loaded"
Any thoughts on what could be wrong?