Porting a model from TensorFlow to TVM

Hi, I'm trying to follow the tutorial at https://docs.tvm.ai/tutorials/frontend/from_tensorflow.html# to use TVM to parse a model from TensorFlow, but I hit the following error. It seems to occur before TVM does any real parsing work; it fails during some initialization involving the 'prelude'. I'm new to TVM and not sure what's going on here. Can anyone help?

[15:28:12] /home/zheenwang/tvm/src/relay/ir/module.cc:286: Importing: /home/zheenwang/tvm/python/tvm/relay/std/prelude.rly
Traceback (most recent call last):
  File "from_tensorflow.py", line 18, in <module>
    mod, params = tvm.relay.frontend.from_tensorflow(graph_def,layout=layout,shape=shape_dict)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2475, in from_tensorflow
    g = GraphProto()
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 1955, in __init__
    self._prelude = Prelude(self._mod)
  File "/home/zheenwang/tvm/python/tvm/relay/prelude.py", line 533, in __init__
    self.load_prelude()
  File "/home/zheenwang/tvm/python/tvm/relay/prelude.py", line 549, in load_prelude
    self.mod.import_from_std("prelude.rly")
  File "/home/zheenwang/tvm/python/tvm/relay/module.py", line 240, in import_from_std
    return _module.Module_ImportFromStd(self, file_to_import)
  File "/home/zheenwang/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (5) /home/zheenwang/tvm/build/libtvm.so(TVMFuncCall+0x61) [0x7fec407fe3b1]
[bt] (4) /home/zheenwang/tvm/build/libtvm.so(+0xb6b27f) [0x7fec4070d27f]
[bt] (3) /home/zheenwang/tvm/build/libtvm.so(tvm::relay::ModuleNode::ImportFromStd(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x176) [0x7fec4070d066]
[bt] (2) /home/zheenwang/tvm/build/libtvm.so(tvm::relay::ModuleNode::Import(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0x154) [0x7fec4070ce34]
[bt] (1) /home/zheenwang/tvm/build/libtvm.so(tvm::relay::FromText(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xcb) [0x7fec4070915b]
[bt] (0) /home/zheenwang/tvm/build/libtvm.so(+0xc5798b) [0x7fec407f998b]
  File "/home/zheenwang/tvm/python/tvm/_ffi/_ctypes/function.py", line 72, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/zheenwang/tvm/python/tvm/relay/parser.py", line 25, in fromtext
    from tvm.relay import _parser
  File "/home/zheenwang/tvm/python/tvm/relay/_parser.py", line 218
    raise ParseError(f"duplicate global var \"{name}\"")
                                                       ^
SyntaxError: invalid syntax

I'm attaching my code here. Even when I pass an empty graph_def to the from_tensorflow API, the error stays the same, so it seems the issue is not related to the model itself.

import tvm
import tensorflow as tf
from tvm import relay
import numpy as np
import logging

logging.basicConfig(filename='myapp.log', level=logging.INFO)

graph_def = tf.GraphDef()
x = np.ones((1, 248, 248, 3))
shape_dict = {'graph_input:0': x.shape}
layout = "NCHW"
mod, params = tvm.relay.frontend.from_tensorflow(graph_def, layout=layout, shape=shape_dict)
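For what it's worth, the traceback shows the failure happens while GraphProto.__init__ constructs the Prelude, so (a minimal sketch, same checkout assumed) it can be reproduced without any TensorFlow graph at all:

import tvm
from tvm import relay
from tvm.relay.prelude import Prelude

# Constructing the Prelude imports prelude.rly, which triggers the Relay
# text parser - the same path that fails in the traceback above.
mod = relay.Module()
prelude = Prelude(mod)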

Did you pull the most recent changes? This part of the code used to require Python 3.6, but was just updated yesterday to be Python 3.5 compatible.
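A quick way to check which interpreter TVM is running under (a small sketch; the point is that the f-string in _parser.py shown in your traceback is a Python 3.6+ feature):

import sys

# f-strings were introduced in Python 3.6; on 3.5, importing
# tvm/relay/_parser.py fails with exactly this SyntaxError.
print(sys.version)
assert sys.version_info >= (3, 6)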

Thanks, I've just unblocked myself by installing Anaconda Python 3.7; the issue is gone. I think you're right, it was a Python version problem.

But now I hit a new error that I don't understand. I'm only calling the from_tensorflow() API, not any relay.build API, so why is it complaining that LLVM is missing? In my understanding, LLVM is used in the codegen stage, right? from_tensorflow should just parse the TensorFlow graph into Relay IR. Am I wrong?

Traceback (most recent call last):
  File "from_tensorflow.py", line 23, in <module>
    mod, params = tvm.relay.frontend.from_tensorflow(graph_def,layout=layout,shape=shape_dict)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2516, in from_tensorflow
    mod, params = g.from_tensorflow(graph, layout, shape, outputs)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2154, in from_tensorflow
    op = self._convert_operator(node.op, inputs, attr, graph)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2478, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 634, in _impl
    params_new = _infer_value(pop_node, params)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/common.py", line 490, in _infer_value
    graph, lib, params = tvm.relay.build(func, target="llvm", params=params)
  File "/home/zheenwang/tvm/python/tvm/relay/build_module.py", line 244, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)
  File "/home/zheenwang/tvm/python/tvm/relay/build_module.py", line 109, in build
    self._build(func, target, target_host)
  File "/home/zheenwang/tvm/python/tvm/_ffi/_ctypes/function.py", line 207, in __call__
    raise get_last_ffi_error()
tvm._ffi.base.TVMError: Traceback (most recent call last):
[bt] (6) /home/zheenwang/tvm/build/libtvm.so(TVMFuncCall+0x61) [0x7f0310d86b51]
[bt] (5) /home/zheenwang/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<tvm::runtime::ModuleNode> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x17) [0x7f0310c48e77]
[bt] (4) /home/zheenwang/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::GetFunction(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::shared_ptr<tvm::runtime::ModuleNode> const&)::{lambda(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)#3}::operator()(tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*) const+0x1b2) [0x7f0310c48d82]
[bt] (3) /home/zheenwang/tvm/build/libtvm.so(tvm::relay::backend::RelayBuildModule::BuildRelay(tvm::relay::Function, std::unordered_map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::runtime::NDArray, std::hash<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::equal_to<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const, tvm::runtime::NDArray> > > const&)+0x9c9) [0x7f0310c486e9]
[bt] (2) /home/zheenwang/tvm/build/libtvm.so(tvm::build(tvm::Map<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, tvm::Array<tvm::LoweredFunc, void>, void, void> const&, tvm::Target const&, tvm::BuildConfig const&)+0x509) [0x7f0310733f79]
[bt] (1) /home/zheenwang/tvm/build/libtvm.so(tvm::build(tvm::Map<tvm::Target, tvm::Array<tvm::LoweredFunc, void>, void, void> const&, tvm::Target const&, tvm::BuildConfig const&)+0x566) [0x7f03107331c6]
[bt] (0) /home/zheenwang/tvm/build/libtvm.so(tvm::codegen::Build(tvm::Array<tvm::LoweredFunc, void> const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)+0xddc) [0x7f031073ba9c]
  File "/home/zheenwang/tvm/src/codegen/codegen.cc", line 46
TVMError: Check failed: bf != nullptr: Target llvm is not enabled

Please make sure you build the TVM project following https://docs.tvm.ai/install/from_source.html. Did you configure LLVM before running cmake? As for why parsing touches codegen: your own traceback shows that _infer_value in relay/frontend/common.py calls tvm.relay.build(func, target="llvm", ...) to constant-fold subgraphs while inferring shapes, so the frontend needs an LLVM-enabled build even though you never call relay.build yourself.

cd incubator-tvm
mkdir build
cd build
cp ../cmake/config.cmake ./
vim ./config.cmake
# then enable the features you need, e.g. change
# set(USE_LLVM OFF) to set(USE_LLVM ON) before running cmake
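After rebuilding, you can verify LLVM support from Python (a quick check; in 0.6-era builds the helper is tvm.module.enabled, while newer releases moved it to tvm.runtime.enabled):

import tvm

# Should print True once TVM has been rebuilt with set(USE_LLVM ON).
print(tvm.module.enabled("llvm"))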

Thanks, the LLVM issue is gone after building with USE_LLVM=ON.
Now I hit a "reshape" parse issue. I have a dozen reshape layers in my TensorFlow pb, but only the last reshape fails; that last one is a flatten op in the original TensorFlow model code.
I don't understand why it fails while the other reshapes succeed. What does the error mean? Here is the last reshape in my pb:
node {
  name: "fc/flatten/Reshape"
  op: "Reshape"
  input: "fc/max_pooling2d/MaxPool"
  input: "fc/flatten/Reshape/shape"
  attr {
    key: "T"
    value {
      type: DT_FLOAT
    }
  }
  attr {
    key: "Tshape"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "_output_shapes"
    value {
      list {
        shape {
          dim {
            size: 1
          }
          dim {
            size: 8960
          }
        }
      }
    }
  }
}

Traceback (most recent call last):
  File "from_tensorflow.py", line 23, in <module>
    mod, params = tvm.relay.frontend.from_tensorflow(graph_def,layout=layout,shape=shape_dict)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2516, in from_tensorflow
    mod, params = g.from_tensorflow(graph, layout, shape, outputs)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2154, in from_tensorflow
    op = self._convert_operator(node.op, inputs, attr, graph)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 2478, in _convert_operator
    sym = convert_map[op_name](inputs, attrs, self._params)
  File "/home/zheenwang/tvm/python/tvm/relay/frontend/tensorflow.py", line 623, in _impl
    raise RuntimeError("If shape operator is used in reshape to "
RuntimeError: If shape operator is used in reshape to express reshape_like, shape_of must be the direct ancestor of reshape when input shape is symbolic.

I also have this problem; it looks like it was caused by this PR: https://github.com/apache/incubator-tvm/pull/4185
cc @kevinthesun: could you kindly give more details?

I have the same question about the added code. BTW, I was able to unblock myself by commenting out the following code:

    if isinstance(pop_node, tvm.relay.expr.Call):
        if "shape_of" not in str(pop_node.op):
            raise RuntimeError("If shape operator is used in reshape to "
                               "express reshape_like, shape_of must be "
                               "the direct ancestor of reshape when input "
                               "shape is symbolic.")
        return _op.reshape_like(inputs[0], pop_node.args[0])

This change should only affect the symbolic input shape case. What is the shape argument of your reshape operator? Currently, in the symbolic input shape case, if it is a Call we check that it must be a shape_of op. You can look at https://github.com/apache/incubator-tvm/pull/4185/files#diff-eae8ecf976e0031823eeae454466f964R807 to see which branch it goes into.
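For context, this is the rewrite that check guards (a minimal sketch with made-up shapes, not the frontend's exact code): when the target shape comes from a shape op on another tensor y, reshape(x, shape_of(y)) is converted into reshape_like(x, y):

import tvm
from tvm import relay

# x is reshaped to take the shape of y at runtime (70 * 128 == 8960).
x = relay.var("x", shape=(1, 8960), dtype="float32")
y = relay.var("y", shape=(1, 70, 128), dtype="float32")

# What the frontend emits for a reshape driven by a shape op;
# equivalent to relay.reshape(x, relay.shape_of(y)).
z = relay.reshape_like(x, y)
func = relay.Function([x, y], z)
print(func)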

Can you print pop_node in your case?

Try https://github.com/apache/incubator-tvm/pull/4285

Would it be possible to support more symbolic input shape cases? For example, one dimension of the shape coming from a placeholder and being concatenated into the final shape; there are a lot of use cases like that.

Dynamic shape support is in progress. Soon we will support dynamic shapes for non-compute-intensive kernels. Optimization and codegen for dynamic shapes of compute-intensive kernels (conv2d, dense) will take more time.