Add the document for TVMDSOOp

Integrate TVM optimization into TensorFlow with TVMDSOOp

Introduction

In the next release of TVM (likely 0.7), we will add a new feature called TVMDSOOp, which integrates TVM optimization with TensorFlow.

TVMDSOOp is a general custom operator for TensorFlow that can run any TVM optimization on CPU or GPU. In other words, you can optimize your subgraph or implement new operators in TVM and easily embed them into a TensorFlow graph. This is valuable if you want to try TVM to optimize part of a model while continuing to use TensorFlow infrastructure such as SavedModel or TensorFlow Serving.

How To Use

You can use TVMDSOOp by compiling the latest TVM code. Note that TVMDSOOp is not enabled by default; you need to set(USE_TF_TVMDSOOP ON) in config.cmake.

Follow the TVM documentation at https://docs.tvm.ai/install/from_source.html to compile with USE_TF_TVMDSOOP enabled and install the TVM Python package.

Now you can use pure TVM APIs to implement computation operators. The following example exports a shared library containing a TVM add operator for CPU.

import tvm

def export_cpu_add_so():
    # Describe an element-wise vector addition in TVM's tensor expression API.
    n = tvm.te.var("n")
    ph_a = tvm.te.placeholder((n,), name='ph_a')
    ph_b = tvm.te.placeholder((n,), name='ph_b')
    ph_c = tvm.te.compute(ph_a.shape, lambda i: ph_a[i] + ph_b[i], name='ph_c')
    sched = tvm.te.create_schedule(ph_c.op)
    # Build for CPU and register the function under the name "vector_add".
    fadd_dylib = tvm.build(sched, [ph_a, ph_b, ph_c], "c", name="vector_add")

    # Export a shared library that TVMDSOOp can load from TensorFlow.
    lib_path = "tvm_cpu_add.so"
    fadd_dylib.export_library(lib_path)
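
Since TVMDSOOp also runs on GPU, here is a minimal sketch of the corresponding GPU export, assuming a CUDA-enabled TVM build; the only extra step is binding the loop to CUDA block and thread indices before building with the cuda target.

import tvm

def export_gpu_add_so():
    n = tvm.te.var("n")
    ph_a = tvm.te.placeholder((n,), name='ph_a')
    ph_b = tvm.te.placeholder((n,), name='ph_b')
    ph_c = tvm.te.compute(ph_a.shape, lambda i: ph_a[i] + ph_b[i], name='ph_c')
    sched = tvm.te.create_schedule(ph_c.op)
    # CUDA schedules must bind loops to GPU blocks and threads.
    bx, tx = sched[ph_c].split(ph_c.op.axis[0], factor=64)
    sched[ph_c].bind(bx, tvm.te.thread_axis("blockIdx.x"))
    sched[ph_c].bind(tx, tvm.te.thread_axis("threadIdx.x"))
    fadd_dylib = tvm.build(sched, [ph_a, ph_b, ph_c], "cuda", name="vector_add")
    fadd_dylib.export_library("tvm_gpu_add.so")

On the TensorFlow side the loading code stays the same; you would additionally place the op under a GPU device scope (e.g. with tf.device('/gpu:0')).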

With the latest TVM Python APIs, we can load the dynamic library easily and use it like a normal TensorFlow operator.

import tensorflow as tf
from tvm.contrib import tf_op

def test_tvm_cpu_add_so():
    # Load the exported library and look up the function registered above.
    lib_path = "tvm_cpu_add.so"
    module = tf_op.OpModule(lib_path)
    tvm_add = module.func("vector_add", output_shape=[4], output_dtype="float")

    # The wrapped function can be called like any other TensorFlow operator.
    x = tf.constant([1.0, 2.0, 3.0, 4.0])
    y = tf.constant([1.0, 3.0, 5.0, 7.0])
    print(tvm_add(x, y).numpy())  # expected output: [ 2.  5.  8. 11.]
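
Note that calling .numpy() as above assumes eager execution (TensorFlow 2.x, or 1.x with eager mode enabled); in graph mode you would evaluate the op's output through a session instead.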

In order to load the TVM libraries, including libtvm_runtime.so and libtvm_dso_op.so, please install them or add their directory to LD_LIBRARY_PATH before running your script.

LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/of/incubator-tvm/build/ ./your_script.py

Now enjoy hacking and mixing the TVM runtime with your TensorFlow sessions.

How It Works

The implementation of TVMDSOOp is straightforward; the overall architecture is described below.

Since it is a kind of TensorFlow custom operator, we need to implement the computation kernel and register it as a TensorFlow operator. This operator supports a list of tensors as input arguments and settings for the shape/dtype of the output tensor.

With the TVM runtime APIs, we can load a TVM dynamic library as a Module and get the function that was registered by the user's TVM Python script. Although TensorFlow passes tensorflow::Tensor objects to kernels while the TVM runtime requires DLPack tensors for inference, TVMDSOOp automatically converts the tensor data for users at minimal cost. Users only need to optimize their TVM Python scripts and use the operators in a TensorFlow graph without extra integration work.
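
TVMDSOOp does this conversion in C++ inside the kernel, but the idea can be sketched in Python with the DLPack bridge; this is only a conceptual illustration, assuming TensorFlow >= 2.2 for the tf.experimental.dlpack API.

import tensorflow as tf
import tvm

# Conceptual illustration of exchanging tensor data between TensorFlow
# and TVM through DLPack, without copying the underlying buffer.
tf_tensor = tf.constant([1.0, 2.0, 3.0, 4.0])
capsule = tf.experimental.dlpack.to_dlpack(tf_tensor)  # TensorFlow -> DLPack
tvm_array = tvm.nd.from_dlpack(capsule)                # DLPack -> TVM NDArray
# ... a TVM function could now operate on tvm_array in place ...
tf_again = tf.experimental.dlpack.from_dlpack(tvm_array.to_dlpack())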

For more details and the implementation code, please refer to the merged pull request at https://github.com/apache/incubator-tvm/pull/4459 .

We have added the document here on Discuss first; please help review it if you have time @tqchen @FrozenGene @zhiics @gmagogsfm. We hope to add it to the official documentation once the content is ready.

@FrozenGene, @jwfromm, @zhiics it would be great if we could follow up with potential suggestions for the docs.

I think we actually need two things. One is thinking about how we should enable tests to make sure other changes in TVM won't break this functionality.

The other is adding an official tutorial. There are examples under docs/dev; you can probably take a look at them and add it there. Please also add a bit of background and motivation in the doc. A section about the design and implementation would be helpful as well.

I agree with @zhiics. An official tutorial is important. Besides @zhiics's content, we could also list one example of how to integrate it with a TensorFlow model end to end, not just the low-level tvm.build. This is the common situation in which users will want to use it.

Thanks @zhiics and @FrozenGene. We have a Keras example with TVMDSOOp as well, and we will update the document in Google Docs later, which may help with the review.
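
As a preview, a hypothetical sketch of such an end-to-end flow might wrap the TVM op from the earlier example in a tf.Module and export it as a SavedModel; the actual Keras example may look different.

import tensorflow as tf
from tvm.contrib import tf_op

# Reuse the "vector_add" library exported in the earlier example.
module = tf_op.OpModule("tvm_cpu_add.so")
tvm_add = module.func("vector_add", output_shape=[4], output_dtype="float")

class TVMAddModel(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([4], tf.float32),
                                  tf.TensorSpec([4], tf.float32)])
    def add(self, x, y):
        # Inside tf.function, the TVM op behaves like any other TF op.
        return tvm_add(x, y)

tf.saved_model.save(TVMAddModel(), "tvm_add_saved_model")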

@tobegit3hub thanks for this great work. I am trying to export an autotuned model with TVMDSOOp. Now I am stuck on how to register the func_name with tf_op.

# Build the Relay module (graph, lib, params) with the autotuned schedules.
with tvm.transform.PassContext(opt_level=3):
    graph, lib, params = relay.build_module.build(
        mod, target=target, params=params)
lib.export_library("model_tvm.so")

Is it supported now to export an autotuned model (rather than a single TVM op) as a TensorFlow op?

Currently only TVM ops are supported; model or graph objects are not supported yet.

Thanks. I have adapted your code to serve a TVM graph by calling graph_runtime.
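
For reference, a minimal sketch of serving such a graph with graph_runtime (the TVM 0.7-era API), assuming the graph, lib, and params from the relay.build call above; the input name "data" and input_data are placeholders.

import tvm
from tvm.contrib import graph_runtime

ctx = tvm.cpu(0)
module = graph_runtime.create(graph, lib, ctx)  # outputs of relay.build
module.set_input(**params)              # load the trained parameters
module.set_input("data", input_data)    # hypothetical input name and data
module.run()
output = module.get_output(0).asnumpy()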

Great. Are you willing to contribute the code, since TVMDSOOp is already in the TVM repository? We are happy to review it and help get it into TVM.

Yes, I am considering it. I have implemented TVMInitOp and TVMRunOp, so users can save mod.so/graph.json/params as TensorFlow assets. TVMInitOp loads the TVM graph when the TF model is loaded, and TVMRunOp runs the initialized graph.

Hi guys, I am using this tvmdsoop library and I might need your help. I have built TVM with tvmdsoop successfully; the .so files under my …/incubator/build path are libtvm_dso_op.so, libtvm_runtime.so, libtvm.so, and libtvm_topi.so. The TensorFlow I used was tensorflow==1.15.0 built from source (cxx_11_abi=1, because my gcc is 6.4.0). After that, I exported the library path to …/incubator/build and ran the test from this document.

But the test failed with an error, and I am not sure about the reason. Could you please help me? Thanks.

I believe using this needs CMake 3.12 or later because of the use of FindPython3 in your CMake modules. This would require an update to the install-from-source documentation, which currently implies that CMake > 3.5 is sufficient for building TVM.

Thanks for your advice; however, I was using CMake 3.17.0 to build TVM, and it seems the problem still exists.