Auto-tuning a relay layer and applying best history

Hi,

I am trying to auto-tune a relay layer (conv2d). So far I have been able to obtain the best schedule (stored in a log file), but I have not been able to use “autotvm.apply_history_best” to apply that schedule to the layer. I was hoping someone could help me figure out what I am doing wrong.

My workflow is the following:

  1. Tune the layer using the “topi_x86_conv2d_NCHWc” template
  2. Store log file as “conv2d.log”
  3. Create a module out of “relay.nn.conv2d”
  4. Try to apply the schedule “autotvm.apply_history_best(‘conv2d.log’)”

I receive the error:

TypeError: 'NoneType' object is not iterable

when applying the best history.

I am tagging @comaniac here since this issue is related to:

#-------------------Code starts here --------------------#

import os
import sys
import numpy as np
import tvm
import logging
from tvm import autotvm
from tvm import relay
from tvm.relay import testing
from tvm.autotvm.tuner import XGBTuner, GATuner, RandomTuner, GridSearchTuner
from tvm.autotvm.graph_tuner import DPTuner, PBQPTuner
import tvm.contrib.graph_runtime as runtime

# Details about the target (CPU/GPU)
target = "llvm -mcpu=core-avx2"
batch_size = 1
dtype = "float32"
log_file = "conv2d_x86.log"
#graph_opt_sch_file = "conv2d_x86_graph_opt.log"

# Set the input name of the graph
input_name = "data"

# Set the number of threads used for tuning based on the number of
# physical CPU cores on your machine.
num_threads = 16
os.environ["TVM_NUM_THREADS"] = str(num_threads)

# Template used to create the tuning task
func_create = 'topi_x86_conv2d_NCHWc'

# Arguments to create the task
args = (('TENSOR', (1, 3, 224, 224), 'float32'), ('TENSOR', (64, 3, 7, 7), 'float32'),
        (2, 2), (3, 3, 3, 3), (1, 1), 'NCHW', 'float32')

# Workload for the task
workload = ('conv2d', (1, 3, 224, 224, 'float32'), (64, 3, 7, 7, 'float32'),
            (2, 2), (3, 3, 3, 3), (1, 1), 'NCHW', 'float32')

task = autotvm.task.create(func_create, args=args, target=target, template_key='direct')
task.workload = workload
print(task)

# Define the type of auto-tuner (alternatives: GATuner, RandomTuner, GridSearchTuner)
tuner_obj = XGBTuner(task, loss_type='rank')

# Logging config (for printing the tuning log to the screen)
logging.getLogger('autotvm').setLevel(logging.DEBUG)
logging.getLogger('autotvm').addHandler(logging.StreamHandler(sys.stdout))

# We measure 10 times and take the average to reduce variance.
measure_option = autotvm.measure_option(
    builder='local',
    runner=autotvm.LocalRunner(number=10, repeat=1, min_repeat_ms=1000))

n_trial = 10
print(n_trial)

tuner_obj.tune(n_trial=n_trial,
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file('conv2d.log')])

dtype = 'float32'
data = relay.var("data", shape=(1, 3, 224, 224), dtype=dtype)
kernel = relay.var("kernel", shape=(64, 3, 7, 7), dtype=dtype)
out = relay.nn.conv2d(data, kernel, strides=(1, 1), padding=(3, 3, 3, 3), dilation=(1, 1),
                      data_layout='NCHW', out_dtype=dtype)
mod = relay.Module.from_expr(out)
print(mod)

# Compile kernels with history best records
with autotvm.apply_history_best('conv2d.log'):
    print("Compile...")
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build_module.build(mod, target="llvm", params=None)

Could you post the full stack dump in addition to the error message? It would be helpful to see which statement throws this error.

Hi comaniac,

Thanks a lot for your prompt response. Please find the error message below. The line that errors out is “graph, lib, params = relay.build_module.build(mod, target="llvm", params=None)”:

Traceback (most recent call last):
  File "autotune_relayconv2d.py", line 86, in <module>
    graph, lib, params = relay.build_module.build(mod, target="llvm", params=None)
  File "/home/smatizro/tvm/python/tvm/relay/build_module.py", line 248, in build
    graph_json, mod, params = bld_mod.build(func, target, target_host, params)
  File "/home/smatizro/tvm/python/tvm/relay/build_module.py", line 118, in build
    self._build(func, target, target_host)
  File "/home/smatizro/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 213, in __call__
    raise get_last_ffi_error()
TypeError: Traceback (most recent call last):
  [bt] (8) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::relay::ForwardRewriter::VisitExpr(tvm::RelayExpr const&)+0x2b) [0x7f24490db5fb]
  [bt] (7) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::relay::ExprMutator::VisitExpr(tvm::RelayExpr const&)+0x83) [0x7f2449222f13]
  [bt] (6) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x428) [0x7f24490d8558]
  [bt] (5) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, tvm::relay::ExprFunctor<tvm::RelayExpr (tvm::RelayExpr const&)>*)+0x13) [0x7f24490d6553]
  [bt] (4) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::relay::ForwardRewriter::VisitExpr_(tvm::relay::CallNode const*)+0x8de) [0x7f24490dc58e]
  [bt] (3) /mathworks/home/smatizro/tvm/build/libtvm.so(std::_Function_handler<void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*), void tvm::runtime::TypedPackedFunc<tvm::RelayExpr (tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>::AssignTypedLambda<tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)>(tvm::RelayExpr (*)(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&))::{lambda(tvm::runtime::TVMArgs const&, tvm::runtime::TVMRetValue*)#1}>::_M_invoke(std::_Any_data const&, tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&)+0x9d) [0x7f244907d65d]
  [bt] (2) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::RelayExpr tvm::relay::LayoutRewriter<tvm::relay::alter_op_layout::AlterTransformMemorizer>(tvm::relay::Call const&, tvm::Array<tvm::RelayExpr, void> const&, tvm::runtime::ObjectRef const&)+0x1168) [0x7f244907b2f8]
  [bt] (1) /mathworks/home/smatizro/tvm/build/libtvm.so(tvm::relay::alter_op_layout::AlterTransformMemorizer::CallWithNewLayouts(tvm::relay::Call const&, std::vector<tvm::RelayExpr, std::allocator<tvm::RelayExpr> > const&)+0x6a1) [0x7f2449079791]
  [bt] (0) /mathworks/home/smatizro/tvm/build/libtvm.so(+0xb6ded7) [0x7f24492e4ed7]
  File "/home/smatizro/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 78, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/smatizro/tvm/python/tvm/relay/op/nn/_nn.py", line 270, in alter_op_layout_conv2d
    return topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, op)
  File "</home/smatizro/.local/lib/python3.5/site-packages/decorator.py:decorator-gen-37>", line 2, in conv2d_alter_layout
  File "/home/smatizro/tvm/python/tvm/target.py", line 381, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/smatizro/tvm/topi/python/topi/x86/conv2d_alter_op.py", line 45, in _alter_conv2d_layout
    kh, kw = attrs.get_int_tuple("kernel_size")
  File "/home/smatizro/tvm/python/tvm/attrs.py", line 64, in get_int_tuple
    return tuple(x.value for x in self.__getattr__(key))
TypeError: 'NoneType' object is not iterable

I just ran the code changing from

relay.build_config(opt_level=3)

to

relay.build_config(opt_level=2)

and I no longer receive an error. Any ideas as to why this may be happening?

Looks like an issue in AlterOpLayout. @kevinthesun do you have any idea?
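In the meantime, if you want to keep the rest of the opt_level=3 pipeline, one possible workaround sketch (assuming your version's relay.build_config accepts a disabled_pass argument) is to disable just that pass when building the module (mod) from your script above:

# Hedged workaround sketch: keep opt_level=3 but skip only AlterOpLayout,
# which is the pass the backtrace goes through. Assumes relay.build_config
# in this TVM version supports the disabled_pass argument.
with autotvm.apply_history_best('conv2d.log'):
    with relay.build_config(opt_level=3, disabled_pass=["AlterOpLayout"]):
        graph, lib, params = relay.build_module.build(mod, target="llvm", params=None)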

Have you tried the default schedule? It looks like this error is not related to a specific dispatcher.

Hi @kevinthesun

Thanks for your help, could you please give me some information on how to get the default schedule for the relay layer?

Should I just build the module without using “with autotvm.apply_history_best”, like:

graph, lib, params = relay.build_module.build(mod, target="llvm -mcpu=core-avx2", target_host="llvm")

If I do that I get the message:

Cannot find config for target=llvm -mcpu=core-avx2, workload=('conv2d', (16, 3, 224, 224, 'float32'), (64, 3, 7, 7, 'float32'), (2, 2), (3, 3, 3, 3), (1, 1), 'NCHW', 'float32'). A fallback configuration is used, which may bring great performance regression.

By the way, is there any way to actually see the default schedule for that relay layer, similar to what you do with tvm.lower()?

Yes, in this case you are using the default schedule. The default schedule is defined in TOPI, but it is a bit difficult to locate the exact schedule used by your op. An ongoing PR is solving this problem.
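If you just want to look at what the fallback lowers to, a rough sketch (assuming the pre-0.7 layout where topi is importable as its own package, and reusing the shapes from your script) is to call the TOPI compute and schedule directly under your target and print the lowered IR:

import tvm
import topi

# Same shapes as the relay.nn.conv2d above; with no AutoTVM records in
# scope this dispatches to the fallback (default) x86 schedule.
data = tvm.placeholder((1, 3, 224, 224), name="data", dtype="float32")
kernel = tvm.placeholder((64, 3, 7, 7), name="kernel", dtype="float32")

with tvm.target.create("llvm -mcpu=core-avx2"):
    conv = topi.nn.conv2d(data, kernel, strides=(1, 1), padding=(3, 3, 3, 3),
                          dilation=(1, 1), layout="NCHW", out_dtype="float32")
    s = topi.generic.schedule_conv2d_nchw([conv])

# Print the lowered IR produced by the default schedule.
print(tvm.lower(s, [data, kernel, conv], simple_mode=True))

This only approximates what relay.build ends up using; it is not the exact per-op schedule.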


Thanks for your prompt reply. To summarize my findings with this code so far:

– The code runs with the default schedule.

– With opt_level = 2, I am able to build using the optimized schedule, applying it to the relay layer with “autotvm.apply_history_best”.

– Using opt_level = 3 or higher results in the error message I shared above.

NOTE: It is interesting that I can use opt_level = 3 for the same type of convolution (size/workload/CPU) in the tutorial “Auto-tuning a convolutional network for x86 CPU” without getting any errors.

Is it because the kernel_size attribute is not set when creating the conv2d layer?


Kevin,

After defining kernel_size = (7,7) in

out = relay.nn.conv2d(data, kernel, strides=strides, padding=padding, dilation=dilation, kernel_size = (7,7), data_layout='NCHW', out_dtype=dtype)
mod = relay.Module.from_expr(out)

it works with opt_level=3!

I was not expecting this to affect the build, since at opt_level=2 it was able to apply the optimized schedule. Thank you very much for taking the time to look into this issue, I really appreciate it :slightly_smiling_face:
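For anyone who lands on this thread later, here is a consolidated sketch of the working combination discussed above (explicit kernel_size, tuned records applied at opt_level=3; the literals are taken from the original script):

from tvm import autotvm, relay

# Explicit kernel_size so the x86 AlterOpLayout pass can read it at opt_level=3.
data = relay.var("data", shape=(1, 3, 224, 224), dtype="float32")
kernel = relay.var("kernel", shape=(64, 3, 7, 7), dtype="float32")
out = relay.nn.conv2d(data, kernel, strides=(1, 1), padding=(3, 3, 3, 3),
                      dilation=(1, 1), kernel_size=(7, 7),
                      data_layout='NCHW', out_dtype="float32")
mod = relay.Module.from_expr(out)

# Apply the tuned records and build with the full opt_level=3 pipeline.
with autotvm.apply_history_best('conv2d.log'):
    with relay.build_config(opt_level=3):
        graph, lib, params = relay.build_module.build(mod, target="llvm", params=None)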