SSD300 VGG16 error: tvm.error.OpNotImplemented: Operator L2Normalization is not supported in frontend MXNet

This question is from GitHub issue
https://github.com/dmlc/tvm/issues/3068
I trained an SSD model using mxnet_ssd (github.com/apache/incubator-mxnet/tree/master/example/ssd). I deployed the model and it works well in MXNet. Now I want to use TVM (cloned from GitHub and built with LLVM), but I get this error: tvm.error.OpNotImplemented: Operator L2Normalization is not supported in frontend MXNet.
I use this code to convert the model:

import mxnet as mx
import tvm
import nnvm

model_prefix = './model/deploy_ssd_vgg16_reduced_300'
epoch = 240
#model_prefix = './model/deploy_ssd_mobilenet_v2_300'
#epoch = 239
batch_size = 1
data_shape = (300, 300)
#ctx = mx.gpu(0)
shape_dict = {'data': (1, 3, *data_shape)}

# load_checkpoint returns (symbol, arg_params, aux_params)
load_symbol, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, epoch)

target = tvm.target.create("llvm")

opt_level = 3

nnvm_sym, nnvm_params = nnvm.frontend.from_mxnet(load_symbol, arg_params, aux_params)
with nnvm.compiler.build_config(opt_level=opt_level):
    graph, lib, params = nnvm.compiler.build(nnvm_sym, target, shape_dict, params=nnvm_params)

# export the compiled module, the graph definition, and the parameters
lib.export_library("./deploy_lib.so")
print('lib exported successfully')
with open("./deploy_graph.json", "w") as fo:
    fo.write(graph.json())
with open("./deploy_param.params", "wb") as fo:
    fo.write(nnvm.compiler.save_param_dict(params))
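
For reference, loading the exported artifacts back for inference looks roughly like this (a sketch; the zero tensor stands in for a real preprocessed image):

import numpy as np
import tvm
from tvm.contrib import graph_runtime

loaded_lib = tvm.module.load("./deploy_lib.so")
loaded_graph = open("./deploy_graph.json").read()
loaded_params = open("./deploy_param.params", "rb").read()

ctx = tvm.cpu(0)
module = graph_runtime.create(loaded_graph, loaded_lib, ctx)
module.load_params(loaded_params)
module.set_input('data', tvm.nd.array(
    np.zeros((1, 3, 300, 300), dtype='float32')))  # dummy input
module.run()
out = module.get_output(0)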

I know someone hit the same problem before (issue 1223), and it looks like it was fixed, but I still get the error.
I also tested the deploy_ssd_mobilenet_v2_300 model; it works well with no error.
The TVM version is 0.6.dev and the NNVM version is 0.8.0.
Does anyone know how to solve this?


Currently, the MXNet frontend of NNVM does not support L2Normalization.
You can use Relay instead of NNVM (relay.frontend.from_mxnet).

Example Usage: https://docs.tvm.ai/tutorials/frontend/from_mxnet.html
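
A minimal sketch of the Relay conversion path against the 0.6-era API, reusing model_prefix, epoch, and shape_dict from the script above:

import mxnet as mx
import tvm
from tvm import relay

sym, arg_params, aux_params = mx.model.load_checkpoint(model_prefix, epoch)
# from_mxnet takes the symbol, the input shapes, and the two parameter dicts
net, params = relay.frontend.from_mxnet(sym, shape_dict,
                                        arg_params=arg_params,
                                        aux_params=aux_params)
with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(net, target="llvm", params=params)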

I tested relay.frontend.from_mxnet, but there is still an error: AssertionError: Does not support dilation.

When TVM optimizes the Relay graph, the AlterOpLayout pass converts the NCHW layout to NCHWc for x86. However, dilation > 1 is not yet supported for the NCHWc layout.
You can avoid this error by using opt_level=2 (or 1) to disable AlterOpLayout.
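
For example, keeping the sketch above but lowering the optimization level:

# opt_level=2 skips AlterOpLayout, so the graph stays in NCHW
# and no unsupported NCHWc dilated convolution is generated.
with relay.build_config(opt_level=2):
    graph, lib, params = relay.build(net, target="llvm", params=params)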

Thanks for your reply. It works. :grinning: But TVM's inference speed is slower than MXNet's (TVM Relay: 3 s/frame, MXNet: 1.5 s/frame).
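
For reference, this is roughly how the TVM side can be timed with graph_runtime's time_evaluator (a sketch; img is assumed to be a preprocessed (1, 3, 300, 300) float32 array, and graph/lib/params come from the build above):

import tvm
from tvm.contrib import graph_runtime

ctx = tvm.cpu(0)
m = graph_runtime.create(graph, lib, ctx)
m.set_input(**params)
m.set_input('data', tvm.nd.array(img))
# time_evaluator averages over several runs, excluding one-time setup cost
ftimer = m.module.time_evaluator("run", ctx, number=10)
print('mean inference time: %.3f s' % ftimer().mean)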

I got similar results: TVM's inference speed is slower than MXNet's.

How can this be explained?