Here’s a minimal example of non-working code. I fixed one of the problems (it turned out to be a bug in relay/frontend/keras.py), but I can’t figure out how to fix the second one.

I need help finding where to look for the bug, because it is probably in the C++ code. I am using a version from GitHub (commit 66235d1c3).

```python
import numpy as np
import keras
from keras.layers import Input, Conv2D, UpSampling2D

X = Input(shape=(1, 16384, 1), name="X")
x = X
x = UpSampling2D(size=2)(x)
x = Conv2D(filters=32,
           kernel_size=(1, 54),
           activation=None, padding='same',
           strides=(1, 1))(x)
x = Conv2D(filters=1,
           kernel_size=(1, 54),
           activation=None, padding='same',
           strides=(1, 1))(x)
model = keras.Model(inputs=X, outputs=x, name='generator')

# make sure the keras model works by itself - it does
data = np.zeros(16384)
data = data.reshape((1, 1, len(data), 1))
P = model.predict(data)

import tvm
import tvm.relay as relay
# tried different shapes - no success
shape_dict = {'X': [48, 1, 16384, 1]}
mod, params = relay.frontend.from_keras(model, shape_dict)
```
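A note on the shape I pass in `shape_dict` (this is my own reading of the situation, so treat it as an assumption): Relay’s conv2d defaults to NCHW, while Keras with `channels_last` treats the same four numbers as NHWC, so the two sides disagree about which dimension is the channel:

```python
# The same 4-D shape interpreted under the two layouts
# (my assumption about where the mismatch comes from).
shape = [48, 1, 16384, 1]

nchw = dict(zip("NCHW", shape))   # Relay's default conv2d layout
nhwc = dict(zip("NHWC", shape))   # what Keras channels_last actually means

print(nchw)  # {'N': 48, 'C': 1, 'H': 16384, 'W': 1}
print(nhwc)  # {'N': 48, 'H': 1, 'W': 16384, 'C': 1}
```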

Output:

```
In `main`:
v0.0.3
fn (%X: Tensor[(48, 1, 16384, 1), float32], %v_param_1: Tensor[(32, 1, 1, 54), float32], %v_param_2: Tensor[(32), float32], %v_param_3: Tensor[(1, 32, 1, 54), float32], %v_param_4: Tensor[(1), float32]) {
%0 = nn.upsampling(%X, scale=2, layout="NHWC");
%1 = nn.pad(%0, pad_width=[[0, 0], [0, 0], [0, 0], [26, 27]]);
%2 = nn.conv2d(%1, %v_param_1, channels=32, kernel_size=[1, 54]) in particular dimension 1 conflicts 2 does not match 1; unable to unify: `Tensor[(32, 2, 1, 54), float32]` and `Tensor[(32, 1, 1, 54), float32]`; ;
%3 = nn.bias_add(%2, %v_param_2);
%4 = nn.pad(%3, pad_width=[[0, 0], [0, 0], [0, 0], [26, 27]]);
%5 = nn.conv2d(%4, %v_param_3, channels=1, kernel_size=[1, 54]) in particular dimension 0 conflicts 32 does not match 1dimension 1 conflicts 0 does not match 32; unable to unify: `Tensor[(32, 0, 1, 54), float32]` and `Tensor[(1, 32, 1, 54), float32]`; ;
nn.bias_add(%5, %v_param_4)
}
```
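The first unify failure makes sense to me if the upsampling runs under the wrong layout: `nn.upsampling(..., layout="NHWC")` doubles dims 1 and 2, but the following conv2d reads the result as NCHW, so what it thinks is the channel dimension becomes 2 while the kernel’s in-channels is 1. A rough shape calculation (again, my assumption):

```python
# Shapes if nn.upsampling scales dims 1 and 2 (NHWC's H and W), but the
# following conv2d reads the result as NCHW - my reading of the first error.
x = [48, 1, 16384, 1]                  # input, fed as NCHW
up = [x[0], x[1] * 2, x[2] * 2, x[3]]  # NHWC upsampling, scale=2

in_channels_seen_by_conv2d = up[1]     # NCHW position of C
kernel_in_channels = 1                 # weight shape (32, 1, 1, 54)

print(up)  # [48, 2, 32768, 1]
# this mismatch is exactly the "2 does not match 1" in the error above
assert in_channels_seen_by_conv2d != kernel_in_channels
```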

If I change `_convert_upsample` in relay/frontend/keras.py to the following, the first error goes away:

```python
def _convert_upsample(inexpr, keras_layer, _):
    _check_data_format(keras_layer)
    upsample_type = type(keras_layer).__name__
    params = {'layout': 'NHWC'}
    params = {}  # !!!!!!! my override: drop the layout hint
```

```
In `main`:
v0.0.3
fn (%X: Tensor[(48, 1, 16384, 1), float32], %v_param_1: Tensor[(32, 1, 1, 55), float32], %v_param_2: Tensor[(32), float32], %v_param_3: Tensor[(1, 32, 1, 55), float32], %v_param_4: Tensor[(1), float32]) {
%0 = nn.upsampling(%X, scale=2);
%1 = nn.conv2d(%0, %v_param_1, padding=[0, 27], channels=32, kernel_size=[1, 55]);
%2 = nn.bias_add(%1, %v_param_2);
%3 = nn.conv2d(%2, %v_param_3, padding=[0, 27], channels=1, kernel_size=[1, 55]) in particular dimension 0 conflicts 32 does not match 1dimension 1 conflicts 0 does not match 32; unable to unify: `Tensor[(32, 0, 1, 55), float32]` and `Tensor[(1, 32, 1, 55), float32]`; ;
nn.bias_add(%3, %v_param_4)
}
```

But I can’t get rid of that strange `Tensor[(32, 0, 1, 55), float32]`. I don’t understand why one of the dimensions is set to zero. It looks like a mistake in the C++ code: I traced the Python code down to the C++ call and found no mention of that weird 0-dimension shape anywhere along the way. Which files in the C++ code should I check to see where this shape appears?