[Relay] Is this a bug or am I missing something?

I’ve been writing some relay graphs and encountered a confusing issue. I’m not sure if I’m doing something wrong or if this behavior is a bug.

When using a Relay constant initialized with an iterable such as a numpy array or an NDArray, I get outputs that look like the constant wasn’t properly initialized. Here’s a snippet of code that demonstrates the issue.

First, here’s an example that works fine and produces reasonable outputs.

import tvm
from tvm import relay
from tvm.contrib import graph_runtime
import numpy as np

A = relay.var('A', shape=[1, 4], dtype='float32')
B = relay.sum(relay.multiply(A, relay.const(1.)), axis=-1)
B_func = relay.Function([A], B)

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(B_func, 'llvm', params={})

module = graph_runtime.create(graph, lib, tvm.cpu())
module.set_input('A', np.ones(shape=[1, 4]))
module.run()
print(module.get_output(0))

This produces an output of 4 as expected. However, changing the constant to wrap [1.] instead of 1. gives a nonsense output.

A = relay.var('A', shape=[1, 4], dtype='float32')
B = relay.sum(relay.multiply(A, relay.const(np.asarray([1.]))), axis=-1)
B_func = relay.Function([A], B)

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(B_func, 'llvm', params={})

module = graph_runtime.create(graph, lib, tvm.cpu())
module.set_input('A', np.ones(shape=[1, 4]))
module.run()
print(module.get_output(0))

The above produces an inconsistent output that appears to be a result of multiplying with uninitialized memory.

I took a quick look through the tests and noticed that there were no cases that use a constant array, so it’s plausible this bug could slip by. Any thoughts on why this is happening?

If this is happening, it definitely looks like a bug; I’ll try to dig into it tomorrow.

I realized after looking further that you are not passing the parameters to the graph runtime. The graph runtime does not support inline constants, so when compiling for it we lift the constants out of the program and return them in params.

See:

import tvm
from tvm import relay
import numpy as np
from tvm.contrib import graph_runtime

A = relay.var('A', shape=[1, 4], dtype='float32')
B = relay.sum(relay.multiply(A, relay.const(np.asarray([1.]))), axis=-1)
B_func = relay.Function([A], B)

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(B_func, 'llvm', params={})

module = graph_runtime.create(graph, lib, tvm.cpu())
module.set_input(**params)
module.set_input('A', np.ones(shape=[1, 4]))
module.run()
print(module.get_output(0))
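
If you want to confirm the lifting, you can inspect the params dict that relay.build returns. This is just a quick sketch; the compiler-generated key name (something like p0) is an assumption and may vary between versions:

import tvm
from tvm import relay
import numpy as np

A = relay.var('A', shape=[1, 4], dtype='float32')
B = relay.sum(relay.multiply(A, relay.const(np.asarray([1.]))), axis=-1)
B_func = relay.Function([A], B)

with relay.build_config(opt_level=3):
    graph, lib, params = relay.build(B_func, 'llvm', params={})

# The inline constant no longer lives in the program; it was lifted into
# params under a compiler-generated name (assumed here to look like 'p0').
for name, value in params.items():
    print(name, value.asnumpy())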

You can also use the high-level interface when experimenting; the executor binds the lifted params for you, so no explicit set_input(**params) call is needed:

import tvm
from tvm import relay
import numpy as np
from tvm.contrib import graph_runtime

A = relay.var('A', shape=[1, 4], dtype='float32')
B = relay.sum(relay.multiply(A, relay.const(np.asarray([1.]))), axis=-1)
B_func = relay.Function([A], B)

with relay.build_config(opt_level=3):
    ex = relay.create_executor("graph", target='llvm')
    B_f = ex.evaluate(B_func)
    print(B_f(np.ones(shape=[1, 4])))
    print(B_f(np.zeros(shape=[1, 4])))


That makes perfect sense, thanks for the great answer.