[Bugfix] Support for placeholders defining tensor shapes

I created a fix in response to the discussion started here. A simple example illustrating the problem:

import tvm

A = tvm.placeholder((1, 1), dtype='int32', name="A")
C = tvm.compute((A[0, 0], A[0, 0]), lambda i, j: 0)
sch = tvm.create_schedule(C.op)
f = tvm.build(sch, [A, C])

It seems the above snippet should be supported, since an analogous program using var instead of placeholder can be built:

A = tvm.var(dtype='int32')
C = tvm.compute((A, A), lambda i, j: 0)
sch = tvm.create_schedule(C.op)
f = tvm.build(sch, [A, C])

Upon investigation, I found that the error is caused by the following offending assertion:

assert((A(0, 0) == float32(arg2.shape[0])), "Argument arg2.shape[0] has an unsatisfied constraint")

Here A(0, 0) is a Call node with call_type=Call::Halide, which would normally be mutated into A[0] during the StorageFlatten pass. However, in this instance the assertion is created during the arg binding phase of MakeAPI, so it makes it to codegen without being flattened.
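
For anyone reproducing this, the unflattened call can be observed without going all the way to tvm.build. A minimal sketch, assuming the default lowering pipeline (tvm.lower with simple_mode off runs MakeAPI, so the printed IR includes the argument-binding asserts):

import tvm

A = tvm.placeholder((1, 1), dtype='int32', name="A")
C = tvm.compute((A[0, 0], A[0, 0]), lambda i, j: 0)
sch = tvm.create_schedule(C.op)

# Printing the lowered function shows the assert that still
# references the Halide-style call A(0, 0) instead of a load A[0].
print(tvm.lower(sch, [A, C]))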

The avenue I investigated for a fix is to call StorageFlatten on the Buffer shapes as they are created. This approach proved fruitful in showing that these programs can build and run successfully. However, I was wondering if it would be more robust to have the buffer shapes exist in the IR so that they are flattened when StorageFlatten runs.
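
For reference, this is the kind of end-to-end check I would expect to pass with the fix in place. It is a sketch rather than part of the patch: the concrete input value 3 and the all-zeros expected output are my assumptions, the latter following from the constant-0 compute body:

import tvm
import numpy as np

A = tvm.placeholder((1, 1), dtype='int32', name="A")
C = tvm.compute((A[0, 0], A[0, 0]), lambda i, j: 0)
sch = tvm.create_schedule(C.op)
f = tvm.build(sch, [A, C])

# The output shape is taken from the value stored in A at runtime.
ctx = tvm.cpu(0)
a = tvm.nd.array(np.array([[3]], dtype='int32'), ctx)
c = tvm.nd.array(np.empty((3, 3), dtype='int32'), ctx)
f(a, c)
np.testing.assert_array_equal(c.asnumpy(), np.zeros((3, 3), dtype='int32'))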

Any suggestions are appreciated!