so here is something I bumped into:
"float" means different things in different places, not always as expected.
The background is that C/C++, PyTorch, and others interpret float to mean 32-bit floating point numbers, aka float32 - and arguably float32 is the most common datatype in deep learning. NumPy, on the other hand, interprets it as float64.
```
>>> torch.float
torch.float32
>>> numpy.ones(1, dtype='float').dtype
dtype('float64')
```
This carries over to the PyTorch JIT, too - notice the `Float` in the traced graph:

```
>>> torch.jit.trace(lambda x: x, torch.ones(1, dtype=torch.float32)).graph
graph(%0 : Float(1:1)):
  return (%0)
```
Now TVM relay is sympathetic to both views:
```
>>> tvm.relay.expr.const(1, dtype="float").data.dtype
'float64'
>>> tvm.relay.expr.const(1.).data.dtype
'float32'
>>> tvm.relay.expr.var("x", "float")
Var(x, ty=TensorType([], float32))
```
To the naive user that I am, there seems to be an inconsistency between what dtype=“float” means for const and var.
In the ill-fated PR #5756 I proposed to make const - which currently defers to NumPy for its decision - consider float to mean float32, consistent with var. This has, however, met opposition and the request that I highlight it for discussion here.
This hit me while looking at the PyTorch frontend.
Some obvious potential routes of action w.r.t. the behaviour of the dtype argument:
- keep it as is,
- standardize on float = float32,
- standardize on float = float64,
- prohibit passing "float" as dtype and insist on "float32" or "float64". This seems safest but would mean that we would want to fix everything currently using it.
Variants could be with or without some deprecation warning as an intermediate step.
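For the deprecation-warning variant, the idea can be sketched as a small normalization helper. This is purely illustrative - `normalize_dtype` is a hypothetical name, not an actual TVM function - but it shows how "float" could be mapped to an explicit dtype while nudging callers toward spelling it out:

```python
import warnings

def normalize_dtype(dtype: str) -> str:
    """Map the ambiguous "float" to an explicit "float32".

    Hypothetical sketch of the deprecation-warning option discussed
    above; not part of the actual TVM API.
    """
    if dtype == "float":
        warnings.warn(
            '"float" is ambiguous; interpreting it as "float32". '
            'Pass "float32" or "float64" explicitly instead.',
            DeprecationWarning,
        )
        return "float32"
    # Explicit dtypes pass through unchanged.
    return dtype
```

After a deprecation period, the warning could be turned into an error, which lands at the "prohibit" option without breaking existing code overnight.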
In terms of cleaning up code using “float”, I have a PR to submit today or tomorrow that attempts to clean up the use of types in the PyTorch frontend (mainly distinguishing more between “this is a TVM dtype” and “this is a PyTorch scalar type name”), but I won’t be looking at other uses.
I would like to add that I have no immediate stake in the outcome - I thought it would be useful to allow consistent use of float as a dtype, but I have personally since resolved to just not use “float” for identifying a dtype in TVM.