Hi all!
I’m using TVM for post-training quantization and noticed that, as of now, conv2d_transpose operations cannot be quantized and fall back to float32.
- Is there a technical limitation behind this, or is it simply a missing feature?
- If it’s a missing feature, which parts of the codebase would I need to modify to add such support?
Could the community experts help clarify these questions? @vinx13 @janimesh or @ziheng @shoubhik, I would highly appreciate your response.
Thank you & Best regards, Robert