Questions


How to map an API function from the C++ backend into the Python frontend? (4)
Problem with tune_relay_x86.py (3)
A few questions regarding autotvm and tune_relay_x86 (6)
[TOPI] How to map our own instruction set to the operator library? (1)
How can I build TVM in Windows? (3)
How can I run AutoTVM once for an operator but apply the best config to other input/output shapes with the best performance? (4)
Any documentation on winograd convolution scheduling on CUDA? (2)
[Quantization] Which operators are quantized in TVM? (4)
Does anyone know how to set the compile target for Jetson Nano? (1)
Add new ops for relay.frontend.from_onnx.py (4)
[Relay][split] Don't know how to handle type 'tvm.relay.expr.TupleWrapper' (5)
Python debugger segfaults with tvm (8)
[SOLVED] Auto-tuning CUDA: Poor Performance (1)
[Relay][Expr] How can I swap an input's data columns in Relay? (1)
How can we run TVM without pynq-specific cma library? (1)
How to decide the tiling size? (4)
ZeroDivisionError when compiling an MXNet model targeting CUDA under Windows (2)
"Too large factor for unrolling" error when auto-tuning a model for Android ARM CPU (3)
[RELAY]Downcast from relay.IncompleteType to relay.TensorType failed (5)
How can we pass tuple inputs to a Relay function? (7)
[Frontend][ONNX][TopK] I've added a TopK op (7)
Auto-tune for different CPUs (2)
The calculation of INT8 (2)
Quantization failed for ResNet50 (8)
Question about Conv1D support (2)
Generating GPU code in TVM (and check out the generated code) (2)
[Quantization] How is the calibration dataset used? (3)
[Relay] Register op pattern based on target (5)
Getting TVM-generated CUDA code (2)
I don't see a performance difference between 'cpu', 'vulkan', and 'opencl' modes and don't know what's wrong; any help appreciated, thanks (5)