Uncategorized


Welcome to TVM Community (1)
C++ API test running on NVIDIA GPU: run time increases with different batch_size and repeat values (1)
[SOLVED] (M,N)x(N,1) Matrix-Matrix Multiplication vs. Matrix-Vector Mult (2)
tvm.select error when I try the tutorial (2)
TVM TF Converter Bug with Inception_v4 network (11)
TVM module deployment problem with C++ (1)
Example of running inference on an int8 quantized model? (2)
Model can be imported by NNVM but fails on Relay (5)
[SOLVED] Adding tvm4j to a Bazel build (4)
Model performance is much slower on Android Device using the Java API, compared to the run evaluator (3)
Deploying a TVM model with LLC options (3)
Relay Alter OP Layout Pass Regression (10)
[Resolved] Kernel version mismatch (6)
AutoTVM + Relay tutorials broken? (6)
AutoTVM error when loading ONNX model and invoking NNVM compiler (5)
Dependency on the decorator and Pillow packages (3)
LSTM CPU version (1)
CL_INVALID_WORK_GROUP_SIZE error after auto-tuning for OpenCL on Android Device (11)
OpenCL Runtime error (14)
Question about TVMOpParam.flatten_data (1)
No OpenCL platform matched given existing options (8)
How can I supply pre-compiled cl kernels to TVM? (3)
How does the C++ API work (and is there a mistake in the example?) (5)
TVM is slower than MXNet when using batch forward (1)
AutoTVM failed on Android Device with error message "Do not know how to handle return type code 113" (2)
How to load OpenCL kernels when using the runtime API for C++ (2)
Fusing conv-act-pooling (2)
Show unflattened tensor in tvm.lower() (5)
Choice of IR: SSA or ANF? (6)
Using external lib (cuDNN) with INT8 ops (4)