Support for different GPU architectures

Hi there,

I tuned the TVM GPU model of resnet-100 for the CUDA target on a GTX 1080 Ti (Pascal architecture) and got a good speedup. Great work, guys!

But I have a question about deploying and running TVM modules on different NVIDIA GPU architectures. When I tried to run this module on a Tesla K80 (Kepler architecture), I got the following error: "CUDAError: Check failed: ret == 0 (-1 vs. 0) : cuModuleLoadData(&(module_[device_id]), data_.c_str()) failed with error: CUDA_ERROR_INVALID_PTX"

As I understand it, I need to build a new TVM module for the other architecture (Tesla K80). Is that the right way? Or is there a way to build one TVM module that covers different GPU architectures at the same time?
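To make the question concrete, here is roughly what I mean by building once per architecture (a minimal sketch, assuming the Relay build API of a recent TVM and the `-arch` option of the CUDA target; `mod` and `params` come from a frontend importer, and the file names are placeholders):

    import tvm
    from tvm import relay

    # Assumption: `mod` and `params` come from a Relay frontend importer,
    # e.g. relay.frontend.from_onnx(...) or relay.frontend.from_mxnet(...).
    # sm_61 is the compute capability of the GTX 1080 Ti (Pascal),
    # sm_37 is the compute capability of the Tesla K80 (Kepler).
    for arch in ["sm_61", "sm_37"]:
        target = tvm.target.Target("cuda -arch=%s" % arch)
        with tvm.transform.PassContext(opt_level=3):
            lib = relay.build(mod, target=target, params=params)
        # Placeholder file name: one shared library per architecture.
        lib.export_library("resnet_%s.so" % arch)

Or can a single build cover both devices?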

Thank you in advance.

I found one way to solve this problem. If I tune the model on the K80, the resulting module works on the GTX 1080 Ti and other GPUs, and I get almost the same performance on the GTX 1080 Ti.

Hi, can you share your solution?

It’s possible that the schedule AutoTVM found for the 1080 Ti cannot be applied to the K80. The current AutoTVM assumes the model will be tuned for every device. On the other hand, you can load the 1080 Ti tuning log when you start tuning for the K80, and AutoTVM should then find a decent schedule for the K80 in a shorter time.
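For reference, the transfer-learning step looks roughly like this (a sketch, assuming `tasks` were already extracted with AutoTVM and measurements run locally; the log file names are placeholders and the tuner options may need adjusting for your setup):

    import os
    from tvm import autotvm
    from tvm.autotvm.tuner import XGBTuner

    log_1080ti = "resnet_1080ti.log"  # existing tuning log from the 1080 Ti
    log_k80 = "resnet_k80.log"        # new log to be produced on the K80

    # Assumption: `tasks` were extracted from the model, e.g. with
    # autotvm.task.extract_from_program(...).
    for task in tasks:
        tuner = XGBTuner(task, loss_type="rank")
        # Seed the cost model with the 1080 Ti history (transfer learning)
        # so tuning for the K80 converges in fewer trials.
        if os.path.isfile(log_1080ti):
            tuner.load_history(autotvm.record.load_from_file(log_1080ti))
        tuner.tune(
            n_trial=1000,
            measure_option=autotvm.measure_option(
                builder=autotvm.LocalBuilder(),
                runner=autotvm.LocalRunner(number=10, repeat=1, timeout=10),
            ),
            callbacks=[autotvm.callback.log_to_file(log_k80)],
        )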

@ydy Nothing special. I just used this tutorial to tune my model on the K80 and saved the TVM module. Then I can use that TVM module on the GTX 1080 Ti.
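Loading the saved module on the other GPU is then just the normal runtime flow, roughly like this (a sketch with placeholder names; I am using the graph executor API of a recent TVM, older versions use tvm.contrib.graph_runtime instead):

    import numpy as np
    import tvm
    from tvm.contrib import graph_executor

    # Placeholder file name for the library exported after tuning on the K80.
    lib = tvm.runtime.load_module("resnet_k80.so")
    dev = tvm.cuda(0)
    module = graph_executor.GraphModule(lib["default"](dev))

    # Placeholder input name/shape; use whatever the model actually expects.
    module.set_input("data", np.random.rand(1, 3, 224, 224).astype("float32"))
    module.run()
    out = module.get_output(0).numpy()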

@comaniac thank you for the great idea.