[RFC] Chameleon RL integration

Hi Folks,

Perhaps some of you have already seen (or even know the authors), or have heard about this work:

Do you think it would be opportune to integrate it into TVM mainline? I haven't contacted the authors yet, but I would love to help with the integration steps into TVM.


Looks cool. Has anybody validated the experiments in their paper? If this work really addresses AutoTVM's long-tuning-time pain point while achieving similar inference performance, I'm definitely interested.

Not validated yet (on my side).

There was a review process with some details, including their willingness to merge it into Apache TVM.

It would be great if we could evaluate the performance of a sufficient number of modern models before considering an RFC. The paper only evaluates AlexNet, VGG-16, and ResNet-18, but those models are already well tuned in TVM. We should at least evaluate the ResNet, MobileNet, and DenseNet families, and even some CV models like SSD and YOLO.

@masahi @comaniac

  • I will do some personal evaluation on Mali / ARM / quantized CUDA with YOLO first.
  • However, even if it underperforms (hopefully not the case, not even slightly), it could still be a nice addition alongside the existing random, xgb, ga, and gridsearch tuners (see the sketch below).
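To illustrate where it would sit: AutoTVM already exposes its tuners through a uniform per-task interface, so an upstreamed Chameleon tuner would just become one more option. A minimal sketch, assuming a hypothetical `ChameleonTuner` class (the name is illustrative; it does not exist in TVM or come from the paper):

```python
from tvm import autotvm

def create_tuner(task, tuner_name):
    """Select an AutoTVM tuner for one task, as tutorials commonly do."""
    if tuner_name == "xgb":
        return autotvm.tuner.XGBTuner(task)
    elif tuner_name == "random":
        return autotvm.tuner.RandomTuner(task)
    elif tuner_name == "ga":
        return autotvm.tuner.GATuner(task)
    elif tuner_name == "gridsearch":
        return autotvm.tuner.GridSearchTuner(task)
    elif tuner_name == "chameleon":
        # Hypothetical: shown only to mark where an upstreamed
        # Chameleon tuner would plug in alongside the others.
        return autotvm.tuner.ChameleonTuner(task)
    raise ValueError("unknown tuner: " + tuner_name)
```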

I contacted the authors and got confirmation of their willingness to upstream, so I will help out wherever necessary.


@cbalint13 I think xgb, random, etc. are not the right things to compare against. Rather, this is an alternative to our search module based on simulated annealing (judging from their abstract; I haven't read the paper).
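To make the distinction concrete: AutoTVM's model-based tuners pair a cost model with a `ModelOptimizer`, and the default optimizer is the simulated-annealing one. An RL search would replace that optimizer, not the whole tuner. A rough sketch against the `ModelOptimizer` interface in `tvm.autotvm.tuner.model_based_tuner` (the `policy` object and its `propose` method are placeholders, not Chameleon's actual algorithm):

```python
import numpy as np
from tvm.autotvm.tuner.model_based_tuner import ModelOptimizer

class RLOptimizer(ModelOptimizer):
    """Sketch of an RL-based drop-in for SimulatedAnnealingOptimizer.

    Hypothetical: the proposal/selection logic below is a placeholder
    to show the integration point, not the paper's method."""

    def __init__(self, task, policy):
        super(RLOptimizer, self).__init__()
        self.task = task
        self.policy = policy  # assumed: an RL policy proposing config indices

    def find_maximums(self, model, num, exclusive):
        # Same contract as the SA optimizer: return `num` config indices
        # the cost model scores highly, skipping those in `exclusive`.
        candidates = self.policy.propose(self.task, num * 4)
        scores = model.predict(candidates)
        order = np.argsort(-scores)
        return [int(candidates[i]) for i in order
                if candidates[i] not in exclusive][:num]
```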

We can integrate the adaptive sampling part first. It is the easiest part to integrate (I'd guess the core implementation is under 200 LoC), and it is also the most effective part according to the results in their paper.
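As I understand it, the idea of adaptive sampling is to cluster the candidate configurations produced by the search and measure only one representative per cluster on real hardware, cutting the number of costly measurements. A rough sketch of that idea using k-means (the feature encoding and cluster count here are my assumptions, not the paper's exact procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

def adaptive_sample(candidate_feats, num_samples):
    """Pick `num_samples` representative candidates to measure on
    hardware instead of measuring the whole batch.

    candidate_feats: (n, d) array, one feature vector per candidate
    config (the encoding is an assumption, not from the paper).
    """
    km = KMeans(n_clusters=num_samples, n_init=10).fit(candidate_feats)
    picked = []
    for center in km.cluster_centers_:
        # Measure the real candidate closest to each cluster center.
        dists = np.linalg.norm(candidate_feats - center, axis=1)
        picked.append(int(np.argmin(dists)))
    return picked
```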

The RL part does’t show too much improvement according to their experiments.