You can use a variable, but that does not address the underlying issue: autotuning only tunes for concrete shapes. You can likely get good coverage for many shapes by tuning for just a few concrete ones, but there will likely be corner cases.
Another way to provide general support for many shapes is to fix the splits of loop axes and handle the uneven edges separately. Suppose a loop is split by a factor of 128, but the current extent is not a multiple of that factor, e.g., 1000. One approach is to first run the 896 iterations (7 full tiles) according to the fixed split, and then handle the remaining 104 iterations separately. However, we currently do not have full support for this use case.
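A minimal sketch of this main/tail idea in plain Python (the function name and tiling factor are illustrative, not from any particular framework): the loop is tiled by a fixed factor, the "even" part runs as full tiles, and the remainder falls back to an untiled epilogue loop.

```python
def tiled_sum(xs, factor=128):
    """Sum a list using a fixed tile size plus a remainder loop."""
    n = len(xs)
    main = (n // factor) * factor  # e.g. 896 when n = 1000
    total = 0
    # Main part: complete tiles of size `factor`, matching the fixed split.
    for outer in range(0, main, factor):
        for inner in range(factor):
            total += xs[outer + inner]
    # Tail part: the remaining n - main iterations (e.g. 104 when n = 1000)
    # handled by a plain loop instead of the tiled schedule.
    for i in range(main, n):
        total += xs[i]
    return total

print(tiled_sum(list(range(1000))))  # same result as sum(range(1000))
```

In a real kernel the main part would use the specialized, tuned schedule, while the tail could use a slower generic path, since it covers at most `factor - 1` iterations.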
In practice, we find that single-shape kernels offer the highest performance due to their high degree of specialization, and that they are suitable for common use cases (e.g., NN inference).