Thanks for that perspective. My background is in traditional compilers, and I agree that the -O levels are a trade-off between compile time, compiled performance, code size, and IEEE compliance (-Ofast). Beyond that, they provide ease of use with a simple set of expectations that people are used to from a compiler or compiler driver, and they give folks a convenient way to test what the compiler does or doesn't do while still expecting a reasonable level of performance. I've always viewed -f options that aren't enabled as part of a standard -O or -W level as a bit of a cop-out that helps with point fixes, but equally I've observed they are the most at risk of bit-rotting if they aren't exercised heavily enough. On top of that, there are other heuristics you can tune with the --param option.
To my mind, trying to establish a well-defined set of criteria for the -O levels sounds useful enough to consider. Given the combinatorial explosion of frameworks x targets, we do have a problem making things simpler, so a taxonomy for understanding the -O<levels> is probably useful, perhaps even to the point of considering different passes for different frameworks.