Regardless of design paradigm, isn't it the frequency response and distortion measured in the end that matter? These are components designed with certain criteria in mind that are obviously not interchangeable and must be matched together somehow. It's important to be aware of the technical details that matter, like impedance characteristics, to have the best chance of getting the best results. But in the end, I do not think either paradigm can be measured as definitively better, although I suspect that, given the way these things are usually determined (via the commonly accepted distortion measurements and so on), the more common voltage paradigm measures better when done correctly.
Mapman, to put it in a nutshell, the short answer to the first question above is 'no'. The longer answer is that the ear cares about certain distortions far more than others. In addition, the ear will interpret (as I have mentioned previously) some distortions as tonality, and will weigh them more heavily than actual frequency response errors or inaccuracies.
An excellent example is how some amps can sound bright even though their measured frequency response is perfectly flat. This is because trace amounts of odd-ordered harmonics are interpreted by the ear as brightness, even though they barely register on the instruments.
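To make that concrete, here is a quick numerical sketch (my own toy example, not a measurement of any real amplifier): a 1 kHz tone is passed through a made-up gain stage with a very small odd-order nonlinearity. The fundamental level, which is what a frequency-response sweep reports, is essentially unchanged, yet an FFT shows 3rd and 5th harmonic residue down around -70 dB and -95 dB, the kind of 'trace' content being described above even while the response plot still looks ruler flat.

```python
# Toy illustration only: an invented nonlinearity, not a model of any real amp.
import numpy as np

fs = 96_000                       # sample rate, Hz
t = np.arange(fs) / fs            # 1 second of samples
x = np.sin(2 * np.pi * 1000 * t)  # 1 kHz test tone

# Hypothetical gain stage: unity gain plus tiny cubic and quintic terms
# (odd-order terms generate 3rd, 5th, ... harmonics).
y = x + 1e-3 * x**3 + 3e-4 * x**5

# Single-sided spectrum, normalized so a full-scale sine reads 0 dB.
spectrum = np.abs(np.fft.rfft(y)) / (len(y) / 2)
freqs = np.fft.rfftfreq(len(y), 1 / fs)

def level_db(f_hz):
    """Level of the spectral bin nearest f_hz, in dB."""
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f_hz))])

print("fundamental (1 kHz): %6.3f dB" % level_db(1000))   # ~0 dB: sweep looks flat
for h in (3, 5):
    print("harmonic %d (%d Hz): %6.1f dB" % (h, h * 1000, level_db(h * 1000)))
```

The numbers (1e-3, 3e-4) are arbitrary; the point is only that harmonic content this far down does not move the measured frequency response in any visible way.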
Another way of looking at this is that the Voltage Paradigm for the most part ignores human hearing rules, opting instead for arbitrary figures on paper; in essence, an example of the Emperor's New Clothes. I do not think this was done on purpose; it's just how things have worked out over the last 45-50 years or so.
You have to understand that back in those days, very little was understood about how the ear actually perceives sound. So the Voltage model was set up around low distortion and flat frequency response.
In the interim we have learned a lot about human hearing, but one thing I find amusing is that one of the earliest facts we discovered was that the ear uses odd-ordered harmonics to gauge sound pressure. That was known by the mid-1960s, yet the industry ignored it.
The Power model rests on the idea that if we build the equipment to obey human hearing rules, the result will sound more like music. Well, if we are to obey one of the most fundamental of those rules, we have to get rid of negative feedback; otherwise the result will always sound brighter and harsher than What Is Real.
The evidence that this is not merely a topic of debate is all around us: probably the easiest example to understand is that, over half a century after being declared obsolete, tubes are still with us (and we are still having these conversations). If the Voltage model were really the solution, it would have eclipsed all prior art and would be the only game in town. That it failed to do so speaks volumes.
Now in saying this I am not trying to make you or anyone else wrong. I would love for the Voltage model to actually work, but IMO there are only a few examples that do; they represent a tiny minority, a pond much smaller than the one I am fishing from, to use Duke's apt expression.
It's more a matter of what we collectively choose to ignore, things like the fact that the stereo can sound loud or shouty at times. It's been my experience that if the system is really working, you will never have any sense of volume from it; that is to say, it stays relaxed even with 110 dB peaks. If you cringe at the thought of running the volume that high, then you know exactly what I mean. Yet real music hits peaks like that all the time and we don't cringe.