High power tube amps vs SS


I have always had low-efficiency speakers and used powerful SS amps to drive them. Now I see there are a number of tube amps in the 150-200 WPC range. My question is: is there anything to be gained by switching to these higher-power tube amps over SS amps?
winggo
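For scale, a rough back-of-envelope helps frame the question: how loud a speaker plays depends on its sensitivity, plus 10*log10 of the power delivered, minus distance losses. This sketch assumes simple anechoic inverse-square behavior and ignores room gain and driver compression; the 85 dB/W/m rating and 3 m distance are just hypothetical numbers for a low-efficiency speaker:

```python
import math

def peak_spl(sensitivity_db, watts, distance_m=3.0):
    """Rough on-axis SPL for one speaker: sensitivity (dB @ 1 W / 1 m)
    plus power gain, minus inverse-square distance loss."""
    gain_from_power = 10 * math.log10(watts)          # +3 dB per doubling of power
    loss_from_distance = 20 * math.log10(distance_m)  # -6 dB per doubling of distance
    return sensitivity_db + gain_from_power - loss_from_distance

# Hypothetical 85 dB/W/m low-efficiency speaker heard from 3 m:
for w in (20, 100, 200):
    print(f"{w:>3} W -> {peak_spl(85, w):.1f} dB SPL")
# 20 W -> 88.5, 100 W -> 95.5, 200 W -> 98.5 dB SPL
```

The point: with low-sensitivity speakers, each extra 3 dB of headroom costs a doubling of power, whichever topology supplies it.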
This interactive chart is a fantastic reference resource.

Not only does it help you understand how music works, it also relates that to ear sensitivity, i.e., the frequencies our ears are normally most sensitive to.

So when looking at frequency response curves for a particular setup, for example, compare what is measured against the ear sensitivity indicated. Also consider the harmonic elements that make up the various instruments in the recording, as shown in the chart. That should help one better assess what is going on while listening.

I have a framed poster of this chart hanging in my main listening room for easy reference when needed. The paper version is not interactive though, unfortunately... :^).
Great chart Mapman. The chart also provides insight into why a speaker may subjectively sound bright or flat (i.e., dull). It shows that our ears are very sensitive to midrange frequencies, so if our audio rig emphasizes frequencies in the 2 kHz to 3 kHz range, the speaker may sound "bright." I assume the opposite is also true.
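A crude numeric stand-in for the chart's sensitivity curves is the standard A-weighting curve (it only approximates the ear's response at moderate levels, so treat this as an illustration rather than the equal-loudness contours themselves). A minimal sketch:

```python
import math

def a_weight_db(f):
    """IEC 61672 A-weighting in dB relative to 1 kHz, used here as a
    rough proxy for the ear's frequency sensitivity."""
    f2 = f * f
    ra = (12194**2 * f2**2) / (
        (f2 + 20.6**2)
        * math.sqrt((f2 + 107.7**2) * (f2 + 737.9**2))
        * (f2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (100, 500, 1000, 3000, 10000):
    print(f"{f:>6} Hz: {a_weight_db(f):+6.1f} dB")
# ~ -19.1, -3.2, 0.0, +1.2, -2.5 dB: the ear's bias toward the 1-4 kHz region
```

The bias toward the 1-4 kHz region is exactly why an emphasis around 2-3 kHz reads as "bright."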
One additional factor to consider is how individuals hear. I doubt any two people hear exactly alike, and everyone's hearing changes with age. Those 50 or over may be challenged to hear test tones much above 10-12 kHz. That can be a mixed blessing when it comes to how gear sounds, in that younger ears may have greater sensitivity in that range. I believe age might be one of the most telling factors in what "sounds good," if such a study were ever done. Also note that the chart indicates that even though we may not "hear" the highest frequencies, we may still be affected by them via other senses.
Regardless of design paradigm, isn't it the frequency response and distortion measured in the end that matter? These are components designed with certain criteria in mind that are obviously not interchangeable and must be matched together somehow. It's important to be aware of the technical details that matter, like impedance characteristics, to have the best chance of getting good results. In the end, though, I do not think either paradigm can be measured as definitively better, although I suspect that by the way these things are usually determined, via certain accepted distortion measurements and the like, the common voltage paradigm measures better when done correctly.
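On the impedance point: one concrete, measurable interaction is the voltage divider formed by the amplifier's output impedance and the speaker's impedance curve. A small sketch with made-up impedance values (the 0.05 ohm and 2.5 ohm output impedances are only generic stand-ins for typical SS and tube amps, and the speaker curve is invented):

```python
import math

def response_db(z_load, z_out):
    """Level at the speaker terminals (dB relative to an ideal voltage
    source) from the amp-output-impedance / speaker-impedance divider."""
    return 20 * math.log10(z_load / (z_load + z_out))

# Hypothetical speaker impedance at a few frequencies (Hz: ohms):
speaker_z = {40: 18.0, 200: 5.0, 1000: 7.0, 3000: 4.0, 10000: 9.0}

for label, z_out in (("SS, 0.05 ohm out", 0.05), ("tube, 2.5 ohm out", 2.5)):
    curve = {f: response_db(z, z_out) for f, z in speaker_z.items()}
    ref = curve[1000]  # normalize to 1 kHz
    print(label, {f: round(db - ref, 1) for f, db in curve.items()})
```

The low-output-impedance amp stays within hundredths of a dB of flat; the high-output-impedance amp follows the speaker's impedance curve by a few dB. Neither measurement is wrong, they just assume different speakers, which is why the matching matters.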

Mapman, in a nutshell, the short answer to the first question above is 'no.' The longer answer is that the ear cares about certain distortions and not so much about others. In addition, the ear will interpret (as I have mentioned previously) some distortions as tonality, and will weight them more heavily than actual frequency response errors or accuracy.

An excellent example is how some amps can sound bright even though their measured frequency response is perfectly flat. This is because trace amounts of odd-ordered harmonics are interpreted by the ear as brightness, even though they do not show up on the instruments.
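There have been attempts to fold this into a measurement. One old proposal (often credited to D.E.L. Shorter at the BBC) weights the nth harmonic by n squared over 4 before summing, so trace high-order content counts for far more than raw THD suggests. A sketch with hypothetical harmonic levels:

```python
import math

def thd(levels):
    """Plain THD: RMS sum of harmonic amplitudes (fractions of the fundamental)."""
    return math.sqrt(sum(a * a for a in levels.values()))

def weighted_thd(levels):
    """Perception-weighted THD: nth harmonic scaled by n^2/4, so high
    odd orders dominate even at trace levels."""
    return math.sqrt(sum((a * n * n / 4) ** 2 for n, a in levels.items()))

# Two hypothetical amps with essentially identical 0.1% THD:
tube_like = {2: 0.00099, 3: 0.0001}            # mostly 2nd harmonic
ss_like   = {3: 0.0001, 5: 0.0004, 7: 0.0009}  # mostly high odd orders

for name, h in (("tube-like", tube_like), ("ss-like", ss_like)):
    print(f"{name}: THD {thd(h)*100:.3f}%  weighted {weighted_thd(h)*100:.3f}%")
```

Both spectra measure about 0.1% THD, but the weighted figures differ by roughly an order of magnitude, which lines up with the "measures the same, sounds brighter" observation.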

Another way of looking at this is that the Voltage Paradigm for the most part ignores the rules of human hearing, opting instead for arbitrary figures on paper; in essence, an example of the Emperor's New Clothes. I do not think this was done on purpose, it's just how things have worked out over the last 45-50 years.

You have to understand that back in those days, very little was understood about how the ear actually perceives sound. So the Voltage model was set up around low distortion and flat frequency response.

In the interim we have learned a lot about human hearing, but one thing I find amusing is that one of the earlier facts we discovered was that the ear uses odd-ordered harmonics as its gauge of sound pressure. That was known by the mid-1960s, yet the industry ignored it.

The Power model rests on the idea that if we build the equipment to obey the rules of human hearing, the result will sound more like music. And if we are to obey one of the most fundamental of those rules, we have to get rid of negative feedback; otherwise the result will always sound brighter and harsher than What Is Real.
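Whatever one concludes about audibility, the harmonic redistribution itself is easy to reproduce in a toy model (this echoes Baxandall's classic demonstration; the tanh stage and the numbers are arbitrary, not a model of any real amplifier). Wrapping feedback around a soft nonlinearity lowers the dominant low-order harmonics but extends the tail of higher-order ones:

```python
import numpy as np

def run(x, gain=10.0, beta=0.0, iters=400, relax=0.1):
    """One memoryless gain stage, y = tanh(gain * (x - beta*y)).
    beta is the feedback fraction; solved per sample by damped iteration."""
    y = np.zeros_like(x)
    for _ in range(iters):
        y += relax * (np.tanh(gain * (x - beta * y)) - y)
    return y

fs, f0, n = 48000, 1000, 48000           # 1 Hz bins, so harmonics land on bins
t = np.arange(n) / fs

# Input levels picked by hand so both cases put out a similar fundamental:
for beta, amp, label in ((0.0, 0.05, "open loop"), (0.5, 0.28, "with feedback")):
    y = run(amp * np.sin(2 * np.pi * f0 * t), beta=beta)
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    fund = spec[f0]
    harms = [round(20 * np.log10(spec[k * f0] / fund + 1e-12), 1)
             for k in (3, 5, 7, 9)]
    print(f"{label}: 3rd/5th/7th/9th re fundamental (dB): {harms}")
```

The open-loop case shows a strong 3rd harmonic that falls away quickly; the feedback case shows a weaker 3rd but a comparatively flatter spray of 5th, 7th, and 9th. Whether that trade is audible at real-world levels is the actual point of disagreement.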

The evidence for this is all around us and is not really a topic for debate. Probably the easiest example to understand is that, over half a century after being declared obsolete, tubes are still with us (and we are still having these conversations). If the Voltage model were really the solution, it would have eclipsed all prior art and would be the only game in town. That it failed to do so speaks volumes.

Now in saying this I am not trying to make you or anyone else wrong. I would love for the Voltage model to actually work, but IMO there are only a few examples that do; they represent a tiny minority, much smaller than the pond I am fishing from, to use Duke's apt expression.

It's more a matter of what we collectively choose to ignore, things like the fact that the stereo can sound loud or shouty at times. It's been my experience that if the system is really working, you will never have any sense of volume from it; that is to say, it stays relaxed even with 110 dB peaks. If you cringe at the thought of running the volume that high, then you know exactly what I mean. Yet real music hits peaks like that all the time and we don't cringe.
"This is because trace elements of odd ordered harmonics are interpreted by the ear as brightness even though it does not show on the instruments."

I have trouble understanding how the ear can hear something as "bright" that does not show up somehow in measurements.

I've always taken that to be some resulting frequency anomaly in one of those ranges where the ear is most sensitive, but how serious can it be if it is not even measurable? Where is the evidence that the effect exists, much less the cause?