high power tube amps vs ss


I have always had low efficiency speakers and used powerful ss amps to power them. Now I see there are a number of tube amps in the 150-200 WPC range. My question is: is there anything to be gained by switching to these higher-power tube amps over ss amps?
winggo
Regardless of design paradigm, isn't it the frequency response and distortion measured in the end that matter? These are components designed with certain criteria in mind that are obviously not interchangeable and must be matched together somehow. It's important to be aware of the technical details that matter, like impedance characteristics, to have the best chance of getting the best results. But in the end, I do not think either paradigm can be measured as definitively better, although I suspect that when these things are determined the usual way, via certain accepted distortion measurements and the like, the common voltage paradigm measures better when done correctly.

Mapman, to put it in a nutshell, the short answer to the first question above is 'no'. The longer answer is that the ear cares about certain distortions and not so much about others. In addition, the ear will interpret some distortions (as I have mentioned previously) as tonality, and will favor them over actual frequency response errors or accuracy.

An excellent example is how some amps can sound bright, but measure their frequency response and it is perfectly flat. This is because trace amounts of odd ordered harmonics are interpreted by the ear as brightness even though they do not show up on the instruments.
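
A minimal numerical sketch of that claim, just to show the mechanism (the cubic nonlinearity coefficient and the levels here are made up for illustration, not measured from any real amplifier): a tiny odd-order term leaves a swept frequency response essentially dead flat while still putting a low-level 3rd harmonic into the output.

```python
# Sketch only: an idealized "amplifier" with a trace of 3rd-order (odd)
# nonlinearity. The frequency response sweep measures flat to a fraction
# of a thousandth of a dB, yet a 3rd harmonic is present in the output.
import numpy as np

fs = 96_000                      # sample rate in Hz (arbitrary choice)
k3 = 1e-4                        # hypothetical cubic nonlinearity coefficient

def amp(x):
    """Idealized amplifier: unity gain plus a trace of 3rd-order distortion."""
    return x + k3 * x**3

def tone_level(sig, f, fs):
    """Level in dBFS of the component at frequency f, via a single DFT bin."""
    n = np.arange(len(sig))
    c = np.abs(np.sum(sig * np.exp(-2j * np.pi * f * n / fs))) * 2 / len(sig)
    return 20 * np.log10(c)

N = fs                           # one second of signal per test tone
t = np.arange(N) / fs

# Frequency response sweep: gain at the fundamental across the audio band.
for f in [100, 1_000, 10_000]:
    x = np.sin(2 * np.pi * f * t)
    y = amp(x)
    print(f"{f:>6} Hz  gain = {tone_level(y, f, fs) - tone_level(x, f, fs):+.4f} dB")

# Harmonic content at 1 kHz: the 3rd harmonic sits roughly 92 dB down --
# invisible on a response plot, but present in the output.
x = np.sin(2 * np.pi * 1_000 * t)
y = amp(x)
print(f"3rd harmonic at 1 kHz: {tone_level(y, 3_000, fs):.1f} dBFS")
```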

Another way of looking at this is that the Voltage Paradigm for the most part ignores human hearing rules, opting instead for arbitrary figures on paper. In essence, an example of the Emperor's New Clothes. I do not think that this was done on purpose, it's just how things have worked out over the history of the last 45-50 years or so.

You have to understand that back in those days, there was very little that was understood about how the ear actually perceives sound. So the Voltage model was set up around low distortion and flat frequency response.

In the interim, we have learned a lot about human hearing, but one thing I find amusing is that one of the earlier facts we discovered was that the ear uses odd ordered harmonics as a gauge of sound pressure. That was known by the mid-1960s. Yet the industry ignored it.

The Power model rests on the idea that if we build the equipment to obey human hearing rules, the result will sound more like music. Well, if we are to obey one of the most fundamental hearing rules, we have to get rid of negative feedback, otherwise the result will always sound brighter and harsher than What Is Real.

The evidence that this is not a topic of debate is all around us; probably the easiest to understand is that, over half a century after being declared obsolete, tubes are still with us (and we are still having these conversations). If the Voltage model were really the solution, it would have eclipsed all prior art and would be the only game in town. That it failed to do so speaks volumes.

Now in saying this I am not trying to make you or anyone else wrong. I would love the Voltage model to actually work, but IMO there are only a few examples that do; they represent a tiny minority, much smaller than the pond I am fishing from, to use Duke's apt expression.

It's more a matter of what we collectively choose to ignore, things like the fact that the stereo can sound loud or shouty at times. It's been my experience that if the system is really working, you will never have any sense of volume from it; that is to say, it stays relaxed even with 110 dB peaks. If you cringe at the thought of running the volume that high, then you know exactly what I mean. Yet real music hits peaks like that all the time and we don't cringe.
"This is because trace elements of odd ordered harmonics are interpreted by the ear as brightness even though it does not show on the instruments."

I have trouble understanding how the ear hears something as "bright" that does not evidence itself somehow when measured.

I've always taken that as some resulting frequency anomaly in one of those frequency ranges where the ear is most sensitive, but how serious can it be if not even measurable? Where is the evidence that the effect exists, much less the cause?
"An excellent example is how some amps can sound bright, but measure the frequency response and they are perfectly flat. "

The ear sensitivity chart in the diagram I shared would alone seem to explain that. We don't hear bass as well as other frequencies, so when response measures flat, the bass may be less heard. But that is how our ears work, so it is what it is. It's a clear example of how what we hear can differ from what the instruments measure.
The huge unflatness of the ear sensitivity chart would also seem to debunk any claim one might make about being able to hear flat frequency response. If you hear it as being flat, it in fact cannot be. Significant equalization would have to be applied to the source to have any chance. At that point, what you hear as flat would no longer be natural, but rather "enhanced" to compensate for the lack of flatness in our hearing.
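
For a rough sense of the scale of that unflatness, here is a small sketch using the standard A-weighting curve. It is only a crude, single-loudness-level stand-in for the ear-sensitivity chart being discussed, not the chart itself, but it shows the kind of disparity involved at low frequencies.

```python
# Rough illustration only: the IEC 61672 A-weighting curve (a coarse proxy
# for the ear's sensitivity contour at moderate levels), relative to 1 kHz.
import math

def a_weighting_db(f):
    """A-weighting in dB, normalized to 0 dB at 1 kHz."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in [20, 50, 100, 500, 1_000, 4_000, 10_000]:
    print(f"{f:>6} Hz: {a_weighting_db(f):+6.1f} dB")
# 20 Hz comes out roughly 50 dB down from 1 kHz, which is the sort of
# low-frequency insensitivity the ear sensitivity chart shows.
```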

It's more complex than that; our ear/brain system can recognize acoustic environments and compensate for them... BTW, I hope you are not suggesting that we need to compensate our ears with EQ.

I have trouble understanding how the ear hears something as "bright" that does not evidence itself somehow when measured.

I've always taken that as some resulting frequency anomaly in one of those frequency ranges where the ear is most sensitive, but how serious can it be if not even measurable? Where is the evidence that the effect exists, much less the cause?

If we can't measure it, can it exist? Sure! Our instruments have limits of their own: noise being an excellent example (another being the tendency to quantify a phenomenon as a reading on a meter...). When an amplifier has low harmonic distortion measurements, it's often described as having distortion so low that it's "buried in the noise of the instruments".

The simple fact is that when it comes to sensitivity to odd ordered harmonics, our ears are **more** sensitive than instruments. This is not hard to understand if you also know that the ear is that sensitive because it uses odd orders to gauge sound pressure; look at it as a survival trait. If you can't tell how loud a tiger is growling, you may well soon be dead. The ear needs to be pretty sensitive as a result. There are other things that the ear sucks at compared to instruments; this simply isn't one of them :)
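
Here is a hedged back-of-the-envelope sketch of the "buried in the noise" point; the -110 dB harmonic and -100 dB analyzer noise floor are assumed numbers chosen only to illustrate the arithmetic, not readings from any real analyzer.

```python
# Illustrative numbers only: a harmonic sitting 110 dB below the fundamental
# barely moves a THD+N reading once the analyzer's own residual noise
# (assumed here to integrate to -100 dB) is folded in.
import math

fundamental = 1.0                       # reference amplitude (0 dB)
harmonic = 10 ** (-110 / 20)            # one odd harmonic at -110 dB
noise = 10 ** (-100 / 20)               # assumed analyzer residual noise (rms)

thd = harmonic / fundamental
thd_n = math.sqrt(harmonic**2 + noise**2) / fundamental

print(f"harmonic alone : {20 * math.log10(thd):.1f} dB")
print(f"THD+N reading  : {20 * math.log10(thd_n):.1f} dB")
# The reading comes out around -99.6 dB, i.e. the meter is mostly reporting
# its own noise; whether the -110 dB harmonic is there or not hardly shows.
```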

General Electric did studies of this phenomenon back in the 1960s. It was perhaps one of the first real forays into the hows and whys of human hearing perceptual rules. We have learned a lot more since then.