Why are most High End Amps class A


Hello, new here and wondering.

I've recently been browsing and reading at Audiogon and see that most "High End Amps" are class A. Currently I own a McIntosh C28 preamp and an MC2105 amp. To me they sound fabulous.

Would a "High End" class A sound any better?

Of course I realize that there are very expensive class A amps that would blow away my Macs, but what about, say, a used class A in the $1000 to $2000 price range?

Thank you so much for your input!
gp_phan
Kijanki, we're definitely running into some terminological inconsistencies in this field - but if we're talking about very low signal levels, I'd say that the amp in question must certainly be a Class B amp that for marketing reasons was being labelled as AB. If it occurred at higher power levels (say above a few watts), then I could see how a Class AB amp could exhibit this behavior.

100mA is probably about right for a single pair of bipolar transistors biased Class B across some pretty low emitter resistors, maybe 0.15 ohm? But I think of 0.22, 0.33, and 0.47 ohm as being the common values, so I'd say that most amps are biased proportionately lower for Class B.
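Just to put numbers on that, here's a quick back-of-the-envelope sketch (Python, purely illustrative). It assumes the classic Oliver criterion - roughly 26 mV, about one thermal voltage, across each emitter resistor as the nominal optimum bias for a BJT output stage - and as noted above, most commercial amps sit below these figures:

# Quiescent-bias sketch for a BJT output stage with emitter resistors.
# Assumes the Oliver criterion (~26 mV across each emitter resistor) as
# the nominal optimum bias point; illustrative numbers only.
V_T = 0.026  # approximate thermal voltage at room temperature, volts

for r_e in (0.15, 0.22, 0.33, 0.47):  # common emitter-resistor values, ohms
    i_q = V_T / r_e                   # quiescent current per output device
    print(f"R_E = {r_e:.2f} ohm -> I_q ~ {i_q * 1000:.0f} mA")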

It does seem quaint to use an all-NPN output stage these days . . . maybe for an inexpensive paging-system amplifier it might make sense, but for hi-fi, you're right. Crazy.
Kirkus - I don't do much with discrete stuff, so I remembered the wrong definition of Class B (as one with no bias current at all). Now I realize that it has to have some bias current in order to overcome the Vbe of the output transistors.
Plinius considers all-NPN to be a good way to go for their high-end models, even today. I personally find it a bit clumsy, and that clumsiness alone might do more harm than the slightly different transistor characteristics it avoids. Nature seems to value simplicity more highly than complexity.

I agree that many amps marketed as "class AB" are really Class B. The threshold between the two is whatever one wants it to be these days.

I disagree about the "amplifier phase delay" problems with horizontal biamping. There are plenty of other phase-inducing factors in the system that are worse than the phase behavior of a linear amp - the speaker drivers' own phase responses, for example. Whether or not a particular horizontal biamp will work well depends more on luck than on technicalities. In vertical biamping, however, you need identical amps because each one reproduces the same frequencies and, as most of us know, not all amps sound the same.

Gp phan - Don't worry about what Class the amp is. Just listen and enjoy. If you are really curious about a Class A, just get one and try it for yourself. Besides, there is a lot of sonic performance overlap between Classes. Potential merits can be discussed all day, but in the end they really don't matter. After all, it is the sound that should count most.

Arthur
"And you're correct in the assertion that increasing bias doesn't improve the problem; it just raises the signal level at which it occurs."

But it does reduce the audibility of the problem. You now have distortion only when the output signal is very high, and you have no crossover distortion (gm doubling or whatever you like to call it - let's say transition distortion) when the output signal is very low and the amp runs in Class A (both sides conducting).

A small absolute amount of distortion on a large signal is better than the same absolute distortion on a small signal.

In one case the listener may notice the transition distortion (a large part of the overall signal), while in the other case it will be much less audible, being a smaller proportion of a much larger signal.
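To put toy numbers on it (a sketch with made-up values, not measurements of any real amp):

# Toy arithmetic: the same fixed transition-distortion residue, expressed
# as a fraction of outputs at different levels (made-up numbers only).
residue = 0.010  # volts of transition residue, assumed constant

for v_out in (0.1, 1.0, 10.0):  # output signal levels, volts
    pct = 100 * residue / v_out
    print(f"{v_out:4.1f} V signal -> residue is {pct:.1f}% of the signal")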
"You now have distortion only when the output signal is very high, and you have no crossover distortion (gm doubling or whatever you like to call it - let's say transition distortion) when the output signal is very low and the amp runs in Class A (both sides conducting)."
This is quite correct, but the gm-doubling transition distortion is much worse than the crossover distortion. So as to audibility . . . it all depends on the application - each road has its burdens to bear.

"A small absolute amount of distortion on a large signal is better than the same absolute distortion on a small signal."
I've heard it asserted that crossover distortion manifests itself more strongly (driving THD upward) as the signal level is reduced . . . and honestly, I'm not sure whether or not it's true. It seems to make intuitive sense, but I've measured lots of amplifiers, and I'm doubtful that the measured data supports it. Complicating the issue is that THD+N of course rises in a linear manner as the signal level is reduced, since the fixed noise floor becomes a proportionally larger part of the measurement . . . and when you measure just the noise, you get the same results.

I think it may be that as the signal levels are reduced, the proportion of the total signal that's in the crossover region increases, but the crossover non-linearities are at the same time being spread out across a larger proportion of the waveform, making them less severe. Whether or not these opposing factors cancel each other out is the question, and I certainly haven't the skill to investigate it with pure mathematics, and my current measurement equipment isn't sensitive enough to find the answer empirically.
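For anyone who wants to poke at it numerically, here's the sort of experiment I have in mind, sketched in Python with an idealized crossover "dead zone" standing in for the real nonlinearity (a model, not a measurement of any actual amp):

import numpy as np

# Idealized under-biased output stage: nothing gets through until the
# signal exceeds a fixed threshold (a crude "dead zone" crossover model).
def dead_zone(x, d=0.05):
    return np.sign(x) * np.maximum(np.abs(x) - d, 0.0)

t = np.linspace(0.0, 1.0, 100_000, endpoint=False)
sine = np.sin(2 * np.pi * 10 * t)

for amp in (0.2, 1.0, 5.0, 25.0):  # peak signal levels, arbitrary units
    x = amp * sine
    y = dead_zone(x)
    # Subtract the best-fit linear component; call what's left "distortion".
    gain = np.dot(y, x) / np.dot(x, x)
    residual = y - gain * x
    ratio = np.sqrt(np.mean(residual ** 2)) / np.sqrt(np.mean(y ** 2))
    print(f"level {amp:5.1f} -> residual/signal ~ {100 * ratio:.2f}%")

In this crude model the residual fraction clearly falls as the level rises; whether a real amp's transition region behaves the same way is exactly the open question.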