Why are most high-end amps Class A?


Hello, new here and wondering.

I've recently been browsing and reading at Audiogon and see that most "High End Amps" are Class A. Currently I own a McIntosh C28 preamp and an MC2105 amp. To me they sound fabulous.

Would a "High End" class A sound any better?

Of course I realize that there are very expensive Class A amps that would blow away my Macs, but what about, say, a used Class A in the $1000 to $2000 price range?

Thank you so much for your input!
gp_phan
Plinius considers all-NPN output stages to be a good way to go for their high-end models, even today. I personally find the approach a bit clumsy, and that clumsiness alone may do more harm than the slightly mismatched transistor characteristics it avoids. Nature seems to value simplicity more highly than complexity.

I agree that many amps marketed as "Class AB" are really Class B. The threshold between the two is whatever one wants it to be these days.

I disagree about the "amplifier phase delay" problems with horizontal biamping. There are plenty of phase-shifting elements in the system that are worse than the phase response of a linear amp, such as the phases of the speaker drivers themselves. Whether or not a particular horizontal biamp will work well depends more on luck than on technicalities. In vertical biamping, however, you need identical amps because they are each reproducing the same frequencies and, as most of us know, not all amps sound the same.

Gp_phan - Don't worry about what class the amp is. Just listen and enjoy. If you are really curious about Class A, just get one and try it for yourself. Besides, there is a lot of sonic-performance overlap between classes. Potential merits can be discussed all day, but in the end they really don't matter. After all, it is the sound that should count most.

Arthur
And you're correct in the assertion that increasing bias doesn't improve the problem; it just raises the signal level at which it occurs.

But it does reduce the audibility of the problem. You now have distortion only when the output signal is very high, and you have no crossover distortion (gm doubling or whatever you like to call it - let's say transition distortion) when the output signal is low enough that the stage runs in Class A (both sides conducting).

A small absolute amount of distortion on a large signal is better than the same absolute distortion on a small signal.

In one case the listener may notice the transition distortion (a large part of the overall signal), while in the other case it will be much less audible, being a smaller proportion of a much larger signal.
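
Here's a rough numerical sketch of what I mean - the hard dead-zone model and the numbers are just my own illustration, not any particular amplifier:

```python
import numpy as np

def output_stage(x, dead_zone=0.05):
    # Crude stand-in for an underbiased push-pull stage: a small
    # dead zone around zero plays the role of crossover/transition
    # distortion. An assumed toy model, not a real amplifier.
    return np.sign(x) * np.maximum(np.abs(x) - dead_zone, 0.0)

t = np.linspace(0, 1, 48000, endpoint=False)
for amplitude in (1.0, 0.1):                    # large signal vs small signal
    x = amplitude * np.sin(2 * np.pi * 1000 * t)
    y = output_stage(x)
    gain = (y @ x) / (x @ x)                    # best-fit linear gain
    residual = y - gain * x                     # what's left is distortion
    ratio = np.sqrt((residual ** 2).mean() / (y ** 2).mean())
    print(f"amplitude {amplitude}: distortion residual is {100 * ratio:.1f}% of output")
```

The same 0.05 of dead zone is a small blemish on the full-scale signal but mangles the small one.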
You now have distortion only when the output signal is very high, and you have no crossover distortion (gm doubling or whatever you like to call it - let's say transition distortion) when the output signal is low enough that the stage runs in Class A (both sides conducting).
This is quite correct, but the gm-doubling transition distortion is much worse than the crossover distortion. As to audibility . . . it all depends on the application - each road has its burdens to bear.

A small absolute amount of distortion on a large signal is better than the same absolute distortion on a small signal.
I've heard it asserted that crossover distortion manifests itself more strongly (drives THD upward) as the signal level is reduced . . . and honestly, I'm not sure whether it's true. It seems to make intuitive sense, but I've measured lots of amplifiers, and I'm doubtful that the measured data supports it. Complicating the issue is that THD+N of course rises in a linear manner as the signal level is reduced . . . but measure just the noise and you get the same result, which suggests that rise is mostly the fixed noise floor rather than distortion.

I think it may be that as the signal levels are reduced, the proportion of the total signal that's in the crossover region increases, but the crossover non-linearities are at the same time being spread out across a larger proportion of the waveform, making them less severe. Whether or not these opposing factors cancel each other out is the question, and I certainly haven't the skill to investigate it with pure mathematics, and my current measurement equipment isn't sensitive enough to find the answer empirically.
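
For what it's worth, this is easy to poke at numerically. Here's a quick toy experiment - the smooth dead-zone model and knee width are my own assumptions, not measurements of any real amplifier - that sweeps the level and computes THD:

```python
import numpy as np

def stage(x, knee=0.02):
    # Soft dead zone of width ~knee around zero: an assumed
    # stand-in for crossover non-linearity, not a measured amp.
    return x - knee * np.tanh(x / knee)

n, cycles = 8192, 16                            # FFT length and test-tone cycles
t = np.arange(n) / n
for level in (1.0, 0.3, 0.1, 0.03):
    y = stage(level * np.sin(2 * np.pi * cycles * t))
    spectrum = np.abs(np.fft.rfft(y)) / n
    fundamental = spectrum[cycles]
    harmonics = spectrum[2 * cycles::cycles]    # 2nd, 3rd, ... harmonics
    thd = np.sqrt((harmonics ** 2).sum()) / fundamental
    print(f"level {level:5.2f}: THD = {100 * thd:.2f}%")
```

With this crude model the THD does climb steadily as the level falls; whether real output stages track the model is, of course, exactly the open question.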
Shadorne - tests show that underbias and overbias both cause an increase in distortion (especially higher-order odd harmonics) for all signals, across a wide range of output power. Increasing bias above the optimal level widens the non-linear region (of non-constant gm doubling) and makes it worse.

The proper way to do it is to bias the stage into Class A so that the nonlinear region is never reached and gm doubling appears constant for all signals, all the time. Unfortunately, this typically requires a bias of about 150% of the anticipated maximum output current. At least that's how I understand it.
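
Taking that 150% figure at face value, a back-of-envelope calculation shows why true Class A costs what it does (the 100 W / 8-ohm numbers below are just an illustration):

```python
import math

# Back-of-envelope numbers for the bias rule quoted above
# (the 150% figure is a rule of thumb, taken here at face value).
P, R = 100.0, 8.0                       # 100 W into an 8-ohm load
i_peak = math.sqrt(2 * P / R)           # peak output current: 5 A
v_peak = math.sqrt(2 * P * R)           # peak output voltage: 40 V (~ rail)
i_bias = 1.5 * i_peak                   # 150% rule -> 7.5 A quiescent
idle_dissipation = 2 * v_peak * i_bias  # across both rails, crude estimate
print(f"I_peak = {i_peak:.1f} A, bias = {i_bias:.1f} A, "
      f"idle dissipation ~ {idle_dissipation:.0f} W")
```

Roughly 600 W of idle dissipation per channel for 100 W out - which goes a long way toward explaining the heat sinks and the prices.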
tests show that underbias and overbias both cause an increase in distortion

Agreed, but I don't understand the point. Of course you have to match the bias precisely between both sides, as otherwise you'll get loads of higher-harmonic distortion. However, you can design circuits that help achieve this and maintain it under various conditions - it all boils down to careful topology.
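
To put a number on the matching point, here's a toy simulation (the dead-zone knees are assumptions of mine, not any real circuit) where the two halves of a push-pull stage are given unequal turn-on points:

```python
import numpy as np

def stage(x, pos_knee=0.02, neg_knee=0.02):
    # Each half conducts past its own knee; unequal knees model
    # a bias mismatch between the two sides (assumed toy model).
    return np.maximum(x - pos_knee, 0.0) + np.minimum(x + neg_knee, 0.0)

n, cycles = 8192, 16
x = np.sin(2 * np.pi * cycles * np.arange(n) / n)
for knees in ((0.02, 0.02), (0.02, 0.04)):      # matched vs mismatched bias
    spectrum = np.abs(np.fft.rfft(stage(x, *knees))) / n
    relative = spectrum[2 * cycles:8 * cycles:cycles] / spectrum[cycles]
    print(f"knees {knees}: harmonics 2-7 = "
          + " ".join(f"{100 * h:.2f}%" for h in relative))
```

With matched knees only the odd harmonics appear; the mismatch immediately adds the even orders on top.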