How could a 100 watt Class A amp have more headroom than a 300 watt Class AB amp?


Put brand and make aside.
I put two amps to the test; both are high-end amps from the same manufacturer.
Both double their power as the impedance load is halved, and their THD is about the same.
Setting aside the difference in size and cost, from a pure science perspective the 300 watt amp should in theory provide more headroom and sound more at ease when it reaches 100 dB. But the reverse is true: the 100 watt Class A amp seems to provide more headroom.
I have tried another set of speakers that is much easier to drive, and it leads to the same conclusion.
Can someone explain why?
Quality or quantity of watts, how do we decide?
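For scale, it helps to put numbers on "headroom": 300 W over 100 W is only about 4.8 dB. A quick sketch of the arithmetic (the 87 dB/W/m sensitivity figure is a made-up example, not from either speaker in question):

```python
import math

def watts_for_spl(sensitivity_db, target_db):
    """Power needed to hit target_db at 1 m from a speaker of the
    given sensitivity (dB at 1 W / 1 m)."""
    return 10 ** ((target_db - sensitivity_db) / 10)

# Headroom difference between the two power ratings, in dB:
print(f"{10 * math.log10(300 / 100):.1f} dB")      # 4.8 dB

# Hypothetical 87 dB/W/m speaker: average power for 100 dB at 1 m.
print(f"{watts_for_spl(87, 100):.0f} W")           # ~20 W
```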
samnetw
You are going to have to offer very specific examples of the amps you are talking about as well as the speakers.

Best,

E
Single-ended Class A amps don't do this, so I'm going to assume you're talking about push-pull Class A. A Class AB amp is usually said to clip when the demands of the waveform exceed the voltage and/or current capability of the amp circuit or power supply. Usually the power supply is the major limiting factor.
A P-P Class A amp is roughly like a Class AB amp, but the bias is set WAY higher. Instead of a few milliamps just to keep the devices on in their linear region, the bias is cranked up to cover the entire waveform. Once a 100 watt P-P Class A amp reaches its bias limit, one side of the push-pull pair turns completely off and the other side conducts beyond its bias. That's a P-P Class A amp transitioning into Class AB, and it's actually how high-bias Class AB amps are designed to work. The result of the P-P Class A amp behaving like that is massive amounts of headroom. As Erik suggests, loads and designs strongly contribute to this, but that's the general reason why.
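A toy numerical model of that transition, assuming an idealized push-pull output stage (the 2 A bias and 8 ohm load are arbitrary illustration values, not any particular amp):

```python
import numpy as np

I_BIAS = 2.0   # quiescent current per device, amps (arbitrary example)
R_LOAD = 8.0   # load impedance, ohms

# One cycle of load current; peak chosen beyond the class A envelope.
i_load = 5.0 * np.sin(np.linspace(0, 2 * np.pi, 1000))

# Idealized push-pull: inside class A each device carries bias +/- half
# the signal; once one device cuts off, the other carries the full load.
i_top = np.maximum(np.maximum(I_BIAS + i_load / 2, 0.0), i_load)
i_bot = i_top - i_load   # KCL: load current is the difference

# Both devices conduct (true class A) only while |i_load| <= 2 * I_BIAS.
class_a = np.abs(i_load) <= 2 * I_BIAS
print(f"both devices conduct for {class_a.mean():.0%} of the cycle")

# Class A envelope into this load: peak current of 2 * I_BIAS.
p_limit = (2 * I_BIAS) ** 2 * R_LOAD / 2
print(f"pure class A up to about {p_limit:.0f} W into {R_LOAD} ohms")
```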
How could a 100 watt Class A amp have more headroom than a 300 watt Class AB amp?

If one had a Mosfet output stage and the other a BJT (bipolar), and both were well-engineered push-pull designs driving a heavy load such as the Wilson Alexia, which has an EPDR (equivalent peak dissipation resistance) of just 0.9 ohms at 65 Hz,
then the 100 W BJT amp would drive it better than the 300 W Mosfet.
The 100 W BJT "could" in effect put out 800 W into that 0.9 ohm load, where the Mosfet would probably deliver less than its rated 300 W at 8 ohms.
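A rough sketch of that doubling-down arithmetic, treating the amp as an ideal voltage source that doubles its output each time the load halves (idealized, ignoring supply and thermal limits):

```python
rated_watts, rated_ohms = 100.0, 8.0   # the BJT amp's 8 ohm rating

load, power = rated_ohms, rated_watts
while load > 1.0:          # halve down toward the Alexia's 0.9 ohm EPDR
    load /= 2
    power *= 2
    print(f"{load:>4.1f} ohms -> {power:.0f} W")
# 4.0 ohms -> 200 W
# 2.0 ohms -> 400 W
# 1.0 ohms -> 800 W   <- the "in effect 800 W" figure above
```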
Cheers George
Sounds like you are talking about two SS amps, so probably from Pass, but that is just a guess.
To your question, the Class A amp will likely have a stiffer power supply and more capacitance.  Also, IME, I like the sound of Class A better, since to me it sounds more fleshed out than the best AB amps I have heard, and so may sound more harmonically complete when pushed.  A last observation based on experience: really good Class A amps do not get strident-sounding the way Class AB amps can when they are starting to get stressed; the Class A amps simply run out of steam.  At least that is how the two different sets of Lamm hybrid monos I owned used to behave.  My 300 wpc Class A Claytons have yet to run out of steam, even driving my new lower-efficiency (85 dB) speakers.
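To put a rough number on "stiffer": the rails of a capacitor-reservoir supply droop by roughly I * dt / C while the caps alone feed the output between rectifier recharges. A sketch with hypothetical values (neither amp's real spec):

```python
def rail_sag(current_a, capacitance_f, mains_hz=60.0):
    """Voltage droop dV = I * dt / C while the reservoir caps alone
    feed the output between full-wave rectifier peaks (every 1/120 s)."""
    return current_a * (1.0 / (2.0 * mains_hz)) / capacitance_f

burst = 10.0   # amps drawn during a bass transient (hypothetical)
for caps_uf in (40_000, 200_000):
    print(f"{caps_uf:>7,} uF -> {rail_sag(burst, caps_uf * 1e-6):.2f} V sag")
# 40,000 uF  -> 2.08 V
# 200,000 uF -> 0.42 V
```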
To George's response: my listening has resulted in a preference for bipolar output stages over Mosfets.  I believe some designers use Mosfets to try to emulate a "tube sound," but to me they do not control the output as well.