"The other being the measurement with 'correct impulse responses', i.e. measuring a DAC not only with a non-existent, abstract sequence of (one) sample." - pegasus
The above statement clearly demonstrates that you don't yet understand what an impulse response test really is. The folks at MQA have been banking on this confusion to help maintain the smoke screen. Again, read the beginning of this thread. For emphasis (I don't know how to use bold type on this interface):
IN ITS TOTALITY, AN IMPULSE RESPONSE IS THE FULL CHARACTERIZATION OF THE TIME AND FREQUENCY DOMAIN BEHAVIOR OF ANY LINEAR, TIME INVARIANT SYSTEM UNDER TEST.
Please read the above over in your head several times. If any term contained therein is unclear or confusing, please let me know and I will do my best to explain it to you. Audio systems are considered by most engineers who build them to be "linear, time invariant" systems - or at least, that is the goal.
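To make that statement concrete, here is a minimal numpy sketch (the impulse response below is a made-up FIR, purely for illustration): once you know a system's impulse response, you know its output for any input, and its magnitude and phase response are just the transform of that same data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical impulse response of some LTI system (e.g. a DAC's
# reconstruction filter) -- just a short made-up FIR for illustration.
h = np.array([0.05, 0.25, 0.5, 0.25, 0.05])

# 1) Time domain: the response to ANY input is fully determined by h
#    via convolution, y[n] = sum_k h[k] * x[n-k].
x = rng.standard_normal(1024)
y = np.convolve(x, h)

# 2) Frequency domain: magnitude and phase response are just the DFT of h.
H = np.fft.rfft(h, n=4096)
magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)
phase = np.unwrap(np.angle(H))

# Same information, two representations: knowing h (time) is knowing H
# (frequency), and vice versa.
```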
The impulse response plots posted by Stereophile for the MQA and non-MQA DACs show latency distortion as well as added noise in the MQA file. Whether or not this is audible, or audibly pleasing or objectionable to the average listener, is and likely always will be a matter of endless debate. What is not in debate is that it IS DISTORTION.

Any distortion you want to talk about in these kinds of linear system approximations has its origins in energy storage - whether it's a standing wave in a speaker cavity or a simple phase delay in a first order crossover network. When a signal's voltage and current go out of phase, distortions result and are typically detected in the form of even and odd ordered harmonics. The more rapidly and intensely energy is stored, the more harmonics are produced, regardless of the level of damping (resistance/loss) applied between the storage elements. LATENCY = ENERGY STORAGE = DISTORTION. Simple phase delay networks that involve linear phase changes may appear to be "distortion free", but that only holds over the "working bandwidth" or frequencies of interest. In a linear, time invariant system, time and frequency distortions are derived from one another - different representations of the same thing.
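As an aside, here is a minimal numpy sketch of the "simple phase delay" case just mentioned: a pure delay leaves the magnitude response perfectly flat, and the delay shows up entirely as linear phase, which is why such networks can look "distortion free" within the working bandwidth. (The sample rate and delay length are arbitrary illustrative values.)

```python
import numpy as np

fs = 48000.0
D = 32  # hypothetical delay in samples

# Impulse response of a pure D-sample delay: a single shifted impulse.
h = np.zeros(256)
h[D] = 1.0

H = np.fft.rfft(h)
f = np.fft.rfftfreq(len(h), d=1.0 / fs)

# Magnitude is exactly flat ...
assert np.allclose(np.abs(H), 1.0)

# ... and the phase is linear in frequency: angle(H) = -2*pi*f*D/fs,
# i.e. the delay shows up entirely as phase rotation in the
# frequency-domain picture of the same system.
expected_phase = -2 * np.pi * f * D / fs
assert np.allclose(np.unwrap(np.angle(H)), expected_phase)
```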
So your subsequent statement -
" Since when is latency a distortion...?It may be a limiting factor for practibility reasons, or a simple inconvenience. But in replay audio it is (AFAIK) of no concern at all. "
represents further proof that your knowledge level is lacking. There are plenty of filtering tricks one can apply to reduce undamped oscillation in a circuit. Linkwitz-Riley crossovers come to mind. There is a faint reference to this technique in the original Sound on Sound BS article put out to promote Mr. Craven's "apodizing filters" - essentially cascading buffered linear phase filters to achieve rapid rolloff without some of the deleterious effects of single-stage steep crossovers. (I found no reference to Linkwitz in the original "Craven's a genius" article, btw.) But if you have actual experience with these types of circuits and have done distortion measurements on them, you will find that total harmonic distortion creeps up as the amplitude of the signal drops off in the transition band of the filter - buffered Linkwitz-Riley or not. There is no free lunch. And it looks like others are waking up to the fact that what Stuart and Craven are offering is more like reheated leftover meatloaf than a miraculous "free lunch".
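For anyone who wants to poke at the Linkwitz-Riley idea themselves, here is a minimal scipy sketch of the classic 4th-order topology (two cascaded 2nd-order Butterworth sections per band). The crossover frequency and sample rate are arbitrary illustrative values, and this only shows the textbook magnitude/phase behavior, not the analog THD measurements described above.

```python
import numpy as np
from scipy import signal

fs = 48000.0   # illustrative sample rate
fc = 2000.0    # illustrative crossover frequency

# A 4th-order Linkwitz-Riley section is two cascaded 2nd-order
# Butterworth sections at the same cutoff.
lp = signal.butter(2, fc, btype='low', fs=fs, output='sos')
hp = signal.butter(2, fc, btype='high', fs=fs, output='sos')
lr_lp = np.vstack([lp, lp])   # cascade = LR4 low-pass
lr_hp = np.vstack([hp, hp])   # cascade = LR4 high-pass

w, h_lp = signal.sosfreqz(lr_lp, worN=4096, fs=fs)
_, h_hp = signal.sosfreqz(lr_hp, worN=4096, fs=fs)

# The LR4 low-pass and high-pass outputs are in phase, so their sum has
# flat magnitude (an all-pass), which is why the topology is popular for
# crossovers -- but the summed response still carries the cascaded
# filters' phase rotation, i.e. frequency-dependent delay.
summed = h_lp + h_hp
print("max deviation from flat (dB):",
      np.max(np.abs(20 * np.log10(np.abs(summed)))))
```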

