Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his own early advocacy of such methodology and his realization that the conclusion drawn from such testing, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by proponents of blind testing.

I have been party to three blind tests and several “shootouts” that were not blind; in the shootouts each component had its advocates, since everyone knew which was playing, and none ever produced a consensus. Two of the three db tests were same/different comparisons, and neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps; there was substantial consensus that the Bozak preamp surpassed more expensive preamps, even though many of those preamps’ designers took part in the listening. In each case there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where the “same versus different” methodology concluded that there was a difference, yet comparisons of multiple amps, preamps, etc. apparently can result in one being generally preferred. I suspect, however, that those advocating db mean only the “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion, rather than the supposedly scientific basis for db, that underlies their advocacy? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here. Some people in a test may hear a difference, but if they are too few to achieve statistical significance, proponents say we must accept (strictly, fail to reject) the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests presume random samples, and since statistical significance is much easier to reach with large samples, everything about the typical db test works against ever yielding the result that people can hear a difference. This would suggest that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
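
To make the arithmetic concrete, here is a minimal sketch, in Python, of the kind of one-sided binomial calculation a same/different test involves. The trial counts are hypothetical, chosen only for illustration; the point is that a listener who is right 70% of the time over 20 trials still misses the conventional p < 0.05 cutoff, while a far weaker 60% hit rate over 100 trials clears it.

```python
from math import comb

def binomial_tail(successes: int, trials: int, p: float = 0.5) -> float:
    """One-sided p-value: the probability of getting at least `successes`
    correct answers in `trials` tries if the listener is purely guessing."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical outcomes, for illustration only.
print(binomial_tail(14, 20))   # 14/20 = 70% correct -> p ~ 0.058, not significant
print(binomial_tail(60, 100))  # 60/100 = 60% correct -> p ~ 0.028, significant
```

In other words, with the small panels typical of audio shootouts, the test is stacked toward “no difference” almost regardless of what anyone heard, which is exactly the complaint above.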

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But if we used a db protocol other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? Such a design would also test the hypothesis that some people can hear better than others.
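
As a sketch of how such a preference design could be scored, suppose a listener picks a favorite in a number of blind rounds among several components. Under the null hypothesis of no audible difference, the picks are uniform guesses, and a Monte Carlo simulation of that null tells us how surprising a consistent choice is. All counts and picks below are hypothetical, purely for illustration.

```python
import random
from collections import Counter

def preference_p_value(picks, n_components, sims=100_000):
    """Monte Carlo p-value for the most-picked component: the fraction of
    guessing-only simulations whose modal count is at least as extreme."""
    observed_max = max(Counter(picks).values())
    hits = 0
    for _ in range(sims):
        simulated = [random.randrange(n_components) for _ in picks]
        if max(Counter(simulated).values()) >= observed_max:
            hits += 1
    return hits / sims

# Hypothetical session: 12 blind rounds among 4 preamps;
# the listener chooses preamp number 2 in 9 of the 12 rounds.
picks = [2, 2, 1, 2, 2, 3, 2, 2, 0, 2, 2, 2]
print(preference_p_value(picks, n_components=4))  # tiny p -> consistent preference
```

A result like that would be hard to square with the placebo explanation, which is just the point: this design can produce a positive finding that the same/different design seems never to yield.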

I am probably like most subjectivists in that I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again, it strikes me, at least, that this should not happen in the world the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items that are no better than much cheaper ones.

Since my occupation is professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well, tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Qualia8,

There is a vast array of specializations among the neurons in the brain. Some, as you pointed out, detect differences; others, sameness; yet others, change or motion or timing, etc. Ignoring that complexity may lead both sides of this discussion to over-simplification at best and to closed-mindedness at worst.

With that in mind, let me add the flip side to my previous post to you. The after-effects may not only smear differences; they may also distort sameness. Take, for example, the two abstract amorphous paintings containing a rich array of colors in my living room. Everyone who looks at either one reports the same phenomena: the colors change, the amorphous shapes change, and those shapes move. Now, we know the painting remains the same. The changes are the result of the brain's processing. It appears the after-images of the various colors "combine" with the direct stimuli to produce a change in the perception, which in turn forms its own after-images, which "combine" with the subsequent direct stimuli, and so on. What follows is a sequence of illusory changes that creates a dynamic that is not there.

This perceptual phenomenon of after-images has been studied, but it has not been eliminated. The temptation to reduce its effect by taking micro-second intervals of music automatically prejudices the methodology against perceiving differences that require longer intervals, for example decay and rhythm.

The debate will probably go on. In the meantime, it's good to have a discussion that produces more illumination than heat.

Enjoy the Music,
John
But if these "after effects" mattered, John, then we'd see listening test results showing that putting gaps between samples improved subjects' sensitivity to differences. I don't know of any such test results. Do you?
TBG: So who called you a fool? Who called you "anti-science"? Citations, please.
Hi Phredd2,

You asked for some additional elements for rigorous methodology. In addition to the acuity tests I mentioned previously, participants should pass reasonable memory tests. Otherwise, their inability to distinguish 2 amps may not be a statement about the amps but about the participants. It is fine with me if an audiophile wants to listen privately just to see if he/she likes or prefers a component. But this is not acceptable for rigorous testing. Therefore, participants should be able to demonstrate their critical listening skills. If they aren't accustomed to listening consciously for nuances in harmonic textures, changes in micro-dynamics, phrasings, ambience, decay, etc., then they may miss subtle differences in how 2 amps reproduce the different musical elements.

"After-effects", as pointed out in my previous posts, are inherent to our perceptual mechanisms and brain circuitry/chemistry and may smear differences between 2 components in a short-term DBT. Consequently, a negative result of a short-term DBT may have an interpretation other than "no difference in the amps". Allowing enough time for the "after-effects" to subside, is one way to reduce their effects. However, this may add to some degradation of memory, as pointed out in one of the posts above; but that just re-inforces my contention that the underlying complexity has not been unravelled enough yet to make definite determinations. Please see my exchanges with Qualia8 for additional comments.

Great Listening,
John
"Therefore, participants should be able to demonstrate their critical listening skills."

Once again, the scientists are ahead of you. Standards for appropriate listener training exist. And they weren't devised based on the misapplication of principles from visual perception, let alone high-end cant; they were developed through experience that identified the background necessary to produce reliable results, both positive and negative.

If anyone doesn't feel those standards are sufficiently high, there has always been an alternative: Propose higher standards, and then find some audible difference that can't be heard without the benefit of your more rigorous training. For all the griping about DBTs, I don't see anybody anywhere doing that.

Finally, recalling the original subject of this thread, has any audio reviewer ever demonstrated that he possesses "critical listening skills" in a scientifically rigorous way? Nope. In fact, there's at least a little data suggesting that audio reviewers are *less* effective listeners than, say, audio dealers. This isn't too surprising: a dealer who carries equipment that sounds bad will go out of business, while a reviewer who recommends something that sounds bad just moves on to the next review.