Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that its conclusion, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical “same versus different” comparison favored by proponents of blind testing.

I have been party to three blind tests and several “shootouts,” which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever resulted in a consensus. Two of the three db tests were “same or different” comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps. Here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion which underlies their advocacy, rather than the supposedly scientific basis of db? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here, as some people can hear a difference, but if they are insufficient in number to achieve statistical significance, then proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random samples and seldom, if ever, of substantial size. Since such tests presume random sampling, and since small samples make it hard to reach significance even when a real difference exists, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
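To illustrate the sample-size point with a rough sketch (the trial counts and hit rates below are hypothetical, not taken from any actual test): in a same/different trial, guessing alone yields 50% correct, and a listener with a real but modest ability can easily fail to reach significance in a short session, while the same ability would be unmistakable over many trials.

```python
from math import comb

def p_value(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided binomial p-value: probability of scoring at least
    `correct` out of `trials` if the listener is purely guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# A hypothetical listener who hears the difference about 70% of the time:
print(p_value(7, 10))    # ~0.17 -> not significant; the null is "accepted"
print(p_value(70, 100))  # ~0.00004 -> same hit rate, overwhelming evidence
```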

Without db testing, the advocates suggest those who hear a difference are deluding themselves, the placebo effect. But were we to use db testing with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some people can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again, it strikes me that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items that are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind, multiple-component design, especially one with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Tbg: If all you care about is finding a great speaker, why'd you start this thread???

All individuals are not the same. I never said they were. I think you're hung up on the idea of a hypothesis about what a majority of people can hear (in which case it would be necessary to test a random sample of all people). But the more common question in audio is, can anybody hear it? To answer that question in the affirmative, all you have to do is find *one* person who can hear a difference between two components. That's why testing a single individual can be appropriate. (Just remember that, in a single-person test, the null hypothesis relates to that single person; if he flunks, you can't conclude anything about anyone else.)
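Here is a rough sketch of what that single-person null hypothesis looks like in practice (the 16-trial session length is just an assumed example, not a standard):

```python
from math import comb

def min_correct_for_significance(trials: int, alpha: float = 0.05) -> int:
    """Smallest score at which one listener's result is significant,
    i.e. P(this score or better | pure guessing) <= alpha."""
    for correct in range(trials + 1):
        tail = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials
        if tail <= alpha:
            return correct
    return trials + 1  # alpha too small to reach with this many trials

# In a 16-trial same/different session, a single listener needs at least
# 12 correct before we can reject "this person is only guessing" at the 5% level.
print(min_correct_for_significance(16))  # -> 12
```

And the flip side: a listener who scores only 9 or 10 out of 16 hasn't proved anything, about himself or about anyone else.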

Here's a good example of the kind of testing that researchers do:

http://www.nhk.or.jp/strl/publica/labnote/lab486.html

Note that one of their 36 subjects got a statistically significant result. In a panel that large, this can easily happen by chance. To check this, they tested that individual again, and she got a random result, suggesting that her initial success was merely a statistical fluke.
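A quick back-of-the-envelope calculation shows why a single “significant” subject in a panel of 36 isn't surprising (this assumes independent subjects, each with exactly a 5% false-positive rate, which is a simplification):

```python
# Chance that at least one of 36 purely guessing subjects clears a
# p < 0.05 threshold: 1 minus the chance that all 36 fail to.
p_at_least_one_fluke = 1 - 0.95 ** 36
print(f"{p_at_least_one_fluke:.2f}")  # ~0.84
```

So you'd actually expect a fluke like that more often than not, which is why the retest matters.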
I started the thread because I am curious about those who doubt others' abilities to hear the benefits of some components and wires. Since many proponents can point to only a few examples of DBT and nevertheless seem confident of the results, I assumed that they saw DBT as endorsing their personal beliefs. Furthermore, my personal experiences with DBT same/different setups have been that I too could not be confident that my responses were anything other than random. But my experiences with single blind tests comparing several components have been more favorable, with a substantial consensus on a surprising best component.

Speakers have always been a problem for me. Some are better in some regards and others in other areas. I suspect that, within the limits of what we can afford, all of us pick our poison.

I did read your referenced article and found it very interesting and troublesome, as I use a Murata super tweeter which only comes in at 15 kHz and extends to 100 kHz. I am 66 and have only limited hearing above 15 kHz, yet in a demonstration I heard the benefits of the super tweeter, even though there was little sound and no music coming from the super tweeter when the main speakers were turned off. Everyone else in the demonstration heard the difference also. I know that the common response by advocates of DBT is that we were influenced by knowing when they were on.

I must admit that I am confident of what I heard and troubled by my not hearing a difference in a DBT. Were this my area of research rather than my hobby, I would no doubt focus on the task at hand for subjects in DBTs as well as the testing apparatus. My confidence is still in human ears, and I suspect that this is where we differ. I guess it is a question of the validity of the test.

For a sincere DBTer, such as yourself, I am not being truculent. For those embracing DBT as simple self-endorsement, I am dismissive.
For those embracing DBT as simple self-endorsement, I am dismissive.

No objectivists of my acquaintance (and I am acquainted with some fairly prominent ones), "embrace DBT as simple self-endorsement." A number of them, myself included, were subjectivists until we heard something that just didn't make sense to us. I know of one guy (whose name you would recognize) who was switching between two components and had zeroed in on what he was sure were the audible differences between them. Then he discovered that the switch wasn't working! He'd been listening to the same component the whole time, and the differences, while quite "obvious," turned out to be imaginary. He compared them again, blind this time, and couldn't hear a difference. He stopped doing sighted comparisons that day.

Research psychologists did not adopt blind testing because it gave them the results they wanted. They adopted it because it was the only way to get reliable results at all. Audio experts who rely on blind testing do so for the same reason.

Final thought: No one has to use blind comparisons if they don't want to. (Truth be told, while I've done a few, I certainly don't use them when I'm shopping for audio equipment.) Maybe that supertweeter really doesn't make a difference, but if you think it does, and you're happy with it, that's just fine. Just don't get into a technical argument with those guys from NHK!