Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion drawn from such testing, that all amps sound the same, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical "same or different" comparison favored by those advocating blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever resulted in a consensus. Two of the three db tests were same-or-different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps. Here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals who were at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that, were a db test to find people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also come into play here. Some people can hear a difference, but if they are too few in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random and are seldom, if ever, of substantial size. Since such tests properly apply to random samples, and since statistical significance is far easier to reach with large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
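To make the sample-size point concrete, here is a rough sketch in Python, using numbers I have made up (a hypothetical listener who is right 70% of the time, a one-sided exact binomial test at alpha = 0.05), not data from any test mentioned above. It estimates how often such a listener would actually reach statistical significance in a same/different session of a given length.

    # Sketch only: how often a listener with a given true "hit rate" reaches
    # statistical significance in a same/different test, using a one-sided
    # exact binomial test against the null of guessing (rate 0.5) at alpha = 0.05.
    from math import comb

    def upper_tail(n, k, p):
        # P(X >= k) for X ~ Binomial(n, p)
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def chance_of_significance(n_trials, true_rate, alpha=0.05):
        # Smallest hit count a pure guesser would reach with probability <= alpha
        k_crit = next(k for k in range(n_trials + 1)
                      if upper_tail(n_trials, k, 0.5) <= alpha)
        # Probability that the real listener reaches that count
        return upper_tail(n_trials, k_crit, true_rate)

    # A hypothetical listener who is right 70% of the time:
    for n in (10, 16, 25, 50, 100):
        print(n, "trials:", round(chance_of_significance(n, 0.7), 2))

On those assumed numbers, a listener who is genuinely right 70% of the time "passes" a 10-trial session only about 15% of the time, and even 25 trials leave it near even odds.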

Without db testing, the advocates suggest that those who hear a difference are deluding themselves, the placebo effect. But were we to use a db design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would also test another hypothesis: that some people can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind, multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Tbg: The average consumer cannot really do a blind comparison of speakers, because speaker sound is dependent on room position, and you can't put two speakers in one place at the same time. But I recommend you take a look at the article on Harman's listening tests that I linked to above. If you can't do your own DBTs, you can at least benefit from others'.

I think there's a danger in relying on reviewers because "I agreed with them in the past." First, many audiophiles read reviews that say, "Speaker A sounds bright." Then they listen to Speaker A, and they agree that it sounds bright. But were they influenced in their judgment by that review? We can't say for sure, but there's a good probability.

Second, suppose we do this in reverse. You listen to Speaker A, and decide it sounds bright. Then you read a review that describes it as bright. So you're in agreement, right? Not necessarily. A 1000-word review probably contains a lot of adjectives, none of which have very precise meanings. So, sure, you can find points of agreement in almost anything, but that doesn't mean your overall impressions are at all in accord with the reviewer's.

Finally, if you're interested in speakers, I highly recommend picking up the latest issue of The Sensible Sound, which includes a brilliant article by David Rich about the state of speaker technology and design. It's a lot more of a science than you think. The article is not available online, but if your newsstand doesn't have it (it's Issue #106) you can order it online at www.sensiblesound.com. Believe me, it is worth it.
my take on dbt is this: if you're talking about running db tests with amps, preamps, sources & speakers, what's the point? it's about what sounds "right" to each person & no amount of testing can show who likes what better. i too hate the word synergy but it's a real thing.

now if we're talking about db tests on things like gear that has been "upgraded internally" tested against a stock model, or exotic cables against regular wire, there is a lot of merit to a db test. i would also think db tests would be great for a lot of the things in our hobby that are deemed 'snake oil', like clocks & jars of rocks & especially interconnects & wires.

you can't just dismiss all db tests as inconclusive or worthless, nor can you say all db tests are worthy.

mike.
Wattsboss: I'd be careful about accusing others of naivete, if you're going to make posts like this. In a DBT, everything except the units under test is kept constant. So, for example, if you were comparing CD players, you would feed both to the same amp, and on to the same speakers. You wouldn't have to "blind" the associated components, because the associated components would be the same.
leme, I am not at all interested in DBTesting, as I know from personal experience that there are substantial differences between both cables and amps. This is why I would have to say there is real conceptual invalidity to DBTesting. Furthermore, I really don't care what the results would be, but I suspect that a disproportionate percentage of the time DBTests accept the null hypothesis.

Pabelson, I did not mean to say that I put much stake in what a reviewer may say, even if I had agreed with him in the past.

Bigjoe, certainly you can dismiss DBT if you find it invalid. Science has to be persuasive, not merely orthodox. And as I keep saying, this is not a hypothesis-testing circumstance; it is a personal-preference situation. Science is supposed to be value-free, with personal biases not influencing findings, but taste is free of such limitations and of any need to defend them.
Several people here seem to mistake the purpose of DBT. The purpose is not necessarily finding the "best" component, although that may be the case, for instance, in Harman's speaker testing. The point is often simply to see if there is any audible difference whatsoever between components. As Pabelson noted way, way back in this thread, if two systems differ with respect to *any* fancy audiophile qualities (presentation, color, soundstage, etc.) then they will be distinguishable. And if they are distinguishable, that will show up in DBT. Ergo, if two systems are NOT distinguishable with DBT, they do not differ with respect to any fancy audiophilic qualities. (That's modus tollens.)
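Put in bare propositional form (the letters are just my shorthand for the steps above, nothing more):

\[
\begin{aligned}
&D \to A &&\text{(if two systems differ in any audible quality, they are distinguishable)}\\
&A \to S &&\text{(if they are distinguishable, that shows up in DBT)}\\
&\neg S &&\text{(nothing shows up in DBT)}\\
&\therefore\ \neg A,\ \text{and hence}\ \neg D &&\text{(modus tollens, applied twice)}
\end{aligned}
\]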

So, if two amps cannot be distinguished unless you're looking at the faceplates, why buy the more expensive one? Now who finds fault with that reasoning?

It's not a matter of "I like one kind of sound, that other guy likes another kind of sound, so to each his own." If no one can distinguish two components, then our particular tastes in sound are irrelevant. There's just no difference to be had.