Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical same/different comparison advocated by blind-testing proponents.

I have been party to three blind tests and several “shootouts,” which were not blind and thus resulted in each component having advocates, as everyone knew which was playing. None of these ever resulted in a consensus. Two of the three db tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both kinds of test there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion which underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that were there a db test that found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it in these terms: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also come into play here: some listeners can hear a difference, but if they are too few in number to achieve statistical significance, then proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests apply only to random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This would suggest that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
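To make the sample-size point concrete, here is a minimal sketch (the function name and the example scores are mine, not from any test cited in this thread) of the one-sided exact binomial test usually applied to same/different (ABX) scores. A listener who is right 70% of the time fails to reach significance over 20 trials but sails past it over 100 trials, which is exactly the small-sample effect described above:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided exact binomial test: the probability of scoring at
    least `correct` out of `trials` by pure guessing (chance = 1/2)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Same 70% hit rate, two different sample sizes:
print(round(abx_p_value(14, 20), 3))   # → 0.058, not significant at the 0.05 level
print(abx_p_value(70, 100) < 0.001)    # → True, comfortably significant
```

Nothing here speaks to whether the sample of listeners is random, of course; the test only asks whether a given listener's score is explicable by guessing.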

Without db testing, the advocates suggest, those who hear a difference are deluding themselves (the placebo effect). But were we to use db testing with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
I got no problem being blindfolded for two weeks solid as long as you point me in the general direction of the porcelain amp stand when need be.
Hmm, one of you speaks as if you have participated in and/or have seen the results from many audio DBT tests. Where are these tests held? Where are they reported?
Agaffer: A list of DBT test reports appears here:

http://www.provide.net/~djcarlst/abx_peri.htm

This list is a bit old, but I don't know of too many published reports specifically related to audio components since then. After a while, it became apparent which components were distinguishable and which were not. So nobody publishes them anymore because they're old news.

Researchers still use them. Here's a test of the audibility of signals over 20kHz (using DVD-A, I think):

http://www.nhk.or.jp/strl/publica/labnote/lab486.html

The most common audio use of DBTs today is for designing perceptual codecs (MP3, AAC, etc.). These tests typically use a variant of the ABX test, called ABC/hr (for "hidden reference"), in which subjects compare compressed and uncompressed signals and gauge how close the compressed version comes to the uncompressed.

Finally, Harman uses DBTs in designing speakers. Speakers really do sound different, of course, so they aren't using ABX tests and such. Instead, they're trying to determine which attributes of speakers affect listener preferences. The Harman listening lab (like the one at the National Research Council in Canada, whose designers now work for Harman) places speakers on large turntables, which allows them to switch speakers quickly and listen to two or more speakers in the same position in the room. Here's an article about their work:

http://www.reed-electronics.com/tmworld/article/CA475937.html

And just for fun, here's a DBT comparing vinyl and digital:

http://www.bostonaudiosociety.org/bas_speaker/abx_testing2.htm

I think Stan Lipshitz's conclusion is worth noting:

"Further carefully-conducted blind tests will be necessary if these conclusions are felt to be in error."
Pableson, I find your posts interesting, though not really responsive to the initial thread by TBG about the place of DBT in audio. Nor have I felt that your posts have been responsive to my similar concerns, and to my additional concerns about experimental validity (though I am sure that not all the tests have been invalid). Take, for instance, the very interesting and amusing 1984 BAS article in which the Linn godfather not only failed to differentiate the analog from the digitally processed source, he identified more analog selections as digital. But... this was an atypical setup that would not be found in any home. We can’t really generalize from it, and saying so has nothing to do with advocacy of the "subjectivist" viewpoint. If you would be true to your objectivist bona fides, wouldn't you have to agree?

Then there’s the issue, supported by your citations, that there have been DBTs going back years that have demonstrated noticeable differences between individual components.

So, I think there is a background issue, one also mentioned in TBG’s initial post. Many adherents of DBT seem to be seeking the very "conformance" they want to point out in others. That "conformance"? That until the qualities claimed to exist can be proven to exist, they must be assumed not to exist. It is an intoxicating argument, but ultimately it reveals a distinct bias: the invalidation of the experience of others as an a priori position until they can meet your standard.

This "you ain't proved nothin'" approach is especially troublesome when one reads subjective reviews and realizes that the points they raise, creative writing though they may be, could never be addressed by DBT, ABX, or any similar methodology. The majority of what we are able to perceive is not amenable to measurement that can be neatly, or even roughly, correlated with perception. To claim otherwise is an illusion. Enter the artists with some scientific and technical skill, and we have high end audio. Sadly, with them come the charlatans and the deluded, along with average and "golden eared" folks who hope that they can hear their music sound a bit more like they think they remember it sounding somewhere in the past. Add something like cables, and it seems the battle lines are drawn.

I’m a bit suspicious that you might not allow the person who can reliably detect a difference between two components to write whatever he wants in your forthcoming journal. You claim that once the DBT is passed, he can describe a component any way he wants, but that doesn't really make sense to me: a "just noticeable difference" is not the same as being able to notice all of the differences subjective reviewers claim, is it? If someone can tell the real Mona Lisa from a reproduction, even a well executed one, do you really care to hear everything else he thinks about it? I don’t. I might want to see it myself, though.

I don’t think there will ever be anything like being able to recreate the exact sonic experience of a live musical performance in a home or studio. What we can hope for are various ways to recreate some reasonable semblance of some aspects of some performances. DBT probably has a place there.

In the meantime, I’d like to suggest a name for your journal: The Absolutely Absolute Sound. I think Gunbei has a supply of blindfolds.
Rouvin: There really isn't much point in arguing with someone who assumes his conclusions, and then does nothing but repeat his assumptions. Here's what I mean:

The majority of what we are able to perceive is not amenable to measurement that can be neatly, or even roughly, correlated with perception.

How do you know what you are *able* to perceive (as distinct from what you *think* you perceive)? In the field of perceptual psychology, which is the relevant field here, there are standard, valid ways of answering that question. But it's a question you are afraid to address. Hence your refusal of my challenge to actually conduct a DBT of any sort. And the idea that you, an amateur audio hobbyist without even an undergraduate degree in psychology, have any standing to declare what is and is not valid as a test of hearing perception is pretty risible.

Finally, just to clear up your most obvious point of confusion: There is a difference between "what we are able to perceive" and "how we perceive it." You are conflating these two things, again because you don't want to face up to the issue. "What we are able to perceive" is, in fact, quite amenable to measurement. It's been studied extensively. There are whole textbooks on the subject.

Your harping on subjective reviewing, by contrast, is about "how we perceive it." We can't measure sound and make predictions about how it will sound to you, because how it will sound to you depends on too many factors besides the actual sound. That's why we need DBTs--to minimize the non-sonic factors. And when we minimize those non-sonic factors, we discover that much of what passes for audio reviewing is a lot of twaddle.