Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by those advocating blind testing.

I have been party to three blind tests and several “shootouts,” which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever produced a consensus. Two of the three blind tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps. Here there was substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In every case there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db testing mean only the “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion, rather than the supposedly scientific basis for db, that underlies their advocacy? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here, as some people can hear a difference, but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid on two counts: the samples are never random, and they are seldom, if ever, of substantial size. Since significance tests assume random samples, and statistical power grows with sample size, nothing in the typical db test works to yield the result that people can hear a difference; failing to reject the null hypothesis is not the same as proving it. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
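The arithmetic behind this point is easy to check. Here is a small sketch (the trial counts are hypothetical, chosen only for illustration) of the one-sided binomial test usually applied to same/different trials: the very same 75% hit rate that reaches significance over 16 trials fails over 8, so a short test is stacked toward "no difference" regardless of what the listener actually hears.

```python
from math import comb

def p_value(correct, trials, p_chance=0.5):
    """One-sided binomial p-value: the probability of scoring at least
    `correct` out of `trials` by guessing alone."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(correct, trials + 1))

# A listener who gets 12 of 16 same/different trials right:
print(round(p_value(12, 16), 3))   # 0.038 -- just clears the 0.05 bar

# The same 75% hit rate over only 8 trials (6 of 8 correct):
print(round(p_value(6, 8), 3))     # 0.145 -- reported as "no audible difference"
```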

Without db testing, the advocates suggest that those who hear a difference are deluding themselves: the placebo effect. But if we used db testing with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
In any AB comparison, the two compared signal path elements (components, cables, tubes, etc.) must each have at least a week's trial of listening experience before being switched to the other, preferably over 4 or more such switches.

More immediate AB comparison is not sufficient to reveal subtle but significant differences.

This is because of the nature of auditory perception and its dependence on memory accrued over time (days or weeks, and not hours).

Changes attributed to burn-in are nearly always the result of becoming familiar with (accruing memory of) a component over days or weeks; your perception is what changes, not the component.

Immediate AB testing can be downright invalid, and is only useful for detecting large and obvious sonic differences.
This is because of the nature of auditory perception and its dependence on memory accrued over time (days or weeks, and not hours).

This is just 180 degrees opposite of the truth. Auditory memory for subtle differences dissipates in a matter of seconds. I defy you to cite a single shred of scientific evidence to the contrary.
I’ve gone over this debate and would like to summarize many of the points made.

As to DBT there may be:
1. Problems with methodology, per se, in audio;
2. Problems with most DBT that has been done, e.g., lack of random assignment or no control groups, making these experiments invalid scientifically;
3. Problems with particular experimental designs that are unable to yield meaningful results;
4. Sample problems, such as insufficient sample size, non-random samples;
5. Statistical problems making interpretation of results questionable.

All of these problems interact, making the results of most DBTs in audio scientifically meaningless.
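The sample-size objection (points 4 and 5) can be quantified with a standard statistical power calculation. The sketch below is illustrative only: it assumes a listener with a true 70% hit rate in a same/different test, the conventional 0.05 significance level, and asks how often such a genuine ability would even register in tests of various lengths.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def power(n, p_true, alpha=0.05, p_chance=0.5):
    """Chance that a listener with true hit rate p_true produces a
    result significant at level alpha in an n-trial test."""
    # Find the smallest score whose guessing-only tail probability
    # clears the significance level.
    for threshold in range(n + 1):
        tail = sum(binom_pmf(k, n, p_chance) for k in range(threshold, n + 1))
        if tail <= alpha:
            break
    # Probability that the genuinely able listener reaches that score.
    return sum(binom_pmf(k, n, p_true) for k in range(threshold, n + 1))

for n in (10, 16, 40):
    print(n, round(power(n, 0.7), 2))
```

With 10 trials, such a listener fails to reach significance roughly 85% of the time; only at several dozen trials does detection become likely. A test too short to detect a real effect cannot honestly be read as evidence of no effect.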

Advocates of DBT have been especially vociferous in this forum, but what have they actually said to respond to these criticisms? Virtually nothing beyond "No!" or "Where’s your proof?"

The "proof" of their position cited has been interesting, but it has been a reporting on the power of "sham" procedures or other stories that do not meet the guidelines necessary for a DBT procedure to qualify as science.

At the same time, they call DBT science and maintain the supremacy of science. Calling something science without strictly adhering to scientific procedures, unfortunately, is not science, and this is the case with DBT in audio far more often than not. At this point, it is more akin to the claim that intelligent design is science than to science itself. An additional point made in this forum has been the large number of DBTs that have failed to demonstrate that differences can be heard. But a large number of scientifically compromised procedures yields no generalizable conclusions.

For anyone who has worked at a major university research mill, as I have, the skepticism about research results is strong. It is not that there is an anti-research or anti-science attitude. Rather, it is a recognition that the proliferation of research is more driven by the necessity of publishing to receive tenure and/or the potential for funding, increasingly from commercial interests that have compromised the whole process. We will have to see what happens to scientific DBT in audio when and if it happens.

I conclude that we are speaking fundamentally different languages when advocates of subjective audio evaluation and DBT advocates speak. For my part, subjective evaluation is fine as long as I understand that I better think twice before I believe a reviewer. I also truly believe in the supremacy of science, and intelligent design is not science.
Rouvin, I substantially agree, of course. I agree moreover about the liabilities of publish or perish in academia and its effect on research, even though I am in a field with no commercial interests, other than public polling.

I do study public policy as well, including the impact of creationism, or intelligent design as it is now called. It is awkward to get good state data on science degrees issued before and after adoption of anti-evolution policies, but the worst states in terms of failing to teach evolution have not experienced a decline in science degrees. They never had many in Kansas, for example. It is much like abortion restrictions: the states that adopt such restrictions are those with few abortions, and they experience no decline thereafter. Where abortion is common, no politician would risk introducing a restriction or voting for one.

I too have been struck by why those advocating DBT seem to think anyone need bother paying attention to the results when buyers obviously hear a difference that causes them to buy. Anyone who trusts reviewers for anything more than suggestions of what to give a listen is bound to be disappointed.
For a guy who doesn't believe in intelligent design, Rouvin, you practice its methods to perfection. You offer no evidence of your own--no tests, no results, nothing that can be replicated or disproved. Instead, you quibble with the "methodology," which you seem substantially uninformed about ("e.g., lack of random assignment or no control groups, making these experiments invalid scientifically"--Why in the world would you need a control group in a perception test?)

We are speaking different languages, Rouvin. DBT advocates are speaking the language of science. You are not.