Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double-blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson’s double-blind test involved listening to three amps, so it apparently was not the typical “same versus different” comparison favored by blind-testing advocates.

I have been party to three blind tests and several “shootouts,” which were not blind and thus left each component with advocates, since everyone knew which was playing. None of these ever resulted in a consensus. Two of the three blind tests were “same or different” comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, where there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that were a db test to find people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it in these terms: the double-blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are too few in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random samples and seldom, if ever, of substantial size. Since such tests assume random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
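The sample-size point can be illustrated with a hypothetical exact binomial test on same/different guesses (the numbers below are mine, chosen for illustration, not drawn from any test described in this thread). The same hit rate can fail or pass the usual 0.05 significance threshold depending only on how many trials were run:

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the chance of scoring at least `correct`
    hits out of `trials` same/different guesses if the listener is
    purely guessing (chance = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# The same 80% hit rate, at two sample sizes:
print(abx_p_value(8, 10))    # ~0.055: just fails the 0.05 cutoff
print(abx_p_value(32, 40))   # ~0.0001: overwhelmingly significant
```

With only a handful of listeners and trials, even a fairly reliable ability to hear a difference tends to be reported as “no difference,” which is the asymmetry described above.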

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But were we to use a db protocol other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind might question my commitment to science. My experience with same/different double-blind experiments suggests to me a flawed methodology. A double-blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
With regard to the DBT on two amps that were indistinguishable: even if I accept the results as is, and do not impose my own sense that "I could have heard the difference, unlike those participating," and even if I accept the results as valid, that one could not tell the difference, it still doesn't tell me much. All it has shown is that these two amps are indistinguishable in a particular system setup (especially the speakers!), in a particular room, and so on. So adding that piece of information to a review doesn't help me unless I have the exact same setup, amps aside. I'm not arguing reviewers are perfect either, far from it; they are inherently subjective and bound by their own experiences and equipment, as we all are. But at least a reviewer can frame his or her observations in context, which tells me as a potential buyer what to look for, what to investigate, and what things I may need to consider.
Put another way, double-blind testing cannot work in audio, in my opinion, precisely because there is no "absolute" definition of "what sounds better" to begin with (with apologies to HP). How do we define it in the first place? Measurements are fine, but only in terms of allowing us to figure out what aspects of sound (jitter, frequency range, channel separation, etc.) contribute to what we are hearing; in the end it's a VALUE judgement. Some like tubes, some like SS. Some need gigantic floorstanders for frequency extension; some prefer mini-monitor accuracy and imaging. Some prefer transparency, air, etc.; others, image solidity, warmth, etc. The evaluators who comment after a DBT are including their own subjective value judgements as well. You cannot answer a question that is ill-defined in the first place. In this light, someone else, perhaps half joking, asked what about cars, wine, literature, whatever, and in a way he is right on the money: in all of these pursuits value judgements occur.

If we are to focus on just the aspects of sound and make no value judgement, i.e., we just want to know, say, the level of jitter of one unit against another, then we can simply measure for the most part; again, no need for DBT.

The one area where I think DBT can perhaps be used is not in comparing one brand against another, but within a brand and the same model: take the exact same system setup and have the manufacturer test variants of a new model, one with more jitter than the other, one with upsampling switched on and the other without, and so on. This way one might investigate what matters most, a priority schematic, if you will, to audiophiles or the public at large, and then devise products, or an array of options on the same product, to maximize revenue or provide tailor-made solutions for various segments. Obviously this is more a marketing strategy than an answer to the "holy grail."
Sooner or later someone is gonna start advocating db testing for cars...yikes!
"It is a well structured experiment that differs greatly from what we normally hear and how we hear it."

This is just an astounding statement. How in the world can simply not telling someone what they are listening to affect what they *hear*? I'll grant you, it can certainly affect what they think about what they hear, but that's just the point. What they think is a function of things besides what they hear, and DBTs isolate the non-sonic effects.

Note that there's no necessary contradiction between these two statements:
1) Harry can't hear a difference between A and B.
2) Harry prefers A to B.
Both can be true. All it means is that Harry prefers A to B for some reason other than its sound (even if he thinks the sound is the reason).
Greg: Your generally thoughtful and balanced letter was, in my opinion, a little too balanced. Here's where you went astray:

"The proponents of dbt...want to engage in very short tests conducted by the uninitiated. Most proponents of dbt use it to try and prove what they already have concluded, e.g. that cables and amps all sound the same."

This reflects a basic misunderstanding. Objectivists don't want short tests, we want good tests. (All the research suggests that short tests are in fact better tests, but DBTs can be any duration you want). And a requirement of good DBTs is that you provide the subjects with adequate "training," meaning that they are familiar with the sound of the equipment they are comparing. The "uninitiated" make very poor test subjects.

Finally, no one argues that all cables and amps sound the same, and that's not the purpose of DBTs. The purpose of DBTs is to determine *which* components sound the same, and which do not.

"How about a dbt between vinyl and digital? Or electrostatic and dynamic speakers, or tubes and solid state?"

All of these have been done, at one time or another. Vinyl and digital are easily distinguishable--unless the digital is a direct copy of the vinyl. Speakers are always distinguishable in DBTs. Tube and solid state amps are often but not always distinguishable. When they are distinguishable, it's usually because the tube amp is underpowered and clipping (though very mellifluously, as tubes are wont to do!), or because the output impedance of the amp is interacting with cable and speaker to produce frequency response errors.