Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion that all amps sound the same, which resulted from such testing, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by those who advocate blind testing.

I have been party to three blind tests and several "shootouts," which were not blind tests and thus resulted in each component having advocates, as everyone knew which was playing. None of the shootouts ever resulted in a consensus. Two of the three blind tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals who were at odds with the overall conclusion, and in no case were those involved a random sample. In no case were more than 25 people involved.

I have never heard of an instance where the "same versus different" methodology concluded that there was a difference, yet apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion, rather than the supposedly scientific basis of db, that underlies their advocacy? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates would rather be right than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are insufficient in number to achieve statistical significance, the proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random samples and are seldom, if ever, of substantial size. Since such tests assume random samples, and since statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
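To make the sample-size point concrete, here is a minimal sketch (my own illustration, not part of the original post) using the standard binomial model of a same/different panel; the 12- and 60-trial panels and the 70% "true" hit rate are assumed numbers chosen purely for illustration.

```python
# Minimal sketch: a same/different trial is modeled as a coin flip when the
# listener is guessing (p = 0.5). Small panels rarely clear the significance
# bar even when a listener genuinely hears a difference most of the time.
from math import comb

def p_value(correct: int, trials: int) -> float:
    """One-sided chance of getting at least `correct` right by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def power(trials: int, true_rate: float, alpha: float = 0.05) -> float:
    """Chance that a listener who is right `true_rate` of the time reaches
    significance at level `alpha` in a panel of `trials` trials."""
    threshold = next(k for k in range(trials + 1) if p_value(k, trials) <= alpha)
    return sum(comb(trials, k) * true_rate ** k * (1 - true_rate) ** (trials - k)
               for k in range(threshold, trials + 1))

print(p_value(9, 12))   # ~0.073: 9 of 12 correct is not significant at 0.05
print(p_value(10, 12))  # ~0.019: 10 of 12 correct is
print(power(12, 0.7))   # ~0.25: a genuine 70% discriminator usually "fails" a 12-trial test
print(power(60, 0.7))   # ~0.94: the same listener over 60 trials almost never does
```

In other words, with small, convenient panels of the kind described above, a null result is close to the expected outcome even when a real difference exists.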

Without db testing, the advocates suggest that those who hear a difference are deluding themselves: the placebo effect. But if we used a double blind design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would also test another hypothesis, that some people can hear better than others.
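As a rough sketch of that alternative design (again my own illustration, with made-up numbers): ask whether a single, pre-specified component, say one preamp out of six, is chosen as the favorite in repeated blind sessions more often than a one-in-six guess would predict.

```python
# Minimal sketch: repeated blind preference picks among k components, testing
# whether one pre-specified component is chosen more often than chance (1/k).
from math import comb

def consistency_p_value(picks: int, sessions: int, k: int) -> float:
    """One-sided chance of the pre-specified component being picked at least
    `picks` times in `sessions` blind sessions if every pick were a 1-in-k guess."""
    p = 1 / k
    return sum(comb(sessions, i) * p ** i * (1 - p) ** (sessions - i)
               for i in range(picks, sessions + 1))

# Assumed example: one preamp chosen in 7 of 10 blind sessions from a field of 6.
print(consistency_p_value(7, 10, 6))  # ~0.0003: very unlikely under pure guessing
```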

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind, multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
I sent the following letter to Stereophile:

Those of us who have been audiophiles for a long time (20+ years) have chronicled the progress of audio components. We have gone down the wrong road too many times to count, either led there by others or led by our own ignorance and prejudice. We need to remember that while for the consumer this is a hobby, for the producers it is a business. Producers must make a profit or die. The road to hi-fi perfection is littered with excellent products whose producers did not pay attention to normal business practices.
Audio producers have to fight for market share like anyone else. The best way to get market share is through aggressive advertising. Build a better mousetrap and they will come to you. Design a slick ad campaign and they will also come to you. There is a problem, however: the audio review. If done correctly, it seeks to pick the best mousetrap and debunk the advertising myths. Like the manufacturers, the magazine is also a business, and it must make a profit or die. Even worse, its profits come primarily from the very producers it seeks to evaluate. A canceled subscription hardly competes with a canceled ad. This puts an ethical strain on even the most principled reviewer.
Audiophiles aren't stupid. This is a hobby. We are not just interested in good music; we like our components to come in beautiful packages, with exclusivity, etc. Just because we purchased something unnecessary does not mean we were tricked. I am sure the piano black finish on my turntable has nothing to do with the sound. Does that mean I was tricked?
From a consumer standpoint, if a manufacturer claims that his product sounds better or different, it is the reviewer's job to evaluate the manufacturer's claim.
Most reviewers want the manufacturer's claim to be true. The reasons are obvious: they can recommend a better product to their readers, the state of the art is advanced, and the manufacturer can buy ads. Negative reviews save readers money, nudge producers in the right direction, and establish the reviewer's credibility.
Ironically, not everyone can be right. Being right or wrong has serious financial consequences for all involved. Reviewers have been wrong. Manufacturers have been wrong, and sadly some have tried to rig the process. More often than not, mistakes are based on ignorance and prejudice. Ignorance must be cured by the ignorant, and corruption should be prosecuted. Maybe we can do something about prejudice?

If we did not know which product was being tested, we could at least eliminate our personal prejudice. There is nothing wrong with double blind testing (dbt) per se, but the proponents of dbt bring their own prejudices to the table. They want to engage in very short tests conducted by the uninitiated, and most use dbt to try to prove what they have already concluded, e.g. that cables and amps all sound the same and that expensive products are just a rip-off. How about a dbt between vinyl and digital? Or electrostatic and dynamic speakers, or tubes and solid state?
The opponents of dbt are also somewhat disingenuous: "I do not need dbt because I am not prejudiced." It is the nature of prejudice not to be aware of it; the person who is prejudiced just thinks he is right.
The design of components is a mixture of art and science. The reviewer's job is almost all art. Some things just don't lend themselves to scientific testing. Could you have a dbt of who is the most beautiful woman, or of which piece of music is the most soothing?
Alas, dbt does not even approach the real question. What difference does it make to me whether A and B sound different or the same? My question is which one more closely approximates the illusion of real music for me. Hasn't that always been the goal for audiophiles?
I have always held that dbt is more a test of sonic memory: the better your memory, the better you will test.
If you have read this carefully, you are going to be surprised by my conclusion: everyone who is involved in audio design or review should, from time to time, engage in some sort of dbt! Your goal should be to determine which product sounds more like music. You may discover biases you did not know you had. Take your time. Make your dbt as much like your regular evaluation process as possible. I think you will benefit from it.
Reginald G. Addison
rgregadd@aol.com
Forestville, Maryland
Pabelson, why do you willfully ignore the truth? It is only your CLAIM that DBT gives the reality, when you say "DBT--because it usefully separates reality from illusion." Certainly you don't claim that DB testing is isomorphic to reality. It is a well structured experiment that differs greatly from what we normally hear and how we hear it. I would say that DBT is an illusion of reality, and that reality would be found in the amp that most people preferred, especially if personal ownership and manufacturer were hidden.

I suspect that this discussion has gone as far as it can. You insist that double blind same/different testing is valid, and I say it is not, because it is an invalid assessment of whether people hear differences and of what they like. I am no more saying "I like what I like, and I reject anything that says otherwise" than you are saying "I know there are no differences among amps, etc., and therefore anything that shows otherwise is not science as represented by DBT."
The illusion is the false reality, kinda by definition.
That is correct, semantically. It's also kinda philosophical.
IMO we should distinguish between semantics and philosophical extrapolations and the simple PRACTICAL application of DBT in our (restricted) context.

BTW, I also suggest that certain things CAN be indicative of performance or INFLUENCE things, in OUR context, such as:
* measurements -- as long as we measure what correlates to what we're looking for (i.e. we would have to determine in advance which measurement indicates what aspect, in terms of perceived sound; little has been done there)
* wires for example -- because they link two electrical circuits, active / passive & combinations thereof
* active components: their circuit design, power supplies, input & output stages, components used... influence the distortion levels AND how well these components interact with the load. Change the load (what the output stage "sees") and things change electrically; if we change something in the system, we've modified the system "circuit", fer pete's sake. Things may also change in the audible range (see the sketch after this list)...

...etc.
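Here is the sketch referred to in the last bullet: a hypothetical back-of-the-envelope calculation (round numbers I have assumed, not measurements from anyone's system) of how an amplifier's output impedance interacts with a speaker whose impedance swings with frequency.

```python
# Minimal sketch: the amp's output impedance and the speaker's frequency-dependent
# impedance form a voltage divider, so the same amp can produce a different
# frequency response into a different load.
from math import log10

def level_db(z_speaker: float, r_out: float) -> float:
    """Relative level (dB) at the speaker terminals for a given load impedance."""
    return 20 * log10(z_speaker / (z_speaker + r_out))

# Assume the speaker's impedance swings between 4 ohms and 20 ohms across the band.
for r_out in (0.1, 2.0):  # assumed low vs. high output impedance (roughly solid state vs. tube)
    swing = level_db(20, r_out) - level_db(4, r_out)
    print(f"output impedance {r_out} ohm -> response varies by about {swing:.2f} dB")
# ~0.17 dB vs ~2.7 dB: the second swing is large enough to be audible, the first probably not.
```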

So, maybe we are discussing whether it's worth setting up dbt to help notice differences in the audible spectrum?
Or whether perhaps dbt is not the most efficient/reliable method of doing so in this particular context?
Or, perhaps, discussion is a way of communicating -- a marvellous, human activity that we all need. The subject of dbt allows us to do just that: what we really want to do is to talk regardless, and dbt offers us just that opportunity, whether or not it is a panacea.
I go for the latter -- my take of course! Cheers:)
Qualia8, I think the blind testing of women orchestra players has been a wonderful thing. So I'm cool with that and think it's a wonderful breakthrough for women musicians and for fairness.
But audio testing isn't single-player auditioning, is it? ... and probably another way of auditioning that would be great would be to have auditioner X blind-play along with the rest of her section or the rest of the orchestra! ... I'm willing to say that blind auditioning is a good way of rooting out sexism in big-city orchestra hiring, but I'm not willing to conclude that this addresses the problems with audio auditioning.
The problem again is synergy--and time and acquired taste and listener quirkiness.
You write, "So, if two amps cannot be distinguished unless you're looking at the faceplates, why buy the more expensive one? Now who finds fault with that reasoning?"
To which I would say you would need to test the amplifiers in question with at least 3 different types of speakers, including speakers with different sensitivity levels and different impedance (ohm) ratings, before you can go anywhere near calling it a valid test. Beyond that, I know that my listening preferences, food preferences, beer preferences, and beauty preferences are not static and frozen. I have a close friend who is not drop-dead gorgeous at first look, but keep looking at her and, over time, you are just drawn more and more to her face. It's a beauty that takes time to emerge, and when it hits you, you're deeply enthralled because you keep searching and studying her beauty. Yet if you put her picture in a mag, maybe I wouldn't pick it out. This woman's beauty increases over time, and this is different from the "now that I'm with her/him, of course he/she's good looking" effect.
So the point: should we blind audition for 1 month? 3 months? 6 months? ... time is critical here.
Ultimately, I reject the idea of auditioning a single component other than a source component. Speakers and amplifiers have to be auditioned as a team. And teams, we all know, often combine in ways that are more than the sum of their parts or less than the sum of their parts.
to Jeff Jones, hey, it's OK if someone wants to test something; it's just that we should be honest about the very real limits of the tests! ... This reminds me of modern presidential polling. Pollsters call 1,000 people around the USA, get hangups, etc., and eventually come up with a number of people voting for Bush vs. Kerry. And the polls have a margin of error of 3 to 5 percentage points, so a poll that says Bush 50, Kerry 50 could really mean (figuring in the margin of error) Bush 55, Kerry 45 or Kerry 55, Bush 45. On its own, the poll is basically useless. The way out of this is that pollsters take samples every day as the campaign advances, and there are many different polling agencies; it's only under conditions of repeated poll-taking by multiple and antagonistic entities that we gain any confidence that yes, the 2004 Presidential race is neck and neck. When we see polls over 10 straight days, taken by 10 different entities, clustering around Bush 50, Kerry 50, and clustering over a period of weeks, ONLY then can we have some confidence. Even so, voter turnout is always the X factor, and most pollsters admit that's the one they can never nail down. A higher than usual turnout among Democrats or Republicans or evangelicals or blacks or whomever will make the poll results pretty much invalid.
I just don't think the tests can get us the information we're looking for without doing something like the equivalent of daily tracking by multiple entities ... And by the way, in polling, all these entities have an incentive to get the numbers right, because they will make more money and win more acclaim. The polls by the candidates themselves have this incentive in a big way. But where's the equivalent incentive for audio?
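For what it's worth, the polling figures above follow from the standard margin-of-error formula; here is a minimal sketch with assumed sample sizes (the 25-listener case simply echoes the small panels mentioned earlier in this thread).

```python
# Minimal sketch: 95% margin of error, in percentage points, for a simple random
# sample of n respondents on a roughly 50/50 question.
from math import sqrt

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Half-width of the 95% confidence interval for a proportion p, in points."""
    return 100 * z * sqrt(p * (1 - p) / n)

print(margin_of_error(1000))  # ~3.1 points for the typical 1,000-person poll
print(margin_of_error(400))   # ~4.9 points for a smaller poll
print(margin_of_error(25))    # ~19.6 points for a 25-person listening panel
```

The same arithmetic is one reason a couple of dozen listeners, sampled once, cannot settle much on their own.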
I frankly celebrate the wonderful elusive complicated complexity and quirkiness of listening to audio.
I'm quite willing to enjoy that.
With regard to the DBT on the two amps that were indistinguishable: even if I accept the results as given, don't impose my own sense that "I could have heard the difference, unlike those participating," and accept as valid that, yes, one could not tell the difference, it still doesn't tell me much. That is because all it has shown me is that these two amps are indistinguishable in a particular system setup (especially the speakers!), in a particular room, etc. So adding that piece of information to a review, for example, doesn't help me unless I have the exact same setup sans the amps. I'm not arguing that reviewers are perfect either; far from it. They are inherently subjective and bound by their own experiences and equipment, as we all are, but at least he or she can frame their observations in a context which lets me, as a potential buyer, know what to look for, what to investigate, and what things I may need to consider.