Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double-blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that its conclusion, that all amps sound the same, proved incorrect in the long run. Atkinson's double-blind test involved listening to three amps, so it apparently was not the typical "same or different" comparison advocated by proponents of blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever resulted in a consensus. Two of the three blind tests were same-or-different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals who were at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, yet apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating DBT mean only "same versus different" methodology. Do the advocates of DBT really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy, rather than the supposedly scientific basis for DBT? Some advocates claim that if a DBT found people capable of hearing a difference they would no longer be critical, but is this sincere?

Atkinson puts it in these terms: the double-blind-test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here. Some people can hear a difference, but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random samples and are seldom, if ever, of a substantial size. Since such tests properly apply only to random samples, and statistical significance is greatly enhanced with large samples, nothing in the typical DBT works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
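To make the sample-size point concrete, here is a minimal sketch of my own (not anything run in the tests described above), assuming the usual one-sided exact binomial test against chance guessing in a same/different trial. A hypothetical listener who is right 60% of the time looks like "no difference" with a handful of trials and only reaches significance once the trial count gets large.

```python
# Exact one-sided binomial test against guessing (p = 0.5) in a
# same/different comparison, using only the Python standard library.
from math import comb

def binomial_p_value(hits: int, trials: int, chance: float = 0.5) -> float:
    """Probability of getting at least `hits` correct by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

# A hypothetical listener who is right 60% of the time.
for trials in (10, 20, 50, 200):
    hits = round(0.6 * trials)
    p = binomial_p_value(hits, trials)
    verdict = "significant at 0.05" if p < 0.05 else "not significant"
    print(f"{hits:>3}/{trials:<3} correct -> p = {p:.3f} ({verdict})")
```

With 10, 20, or even 50 trials, that 60% hit rate stays above the 0.05 threshold; small panels are almost guaranteed to "accept the null."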

Without DBT, the advocates suggest that those who hear a difference are deluding themselves: the placebo effect. But were we to use DBT with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis, that some people can hear better than others.
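One way such a design might look, under my own assumptions rather than anything proposed in this thread: give each listener repeated blind preference trials between two components and test each listener individually for above-chance consistency, instead of pooling everyone into a single group score. The listener names and consistency rates below are invented.

```python
# Per-listener blind preference consistency, simulated with made-up listeners.
import random
from math import comb

def binomial_p_value(hits: int, trials: int, chance: float = 0.5) -> float:
    """Probability of choosing one component at least `hits` times by chance."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

random.seed(1)
TRIALS = 40
# Hypothetical listeners: probability of preferring component A on a blind trial.
listeners = {"golden_ear": 0.85, "average": 0.60, "coin_flipper": 0.50}

for name, prob in listeners.items():
    chose_a = sum(random.random() < prob for _ in range(TRIALS))
    p = binomial_p_value(chose_a, TRIALS)
    print(f"{name:12s} chose A on {chose_a}/{TRIALS} trials, p = {p:.4f}")
```

The point of testing per listener is that one genuinely discriminating listener shows up clearly, rather than being averaged away by the guessers.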

I am probably like most subjectivists, as I really do not care what the outcomes of DBT might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items that are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double-blind testing might question my commitment to science. My experience with same/different double-blind experiments suggests to me a flawed methodology. A double-blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Qualia, you state, "So, if two amps cannot be distinguished unless you're looking at the faceplates, why buy the more expensive one? Now who finds fault with that reasoning?" My point is that a DBT finding of "no difference" does not mean there is no difference. It is not a valid methodology, as it is at odds with what people hear even when they cannot see the faceplates. Furthermore, I can hear a difference, and my tastes are all that matter. This is not a scientific demonstration.
Tbg:

All of us here are interested in one thing: the truth. If DBT is a fundamentally flawed methodology, its results are no guide to the truth about what sounds good. So if the studies are all flawed, and there are audible differences between amplifiers with virtually the same specs, even if, somehow, no one can detect those differences without looking at the amps, then I'm with you. Likewise, if there isn't anything fundamentally wrong with the studies, and they strongly indicate that certain components are audibly indistinguishable, then you should be with me.

Your own perceptions -- "I can hear a difference and my tastes are all that matters" -- should not trump science any more than your own experiences in general should trump science. I remember seeing ads with athletes saying "Smoking helps me catch my wind." I also recall people saying how smoking made them healthy and live long. Their personal experiences with smoking did not trump the scientific evidence, though. This is just superstition. The Pennsylvania Dutch used to think that if you didn't eat doughnuts on Fastnacht's Day, you'd have a poor crop. Someone had that experience, no doubt. But it was just an accident. Science is supposed to sort accident from true lawful generalization. It's supposed to eliminate bias, as far as possible, in our individual judgments and take us beyond the realm of the anecdote.

Now, if your perception of one component bettering another is blind, then ok. But if you're looking at the amp, then, given what we know about perception, your judgments aren't worth a whole lot.

So... are the studies all flawed? Well, certainly some of the studies are flawed. But, as Pableson said, the studies all point to the same conclusions. And there are lots of studies, all flawed in different ways. Accident? Probably not.

Compare climate science. There are lots of models of global temperatures over the next hundred years, and they differ from each other by a wide margin (10 degrees). They're all flawed models. But they all agree there's warming. To say that the models are flawed isn't enough to dismiss the science as a whole. The same goes for psychoacoustics.

Long story short: there's no substitute for wading through all of the studies. I haven't done this, but I've read several, and I didn't see how the minor flaws in methodology could account for no one's being able to distinguish cables, for instance.
[W]hat components do you think match up well against really really expensive ones?

That is a loaded question. I know a guy who wanted to find the cheapest CD player that sounded identical to the highly touted Rega Planet. He went to a bunch of discount stores, bought up a half dozen models, and conducted DBTs with a few buddies. Sure enough, most of the units he chose were indistinguishable from the then-$700 Planet. The cheapest? Nine dollars.

That is not a misprint.

Lest you think he and his friends were deaf and couldn't hear anything, they really did hear a difference between the Planet and a $10 model. At that level, quality is hit-or-miss. But I should think that any DVD player with an old-line Japanese nameplate could hold its own against whatever TAS is hyping this month. If they sound different, it's probably because the expensive one doesn't have flat frequency response (either because the designer intentionally tweaked it, or because he didn't know what he was doing).

Amps are a bit trickier, because you have to consider the load you want to drive. But the vast majority of speaker models out there today are fairly sensitive, and don't drop much below 4 ohms impedance. A bottom-of-the-line receiver from a Denon or an Onkyo could handle a stereo pair like that with ease. (Multichannel systems are a different story. But I once asked a well-known audio journalist what he would buy with $5000. He suggested a 5.1 Paradigm Reference system and a $300 Pioneer receiver. He was not joking.)

There are good reasons to spend more, of course. Myself, I use a Rotel integrated amp and CD player. I hate all the extra buttons on the A/V stuff, and my wife finds their complexity intimidating. Plus, I appreciate simple elegance. I also appreciate good engineering. If I could afford it, I'd get a Benchmark DAC and a couple of powerful monoblocks. But that money is set aside for a new pair of speakers.
One thing about being over 60 is that the style of thought in society has changed, but yours has not. When I was a low-paid assistant professor and wanted ARC equipment for my audio system, I just had to tell myself that I could not afford it, not that it was all hype and fancy faceplates, or bells and whistles, and that everyone knows there is no difference among amps, preamps, etc. DBT plays a role here. Since it finds that people can hear no differences, and it carries the label of "science," it confirms the no-difference hopes of those unable to afford what they want. My generation's attitudes did not result in criticizing other people's buying decisions as "delusional."

I certainly have bought expensive equipment whose sound I hated (Krell) and sold immediately, and other equipment (Cello) that I really liked. I have also bought inexpensive equipment that, despite the "good buy" conclusion in reviews, proved nothing special in my opinion (a Radio Shack personal CD player). There is a very low correlation between cost and performance, and there are few inexpensive components that stand out as good buys (47 Labs). This is not to deny that there are diminishing marginal returns for the money you spend, but the logic of strictly getting your money's worth really leads only to the cheapest electronics, probably from Radio Shack, as each additional dollar spent above that gives you only limited improvement.

DBTesting, in my opinion, is not the meaning of science; it is a method that can be used in testing hypotheses. In drug testing, since the intervention entails giving a drug, the control group would notice that they were receiving no intervention and thus could not benefit. Hence the phony pill, the placebo. The science is the controlled, random-assignment, pretest/posttest control-group design and the hypothesis, based on earlier research and observations of data, that the testing is designed to answer.
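A small sketch of that design as I understand it (the subject counts and effect sizes are invented for illustration): subjects are randomly assigned to drug or placebo, each gets a pretest and a posttest, and the drug's effect is estimated as the difference between the two groups' average changes, so that expectation effects cancel out.

```python
# Randomized pretest/posttest control-group design with a placebo arm,
# simulated with invented effect sizes (standard library only).
import random
from statistics import mean

random.seed(0)
N = 200                     # hypothetical number of subjects
TRUE_DRUG_EFFECT = 5.0      # assumed improvement caused by the drug itself
PLACEBO_EFFECT = 1.0        # assumed improvement from expectation alone

subjects = list(range(N))
random.shuffle(subjects)                     # random assignment
drug_group, placebo_group = subjects[:N // 2], subjects[N // 2:]

def observed_changes(group, mean_effect):
    """Posttest minus pretest for each subject, with measurement noise."""
    changes = []
    for _ in group:
        pretest = random.gauss(50, 10)
        posttest = pretest + mean_effect + random.gauss(0, 5)
        changes.append(posttest - pretest)
    return changes

drug_changes = observed_changes(drug_group, TRUE_DRUG_EFFECT + PLACEBO_EFFECT)
placebo_changes = observed_changes(placebo_group, PLACEBO_EFFECT)

print(f"mean change, drug group:    {mean(drug_changes):5.2f}")
print(f"mean change, placebo group: {mean(placebo_changes):5.2f}")
print(f"estimated drug effect:      {mean(drug_changes) - mean(placebo_changes):5.2f}")
```

The placebo arm is what lets the expectation effect be subtracted out; the analogous requirement in audio would be keeping both the listeners and the person switching components from knowing which unit is playing.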

If we set aside the question of whether audio testing should be dealt with scientifically, probably most people would say that not knowing who made the equipment you are hearing would exclude your prior expectations about how a quality manufacturer's equipment might sound. Simple A/B comparisons of two or even three amps, with someone responsible for setting levels, are not DBT. Listening sessions need to be long enough, and to cover a broad enough range of music, to allow a well-based judgment. In my experience, this does remove the inevitable bias of those who own one of the pieces and want to confirm the wisdom of their purchase, but, more importantly, it does result in one amp being fairly broadly confirmed as "best sounding." I would value participation in such comparisons, but I don't know whether I would value reading about them.

I cannot imagine a money-making enterprise publishing such comparisons, or a broad readership for them. I also cannot imagine manufacturers willingly participating in them. The model here is basically that of Consumer Reports, but with a much heavier taste component. Consumer Reports continues to survive, and I subscribe, but it hardly is the basis of many buying decisions.

My bottom line is that DBT is not the definition of science; same/different comparisons are not the definition of DBT; any methodology that overwhelmingly results in a "no difference" finding, despite most people hearing a difference between amps, is clearly a flawed methodology that is not going to convince people; and finally, people do weigh information from tests and reviews in their buying decisions, but they also have their personal biases. No mumbo-jumbo about DBTesting is ever going to remove this bias.
To the doubters of DBT:

Women are fairly recent additions to professional orchestras. For years and years, professional musicians insisted they could hear the difference between male and female performers, and that males sounded better. Women were banished to the audience. The practice ended only after blind listening tests showed that no one could discern the sex of a performer.

Surely, these studies had as many flaws as blind cable comparisons. Probably more, since they involved live performances by individual people, which are inevitably idiosyncratic.

Would the DBT doubters here have been lobbying to keep women out of orchestras even after the tests? Or would they, unlike the professional musicians of the day, never have heard the difference in the first place?