Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants double blind testing of all products in the name of science. Atkinson goes on to discuss his own early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same/different comparison advocated by proponents of blind testing.

I have been party to three blind tests and several "shootouts," which were not blind tests and thus resulted in each component having advocates, as everyone knew which was playing. None of these ever resulted in a consensus. Two of the three blind tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals at odds with the overall conclusion, and in no case were those involved a random sample. In all cases no more than 25 people were involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here. Some people can hear a difference, but if they are too few in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests presuppose random samples, and since statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
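
To make the sample-size point concrete, here is a minimal sketch of how a small same/different panel works against a positive finding. It scores each trial as a binomial event with chance performance at 0.5; the trial counts, the 0.05 significance level, and the hypothetical 70% "real" hit rate are illustrative assumptions, not figures from any actual test.

```python
# A sketch of the sample-size point above, assuming a same/different
# (ABX-style) test scored as a binomial: on each trial the listener
# either hits or misses, and chance performance is p = 0.5. The trial
# counts, the 0.05 significance level, and the hypothetical 70% "real"
# hit rate are illustrative assumptions, not figures from this thread.
from math import comb

def min_hits_for_significance(n, alpha=0.05):
    """Smallest hit count whose one-sided binomial p-value is below alpha."""
    for k in range(n + 1):
        p_value = sum(comb(n, j) for j in range(k, n + 1)) / 2 ** n
        if p_value < alpha:
            return k
    return n + 1

def power(n, true_p, alpha=0.05):
    """Probability that a listener with hit rate true_p reaches significance."""
    k = min_hits_for_significance(n, alpha)
    return sum(comb(n, j) * true_p**j * (1 - true_p)**(n - j)
               for j in range(k, n + 1))

for n in (10, 16, 25, 50, 100):
    print(f"n={n:3d}: need {min_hits_for_significance(n):2d} hits; "
          f"power at p=0.7 is {power(n, 0.7):.2f}")
```

With only 10 trials, a listener who genuinely hears the difference 70% of the time reaches significance only about 15% of the time. Under this model, the typical small test really is stacked toward a finding of "no difference."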

Without db testing, the advocates suggest, those who hear a difference are deluding themselves; it is the placebo effect. But were we to use a db design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would also test another hypothesis: that some people can hear better than others.

I am probably like most subjectivists, in that I really do not care what the outcomes of db testing might be. I buy components that I can afford and that my ears find realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again, it strikes me that this should not happen in the world the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items that are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then I do not think anyone would buy on the basis of such experiments.

To use Atkinson's phrase, I am generally happy and don't care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well, tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Gregm, I do not know how many out there experienced the Murata demonstration at CES 2004, but it was a great deal like what you describe. Initially, the speakers played a passage. Then the super tweeters were added and the passage was replayed. The ten people in the audience all expressed a preference for the super tweeters. There was much conversation, but ultimately someone asked to hear the super tweeters only. The demonstrator said that we were already hearing them.

When we all refocused on the sound, all we could hear was an occasional spit, tiz, or snap. There was no music at all. The Muratas come in at 15 kHz. I left and dragged several friends back for a second demonstration, with exactly the same results.

Would there be any benefit to having this done single or double blind? I don't think so. Do we need an understanding of how we hear such high frequency information, without which it might be a placebo or Hawthorne effect? I don't.

But this experience is quite at odds with the article that Pabelson cited. What is going on? I certainly don't know, save to suggest that there is a difference in what is being asked of subjects in the two tests.
I teach a course on the philosophy of color and color perception. One of the things I do is show color chips that are pairwise indistinguishable. I show a green chip together with another green chip that is indistinguishable from it. Then I take away the first chip and show a third green chip that is indistinguishable from the second. Then I toss the second chip and introduce a fourth, indistinguishable from the third. At this point, I bring back the first green chip and compare it with the fourth. The fourth chip now looks bluish by contrast and is easily distinguished from the original. How does that happen? We don't notice tiny differences, but they add up to noticeable differences. We can be walked, step-wise, from any color to any other color without ever noticing a difference, provided our steps are small enough!
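
The chip walk can even be mimicked in a few lines of code. Here is a toy model under an assumed fixed just-noticeable difference (JND), where two stimuli count as "indistinguishable" when they differ by less than one JND; the threshold, step size, and chip count are illustrative assumptions, nothing more.

```python
# A toy model of the chip walk, assuming a fixed just-noticeable
# difference (JND): two stimuli count as "indistinguishable" when they
# differ by less than one JND. Threshold, step size, and chip count
# are all illustrative assumptions.
JND = 1.0    # assumed discrimination threshold
STEP = 0.6   # each chip differs from its neighbor by less than one JND

def indistinguishable(a, b, jnd=JND):
    return abs(a - b) < jnd

chips = [i * STEP for i in range(5)]  # five chips along a hue axis

# Every adjacent pair passes as "the same"...
print(all(indistinguishable(chips[i], chips[i + 1]) for i in range(4)))  # True
# ...yet the first and last chips are easily told apart (2.4 JND apart).
print(indistinguishable(chips[0], chips[-1]))  # False
```

Every adjacent pair passes as "the same," yet the endpoints sit 2.4 JND apart and are easily told apart, which is exactly the intransitivity the chips demonstrate.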

Same for sound, I bet. That's why I don't understand the obsession with pair-wise double-blind testing of individual components. Comparing two amps alone may not yield a discriminable difference. Likewise, two preamps might be pairwise indiscriminable. But the amp/preamp combos (there will be four possibilities) may be *noticeably* different from one another. I bet this happens, but the tests are all about isolating one component and distinguishing it from a competitor, which is exactly wrong!

The same goes for wire and cable. It may be difficult to discern the result of swapping out one standard power cord or set of ic's or speaker cables. But replace all of them together and then test the completely upgraded set against the stock setup and see what you've got. At least, I'd love to see double-blind testing that is holistic like this. I'd take the results very seriously.

From the holistic tests, you can work backward to see what is contributing to good sound, just as you can eventually align all color chips in the proper order, if presented with the whole lot of them. But what needs to be compared in the first place are large chunks of the system. Even if amp/pre-amp combos couldn't be distinguished, perhaps amp/pre-amp combos with different cabling could be (even though none of the three elements used distinguishable products!). I want to see this done. Double blind.

In short: unnoticeable differences add up to *very* noticeable differences. Why this non-additive nature of comparison isn't at the forefront of the subjectivist/objectivist debate is a complete mystery to me.

-Troy
Troy: Psychoacoustics is well aware of the possibility that A can match B, and B can match C, while A does not match C. That hardly constitutes a reason to question the efficacy of DBTs.

And you are quite correct that changing both your speaker cables and interconnect(s) simultaneously might make a difference, when changing just one or the other would not. But assuming you use proper level-matching in your cable/wire comparisons, there probably won't be an audible difference, no matter how many ICs you've switched in the chain. (And if you don't use proper level-matching in your cable/wire comparisons, you will soon be parted from your money, as the proverb goes.)
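
As an aside on that level-matching requirement: a common rule of thumb, an assumption here rather than a hard spec, is to match levels to within about 0.1 dB, since a slightly louder presentation tends to be heard as a better one. A couple of lines show how tight that is:

```python
# Converts a dB level tolerance to a voltage ratio. The 0.1 dB figure
# is a common rule of thumb for audio comparisons, an assumption here
# rather than something stated earlier in the thread.
def db_to_voltage_ratio(db):
    return 10 ** (db / 20)

print(db_to_voltage_ratio(0.1))  # ~1.0116: levels must agree within ~1.2%
```

A mismatch of barely one percent in voltage can tilt a comparison toward the louder unit, which is one reason unmatched comparisons so often produce a clear "winner."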

You might be interested to know that Stereo Review ran an article in June 1998, "To Tweak Or Not to Tweak," by Tom Nousaine, who did listening tests comparing two very different whole systems. (The only similarities were the CD player and the speakers, but in one system the CD player fed an outboard DAC.) The two systems cost $1700 and $6400. The listening panel could hear no difference between the two systems, despite differences in DACs, preamps (one a tube pre), amps, ICs, and speaker cables.

So, contrary to your assertions, this whole question has been studied, and there is nothing new under the sun.
My point was not to call into question the efficacy of blind testing. I am quite in favor of it. Even when only one element of a system is varied, the results are interesting and valuable. For instance, if I can pairwise distinguish $1K and $2K speakers (blindly), but cannot distinguish similarly priced amps, or power cords, or what have you, then my money is best spent on speakers. Likewise, if preamps are more easily distinguishable than amps, I'll put my money there. A site that's interesting in this regard is:

http://www.provide.net/~djcarlst/abx_data.htm

I never said DBT is ineffective. It's just that *most* testing ignores the phenomenon that I cited: sameness of sound is intransitive, i.e., a = b and b = c, but not a = c. If the question is whether a certain component contributes to the optimal audio system, this phenomenon can't be ignored.

Of course scientists studying psychoacoustics are already aware of the phenomenon; I don't think I'm making a contribution to the science here. But the test you cite above is an exception: for the most part, A/B comparisons are done while swapping single components, not large parts of the system. This is fine when you *do* discover differences, because then you know they're significant. But when you don't find differences, it's indeterminate whether there are no differences to be found, or whether the differences won't show up until other similar adjustments are made elsewhere in the system.

But I am *very much* in favor of blind testing, even in the pair-wise fashion. For instance, I want to know the minimum amount of money I could spend to match the performance of a $20K amp in DBT. Getting *that* close to a $20K amp would be good enough for me, even if the differences between my amp and it would show up when, say, a $1K preamp is simultaneously swapped for a $20K preamp. So where's that point of auditorily near-enough for amps?

I've also learned from DBT where I want to spend my extremely limited cash: speakers first, then room treatment, then source/preamp, then amp, then ic's and such. I'll invest in things that make pair-wise (blind) audible differences over (blind) inaudible differences any day.

Still, for other people here, who are after the very best in sound, only holistic testing matters. Their question (not mine) is whether quality cabling makes any auditory difference at all, in the very best of systems. Same for amps.

Take a system like Albert Porter's. Blindfold Mr. Porter. If you could swap out all the Purist in his system and put in Radio Shack, and *also* replace his amps with the cheapest amps that have roughly similar specs, without his being able to tell, that would be very surprising. But I haven't seen tests like that... the one you mention above excepted.
In theory, I like the idea of double blind testing, but it has some limitations as others have already discussed. Why not play with some other forms of evaluating equipment?

My first inclination would be to create a set of categories, such as dynamics, rhythm and pace, range, detail, etc. You could have a group of people listen and rate according to these attributes on a scale of, perhaps, 1 to 5. You could improve the data by having the participants not talk to one another before completing their ratings, by hiding the equipment from them during the audition, and by giving them a reference audition with pre-determined ratings from which each rater could pivot up or down across the attributes.

Yet another improvement would be to take each rating category and pre-define its attributes. For example, ratings for "detail" as a category could be pre-defined as:

1. I can't even differentiate the instruments, and everything sounds like a single tone.
2. I can make out different instruments, but they don't sound natural and I cannot hear their subtle sounds or noises.
3. Instruments are well differentiated, and I can hear individual details such as fingers on the fret boards and the sound of the bow on the violin string.

Well, you get the picture. The idea is to pre-define a rating scale based on characteristics of the sound. Notice that terms such as lush or analytical are absent, because they don't themselves really define the attribute; they are subjective conclusions. Conceivably, a blend of categories and their attributes could communicate an analysis of the sound of a piece of equipment, setting aside our conflicting definitions of what sounds 'best', which is very subjective.

Further, such a grid of attributes, when completed by a large number of people, could be statistically evaluated for consistency. Again, it wouldn't tell you whether the equipment is good or bad, but if a large number of people gave "detail" a rating of 2, and there was low deviation around that rating, you might get a good idea of what that equipment sounds like and decide for yourself whether those attributes are desirable to you. Such a system would also, assuming there were enough participants over time, flesh out the characteristics of a piece of equipment irrespective of what other equipment it was used with, by relying upon a large volume of anecdotal evidence. In theory, the characteristics of a piece of equipment should remain consistent across setups, or at least across similar price points.
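
The consistency check is easy to sketch, assuming ratings on the proposed 1-to-5 scale; the category names and scores below are made-up placeholders, not survey data:

```python
# Aggregates per-category listener ratings into a mean and a standard
# deviation, the "low deviation around that rating" check described
# above. All numbers are invented for illustration.
from statistics import mean, stdev

ratings = {
    "detail":   [2, 3, 2, 2, 3, 2, 2],
    "dynamics": [4, 3, 4, 5, 4, 3, 4],
    "range":    [3, 3, 4, 3, 3, 2, 3],
}

for category, scores in ratings.items():
    print(f"{category:9s} mean={mean(scores):.2f}  sd={stdev(scores):.2f}")
```

A mean near 2 with a small deviation for "detail" would say that raters agree on what the gear does, while leaving each reader free to decide whether that character is desirable.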

Lastly, by moving toward a system of pre-defined judgments, one could create some common language for rating attributes. Have you noticed that reviewers tend to use the same vocabulary whether evaluating a $500 piece of gear or a $20,000 piece of gear? The review thus becomes judgmental and loses its ability to really place the piece of gear in the spectrum of its possible attributes.

It's not a double blind study, but large doses of anecdotal evidence, when statistically evaluated, can yield good trend data.

Just an idea for discussion. If you made it this far, thanks for reading my rant :).

Jeff