Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double-blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion such testing yielded, that all amps sound the same, proved incorrect in the long run. Atkinson’s double-blind test involved listening to three amps, so it apparently was not the typical same-or-different comparison favored by those advocating blind testing.

I have been party to three blind tests and several “shootouts,” which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever produced a consensus. Two of the three db tests were same-or-different comparisons; neither led to the conclusion that people could consistently hear a difference. The third was a comparison of about six preamps. There, a substantial consensus held that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps among the listeners. In every case there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and no test involved more than 25 people.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion which underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical; but is this sincere?

Atkinson puts it this way: the double-blind test advocates would rather be right than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here, as some people can hear a difference; but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests properly apply to random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But were we to use db with a technique other than same/different, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double-blind testing might question my commitment to science. My experience with same/different double-blind experiments suggests to me a flawed methodology. A double-blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg

Showing 50 responses by pabelson

Tbg: The main question of your post seems to be, Do objectivists like Arny Krueger extol blind tests only because they like the results? The short answer is no. Arny K. and his ilk did not invent blind tests as a weapon to use against the high-end industry. In fact, they did not invent blind tests at all. Blind listening tests were developed much earlier by perceptual psychologists, and they are the basis for a huge proportion of what we know about human hearing perception (what frequencies we can hear, how quiet a sound we can hear, how masking works to hide some sounds when we hear others, etc.). Blind tests aren’t the only source of our knowledge about those things, but they are an essential part of the research base in the field.

Folks in the audio field, like Arny, started using blind tests because of a paradox: Measurements suggested that many components should be sonically indistinguishable, and yet audio buffs claimed to be able to distinguish them. At the time, no one really knew what the results of those first blind tests would be. They might have confirmed the differences, which would have forced us to look more closely at what we were measuring, and to find some explanation for those confirmed differences. As it turned out, the blind tests confirmed what perceptual psychologists would have predicted: When two components measured differently enough, listeners could distinguish them in blind tests; when the measurements were more similar (typically, when neither measured above known thresholds of human perception), listeners could not distinguish them.

Do all blind tests result in a “no difference” conclusion? Of course not, and you’ve cited a couple of examples yourself. Your preamp test, for one. (Even hardcore objectivists agree that many preamps can sound different.) Arny’s PCABX amp tests, for another. (Note, however, that Arny typically gets these positive results by running the signal through an amp multiple times, in order to exaggerate the sonic signature of the amp; I don’t believe he gets positive results when he compares two decently made solid state amps directly, as most of us would do.)

Your comments on statistical significance and random samples miss an important point. If you want to know what an entire population can hear, then you must use a random sample of that population in your test. But that’s not what we want to know here. What we want to know here is, can anybody at all hear these differences? For that, all we need to do is find a single test subject who can hear a difference consistently (i.e., with statistical significance). Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same-different, ABX, whatever), and I’ll agree that those two amps are sonically distinguishable.
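That 15-out-of-20 criterion can be checked with an exact binomial calculation: under the null hypothesis that the listener is guessing, each trial is a fair coin flip. Here is a minimal sketch in Python; the function name is mine, and the 15/20 figures are just the example from this post, not a standard tool:

```python
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """One-sided p-value: probability of getting at least `successes`
    correct answers in `trials` attempts if the listener is merely guessing."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(successes, trials + 1))

# 15 correct out of 20 would happen by pure guessing only about 2% of
# the time, which is below the conventional 0.05 significance threshold.
print(round(binomial_p_value(15, 20), 3))  # prints 0.021
```

So a listener who hits 15 of 20 has cleared the usual bar for statistical significance, which is why a single consistent subject is enough to settle the question.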

Which leads to a final point. You say you are a scientist. In that case, you know that quibbling with other scientists’ evidence does not advance the field one iota. What advances the field is producing your own evidence—evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That’s why objectivists are always asking, Where’s your evidence? It’s not about who’s right. It’s about getting at a better understanding. If you have some real evidence, then you will add to our knowledge.
I am afraid your argument that DBT proves humans cannot hear the minor differences runs counter to most people's experiences.

Granted, but why should we assume that the scientists are all wrong and people's observations are right? Surely you know that our perceptions can fool us. Think of optical illusions. Well, there are also such things as aural illusions. One of the most basic is this: When you hear two sounds, you often think they are different, even when they are exactly the same.

So when you say, I hear a difference between this cable and that one, is there a real difference, or is it just an aural illusion? We don't know. That's why scientists developed the forced-choice DBT--because it usefully separates reality from illusion. The only controversy here comes from people who don't want to look at the evidence.
TBG: This thread had been put to bed. It was dormant for two weeks. I'm not the one who revived it. And if it doesn't matter to you, why do you keep posting? Go buy your expensive cables, and just enjoy the pleasure they give you. As you said, what does it matter to you if scientists say your cables are indistinguishable from zipcord?
Tbg: For someone who "teaches statistics," you express a rather narrow perspective on the field. Think about how you would use statistics to determine whether a coin is fair. (You do agree that you can use statistics to do this, don't you?) The problem of determining whether a certain subject can hear a difference between two components is precisely the same. Do his results suggest that he was just guessing which was which (the equivalent of flipping a fair coin), or that he could indeed hear a difference (flipping an unbalanced coin)? At any rate, it really doesn't matter whether you think statistics is applicable here. People who actually study hearing and do listening tests use statistics for this purpose every day of the week.

I would define undeniable differences as those for which measurements would lead us to predict such differences. If there are measured characteristics of two components that are above the known threshold of human detection, then there's no real need to do a DBT to determine whether they sound different. For example, if one amp has a THD of 0.1%, and the other is at 3%, we can safely assume that they are audibly different. Transducers typically measure differently enough that we can assume they sound different. Ditto many (but not all) tube amps. Solid state amps, unless they are underpowered for the speakers they are driving or have a non-flat frequency response (perhaps due to an impedance mismatch), generally do not.

Before I get tagged with the "measurements are everything" slur, let me say that these measurements can only predict WHETHER two components will sound different. If they do sound different, the measurements cannot tell us (at least not very well) which you will prefer, or even in what ways they will sound different to you.

For more info on DBTs, see the ABX home page, mirrored here:

http://www.pcavtech.com/abx/
My apologies. I took you for the typical DBT-basher. As for amps, assuming you are talking about solid-state amps designed to have flat frequency response, I seriously doubt it matters (in a DBT) what amp you use, or how expensive it is. If it has the power to drive your speakers, it will sound like any other good solid state amp with enough power to drive your speakers. Or so the bulk of the research suggests.

To your final point, I'm not sure what's in Mr. Porter's system, but the Nousaine experiment at least suggests that he would NOT notice such a swap, assuming you could finesse the level-matching issues. That's not to say that Mr. Porter's system is not right for Mr. Porter--merely that it might be possible for someone else to achieve a Porter-like sound for somewhat less money. And swapping out amps and cables is one thing; I wouldn't even dream of touching his turntable!
I just think the proper hypothesis should be that a sample of people can hear a difference between cables or amps.

Well, that's one possible hypothesis. Another possible hypothesis is that one particular individual can hear a difference. That's the equivalent of testing the fairness of one particular coin. Note that the sample size isn't one. It's the number of listening trials/coin flips.

I am only concerned that the choice of the sample size may be determined by what the researcher's intended finding might be.

The choice of sample size isn't what's critical here. The statistical significance is. Granted, larger samples reduce the possibility of false negatives, but it's not as if there have never ever been any ABX tests with large sample sizes. The Stereo Review cables test had a sample size of 165. The possibility of a false negative is very low with a sample that big. (Since you teach statistics, I'll let you do the math.)
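The false-negative point can be made concrete. For a one-sided binomial test, you first find the smallest hit count that would be declared significant under pure guessing, then ask how often a listener with some genuine but modest hit rate would reach it. A sketch under assumed numbers (the 60% hit rate is my own illustration, not a figure from the thread or the Stereo Review test):

```python
from math import comb

def power(n: int, p_true: float, alpha: float = 0.05) -> float:
    """Probability that a one-sided binomial test detects a real effect:
    find the smallest hit count whose chance under guessing (p = 0.5) is
    at most alpha, then ask how often a listener with true hit rate
    p_true reaches that count."""
    def upper_tail(k: int, p: float) -> float:
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))
    # smallest critical value c with P(X >= c | guessing) <= alpha
    c = next(k for k in range(n + 1) if upper_tail(k, 0.5) <= alpha)
    return upper_tail(c, p_true)

# With 165 trials, even a listener who is right only 60% of the time
# is detected the great majority of the time; with 20 trials, such a
# listener would usually be missed.
print(round(power(165, 0.60), 2), round(power(20, 0.60), 2))
```

The comparison between the two printed values is the whole point: the large-sample test has far fewer false negatives, so a negative result with n = 165 is much harder to wave away.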

And if you think the reason these tests come up negative so often is sample size, you as a "scientist" ought to know how to respond: Do your own experiment. Complaining about other people's data isn't science.

I think it is a far more interesting hypothesis to suggest that those with "better ears" would do better.

Then test it. The SR panel was a pretty audio-savvy bunch, as I recall.

I don't think most audiophile would be convinced or should be convinced that all amps or wires sound the same.

Are you saying they're all close-minded?
Tbg: If all you care about is finding a great speaker, why'd you start this thread???

All individuals are not the same. I never said they were. I think you're hung up on the idea of a hypothesis about what a majority of people can hear (in which case it would be necessary to test a random sample of all people). But the more common question in audio is, can anybody hear it? To answer that question in the affirmative, all you have to do is find *one* person who can hear a difference between two components. That's why testing a single individual can be appropriate. (Just remember that, in a single-person test, the null hypothesis relates to that single person; if he flunks, you can't conclude anything about anyone else.)

Here's a good example of the kind of testing that researchers do:

http://www.nhk.or.jp/strl/publica/labnote/lab486.html

Note that one of their 36 subjects got a statistically significant result. In a panel that large, this can easily happen by chance. To check this, they tested that individual again, and she got a random result, suggesting that her initial success was merely a statistical fluke.
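The arithmetic behind "this can easily happen by chance" is worth spelling out: if each of n guessing subjects is tested independently at significance level alpha, the chance that at least one of them looks significant is 1 - (1 - alpha)^n. A quick sketch (the subject count comes from the linked study; alpha = 0.05 is my assumption about the criterion used):

```python
def p_at_least_one_fluke(subjects: int, alpha: float = 0.05) -> float:
    """Chance that at least one of `subjects` pure guessers produces a
    'statistically significant' result when each is tested independently
    at level alpha."""
    return 1 - (1 - alpha)**subjects

# With 36 subjects each tested at the 5% level, a lone "significant"
# listener is more likely than not to be a fluke.
print(round(p_at_least_one_fluke(36), 2))  # prints 0.84
```

This is the classic multiple-comparisons problem, and it is exactly why the researchers' decision to retest the one "significant" subject was the right move.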
So, Rouvin, if you don't think all those DBTs with negative results are any good, why don't you do one "right"? Who knows, maybe you'd get a positive result, and prove all those objectivists wrong.

If the problem is with test implementation, then show us the way to do the tests right, and let's see if you get the results you hope for. I'm not holding my breath.
My evidence for what? That illusion is a false reality? You need evidence for that? Look in a dictionary.
Agaffer: A list of DBT test reports appears here:

http://www.provide.net/~djcarlst/abx_peri.htm

This list is a bit old, but I don't know of too many published reports specifically related to audio components since then. After a while, it became apparent which components were distinguishable and which were not. So nobody publishes them anymore because they're old news.

Researchers still use them. Here's a test of the audibility of signals over 20kHz (using DVD-A, I think):

http://www.nhk.or.jp/strl/publica/labnote/lab486.html

The most common audio use of DBTs today is for designing perceptual codecs (MP3, AAC, etc.). These tests typically use a variant of the ABX test, called ABC/hr (for "hidden reference"), in which subjects compare compressed and uncompressed signals and gauge how close the compressed version comes to the uncompressed.

Finally, Harman uses DBTs in designing speakers. Speakers really do sound different, of course, so they aren't using ABX tests and such. Instead, they're trying to determine which attributes of speakers affect listener preferences. The Harman listening lab (like the one at the National Research Council in Canada, whose designers now work for Harman) places speakers on large turntables, which allows them to switch speakers quickly and listen to two or more speakers in the same position in the room. Here's an article about their work:

http://www.reed-electronics.com/tmworld/article/CA475937.html

And just for fun, here's a DBT comparing vinyl and digital:

http://www.bostonaudiosociety.org/bas_speaker/abx_testing2.htm

I think Stan Lipshitz's conclusion is worth noting:

Further carefully-conducted blind tests will be necessary if these conclusions are felt to be in error.
If you find this fascinating, Qualia8, then maybe you're the one who should be taking these sugar pills.

Obviously I agree with you, since you agree with me. There's a lot of expectation bias (aka, placebo effect) and confirmation bias (looking for--and finding--evidence to support your prior beliefs) in hearing perception. But I suspect some high-enders would rather sacrifice the retirement fund than admit that they might be subject to these mechanisms.

To your last point, it is NOT all ruined for me. I can spend my time auditioning speakers, trying to optimize the sound in my room, and seeking out recordings that really capture the ambience of the original venue.
Tbg: The average consumer cannot really do a blind comparison of speakers, because speaker sound is dependent on room position, and you can't put two speakers in one place at the same time. But I recommend you take a look at the article on Harman's listening tests that I linked to above. If you can't do your own DBTs, you can at least benefit from others'.

I think there's a danger in relying on reviewers because "I agreed with them in the past." First, many audiophiles read reviews that say, "Speaker A sounds bright." Then they listen to Speaker A, and they agree that it sounds bright. But were they influenced in their judgment by that review? We can't say for sure, but there's a good probability.

Second, suppose we do this in reverse. You listen to Speaker A, and decide it sounds bright. Then you read a review that describes it as bright. So you're in agreement, right? Not necessarily. A 1000-word review probably contains a lot of adjectives, none of which have very precise meanings. So, sure, you can find points of agreement in almost anything, but that doesn't mean your overall impressions are at all in accord with the reviewer's.

Finally, if you're interested in speakers, I highly recommend picking up the latest issue of The Sensible Sound, which includes a brilliant article by David Rich about the state of speaker technology and design. It's a lot more of a science than you think. The article is not available online, but if your newsstand doesn't have it (it's Issue #106) you can order it online at www.sensiblesound.com. Believe me, it is worth it.
Steve: I wouldn't be quite so dogmatic about the lack of differences, for one reason: Many audiophiles don't level-match when they do comparisons. So there really are differences to hear in that case. Of course, a difference you can erase with a simple tweak of the volume knob isn't one worth paying for, in my opinion.
Wattsboss: I'd be careful about accusing others of naivete, if you're going to make posts like this. In a DBT, everything except the units under test is kept constant. So, for example, if you were comparing CD players, you would feed both to the same amp, and on to the same speakers. You wouldn't have to "blind" the associated components, because the associated components would be the same.
Sean T.: If you believe in DBTs, then you have to believe in the results of DBTs. Some years ago (I can get details if you want), Tom Nousaine delivered a paper at an AES conference in which he summarized the results of about two dozen published DBTs of amplifiers. Of those, only five reported statistically significant positive results. One involved a comparison of 10-watt and 400-watt amps, so clipping distortion was a likely cause. Two others involved a misbiased or oscillating tube amp. One author simply tossed out 25% of his results. And the fifth involved amps with reportedly large frequency response differences.

In other words, amps can sound different, but 1) they usually don't; and 2) when they do, there is a very good and easily measurable explanation. If you can distinguish two amps with flat frequency response and low distortion in a blind test, you will be the first. And most amps today have flat frequency response and low distortion, at least when they are not driven beyond their capabilities.
[W]hat components do you think match up well against really really expensive ones?

That is a loaded question. I know a guy who wanted to find the cheapest CD player that sounded identical to the highly touted Rega Planet. He went to a bunch of discount stores, bought up a half dozen models, and conducted DBTs with a few buddies. Sure enough, most of the units he chose were indistinguishable from the then-$700 Planet. The cheapest? Nine dollars.

That is not a misprint.

Lest you think he and his friends were deaf and couldn't hear anything, they really did hear a difference between the Planet and a $10 model. At that level, quality is hit-or-miss. But I should think that any DVD player with an old-line Japanese nameplate could hold its own against whatever TAS is hyping this month. If they sound different, it's probably because the expensive one doesn't have flat frequency response (either because the designer intentionally tweaked it, or because he didn't know what he was doing).

Amps are a bit trickier, because you have to consider the load you want to drive. But the vast majority of speaker models out there today are fairly sensitive, and don't drop much below 4 ohms impedance. A bottom-of-the-line receiver from a Denon or an Onkyo could handle a stereo pair like that with ease. (Multichannel systems are a different story. But I once asked a well-known audio journalist what he would buy with $5000. He suggested a 5.1 Paradigm Reference system and a $300 Pioneer receiver. He was not joking.)

There are good reasons to spend more, of course. Myself, I use a Rotel integrated amp and CD player. I hate all the extra buttons on the A/V stuff, and my wife finds their complexity intimidating. Plus, I appreciate simple elegance. I also appreciate good engineering. If I could afford it, I'd get a Benchmark DAC and a couple of powerful monoblocks. But that money is set aside for a new pair of speakers.
"If you can distinguish two amps with flat frequency response and low distortion in a blind test, you will be the first." This means one of two things: there are no differences among amps, or DBTesting does not allow humans to judge the differences.

No, TBG, it only means that the differences are not sufficient to be audible by human ears. Read the data, or supply your own. As of now, your only argument seems to be, "I don't believe it, so it can't be true."

As for DBT methodology, it is accepted by everyone in the field of perceptual psychology, in part because it gets plenty of positive results. It just doesn't always get them in the narrow category of high-end audio, because high-end audio has more than its share of snake oil.

Finally, there's a difference between a "delusion" and an "illusion." Look it up.
It is a well structured experiment that differs greatly from what we normally hear and how we hear it.

This is just an astounding statement. How in the world can simply not telling someone what they are listening to affect what they *hear*? I'll grant you, it can certainly affect what they think about what they hear, but that's just the point. What they think is a function of things besides what they hear, and DBTs isolate the non-sonic effects.

Note that there's no necessary contradiction between these two statements:
1) Harry can't hear a difference between A and B.
2) Harry prefers A to B.
Both can be true. All it means is that Harry prefers A to B for some reason other than its sound (even if he thinks the sound is the reason).
Not arguing reviewers are perfect either...far from it...they are inherently subjective and bound by their own experiences and equip as we all are...but at least he or she can frame their observations in context, which tells me as a potential buyer what to look for, what to investigate, what things I may need to consider, etc.

As part of that context, wouldn't you like to know whether this reviewer can actually hear a difference between the product he is reviewing and some reference? And if he can't, what does that tell you about his review?
Yes, beauty can grow on you. But notice that it's not the lady who's changing. It's you. What does that tell us about long-term comparisons?

TBG: Yes, there are far too few objectivists to make a market. That's why the largest-selling magazine in the US that reviews audio equipment is that subjectivist redoubt . . . Sound & Vision.
Suffice it to say that Gregadd's history of DBTs is almost entirely false. They were in use long before the invention of the ABX Comparator, which was just a convenient tool to do what we already knew how to do. If you go back and look at what "reviewers like Fremer" were able to distinguish, you would find no mystery at all.

TBG owes us an explanation of how two things that sound identical can replicate music differently.
Gregadd: If you were talking about the response of a single objectivist, you should have named him right up front. Instead, you tagged all objectivists as dishonest, based on your interactions with one man. That's an understandable error, but it's an error.

There have been dozens of published DBTs of amps. Some have been positive, some have not. Fremer claims to have done a positive test. So what? He ain't the first, and won't be the last.

I'll pose to you the question I've posed to others: Shouldn't a reviewer, before he reviews an amp, confirm that he really can hear a difference between this amp and his reference amp when he doesn't know which is playing? Ever wonder why none of them do this?
TBG: Two amps that reproduce music differently enough to be heard will NOT sound identical in a DBT. But how do we know that two amps reproduce music differently? You say they do, but how do we know you are right?

Let me pose the question a bit differently. Here we have two amps that are not distinguishable in a level-matched, quick-switching ABX test, generally regarded in scientific circles as the gold standard for determining audible differences. A subjectivist claims that these two amps reproduce music differently. How would he prove that they do? Whatever he does, he has to use a blind test, because a sighted test can prove nothing about audibility. That's settled science. So what kind of test should he use?
Yes, Qualia, we are asking the same question. It's the same question that subjectivists have been asked for years, and they don't have an answer, so they have to stoop to insulting people's intellectual integrity, as Gregadd has just done yet again. Why do they bother?
But if these "after effects" mattered, John, then we'd see listening test results showing that putting gaps between samples improved subjects' sensitivity to differences. I don't know of any such test results. Do you?
TBG: So who called you a fool? Who called you "anti-science"? Citations, please.
Therefore, participants should be able to demonstrate their critical listening skills.

Once again, the scientists are ahead of you. Standards for appropriate listener training exist. And they weren't devised based on the misapplication of principles from visual perception, let alone high-end cant; they were developed through experience that identified the background necessary to produce reliable results, both positive and negative.

If anyone doesn't feel those standards are sufficiently high, there has always been an alternative: Propose higher standards, and then find some audible difference that can't be heard without the benefit of your more rigorous training. For all the griping about DBTs, I don't see anybody anywhere doing that.

Finally, recalling the original subject of this thread, has any audio reviewer ever demonstrated that he possesses "critical listening skills" in a scientifically rigorous way? Nope. In fact, there's at least a little data suggesting that audio reviewers are *less* effective listeners than, say, audio dealers. This isn't too surprising. A dealer who carries equipment that sounds bad will go out of business. If a reviewer recommends something that sounds bad, he just moves on to the next review.
Gregadd: There are lots of different DBTs. Some measure preferences, some measure *how* different two things are, etc. I'd say a reviewer should be allowed to use whichever method he likes (or invent his own, as long as it's level-matched and blind). But if he can't tell the difference between his own amp and the one he's reviewing under those very generous conditions, I think his readers ought to know that. Don't you?

And what's your problem with saying that equipment sounds good or bad? This is an audio discussion site. Eighty percent of the conversations here are about that. As for scientific tests of good and bad sound, that's what Sean Olive at Harman does for a living. Try Googling him.
So readers should be kept in the dark about the listening abilities of the reviewer? Whose interest does that serve?
Henry: Yes, of course, listening tests are only relevant for the specific gear you're listening to. But a review is about specific equipment. Think of it this way: A reviewer has a reference system. He gets a new amp for review. Can he tell whether his original amp or the review amp is in his system, without looking? If not, is there any value at all to what he says about the sound of the review amp?
Every scientific field has its own methodology, Rouvin. If you had made an effort to acquaint yourself with the rudiments of perceptual psychology, you'd be in a better position to pontificate on it.

By the way, methodology is NOT at the heart of science. Empiricism is. Methodology is just a means to an end. Empiricism demands reliable, repeatable evidence. You still haven't got any.
For a guy who doesn't believe in intelligent design, Rouvin, you practice its methods to perfection. You offer no evidence of your own--no tests, no results, nothing that can be replicated or disproved. Instead, you quibble with the "methodology," which you seem substantially uninformed about ("e.g., lack of random assignment or no control groups, making these experiments invalid scientifically"--Why in the world would you need a control group in a perception test?)

We are speaking different languages, Rouvin. DBT advocates are speaking the language of science. You are not.
Henry: Go back and re-read the thread. I provided a link to a whole list of articles on DBTs, including tests of cables, amps, CD players, tweaks, etc. That's why I'm on solid ground in demanding that the opponents of DBTs do the same. As of yet, no one has come up with a single experiment anywhere disputing what those tests have shown. Not one.

Science isn't done by arguing about methodology in the abstract. It's done by developing a better methodology and producing more robust results with it. People like Rouvin wouldn't even know how to do that. And the people who do know how to do that aren't doing it, because they have better uses of their time than re-proving something that's been settled science for decades. If you think it isn't settled, then it's up to you to come up with some evidence that unsettles it.
Who's misrepresenting what, TBG? I never said cables can't sound different. I cited an article earlier that did 6 cable comparisons, and 5 of them turned out positive. I've corrected your misstatements about this previously. Please don't repeat them again.

Just for the record, what DBTs actually demonstrate is that cables are audibly distinguishable only when there are substantial differences in their RLC values. For most cables in most systems, that is rarely the case. Exceptions may include tube amps with weird output impedances, speakers with very difficult impedance curves, and yards and yards of small-gauge cable.
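To put rough numbers on "substantial differences in RLC values," here is a minimal back-of-envelope sketch in Python. The wire-gauge resistances and the 4-ohm load are illustrative assumptions, and the speaker is modeled as a pure resistance, which real speakers are not:

```python
import math

def insertion_loss_db(r_cable: float, z_load: float) -> float:
    """Broadband level drop (dB) when a cable's round-trip series
    resistance forms a voltage divider with a resistive load."""
    return 20 * math.log10(z_load / (z_load + r_cable))

# 10 ft of 24 AWG zipcord, ~0.51 ohm round trip (assumed), 4-ohm load:
print(insertion_loss_db(0.51, 4.0))   # about -1 dB: plausibly audible
# 10 ft of 12 AWG cable, ~0.032 ohm round trip (assumed), same load:
print(insertion_loss_db(0.032, 4.0))  # well under 0.1 dB: almost certainly not
```

On these assumptions, only the skinny-wire case produces a level shift anywhere near audibility, which is consistent with the "yards and yards of small-gauge cable" exception.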

No finding is ever proven; rather, it is tentatively accepted unless further data or studies using different methodologies suggest an alternative hypothesis.

Exactly. So where's your data? Where are your studies?

Any fool's testing would indicate that is untrue, even if only in sighted comparisons.

The faculty lounge is most amused.
Your evidence and appeals to the authority of "what scientists already knew" are not the way to make your conclusions broadly accepted.

Rest assured, I have no illusions about the possibility of convincing someone who, despite a complete lack of knowledge about the field, nonetheless feels qualified to assert that a test methodology used by leading experts in the field for decades lacks "face validity."

I'm just demonstrating, to anyone who might be reading this with an open mind, that the people who carp about DBTs in audio threads have neither an understanding of the issue nor a shred of real tangible data to support their beliefs.
Tbg: If these tests didn't yield positive results, they'd be useless for research. Just because they don't yield positive results when you want them to doesn't make them invalid. A good example of a mix of positive and negative tests is the ABX cable tests that Stereo Review did more than 20 years ago. Of the 6 comparisons they did, 5 had positive results; only 1 was negative. (The one negative, however, used similar cables and had subjects listen to music rather than noise. In most of the other 5 cases, the measured differences were much greater; in one, they listened to noise rather than music--it's easier to hear level and frequency differences with full-spectrum noise than with music.)

I presumed you knew statistics. 15 out of 20 meets the 95% confidence level, which means there is less than a 5% chance of scoring that well by guessing alone--so we can be reasonably sure the listener really heard a difference. The 95% threshold is a reasonable one in this case.
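For anyone who wants to check that arithmetic, here is a minimal exact-binomial sketch in Python (standard library only; the 0.5 guessing probability per trial and the one-tailed test are the usual assumptions for an ABX comparison):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-tailed probability of scoring at least `correct` of `trials`
    ABX identifications by pure guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(15, 20))  # ~0.021, below 0.05: significant at the 95% level
print(abx_p_value(14, 20))  # ~0.058: 14 of 20 just misses the threshold
```

So 15 of 20 is in fact the smallest score that clears the 95% bar for a 20-trial test.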

I suspect the tests you did involved multiple listeners listening at the same time. It's better to use one subject at a time, and to let the subject control the switching. But the Stereo Review tests used multiple listeners at once, and got plenty of positive results. Subjectivists often object that ABX tests use quick switching between components, but there's solid research showing that this approach actually works better--it's easier to hear differences when you can switch immediately between the two. I know subjectivist audiophiles consider that heresy, but the research is pretty clear.

Some manufacturers use DBTs, others don't. It makes no sense for components where differences are undeniable (microphones, turntables, cartridges, and speakers are good examples). As for "voicing" of amps and cables, people who claim to do that without DBTs are either fooling themselves or trying to fool you.

Almost nobody has a preconceived notion that things sound the same. Many objectivists used to be subjectivists till they started looking into things, and perhaps did some testing of their own.

As for reviews, a high-end magazine that used DBTs couldn't survive. Advertisers would pull out, and readers would revolt. Better to give the people what they want.
For those embracing DBT as simple self-endorsement, I am dismissive.

No objectivists of my acquaintance (and I am acquainted with some fairly prominent ones), "embrace DBT as simple self-endorsement." A number of them, myself included, were subjectivists until we heard something that just didn't make sense to us. I know of one guy (whose name you would recognize) who was switching between two components and had zeroed in on what he was sure were the audible differences between them. Then he discovered that the switch wasn't working! He'd been listening to the same component the whole time, and the differences, while quite "obvious," turned out to be imaginary. He compared them again, blind this time, and couldn't hear a difference. He stopped doing sighted comparisons that day.

Research psychologists did not adopt blind testing because it gave them the results they wanted. They adopted it because it was the only way to get reliable results at all. Audio experts who rely on blind testing do so for the same reason.

Final thought: No one has to use blind comparisons if they don't want to. (Truth be told, while I've done a few, I certainly don't use them when I'm shopping for audio equipment.) Maybe that supertweeter really doesn't make a difference, but if you think it does, and you're happy with it, that's just fine. Just don't get into a technical argument with those guys from NHK!
Troy: Psychoacoustics is well aware of the possibility that A does not necessarily equal C. That hardly constitutes a reason to question the efficacy of DBTs.

And you are quite correct that changing both your speaker cables and interconnect(s) simultaneously might make a difference, when changing just one or the other would not. But assuming you use proper level-matching in your cable/wire comparisons, there probably won't be an audible difference, no matter how many ICs you've switched in the chain. (And if you don't use proper level-matching in your cable/wire comparisons, you will soon be parted from your money, as the proverb goes.)

You might be interested to know that Stereo Review ran an article in June 1998, "To Tweak Or Not to Tweak," by Tom Nousaine, who did listening tests comparing two very different whole systems. (The only similarities were the CD player and the speakers, but in one system the CD player fed an outboard DAC.) The two systems cost $1700 and $6400. The listening panel could hear no difference between the two systems, despite differences in DACs, preamps (one a tube pre), amps, ICs, and speaker cables.

So, contrary to your assertions, this whole question has been studied, and there is nothing new under the sun.
data is the heart of science.

And this thread, now at over 170 posts, still doesn't contain a shred of reliable, replicable data demonstrating audible differences between components that can't be heard in standard DBTs.

There are many reasons to believe that as applied to audio gear, this methodology does not validly assess the hypothesis that some components sound better.

Name one. Check that. Name one that won't get you laughed out of the Psych Dept. faculty lounge.

You, sir, also have no evidence that is intersubjectively transmissible.

I don't need "evidence that is intersubjectively transmissible," because I'm not changing the subject. The subject is hearing, and what humans can and cannot hear. In order to argue that DBTs can't be used for differences in audio gear, you have to claim that human hearing works differently when listening to audio gear than it does when listening to anything else. That's about as pseudoscientific as it gets.
Rouvin: Let me take your two points in order. First:

One, that most DBT tests as done in audio have readily questionable methods – methods that invalidate any statistical testing, as well as sample sizes that are way too small for valid statistics.

Then why is it that all published DBTs involving consumer audio equipment report results that match what we would predict based on measurable differences? For supposedly badly implemented tests, they've yielded remarkably consistent results, both positive and negative. If the reason some tests were negative was because they were done badly, why hasn't anyone ever repeated those tests properly and gotten a positive result instead? (I'll tell you why--because they can't.)

Two, and the far more important point to me, do the DBT tests done or any that might be done really address the stuff of subjective reviews?

DBTs address a prior question: Are two components audibly distinguishable at all? If they aren't, then a subjective review comparing those components is an exercise in creative writing. You seem to be making the a priori assumption that if a subjective reviewer says two components sound different, then that is correct and DBTs ought to be able to confirm that. That's faith, not science. If I ran an audio magazine, I wouldn't let anyone write a subjective review of a component unless he could demonstrate that he can tell it apart from something else without knowing which is which. Would you really trust a subjective reviewer who couldn't do that?
Rouvin: You're the one who says these are badly implemented tests (though you seem to be familiar with only a few). I wouldn't claim they're perfect. But that doesn't make their results meaningless; it leaves their results open to challenge by other tests that are methodologically better. My point is that you can't produce any tests that are both 1) methodologically sound; and 2) in conformance with what you want to believe about audio. And until you do produce such tests, you haven't really got any ground to stand on.

You state that golden ears exist, but at the end of the paragraph you admit that this position is indefensible, so you saved me the trouble. ;-) To your point that these golden ears get averaged out in a large test, you're simply wrong. I've never seen a DBT where individuals got a statistically significant score, but the broader panel did not. When it happens, then we'll worry about it.

So, my position remains that there is surely a place for DBT testing, but even after all the methodological and sampling issues were addressed, I'm still unsure how it fits into the types of reviews most audiophiles want.

They may not fit with what audiophiles want, but that says more about audiophiles than it does about DBTs.

In your hypothetical magazine, after DBT establishes that the Mega Whopper is distinguishable from El Thumper Grande, how would either be described? Would there be a DBT for each characteristic?

Once you pass the test, you can describe the Thumper any way you want.
Rouvin: There really isn't much point in arguing with someone who assumes his conclusions, and then does nothing but repeat his assumptions. Here's what I mean:

The majority of what we are able to perceive is not amenable to measurement that can be neatly, or even roughly, correlated with perception.

How do you know what you are *able* to perceive (as distinct from what you *think* you perceive)? In the field of perceptual psychology, which is the relevant field here, there are standard, valid ways of answering that question. But it's a question you are afraid to address. Hence your refusal of my challenge to actually conduct a DBT of any sort. And the idea that you, an amateur audio hobbyist without even an undergraduate degree in psychology, have any standing to declare what is and is not valid as a test of hearing perception is pretty risible.

Finally, just to clear up your most obvious point of confusion: There is a difference between "what we are able to perceive" and "how we perceive it." You are conflating these two things, again because you don't want to face up to the issue. "What we are able to perceive" is, in fact, quite amenable to measurement. It's been studied extensively. There are whole textbooks on the subject.

Your harping on subjective reviewing, by contrast, is about "how we perceive it." We can't measure sound and make predictions about how it will sound to you, because how it will sound to you depends on too many factors besides the actual sound. That's why we need DBTs--to minimize the non-sonic factors. And when we minimize those non-sonic factors, we discover that much of what passes for audio reviewing is a lot of twaddle.
Qualia: It reminds me of a trick John Dunlavy used to play on visitors to his speaker factory. He would show them an expensive cable (maybe even his own!) and zipcord, and let them audition both. They'd rave about the pricey one, of course. What he wouldn't tell them is that he never changed the cable. They were listening to zipcord the whole time.

One possible weakness of your experiment is that it assumes we know what it is that's tricking us--the price, the looks, etc. But it could be anything (the brand name, perhaps). Also, the value of a perception experiment is somewhat compromised when you intentionally mislead the subject.

There's a much easier way to get over the blindness objection, or at least most of it. In a standard ABX test, you can actually see both cables, and you know which one is A and which one is B. The only thing that's "blind" is the identity of X. Why someone with good ears can't ace this, if the differences are so obvious, is beyond me.

Let me rephrase that: People with good ears CAN ace it--when there's a difference large enough to be heard.
Greg: Your generally thoughtful and balanced letter was, in my opinion, a little too balanced. Here's where you went astray:

The proponents of dbt...want to engage in very short tests conducted by the uninitiated. Most proponents of dbt use it to try and prove what they already have concluded, e.g., cables and amps all sound the same.

This reflects a basic misunderstanding. Objectivists don't want short tests, we want good tests. (All the research suggests that short tests are in fact better tests, but DBTs can be any duration you want). And a requirement of good DBTs is that you provide the subjects with adequate "training," meaning that they are familiar with the sound of the equipment they are comparing. The "uninitiated" make very poor test subjects.

Finally, no one argues that all cables and amps sound the same, and that's not the purpose of DBTs. The purpose of DBTs is to determine *which* components sound the same, and which do not.

How about a dbt between vinyl and digital? Or electrostatic and dynamic speakers, tubes and solid state?

All of these have been done, at one time or another. Vinyl and digital are easily distinguishable--unless the digital is a direct copy of the vinyl. Speakers are always distinguishable in DBTs. Tube and solid state amps are often but not always distinguishable. When they are distinguishable, it's usually because the tube amp is underpowered and clipping (though very mellifluously, as tubes are wont to do!), or because the output impedance of the amp is interacting with cable and speaker to produce frequency response errors.
Let's not be so selective about what neuroscience has discovered, John. It, along with psychoacoustics, has indeed discovered that it can take time to learn the sound of something, and the difference in sound between things. But they've also discovered that, once you've learned those differences, the best way to confirm that those differences are really there is through short-term listening tests that allow you to switch quickly between the two components. So why is it that a reviewer, who supposedly spends weeks "getting to know" a component, and who also owns a reference component which he also knows well, can't hear a difference between the two in such a test?

My point about how we change was aimed at the reviewer who reports differences between the component under review and something else he may have heard months before, but doesn't have now. He's claiming to do something that your neuroscientist/psychoacoustician has found to be impossible.
It is totally beyond me why anyone would have such distrust of what they hear as to rely on DBT.

Apparently so. The explanation is simple: If you understand what scientists have learned about human hearing perception over the course of decades, then you will understand why we shouldn't always trust what we hear, and why in these cases listening blind is far more reliable than listening when you know what you're listening to. I suspect that you don't want to understand this, because it will upset the beliefs you've acquired over the years.

Now, there's nothing wrong with not knowing (or not accepting) this. After all, you don't have to understand the principles behind an internal combustion engine to buy a car. And if you can afford a multi-thousand-dollar audio system, it doesn't really matter. You'll probably get good sound regardless.

But if you can't afford that kind of an audio system, it can matter a lot.
I merely would state that I and many others reject that DBT validly assesses sonic difference among cables, etc. Where is your demonstration of face validity or any demonstration of validity?

Where to begin? First, we can physically measure the smallest stimulus that can excite the auditory nerve and send a signal to the brain. It turns out that subjects in DBTs can distinguish sounds of approximately the same magnitude. This shows that DBTs are sensitive enough to detect the softest sounds and smallest differences the ear can detect.

To look at it another way, basic physics tells us what effect a cable can have on the signal passing through it, and therefore on the sound that emerges from our speakers. And basic psychoacoustics tells us how large any differences must be before they are audible. DBTs of cables match this basic science quite closely. When the measurable differences between cables are great enough to produce audible differences in frequency response or overall level, the cables are distinguishable in DBTs. When the measurable differences are not so great, the DBTs do not produce positive results.
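The frequency-response side of that argument can be sketched with a first-order model. This is a hypothetical illustration, not a measurement: the ~0.2 uH/ft inductance figure, the purely resistive loads, and the rough half-decibel audibility threshold are all assumptions:

```python
import math

def hf_rolloff_db(l_cable: float, z_load: float, f: float = 20_000.0) -> float:
    """Attenuation (dB) at frequency f from a cable's series inductance
    driving a purely resistive load (idealized first-order model)."""
    xl = 2 * math.pi * f * l_cable          # inductive reactance, ohms
    return 20 * math.log10(z_load / abs(complex(z_load, xl)))

# 10 ft at an assumed ~0.2 uH/ft, 4-ohm load, evaluated at 20 kHz:
print(hf_rolloff_db(2e-6, 4.0))    # about -0.02 dB: far below audibility
# 50 ft of the same cable into a 2-ohm impedance dip:
print(hf_rolloff_db(10e-6, 2.0))   # about -1.4 dB: potentially audible
```

On this model, an ordinary cable run produces response errors orders of magnitude below any plausible audibility threshold, while the long-run/difficult-load case does not--which is the same pattern the DBT literature reports.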

That's how validation is done--we check the results of one test by comparing it to knowledge determined in other ways. DBTs of audio components came late to the party. All they really did was to confirm things that scientists already knew.
Puremusic: Psychoacoustics and neuroscience are already way ahead of you. In fact, the kinds of things that get argued about in audio circles aren't even being researched anymore, because those questions were settled long ago.

Just to take one example, you insist on "sufficient time between samples." The opposite is true, in the case of hearing. Our ability to pick out subtle differences in sound deteriorates rapidly with time--even a couple of seconds of delay can make it impossible for you to identify a difference that would be readily apparent if you could switch instantly between the two sources. (Think about it for a second--how long would a species survive in the wild if it couldn't immediately notice changes in its sonic environment?)
This is because of the nature of auditory perception and its dependence on memory accrued over time (days or weeks, and not hours).

This is just 180 degrees opposite of the truth. Auditory memory for subtle differences dissipates in a matter of seconds. I defy you to cite a single shred of scientific evidence to the contrary.