Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion that all amps sound the same, reached as the result of such testing, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same/different comparison advocated by proponents of blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of the shootouts ever resulted in a consensus. Two of the three blind tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, where there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In each case there were individuals at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it in these terms: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are insufficient in number to achieve statistical significance, then proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests apply only to random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
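
To put a number on the sample size problem, here is a small sketch of my own (Python, assuming the scipy library is available) showing how rarely a listener with a real but modest ability would reach significance in a short same/different test:

    # How often does a listener who is right 60% of the time (a real but
    # modest ability) pass a one-sided binomial test at the 5% level?
    from scipy.stats import binom

    p_true = 0.6
    for n in (10, 16, 20, 50, 100, 200):
        # smallest score that beats pure guessing (p = 0.5) at the 5% level
        k_crit = next(k for k in range(n + 1)
                      if binom.sf(k - 1, n, 0.5) < 0.05)
        power = binom.sf(k_crit - 1, n, p_true)  # chance this listener passes
        print(n, k_crit, round(power, 2))

With the 16 or 20 trials typical of an evening's session, such a listener passes less than one time in five; only at a hundred trials or more does detection become likely.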

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But were we to use db with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
I have nothing against double blind testing. The only thing I question is the listening time period. Initial impressions may change over a longer period of time. Two weeks would probably be more appropriate than a couple of hours. My bet is that listening impressions would be dramatically different. In any event it would make an interesting test.
Rja, I fully suspect you are right. We would have to run experiments to find out.
The problem I see with this test is that if the equipment does not work together (synergy), then the test is not valid. For instance, a Cary amp and a Cary preamp may work well together, but a Cary amp and a Krell preamp may not. In the end, what sounds best to my ears or yours is what matters.
My take on the article was that J.A. was being deliberately obtuse because he has no intention of ever doing db tests. It also seems unreasonable to argue that any difference which cannot be detected unless you know which component you are listening to is a sonic difference.
I think the obvious problem that J.A. preferred to ignore is that they just did a lousy job with the db test.
If 'toe tapping' or emotional response is the right measurement criterion, and if it takes more than a quick A-B switch to gauge it, that is OK. You just have to set up the test to measure the right criteria and allow time for the measurements to occur.
I.e., it is not that there is any problem with db tests; it is that a poorly designed test will not give accurate information. IMHO.
When someone claims a dramatic difference exists, a double blind test will effectively evaluate that claim. If only a subtle improvement is claimed the brief listening time of the typical DB test is a problem.
Is it just me, or is the word "synergy" the magic word that enables anyone to justify ANY component, even if a blind test yields results that suggest the piece of equipment wasn't worth the money?
I know that everything has to come together to get your toes tappin', but it seems the word synergy is often used as a safe word...and all opinions and bets are off.
It reminds me of the not-so-distant past when pornography was trying to be defined and the conclusion was "I can't define it, but I know it when I see it"...IMO that is way too vague. I guess it all boils down to each person and the sound they hear, to each his own, but as long as the word synergy is used, a lot of folks should refrain from putting down another's choices (tubes vs. solid state, digital vs. vinyl, and so on), because synergy wins every time.
What seems to be beyond audiophiles is that the only criterion of blind testing is that the participant has no information but the presented experience. Those who think blind testing is conceptually flawed have to answer a question: if what is desired is an unbiased review of sound quality, how does product information promote that?

Since "synergy" (I hate that word) is a factor in any stereo/component review, why bring it up as a factor for blind testing? The same situation exists with time. How is time a factor for bind testing but for not "sighted" testing?

I hate to ring this bell, but the drugs everyone takes...blind testing. Like a million psychology experiments...blind testing. Scientists made eliminating bias work for them - audiophiles haven't, but still some think they know better.
I am somewhat unhappy that I spoke of J.A. in my post as he brings along a lot of baggage. Many of you who have posted above seem sincerely to believe that better conceived db tests would yield recommendations of some components or cables. My reading of what I have seen posted is that many of those advocating db testing expect a conclusion that says there are no differences and thus buy the cheapest. This seems to have been J.A.'s experience in the 3 amp comparison, but in my limited experience such comparisons with db do yield a recommendation, as in the Bozak instance.

Fundamentally, I have no confidence in same/different db comparisons with too small a sample and too much dependence on statistical significance tests. A conclusion that all amps are the same, or that all cables are the same, is just too at odds with my experience to be acceptable. Perhaps when you randomly assign some to the drug and others to the placebo, double blind testing makes research design sense. But I do not concede that db testing is the fundamental essence of the scientific method. Experimentally, a control group design makes sense, but double blind testing is seldom necessary. Often it takes great originality to cope with subjects knowing they are being experimented on; the Hawthorne studies at Western Electric are the best example of this.

I also really wonder how A, B, and C comparisons of amps, etc. using double blind would be done and reported. How would the random sample be drawn, and where would they assemble? And would we need to assess the relationship between more qualified listeners and others?

There are some reviewers whose opinions I am responsive to, as they have previously said things consistent with what I hear. With double blind testing there would be no reviewers, I presume.
I agree "synergy" is an overused word, but for Blind Testing, what would be your reference amp, preamp, source, speakers, wire, ect.? Would the reference be what the manufacturer prefers, you prefer or I prefer? In the world of science there are set standards, but what are the set standards in the Audio world? We can measure db, distortion, ect., but in the Audio world there is not a perfect standard for what sounds the best to you or I. A HONEST reviewer would be much appreciated in this dishonest world we live in.
Tbg: The main question of your post seems to be, Do objectivists like Arny Krueger extol blind tests only because they like the results? The short answer is no. Arny K. and his ilk did not invent blind tests as a weapon to use against the high-end industry. In fact, they did not invent blind tests at all. Blind listening tests were developed much earlier by perceptual psychologists, and they are the basis for a huge proportion of what we know about human hearing perception (what frequencies we can hear, how quiet a sound we can hear, how masking works to hide some sounds when we hear others, etc.). Blind tests aren’t the only source of our knowledge about those things, but they are an essential part of the research base in the field.

Folks in the audio field, like Arny, started using blind tests because of a paradox: Measurements suggested that many components should be sonically indistinguishable, and yet audio buffs claimed to be able to distinguish them. At the time, no one really knew what the results of those first blind tests would be. They might have confirmed the differences, which would have forced us to look more closely at what we were measuring, and to find some explanation for those confirmed differences. As it turned out, the blind tests confirmed what perceptual psychologists would have predicted: When two components measured differently enough, listeners could distinguish them in blind tests; when the measurements were more similar (typically, when neither measured above known thresholds of human perception), listeners could not distinguish them.

Do all blind tests result in a “no difference” conclusion? Of course not, and you’ve cited a couple of examples yourself. Your preamp test, for one. (Even hardcore objectivists agree that many preamps can sound different.) Arny’s PCABX amp tests, for another. (Note, however, that Arny typically gets these positive results by running the signal through an amp multiple times, in order to exaggerate the sonic signature of the amp; I don’t believe he gets positive results when he compares two decently made solid state amps directly, as most of us would do.)

Your comments on statistical significance and random samples miss an important point. If you want to know what an entire population can hear, then you must use a random sample of that population in your test. But that's not what we want to know here. What we want to know here is, can anybody at all hear these differences? For that, all we need to do is find a single test subject who can hear a difference consistently (i.e., with statistical significance). Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same-different, ABX, whatever), and I'll agree that those two amps are sonically distinguishable.

Which leads to a final point. You say you are a scientist. In that case, you know that quibbling with other scientists' evidence does not advance the field one iota. What advances the field is producing your own evidence: evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That's why objectivists are always asking, Where's your evidence? It's not about who's right. It's about getting to a better understanding. If you have some real evidence, then you will add to our knowledge.
Pabelson, you added greatly to my historical understanding of double blind testing. Can you please give citations for the instances where same/different tests yield differences? I think something is fundamentally wrong with the research design unless there are such instances, including just single run-throughs of the signal.

I am quite uncomfortable with the idea that finding a single person who can hear differences 15 out of 20 times would be convincing. I do not know how you can set a level here. Why 15 out of 20?

All of the instances where I participated in same/different db tests were too quick, and there is too high a probability of the respondent guessing. I also felt that the testing was unrepresentative of the listening experience. By contrast, the A, B, C, etc. comparison using double blind was more analogous to the listening experience. As I said, because of this, I would be interested in such tests. Here I would again suggest testing the hypothesis that those with long experience in working with music perform differently.

Advancing the field. Yes, that would be nice. I have seen quality components, IMHO, be ignored because of name-brand manufacturers' cachet. I have little question that the field has advanced greatly during the 40 years that I have been involved, especially digital. Someone has suggested that manufacturers use double blind testing all the time, but in my experience, they do not. There is also the voicing of components by such notable designers as Kondo, etc. I presently am overwhelmed by the Shindo Labs 301 turntable. All of this is without the aid of double blind testing.

I have no doubt that some proponents of dbt are sincere, but I am equally sure that the overwhelming number of instances where a small sample is unable to hear a difference leads some to embrace db because it fits their preconceived judgments, especially if they cannot afford more expensive gear.

I also still say that reviews would be very curious with dbt. Would you start with 100 amps being compared and then each month add another? Would anyone buy such a magazine or use it for judging what they will buy? Would manufacturers concede that product D is indeed better and withdraw their amps?
Tbg: If these tests didn't yield positive results, they'd be useless for research. Just because they don't yield positive results when you want them to doesn't make them invalid. A good example of a mix of positive and negative tests is the ABX cable tests that Stereo Review did more than 20 years ago. Of the 6 comparisons they did, 5 had positive results; only 1 was negative. (The one negative, however, used similar cables and had subjects listen to music rather than noise. In most of the other 5 cases, the measured differences were much greater; in one, they listened to noise rather than music--it's easier to hear level and frequency differences with full-spectrum noise than with music.)

I presumed you knew statistics. 15 out of 20 is the 95% confidence level, which means that we can be 95% sure that the listener really heard a difference, and wasn't just guessing lucky. The 95% threshold is a reasonable one in this case.
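
If you want to check that arithmetic, it is a two-line calculation (a sketch of mine, in Python, assuming scipy):

    # Probability of scoring 15 or more out of 20 by pure guessing (p = 0.5)
    from scipy.stats import binom
    print(binom.sf(14, 20, 0.5))  # P(X >= 15), about 0.021
    print(binom.sf(13, 20, 0.5))  # P(X >= 14), about 0.058 -- not good enough

So 15 of 20 is the smallest score that leaves less than a 5% chance of lucky guessing; 14 of 20 does not make the cut.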

I suspect the tests you did involved multiple listeners listening at the same time. It's better to use one subject at a time, and to let the subject control the switching. But the Stereo Review tests used multiple listeners at once, and got plenty of positive results. Subjectivists often object that ABX tests use quick switching between components, but there's solid research showing that this approach actually works better--it's easier to hear differences when you can switch immediately between the two. I know subjectivist audiophiles consider that heresy, but the research is pretty clear.

Some manufacturers use DBTs, others don't. It makes no sense for components where differences are undeniable (microphones, turntables, cartridges, and speakers are good examples). As for "voicing" of amps and cables, people who claim to do that without DBTs are either fooling themselves or trying to fool you.

Almost nobody has a preconceived notion that things sound the same. Many objectivists used to be subjectivists till they started looking into things, and perhaps did some testing of their own.

As for reviews, a high-end magazine that used DBTs couldn't survive. Advertisers would pull out, and readers would revolt. Better to give the people what they want.
Double blind tests, statistical analysis...it all seems ridiculous and out of place in this hobby. Might as well do double blind tests and statistical analysis to determine what wine, art, automobiles, cheese, homes, or golf clubs to buy.

Don't we all agree that equipment is purposely voiced by the manufacturer...even if the goal of the voicing is to provide the gear with no sonic signature? That all equipment sounds somewhat different individually, and within each system?

Are we really endeavoring to purchase Stepford Gear, void of any personality? Just the facts, Ma'am.

Really?

This discussion leaves me cold, and frankly drains my interest in the hobby.

I want to enjoy music in my home. I don't believe I can reproduce the live event. I don't believe I can reproduce the sound the mixing engineer heard in the recording studio. I don't care about these things.

I care about closing my eyes and getting lost in the notes. I care about tapping my toes to the rhythm.

No equipment purchase based on a double blind test or a statistical analysis is going to provide that, IMO.

Pabelson, I must admit that I had not known of Stereo Review's db tests. Out of curiosity I will have to look them up. Are there others?

I teach statistics. Apart from making judgments about a population from a random sample, the concept of a confidence interval has no meaning. We can never conclude "...that the listener really heard a difference, and wasn't just guessing lucky." With a random sample of sufficient size, you can get a significance level of .05, which might mean that your experimental group's mean response was right 15 out of 20 times. This is why I ask about this number in the absence of a random sample. 15 out of 20 may impress you, but it has no basis in statistics.

I also do not understand the notion that db testing is unneeded for "components where differences are undeniable." Undeniable by whom?

I grow less convinced that db testing has any potential for shedding light on the evaluation of stereo equipment.

Tvad, were there good db testing procedures, I would think we would have to assess whether some people were better evaluators than others. As I said earlier, I still think review magazines would be boring and that most audiophiles would ignore the results, if any were positive.
Jeez, sorry for my out-of-place post. I don't know what I thought I read to initiate that response.
Tvad, I find your last post preposterous. You state that you only care about closing your eyes and getting lost in the notes and about tapping your toes to the rhythm. If that's what our hobby is about then anyone with an iPod on the subway is an audiophile. I have a friend who regularly gets down like that with his Bose Wave player. I even suspect the people in the deeply tinted window SUV with the fancy wheels that was absolutely booming "urban youth music" were lost in the music and tapping their toes. I should have rolled down my window and said hello to my audiophile brothers.

Your espousal of unfettered radical subjectivism is precisely where a large part of our hobby has gone wrong. By dismissing any pretense of fidelity to the source material you have made all systems effectively equal because someone somewhere will think any system sounds great. Beyond what makes someone feel good there actually are objective standards for judging whether a piece of equipment faithfully reproduces an input signal. We can argue about exactly what these standards are, but it would be foolish to ignore them.

BTW, if getting lost/toe tapping is a high priority, a bottle of good Scotch is a more effective system upgrade than any cable change.
Onhwy61, I don't believe one has to obsess over the importance of double blind tests and statistical analysis of equipment to be an audiophile. But, in the interest of not arguing the issue, I'll accept that by your standards I may not be an audiophile. Know what? It doesn't matter.

I enjoy listening to music on my system. I think it sounds pretty darn good. I assembled it by discussing components with other audiophiles, reading reviews (and posts), and by purchasing the pieces and listening for myself as I added, subtracted and tweaked the system into its present form. I based no decision solely on cost. If something lower cost sounded better to me, I kept it. If not, I sold it and kept the more expensive gear. But I never spent a minute blindfolded (although I spent many minutes with my eyes closed), nor did I spend hours poring over technical measurements...especially when it came to wire. I simply trusted my ears.

I never ignore specs or measurements of electronics or loudspeakers, but I don't consider specs to be the end-all in determining whether to purchase the gear. Further, I don't consider specs to be the end-all in determining what constitutes "audiophile" gear. I have no doubt that there are plenty of mid-fi components that measure substantially better than my VAC amp, and I'm certain my VAC amp has far worse specs than the uber-spec'd digital amps I've auditioned thus far. But, I enjoy the sound of the VAC more. I've never read a single test measurement for any of the modified CD players I've owned, and I'm reasonably certain none are published (post modification). That hasn't stopped me from accepting that they sound better than their stock cousins.

Listening to music and feeling good (or feeling some emotion) isn't just a high priority, it's what it's ALL about as far as I'm concerned. If that's not criteria for entry into the audiophile club, I'll happily not belong.
Tvad writes =If that's not criteria for entry into the audiophile club, I'll happily not belong.=

"I would never join any club that would have me as a member"
-Groucho Marx
Onhwy61, your comments read like a racist expressing his disdain for the infusion of impurities into the master plan of audiophilia. I don't know if you intended them to be so exclusive, but they struck me that way. It would be fair enough to say that you do a hobby one way, and allow others to walk their own paths. But if I'm hearing you accurately, then I'll personally opt for a scotch with my music. Of course, not while I'm out bumpin' with the brothers in my SUV.
I mentioned this in another thread not too long ago. In blind taste testing, Pepsi usually wins. When the brands are known, Coke almost always wins. I think this means that Coke comes with a plethora of baggage (at least more than Pepsi) that affects objectivity to the extent that it can affect our perceptions. Can this be true of cable testing, or anything else for that matter? The odd thing is that most people do prefer Coke, because we don't buy it in a blind test. To me at least, there are significant implications for audio here. If I know I'm listening to a Valhalla, does it change the perception I would have had if I thought it was a Cardas, or if I didn't know the brand at all?

In court, the least reliable evidence is frequently that of eyewitnesses. For instance, even though a group of people witness the same event, their perceptions of the event usually vary. I think that objectivity can be extremely difficult to achieve because we have so many more factors wired in. Another instance I find humorous is when an audio component tests one way with sophisticated instruments (admittedly this can be less than objective, depending on the application and methodology used and the biases of the human tester) and the human perception is directly opposite. This seems to happen more with tube equipment for some reason. Then there's that school of thought that the simple fact that something is being tested can affect the outcome of the test. Just some thoughts.
Tvad, I applaud your dedication to this hobby and I truly hope you derive an enormous degree of personal satisfaction from being a practicing audiophile, but I still strongly disagree with you on a key issue. Tapping your toes and grooving to the music is great, but even non-audiophiles tap their toes. As I see it, audiophiles are about listening to music reproduced with a high degree of fidelity to the source material. In your 6/13 post you state that you are not interested in fidelity, only whether it makes you feel good. It's real easy to put together a system that sounds good. Pump up the bass, give it a big syrupy midrange and roll off the high end, and even well-schooled audiophiles will be tempted. It's even easier putting together an "accurate" system with vanishingly low distortion and ruler-flat frequency response. What makes our hobby challenging is putting together an accurate system that also sounds good. Just tapping your toes won't get you there.

Boa2, how do you know that I'm not one of the brothers in the SUV?
That was YOU, Onhwy61? I couldn't see you through the cloud of doobage. Welcome aboard, bro.
I even suspect the people in the deeply tinted window SUV with the fancy wheels that was absolutely booming "urban youth music" were lost in the music and tapping their toes. I should have rolled down my window and said hello to my audiophile brothers.

When I read this. I was actually picturing somewhere in suburbia! :-)
Onwhy61, you wrote:
In your 6/13 post you state that you are not interested in fidelity, only whether it makes you feel good.

I believe some of your consternation with my post of 6/13 stems from your misunderstanding of what I wrote. If you carefully re-read the post you will discover that nowhere did I state that I was not interested in fidelity. Rather, I wrote:
I don't believe I can reproduce the live event. I don't believe I can reproduce the sound the mixing engineer heard in the recording studio.

That is the only reference I made to anything remotely having to do with fidelity. I said I don't believe I can reproduce the live event. Indeed, this is what I believe, starting with the acoustics of my room versus the acoustics of the room in which the music was recorded. Reproducing the live event is a Utopian goal that is ultimately impossible. Therefore, I do not endeavor to do this. However, this statement does not mean I don't care about fidelity. For many of the same reasons involved in the impossibility of reproducing the live event, I also do not believe I can reproduce the sound the recording engineer heard in the studio: equipment, room acoustics, etc. are all different.

Please don't critique my opinions without first understanding my statements. If you're not sure about the point I am attempting to make, please ask me and I will try to explain. Once we understand each other, you're welcome to fire away.

Finally, you wrote:
Just tapping your toes won't get you there.

Again, I am interested in fidelity, but I do not and will not get mired down in discussions of double blind tests, statistics and technical specifications. I can hear differences in my system produced by swapping various elements of that system, and this is sufficient for my purposes.

As I stated in an earlier post, the music is everything to me, and if that priority does not make me an audiophile according to your definition of the word, then so be it.

Tvad sez:
I said I don't believe I can reproduce the live event. Indeed, this is what I believe
Of course you can't. You can only reproduce what the recording process produced and stored on the medium used...

Add to that, the imperfections & losses due to the recording process, the imperfections & losses due to the storage medium and the imperfections & losses due to the reproduction system.

In all of our rantings, we are addressing the last of these (the repro system)

At its best, a reproduction system aims at coming close to the original, i.e. what's on the RECORDED medium (not the live event); this seems to me a reasonable target for us audiophiles.

For the live event, you go to the concert hall.
Gregm, faithful reproduction of the live recorded event has been mentioned in the audiophile press as a goal of an audiophile playback system, and as a subjective measure of a system's fidelity. That's why I brought it up.

I should also add, the argument has included whether or not the goal is to reproduce the sound of the actual live event, or the sound of the recorded live event. I'd say based on some of the carefully/simply engineered recordings made by, say, Chesky, the goal of some recording engineers is the faithful reproduction of the actual live event. Whether it can be done is another debate: one that has been discussed before in these threads.

Although I value fidelity, I don't spend hours carefully dissecting the sound of my system trying to determine if what I'm hearing is the best reproduction of the live event I can achieve. I'd rather spend the time relaxing, tapping my toes, and enjoying the music. It's just a choice I've made.

Good to see we agree.

Tbg: For someone who "teaches statistics," you express a rather narrow perspective on the field. Think about how you would use statistics to determine whether a coin is fair. (You do agree that you can use statistics to do this, don't you?) The problem of determining whether a certain subject can hear a difference between two components is precisely the same. Do his results suggest that he was just guessing which was which (the equivalent of flipping a fair coin), or that he could indeed hear a difference (flipping an unbalanced coin)? At any rate, it really doesn't matter whether you think statistics is applicable here. People who actually study hearing and do listening tests use statistics for this purpose every day of the week.

I would define undeniable differences as those for which measurements would lead us to predict such differences. If there are measured characteristics of two components that are above the known threshold of human detection, then there's no real need to do a DBT to determine whether they sound different. For example, if one amp has a THD of 0.1%, and the other is at 3%, we can safely assume that they are audibly different. Transducers typically measure differently enough that we can assume they sound different. Ditto many (but not all) tube amps. Solid state amps, unless they are underpowered for the speakers they are driving or have a non-flat frequency response (perhaps due to an impedance mismatch) generally do not.

Before I get tagged with the "measurements are everything" slur, let me say that these measurements can only predict WHETHER two components will sound different. If they do sound different, the measurements cannot tell us (at least not very well) which you will prefer, or even in what ways they will sound different to you.

For more info on DBTs, see the ABX home page, mirrored here:

http://www.pcavtech.com/abx/
Pabelson, perhaps we just have a language difference. I would certainly concede that for a coin to come up heads 15 out of 20 tosses is improbable. This probability is at the root of statistical inference, which, of course, seeks to assess support for a hypothesis in the population from a sample. There is always the possibility that the sample is unrepresentative and that we might wrongly reject the null hypothesis when it is actually true.

I just think the proper hypothesis should be that a sample of people can hear a difference between cables or amps. The null hypothesis is that they cannot.
It would be very difficult with a sample of one to achieve statistical significance, so you are apt to accept the null hypothesis. However, a sample of 25,000 would assure you statistical significance.

I am only concerned that the choice of the sample size may be determined by what the researcher's intended finding might be. I think it is a far more interesting hypothesis to suggest that those with "better ears" would do better. I don't think most audiophiles would be convinced, or should be convinced, that all amps or wires sound the same.
As I recall, statistics can be very useful.

Stat 101....Intro to Statistics
Stat 102....Statistic Applications (How to fool others using statistics).
Stat 201....Advanced Statistics (How to fool yourself using statistics).

Just kidding. In my work with ballistic missile inertial guidance systems, such as the estimation of CEP (circular error probability) based on a couple of hundred modeled error sources, I have been exposed to the most arcane forms of statistics. One must always remain aware of the risk of fooling yourself, and be able to laugh about it.
I just think the proper hypothesis should be that a sample of people can hear a difference between cables or amps.

Well, that's one possible hypothesis. Another possible hypothesis is that one particular individual can hear a difference. That's the equivalent of testing the fairness of one particular coin. Note that the sample size isn't one. It's the number of listening trials/coin flips.

I am only concerned that the choice of the sample size may be determined by what the researcher's intended finding might be.

The choice of sample size isn't what's critical here. The statistical significance is. Granted, larger samples reduce the possibility of false negatives, but it's not as if there have never ever been any ABX tests with large sample sizes. The Stereo Review cables test had a sample size of 165. The possibility of a false negative is very low with a sample that big. (Since you teach statistics, I'll let you do the math.)
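
For everyone who doesn't teach statistics, here is the math I'm alluding to (my own sketch, Python with scipy):

    # With 165 trials, how often would a listener with a genuine ability
    # fail to reach significance at the 5% level (a false negative)?
    from scipy.stats import binom

    n = 165
    # score needed to beat guessing (p = 0.5) at the 5% level
    k_crit = next(k for k in range(n + 1)
                  if binom.sf(k - 1, n, 0.5) < 0.05)
    for p_true in (0.6, 0.7):
        miss = binom.cdf(k_crit - 1, n, p_true)  # P(score falls short)
        print(p_true, k_crit, round(miss, 3))

A listener who is right 70% of the time would essentially never be missed, and even a marginal 60% listener would be detected about four times out of five.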

And if you think the reason these tests come up negative so often is sample size, you as a "scientist" ought to know how to respond: Do your own experiment. Complaining about other people's data isn't science.

I think it is a far more interesting hypothesis to suggest that those with "better ears" would do better.

Then test it. The SR panel was a pretty audio-savvy bunch, as I recall.

I don't think most audiophile would be convinced or should be convinced that all amps or wires sound the same.

Are you saying they're all close-minded?
Pabelson, frankly I don't care enough about this question to expend the time necessary to do such work. I am more concerned with finding a great loudspeaker.

I just do not understand the expectation that all individuals are the same in these tests. It is not statistical significance, it is improbability that you are talking about.

How do you know when you wrongfully reject the null hypothesis?
Tbg: If all you care about is finding a great speaker, why'd you start this thread???

All individuals are not the same. I never said they were. I think you're hung up on the idea of a hypothesis about what a majority of people can hear (in which case it would be necessary to test a random sample of all people). But the more common question in audio is, can anybody hear it? To answer that question in the affirmative, all you have to do is find *one* person who can hear a difference between two components. That's why testing a single individual can be appropriate. (Just remember that, in a single-person test, the null hypothesis relates to that single person; if he flunks, you can't conclude anything about anyone else.)

Here's a good example of the kind of testing that researchers do:

http://www.nhk.or.jp/strl/publica/labnote/lab486.html

Note that one of their 36 subjects got a statistically significant result. In a panel that large, this can easily happen by chance. To check this, they tested that individual again, and she got a random result, suggesting that her initial success was merely a statistical fluke.
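
The "by chance" remark is easy to quantify (my own sketch; I'm assuming each subject's result was judged at the 5% level):

    # With 36 subjects each tested at the 5% significance level, the chance
    # that at least one "passes" purely by guessing:
    alpha, subjects = 0.05, 36
    print(1 - (1 - alpha) ** subjects)  # about 0.84

In other words, a panel that size is more likely than not to produce one apparent golden ear even if nobody hears anything, which is exactly why the retest mattered.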
I started the thread because I am curious about those who doubt others' abilities to hear the benefits of some components and wires. Since many proponents can point to only a few examples of DBT and nevertheless seem confident of the results, I assumed that they saw DBT as endorsing their personal beliefs. Furthermore, my personal experiences with DBT same/different setups have been that I too could not be confident that my responses were anything other than random. But my experiences with single blind tests in which several components were compared have been more favorable, with a substantial consensus on the surprisingly best component.

Speakers have always been a problem for me. Some are better in some regards and others in other areas. I suspect that, within the limits of what we can afford, each of us picks our poison.

I did read your referenced article and found it very interesting and troublesome, as I use a Murata super tweeter which only comes in at 15 kHz and extends to 100 kHz. I am 66 and have only limited hearing above 15 kHz, yet in a demonstration I heard the benefits of the super tweeter, even though there was little sound and no music coming from the super tweeter when the main speakers were turned off. Everyone else in the demonstration heard the difference also. I know that the common response by advocates of DBT is that we were influenced by knowing when they were on.

I must admit that I am confident of what I heard and troubled by my not hearing a difference in a DBT. Were this my area of research rather than my hobby, I would no doubt focus on the task at hand for subjects in DBTs as well as the testing apparatus. My confidence is still in human ears, and I suspect that this is where we differ. I guess it is a question of the validity of the test.

For a sincere DBTer, such as yourself, I am not being truculent. For those embracing DBT as simple self-endorsement, I am dismissive.
For those embracing DBT as simple self-endorsement, I am dismissive.

No objectivists of my acquaintance (and I am acquainted with some fairly prominent ones), "embrace DBT as simple self-endorsement." A number of them, myself included, were subjectivists until we heard something that just didn't make sense to us. I know of one guy (whose name you would recognize) who was switching between two components and had zeroed in on what he was sure were the audible differences between them. Then he discovered that the switch wasn't working! He'd been listening to the same component the whole time, and the differences, while quite "obvious," turned out to be imaginary. He compared them again, blind this time, and couldn't hear a difference. He stopped doing sighted comparisons that day.

Research psychologists did not adopt blind testing because it gave them the results they wanted. They adopted it because it was the only way to get reliable results at all. Audio experts who rely on blind testing do so for the same reason.

Final thought: No one has to use blind comparisons if they don't want to. (Truth be told, while I've done a few, I certainly don't use them when I'm shopping for audio equipment.) Maybe that supertweeter really doesn't make a difference, but if you think it does, and you're happy with it, that's just fine. Just don't get into a technical argument with those guys from NHK!
Tbg...I also can "hear" the effect of tweeters/supertweeters operating well above the measured bandwidth of my 67-year old ears. (I first noticed this general effect, at higher frequencies, when I was much younger). My explanation is that the ear senses RATE-of-change of pressure, as well as change of pressure. The high rate of change of a 20 KHz signal can be sensed, even if the smoothly changing pressure of a 14KHz signal is inaudible. The experience we share is common. Have you heard any other explanation?
ELdartford sez
the ear senses RATE-of-change of pressure (...) Have you heard any other explanation?
Well, 1) about 20 yrs ago a French prof (I forget the name) claimed findings that the bones contribute to our perception of very high frequencies. 2) There seems to be a case for the interaural mechanism working together -- not ONE ear alone, but both being excited.

OTOH, it's also been established that the audibility of PURE tones diminishes with age in the higher frequencies. So here, we're talking about "sound in context": i.e. say, harmonics of an instrument -- where the fundamental & certain harmonics are well within our pure tone hearing range and some of the related info is outside an individual's "official" (pure tone) audible range.

The strange thing is that our ears work as a low pass; so, some people speculate that it's the COMMON interaural excitation that does the trick...
For this to happen (let's ignore the possible contribution of the bone structure for now), wouldn't it mean that our interaural "mechanism" is situated in the DIRECT path (sweet spot) of those frequencies (remember, our acuity falls dramatically, ~20-30 dB, up there)? If so, then moving our head slightly would eliminate this perception.

So, let's assume a super high frequency transducer with excellent dispersion characteristics and thereby eliminate the need for that narrow sweet spot (a Murata is quite good, btw).

It is my contention (but I have no concrete evidence) that three things are happening in conjunction:
a) the high frequency sound is loud enough to overcome our reduced acuity up high (at -60 dB perception our ear would basically reject it)
b) the sounds in our "official" audible frequency range are rendered more palpable (for want of a better word) because the super transducer's distortion points (upper resonance) have moved very far away (it's ~100 kHz for a Murata) -- hence "perception" of positive effects. This still relates to our "official" range of hearing.

c) there is a combined excitation of aural and other, structural, mechanisms that indicate the presence of high frequencies -- which we cannot, however, qualify or explain (our hearing is a defense and guidance mechanism geared towards perceiving and locating).
Even with (c) there is a dilemma: in a small experiment in France, some subjects were asked to put one ear close to a super tweet and declare whether they perceived anything. Inconclusive (some did, some didn't, no pattern). BTW, I did a similar thing and did perceive energy, or the lack of it, with some DELAY, however, when the tweet STOPPED producing sound (joining Eldartford's idea).
Subjects were then asked to move away from the transducer and listen normally (stereo), just by casually sitting on a couch in front of the speakers as one would do at home. Everyone "heard" the supertweet playing. Amazingly, only the s-tweet was connected (at 16 kHz -- very high up for sound out of other context).
I find this fascinating.
Gregm, I do not know how many out there experienced the Murata demonstration at CES 2004, but it was a great deal like what you describe. Initially, the speakers played a passage. Then the super tweeters were used and the passage replayed. The ten people in the audience all expressed a preference for the use of the super tweeters. There was much conversation but ultimately someone asked to hear the super tweeter only. The demonstrator said, we already were hearing it.

When we all refocused on the sound, all that we could hear was an occasional spit, tiz, snap. There was no music at all. The Muratas come in at 15 kHz. I left and dragged several friends back for a second demonstration, with exactly the same results.

Would there be any benefit to having this done single or double blind? I don't think so. Do we need to have an understanding of how we hear such high frequency information, without which it might be a placebo or Hawthorne effect? I don't.

But this experience is quite at odds with the article that Pabelson cited. What is going on? I certainly don't know, save to suggest that there is a difference in what is being asked of subjects in the two tests.
I teach a course on the philosophy of color and color perception. One of the things I do is show color chips that are pairwise indistinguishable. I show a green chip together with another green chip that is indistinguishable. Then, I take away the first chip and show a third green chip that is indistinguishable from the second. And then I toss the second chip and introduce a fourth chip, indistinguishable from the third. At this point, I bring back the first green chip and compare it with the fourth. The fourth chip now looks bluish by contrast, and is easily distinguished from the original. How does that happen? We don't notice tiny differences, but they add up to noticeable differences. We can be walked, step-wise, from any color to any other color without ever noticing a difference, provided our steps are small enough!

Same for sound, I bet. That's why I don't understand the obsession with pair-wise double-blind testing of individual components. Comparing two amps, alone, may not yield a discriminable difference. Likewise, two preamps might be pairwise indiscriminable. But the amp-preamp combos (there will be four possibilities) may be *noticeably* different from one another. I bet this happens, but the tests are all about isolating one component and distinguishing it from a competitor, which is exactly wrong!

The same goes for wire and cable. It may be difficult to discern the result of swapping out one standard power cord or set of ic's or speaker cables. But replace all of them together and then test the completely upgraded set against the stock setup and see what you've got. At least, I'd love to see double-blind testing that is holistic like this. I'd take the results very seriously.

From the holistic tests, you can work backward to see what is contributing to good sound, just as you can eventually align all color chips in the proper order, if presented with the whole lot of them. But what needs to be compared in the first place are large chunks of the system. Even if amp/pre-amp combos couldn't be distinguished, perhaps amp/pre-amp combos with different cabling could be (even though none of the three elements used distinguishable products!). I want to see this done. Double blind.

In short: unnoticeable differences add up to *very* noticeable differences. Why this non-additive nature of comparison isn't at the forefront of the subjectivist/objectivist debate is a complete mystery to me.
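
A toy model of the point, if it helps (the step size and threshold are invented purely for illustration):

    # Five stimuli spaced 0.4 "units" apart, with a detection threshold of 1.0:
    # no adjacent pair is distinguishable, yet the endpoints clearly are.
    step, threshold = 0.4, 1.0
    chips = [i * step for i in range(5)]
    for i in range(4):
        print(i, i + 1, abs(chips[i + 1] - chips[i]) > threshold)  # all False
    print(0, 4, abs(chips[4] - chips[0]) > threshold)              # True

Sameness survives every single step and still fails across the whole chain.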

-Troy
Troy: Psychoacoustics is well aware of the possibility that A does not necessarily equal C. That hardly constitutes a reason to question the efficacy of DBTs.

And you are quite correct that changing both your speaker cables and interconnect(s) simultaneously might make a difference, when changing just one or the other would not. But assuming you use proper level-matching in your cable/wire comparisons, there probably won't be an audible difference, no matter how many ICs you've switched in the chain. (And if you don't use proper level-matching in your cable/wire comparisons, you will soon be parted from your money, as the proverb goes.)
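
To give a sense of scale on level-matching (a sketch of mine; the 0.1 dB matching target is the figure commonly cited in ABX circles, not something established in this thread):

    # Converting a level mismatch in decibels to a voltage difference
    for db in (1.0, 0.5, 0.1):
        ratio = 10 ** (db / 20)  # voltage ratio corresponding to db decibels
        print(db, "dB ->", round((ratio - 1) * 100, 1), "% voltage difference")

A half-decibel mismatch is nearly a 6% voltage difference, and small level boosts tend to be heard not as "louder" but as "better," which is how unmatched comparisons part you from your money.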

You might be interested to know that Stereo Review ran an article in June 1998, "To Tweak Or Not to Tweak," by Tom Nousaine, who did listening tests comparing two very different whole systems. (The only similarities were the CD player and the speakers, but in one system the CD player fed an outboard DAC.) The two systems cost $1700 and $6400. The listening panel could hear no difference between the two systems, despite differences in DACs, preamps (one a tube pre), amps, ICs, and speaker cables.

So, contrary to your assertions, this whole question has been studied, and there is nothing new under the sun.
My point was not to call into question the efficacy of blind testing. I am quite in favor of it. Even when only one element of a system is varied, the results are interesting, and valuable. For instance, if I can pairwise distinguish speakers (blindly) of $1K and $2K, but not be able to distinguish similarly priced amps, or powercords, or what have you, then my money is best spent on speakers. Likewise, if preamps are more easily distinguishable than amps, I'll put my money there. A site that's interesting in this regard is:

http://www.provide.net/~djcarlst/abx_data.htm

I never said DBT is ineffective. It's just that *most* testing ignores the phenomenon that I cited: sameness of sound is intransitive, i.e., a=b and b=c, but not a=c. If the question is whether a certain component contributes to the optimal audio system, this phenomenon can't be ignored.

Of course scientists studying psychoacoustics are already aware of the phenomenon. I don't think I'm making a contribution to the science here. But the test you cite above is an exception, and for the most part, A/B comparisons are done while swapping single components, not large parts of the system. This is fine, when you *do* discover differences. Because then you know they're significant. But when you don't find differences, it's indeterminate whether there are no differences to be found OR the differences won't show up until other similar adjustments are made elsewhere in the system.

But I am *very much* in favor of blind testing, even in the pair-wise fashion. For instance, I want to know what the minimum amount of money is that I could spend to match the performance of a $20K amp in DBT. Getting *that* close to a 20K amp would be good enough for me, even if the differences between my amp and it will show up with, say, simultaneously swapping a $1K preamp with a $20K preamp. So where's that point of auditorily near-enough for amps?

I've also learned from DBT where I want to spend my extremely limited cash: speakers first, then room treatment, then source/preamp, then amp, then ic's and such. I'll invest in things that make pair-wise (blind) audible differences over (blind) inaudible differences any day.

Still, for other people here, who are after the very best in sound, only holistic testing matters. Their question (not mine) is whether quality cabling makes any auditory difference at all, in the very best of systems. Same for amps.

Take a system like Albert Porter's. Blindfold Mr. Porter. If you could swap out all the Purist in his system and put in Radio Shack, and *also* replace his amps with the cheapest amps that have roughly similar specs, without his being able to tell, that would be very surprising. But I haven't seen tests like that... the one you mention above excepted.
In theory, I like the idea of double blind testing, but it has some limitations as others have already discussed. Why not play with some other forms of evaluating equipment?

My first inclination would be to create a set of categories, such as dynamics, rhythm and pace, range, detail, etc. You could have a group of people listen and rate according to these attributes on a scale of perhaps 1 to 5. You could improve the data by having the participants not talk to one another before completing their ratings, by hiding the equipment from them during the audition, and by giving them a reference audition where pre-determined ratings are provided from which the rater could pivot up or down across the attributes.

Yet another improvement would be to take each rating category and pre-define its attributes. For example, ratings for "detail" as a category could be pre-defined as: 1. I can't even differentiate the instruments and everything sounds like a single tone. 2. I can make out different instruments, but they don't sound natural and I cannot hear their subtle sounds or noises. 3. Instruments are well differentiated and I can hear individual details such as fingers on the fret boards and the sound of the bow on the violin string. Well, you get the picture. The idea is to pre-define a rating scale based on characteristics of the sound. Notice terms such as lush or analytical are absent, because they don't themselves really define the attribute. They are subjective conclusions. Conceivably, a blend of categories and their attributes could communicate an analysis of the sound of a piece of equipment, setting aside our conflicting definitions about what sounds "best," which is very subjective.

Further, such a grid of attributes, when completed by a large number of people, could be statistically evaluated for consistency. Again, it wouldn't tell you whether the equipment is good or bad, but if a large number of people gave "detail" a rating of 2 and you had a low deviation around that rating, you might get a good idea of what that equipment sounds like and decide for yourself whether those attributes are desirable to you or not. Such a system would also, assuming there were enough participants over time, flush out the characteristics of equipment irrespective of what other equipment it was used with, by relying upon a large volume of anecdotal evidence. In theory, the characteristics of a piece of equipment should remain consistent across setups, or at least across similar price points.
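
To make the consistency check concrete, here is a quick sketch (category names and scores invented for illustration):

    # Mean rating and spread per category across a panel of raters;
    # a low standard deviation means the panel agrees on what it heard.
    from statistics import mean, stdev

    ratings = {
        "dynamics": [4, 4, 3, 4, 5, 4],
        "detail":   [2, 3, 2, 2, 3, 2],
        "range":    [3, 3, 4, 3, 3, 3],
    }
    for category, scores in ratings.items():
        print(category, round(mean(scores), 2), round(stdev(scores), 2))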

Lastly, by moving toward a system of pre-defined judgements, one could create some common language for rating attributes. Have you noticed that reviewers tend to use the same vocabulary whether evaluating a $500 piece of gear or a $20,000 piece of gear? So the review becomes judgemental and loses its ability to really place the piece of gear in the spectrum of its possible attributes.

It's not a double blind study, but large doses of anecdotal evidence when statistically evaluated can yield good trend data.

Just an idea for discussion. If you made it this far, thanks for reading my rant :).

Jeff
My apologies. I took you for the typical DBT-basher. As for amps, assuming you are talking about solid-state amps designed to have flat frequency response, I seriously doubt it matters (in a DBT) what preamp you use, or how expensive it is. If it has the power to drive your speakers, it will sound like any other good solid state amp with enough power to drive your speakers. Or so the bulk of the research suggests.

To your final point, I'm not sure what's in Mr. Porter's system, but the Nousaine experiment at least suggests that he would NOT notice such a swap, assuming you could finesse the level-matching issues. That's not to say that Mr. Porter's system is not right for Mr. Porter--merely that it might be possible for someone else to achieve a Porter-like sound for somewhat less money. And swapping out amps and cables is one thing; I wouldn't even dream of touching his turntable!
I find this a very interesting topic.

On one hand, it is somewhat accepted that the perfect component imposes no sonic qualities of its own on the passing signal, and yet voicing of components is often referred to, particularly in the case of cables.

So, if a component is purposely voiced, then the reproduction cannot be true to the source, can it? Further, if the differences are so obvious as many anecdotally state, it should be no problem to pass BT, DBT, or ABX tests...
On one hand, it is somewhat accepted that the perfect component imposes no sonic qualities of its own on the passing signal, and yet voicing of components is often referred to, particularly in the case of cables.

So, if a component is purposely voiced, then the reproduction cannot be true to the source, can it?
It seems to me there is a spectrum of audiophilia, at one end of which lies the "no-coloration/neutral-is-best" goal, while at the other end is found "coloration-for-musicality." Proponents of each goal will argue their method provides the best reproduction of music. Between the extremes is infinite possibility for variation.

I believe the root of many disagreements in the Audiogon threads regarding various components lies in differing goals and methods preferred by those who comment. For example, the Nuforce amplifiers are a topic of some debate, and I am reasonably certain that the members who find these amps revolutionary prefer a sound that differs from that preferred by the members who are less than enthusiastic about the Nuforce amps.

So, the definition of the perfect component will vary from audiophile to audiophile, and therefore the reproduction of the recording may not be true to the source, but it may be true to the music according to the preference of the hobbyist, IMO.
I have a huge problem with the concept of DBT with regard to trying to determine the differences, or lack thereof, in audio products. Maybe I'm just slow, but I often have to live with a piece of gear for a while before I can really tell what it can and cannot do.
DBT is great for something like a new medicine. However, it would be worthless if you gave the subjects one pill, one time. The studies take place over a period of time. And that is the problem with DBT in audio. You sit a group of people in front of the setup. They listen to a couple of songs, you switch whatever component, and then play a couple of songs. That just doesn't work. The differences are often very subtle and can't be heard at first.
Which, of course, is the dilemma of making a new purchase. You have to base your decision on short listening periods.
The concept of a DBT for an audio component is great. But I have yet to see how a test would be set up that would be of any value. Looking at test results based on swapping components after short listening periods would never influence my buying decisions. I wouldn't care how large the audience was or how many times it was repeated, any more than I would trust a new drug whose trial was conducted with a one-pill dose.
Agaffer, I agree. I have participated in DBTs several times and have found hearing differences in such a short term to be difficult, even though after long-term listening to several of the units, I clearly preferred one.

I think the real question is why do short-term comparisons with others yield "no difference" results while other circumstances yield "great difference" results. Advocates of DBT say, of course, that this reveals the placebo effect in the more open circumstances where people know what unit is being played. I think there are other hypotheses, however. Double blind tests over a long term with no one else present in private homes would exclude most alternative hypotheses.

The real issue, however, is whether any or many of us care what these results might be. If we like it, we buy it. If not, we don't. This is the bottom line. DBT assumes that we have to justify our purchases to others as in science; we do not have to do so.
DBT as done in audio has significant methodological issues that virtually invalidate any results obtained. With improper experimental design methodology, any statistics generated are suspect. Regularly compounding the statistical issues is sample size, usually quite small, meaning that the power of any statistics generated, even if significant, is quite small, again meaning that the results are not all too meaningful. Add to this the criticism that DBT, as done so far in audio, might be introducing its own set of artifacts that skew results, and we have quite a muddle.

I'm not at all opposed to DBT, but if it is to be used, it should be with a tight and valid experimental design that allows statistics with some power to be generated. Until this happens, DBT in audio is only an epithet for the supposed rationalists to hurl at the supposed (and deluded) subjectivists. Advocates of DBT have a valid axe to grind, but I have yet to see them produce a scientifically valid design (and I am not claiming an encyclopedic knowledge of all DBT testing that has been done in audio).

More interestingly, though, what do the DBT advocates hope to show? More often than not, it seems to be that there is not any way to differentiate component A (say, the $2.5K Shudda Wudda Mega monster power cord) from component B (a stock PC), or component group A (say, tube power amps) from component group B (transistor power amps). Now read a typical subjectivist review waxing rhapsodic on things like soundstage width and height, instrumental placement, micro and macrodynamics, bass definition across the spectrum, midrange clarity, treble smoothness, "sounding real," etc., etc. Can any DBT address these issues? How would it be done?

You might peruse my posts of 8/13/05 and 8/14/05 about a power cord DBT session, carried out, I think, by a group that was sincere but terribly flawed in how it approached what it was trying to do, to get an idea of how an often-cited DBT looks when we begin to examine critically what was done.

http://forum.audiogon.com/cgi-bin/fr.pl?fcabl&1107105984&openusid&zzRouvin&4&5#Rouvin