Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who Atkinson suggests fundamentally wants only double-blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson's double-blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by those advocating blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus left each component with advocates, since everyone knew which was playing. None of these ever produced a consensus. Two of the three double-blind tests were same/different comparisons; neither led to the conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and there a substantial consensus emerged that the Bozak preamp surpassed more expensive preamps, even with many designers of those preamps among the listeners. In every case there were individuals at odds with the overall conclusion, in no case were the participants a random sample, and no test involved more than 25 people.

I have never heard of an instance where the same/different methodology concluded that there was a difference, yet comparisons of multiple amps, preamps, etc. apparently can result in one being generally preferred. I suspect, however, that those advocating double-blind testing mean only the same/different methodology. Do its advocates really expect that the outcome will always be that people can hear no difference? If so, is that conclusion what underlies their advocacy, rather than the supposedly scientific basis for it? Some advocates claim that if a double-blind test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double-blind-test advocates would rather be right than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here. Some people can hear a difference, but if they are too few to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This reasoning is invalid, because the samples are never random and seldom, if ever, of substantial size. Since significance tests properly apply to random samples, and statistical power grows with sample size, nothing in the typical double-blind test works in favor of the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
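The sample-size point can be made concrete with a little arithmetic. Here is a minimal sketch, with purely illustrative numbers (not data from any test described in this thread): in a short 16-trial same/different (ABX-style) session, a listener who is genuinely right 70% of the time still has only about even odds of clearing the usual 5% significance bar.

```python
# Illustrative sketch: why short same/different sessions are
# statistically stacked against finding a difference.
# All numbers here are assumptions for illustration only.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 16          # trials in a short listening session
p_null = 0.5    # chance guessing under the null hypothesis

# Smallest number of correct answers whose chance-level tail
# probability falls below the conventional 0.05 threshold:
k_crit = next(k for k in range(n + 1) if binom_tail(n, k, p_null) < 0.05)

# Statistical power: the probability that a listener who truly hears
# the difference 70% of the time actually reaches that criterion.
power = binom_tail(n, k_crit, 0.70)

print(k_crit)            # 12 correct out of 16 required
print(round(power, 2))   # 0.45 -- roughly a coin flip to "pass"
```

So a negative result from one short session says little about whether a modestly audible difference exists; only more trials (or more such listeners) would raise the power.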

Without double-blind testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But if we used a double-blind design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? Such a design would also test another hypothesis: that some people can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double-blind testing might question my commitment to science. My experience with same/different double-blind experiments suggests to me a flawed methodology. A double-blind multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
For the record, I am not opposed to rigorous DB tests; they can provide useful information. However, I do NOT have a high level of confidence in definitive interpretations of a negative result of a short-term DBT involving 2 components that may have subtle differences. As noted in my previous posts, the underlying complexity has not been unravelled yet.

I'll try one last time to hint at the complexity involved. In wine tasting, if you taste two samples one after the other, you should rinse your mouth with water to minimize the influence of the "aftertaste" of the first sample on the second. If you look at a bright yellow object and then close your eyes, you will see an "afterimage" in a complementary color. As long as that afterimage persists, it is "noise" that may influence subtle subsequent visual experiences. Our brain circuitry and chemistry are not like electronic circuitry. They do not start and stop with the stimulus, and they have their own variable "noise floor." The aftereffect that persists may mix with subsequent stimuli, and this added noise may smear the more subtle characteristics. A SHORT-TERM DBT may not allow enough time for the aftereffect of the previous sample to subside; that noise in the neuro-biological environment may smear SUBTLE differences.

Those of you with a high level of confidence, or faith, in the negative results of short-term DBTs have yet to address this and other complexity issues. Hopefully, these issues will be sufficiently addressed as neuroscience and psychoacoustics develop. The reason a tremendous amount of research is still going on is that there is a lot that is not yet known; at least, not enough is known for me to be very confident.

In the meantime, a rigorous DBT should, among other things: 1) provide sufficient time between samples; 2) reduce the room effects that may smear differences; 3) make sure the participants pass a comprehensive hearing test, demonstrating that they can hear the frequencies in the audible range and can perceive dynamic gradations; 4) make sure the tested material includes a full spectrum of frequencies and a large variety of harmonic textures and dynamic shadings; 5) adjust the level of sound, preferably without adding any other components into the signal path that may smear differences; etc. After all, a meta-statistical analysis of a lot of flawed DBTs is not good science.
Puremusic, that's a good start on coming up with a test you would find satisfying. What would the "other things" you mention be? Would any of the other "subjectivists" in the crowd care to propose changes to the acceptable methodology? What would you find convincing?
Puremusic: Psychoacoustics and neuroscience are already way ahead of you. In fact, the kinds of things that get argued about in audio circles aren't even being researched anymore, because those questions were settled long ago.

Just to take one example, you insist on "sufficient time between samples." In the case of hearing, the opposite is true. Our ability to pick out subtle differences in sound deteriorates rapidly with time--even a couple of seconds of delay can make it impossible to identify a difference that would be readily apparent if you could switch instantly between the two sources. (Think about it for a second--how long would a species survive in the wild if it couldn't immediately notice changes in its sonic environment?)
Citations please, Pabelson. I don't follow this literature any longer but your mere saying we know is not convincing.
An explanation of why we pick up auditory differences closely spaced in time but not those spaced out over time:

The auditory system works like most of our perceptual systems, by detecting differences and similarities, rather than absolute values. What we detect, for the most part, are differences from a norm or differences within a scene itself (synchronically). The norm gets set contextually, by relevant background cues. This is more evolutionarily advantageous than detecting absolute qualities, because the range of difference we can represent is much smaller than the range of possible absolute value differences. By setting a base rate relevant to the situation and representing only sameness and difference from the base rate, one can represent differences across the whole spectrum of absolute values, without using the informational space to encode for each value separately.

For instance, we can detect light in incredibly small amounts -- only a few photons -- and also at the level of millions of photons striking the retina, but we can't come close to representing that kind of variation in absolute terms. We don't have enough hardware. What does our visual system do? Well, the retina fires at a base rate, which adjusts to the prevailing lighting condition. Below that is seen as darker, above that is seen as lighter. A great heuristic.
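The "base rate plus deviation" heuristic described above can be sketched in a few lines of code. This is my own toy illustration, not anything from the post: a baseline slowly tracks the prevailing signal level, and only the deviation from that baseline is "perceived," so a sudden change registers strongly and then fades as the baseline adapts.

```python
# Toy sketch (illustrative only) of encoding differences from an
# adaptive base rate rather than absolute values.
def encode_relative(samples, adapt=0.2):
    """Yield each sample's deviation from a slowly adapting baseline."""
    baseline = samples[0]
    out = []
    for s in samples:
        out.append(s - baseline)            # what gets "perceived"
        baseline += adapt * (s - baseline)  # baseline drifts toward input
    return out

# A jump from dim (10) to bright (1000) registers strongly at first,
# then fades toward zero as the baseline adapts -- like stepping into
# sunlight and watching the glare subside.
signal = [10] * 3 + [1000] * 8
print([round(x) for x in encode_relative(signal)])
```

Note how a tiny representational range (the deviations) can track inputs spanning orders of magnitude, which is the economy the visual-system example above is pointing at.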

As it gets completely dark, you don't see black, but what is called "brain grey", because there is no absolute variation from the background norm. You see almost the same color in full lighting when covering both eyes with ping pong balls, to diffuse the light into a uniform field. With no differences detected, the field goes to brain grey.

Ask yourself why the television screen looks grey when it's not on, but black when you're watching a wide-screen movie. Black is a contrast color and true black only exists in the presence of contrast. Same for brown and olive and rust.

Same for happiness, actually. The psych/econ literature on happiness shows that most traumatic or sought-after events are mere blips on the happiness meter, as we simply shift base rates in response, adjusting to the new conditions. Happiness is primarily a measure of immediate changes, bumps above base rate. So minor things, like good weather and people saying a friendly hello, are more tightly correlated with happiness than major conditions like having the job or the car you've been wanting.

Think about pitch. We can tell whether pitch is moving, but only a lucky few have any sense of absolute pitch--and even that is usually a skill developed with a lot of feedback and practice. Why? Because it is more useful and economical to encode relative changes than absolute values.

Far from needing to cleanse the auditory "taste" of one note from your mind before playing another, you need to play them immediately back to back for comparison. Perhaps you can switch the order around to eliminate after-effects.

By the way... wine-lovers *do* take blind taste tests. And experts can readily identify ingredients in wine, as well as many other objectively verifiable qualities. So it is perhaps not the best analogy for audiophiles who cannot do the same, and won't deign to try.