Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion that all amps sound the same, the result of such testing, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same-or-different comparison favored by those advocating blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of the shootouts ever resulted in a consensus. Two of the three db tests were same-or-different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps. Here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where the "same versus different" methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is that expected conclusion, rather than the supposedly scientific basis for db, what underlies their advocacy? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it in terms of the double blind test advocates wanting to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here, as some people can hear a difference, but if they are insufficient in number to achieve statistical significance, the proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random and seldom, if ever, of substantial size. Since the tests assume random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works toward the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
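To put rough numbers on that sample-size point, here is a toy calculation and nothing more (the 60% hit rate and the panel sizes are invented for illustration): an exact one-sided binomial test of the kind usually applied to same/different or ABX scores.

```python
from math import comb

def p_value(hits, trials, chance=0.5):
    """Probability of getting at least `hits` correct by guessing alone."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(hits, trials + 1))

# The same 60% hit rate that looks like "no difference" with a small panel
# becomes strong evidence once the number of trials is large.
print(p_value(15, 25))    # 15 of 25 correct   -> ~0.21, "accept the null"
print(p_value(150, 250))  # 150 of 250 correct -> ~0.001, reject the null
```

None of that arithmetic, of course, makes the panel a random sample, which is the other half of the objection.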

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But were we to use db testing with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
WARNING: LONG POST -- LIFE HISTORY AND ITS ILLUSTRATION OF BIASES -- YOU MAY WANT TO SKIP

I went through grad school with a $150 boombox. As a classical music lover, I obviously wasn't happy with it, but what was I going to do? Sell my '87 Buick and walk? Audio was not *that* important to me. So it wasn't until I got a job that I decided to invest a little something in a decent 'stereo'. Still, I was married and my wife was in law school, racking up debts. Not knowing anything about hifi, I decided to get a simple home theatre setup. I went to Best Buy, dropped three hundred on a Yamaha receiver, and another couple hundred on a 5.1 speaker set-up. ($100 off b/c I bought the two together.) Some cheap cables, and I was headed home to set up my new rig. Hooray! And man, this thing came with a sub!

You know what happened, of course. The system actually did pretty well with movies. I don't care all that much about HT being perfect. Eminem's '8 Mile' was the first movie I watched with the new setup, and it ROCKED! Highs were crystal clear. Very sharp. And the bass (actually mid-bass, b/c that sub doesn't go too low) was nice and full in my apartment. Gladiator was great too. Cool.

Then I popped in my cds. I wanted *so* badly to like what I heard. After all, my wife was already pissed that I had spent $500. "$500? And you don't like it? What's wrong with you? If you're going to be so picky, you should have gone to law school rather than taking forever to write a dissertation." (I still wasn't done at the time.) She didn't even know about the extra 100 I had spent on a Monster surge protector and cables.

But it sounded terrible. Mid-range just sucked. There's no other way to say it. And treble portions were highly highly annoying.

I stuck with the system for the next year or so. After a separation from my wife, I did what any lover of sound would do, and finally allowed myself into the local hifi shop. (It was only two blocks from my apartment.) I walked in with the idea of purchasing new monitors for the fronts and leaving the rest of the 5.1 system in place. Explaining my situation, the staff (quite helpful, really) suggested Paradigm Monitor bookshelves. They were a few hundred bucks and sounded great in the store. There it was -- lifelike voices, not the tinny, metallic sounds I heard at home. Ahhhh!!!

Before I left with the Monitors, one of the sales guys said he had a pair of Studio v.3's I should listen to before making a purchase. Well... I listened, and *wow*. Incredibly accurate sound. It was nothing you could hear with any combination of Best Buy equipment. I bought the speakers on a pretty hefty discount, with stands, and charged home to listen.

The improvement *was* dramatic, don't get me wrong, but still not anything like what I heard in the store. Hmmm... Could it be the other stuff in my system? Nah... CD players are all the same. And amps too. The Yamaha was rated *way* above the requirements for my new Paradigms. And so what if my source was an old dual VHS/DVD player? Bits is bits, right? So it must be my room.

I spent another several months trying to like the sound. Very quickly, I discovered that speaker positioning mattered, and room treatment too. I made a lot of adjustments, but my sound was never *smooth*, as it was in the store. Hmmm...

About this time, I started researching audio. I was relieved to find out people liked my Paradigms for a "budget" speaker. "Budget? Seriously?" I thought. But everyone seemed to think that source and amplification were also important. And there was this thing called a "preamp".

I went back to the audio store and tried some better receivers -- Pioneers with room eq -- but the sound still wasn't to my liking. Sure it was loud, dynamic, and even full. But it left me cold. I pointed to some shiny gear across the room. "What about that?"

"Oh, you don't want that. It's just two-channel. You want home theatre, right?"

"Well yeah, but first and foremost, I want something that sounds good."

So he played me a Musical Fidelity integrated and CD player (around $1,500 each), with my Paradigms. Unbelievable. I just sat and listened for about two hours, entranced, letting the music work its magic on me.

I couldn't afford the MF, but there was a demo Classe, which sounded very similar in the store, for significantly less cash. I brought that home and auditioned it. Definite improvement on the Yamaha, or so I thought.

I bought it and sold the sub + sats on eBay. Now I was *there*, right? No. I still got annoyed. But closer. Definitely closer.

Anyway, about this time, I discovered Audiogon. I also started talking to my brother-in-law, who had tried dozens and dozens of combinations of amps and speakers to get vocal music right. I realized I was only at the beginning. I was just getting started on the audio path. Damn. I thought I could just walk into Best Buy, walk out, and be done with it. I had no idea this would be a hobby, and a long-lasting, costly hobby at that.

Anyway, I still have the Classe. And now I wonder whether it actually sounds any better than my old Yamaha. Even if it doesn't, objectively speaking, I think it does, subjectively. Because it's a really pretty amp. It has this super-heavy milled steel remote, and the display, volume knob, and everything else just ooze quality. (Ok, the outputs don't. They seem cheap.) I can't help but look at my setup when I listen, and I much prefer looking at the Classe.

Maria Callas had a magnificent voice, but she was also hot, and I'm sure that added to the experience of opera-goers of the time. Speaking for myself, I prefer a grotesquely fat and ugly soprano who sounds good to a waifish beauty who sounds strained, BUT, other things being equal, a beautiful soprano actually *sounds* better in the typical soprano role. I once saw Angela Gheorghiu in the role of Micaela in Carmen at the Met. Gorgeous coloratura soprano, but also, she was beautiful, at least from the cheap seats where I sit. Took the breath out of my chest. I bet Gheorghiu wouldn't prove that much better than her fatter and uglier peers in blind comparison. But at the opera, you ain't blindfolded.

Maybe what happened when I looked across that showroom and spotted the shiny MF gear was just love. Just as hunger is the best sauce, love makes things sound better. A *lot* better.
Greg: Your generally thoughtful and balanced letter was, in my opinion, a little too balanced. Here's where you went astray:

The proponents of dbt...want to engage in very short tests conducted by the uninitiated. Most proponents of dbt use it to try and prove what they have already concluded, e.g., cables and amps all sound the same.

This reflects a basic misunderstanding. Objectivists don't want short tests, we want good tests. (All the research suggests that short tests are in fact better tests, but DBTs can be any duration you want). And a requirement of good DBTs is that you provide the subjects with adequate "training," meaning that they are familiar with the sound of the equipment they are comparing. The "uninitiated" make very poor test subjects.

Finally, no one argues that all cables and amps sound the same, and that's not the purpose of DBTs. The purpose of DBTs is to determine *which* components sound the same, and which do not.

How about a dbt between vinyl and digital? Or electrostatic and dynamic speakers? Tubes and solid state?

All of these have been done, at one time or another. Vinyl and digital are easily distinguishable--unless the digital is a direct copy of the vinyl. Speakers are always distinguishable in DBTs. Tube and solid state amps are often but not always distinguishable. When they are distinguishable, it's usually because the tube amp is underpowered and clipping (though very mellifluously, as tubes are wont to do!), or because the output impedance of the amp is interacting with cable and speaker to produce frequency response errors.
Pabelson, why do you willfully ignore the truth? It is only your CLAIM that DBT gives the reality when you say "DBT--because it usefully separates reality from illusion." Certainly you don't claim that DBTesting is isomorphic to reality. It is a well structured experiment that differs greatly from what we normally hear and how we hear it. I would say that DBT is an illusion of reality and that reality would be found in the amp that most people preferred, especially were personal ownership and manufacturer hidden.

I suspect that this discussion has gone as far as it can. You insist that double blind same/different testing is valid, and I say it is not, because it is an invalid assessment of people's hearing of differences and of what they like. I am not saying "I like what I like and I reject anything else" any more than you are saying "I know there are no differences among amps, etc., and therefore anything that shows otherwise is not science as represented by DBT."
It is a well structured experiment that differs greatly from what we normally hear and how we hear it.

This is just an astounding statement. How in the world can simply not telling someone what they are listening to affect what they *hear*? I'll grant you, it can certainly affect what they think about what they hear, but that's just the point. What they think is a function of things besides what they hear, and DBTs isolate the non-sonic effects.

Note that there's no necessary contradiction between these two statements:
1) Harry can't hear a difference between A and B.
2) Harry prefers A to B.
Both can be true. All it means is that Harry prefers A to B for some reason other than its sound (even if he thinks the sound is the reason).
Qualia8, I think the blind testing of women orchestra players has been a wonderful thing. So I'm cool with that and think it's a wonderful breakthrough for women musicians and for fairness.
But audio testing isn't single-player auditioning, is it? ... And probably another way of auditioning that would be great would be to have auditionist X play blind along with the rest of her section or the rest of the orchestra! ... I'm willing to say that blind auditioning is a good way of rooting out sexism in big-city orchestra hiring, but I'm not willing to conclude that this addresses the problems with audio auditioning.
The problem again is synergy--and time and acquired taste and listener quirkiness.
You write, "So, if two amps cannot be distinguished unless you're looking at the faceplates, why buy the more expensive one? Now who finds fault with that reasoning?"
To which I would say you would need to test the amplifiers in question with at least 3 different types of speakers, including speakers with different sensitivity and impedance levels, before you can go anywhere near calling it a valid test. Beyond that, I know that my listening preferences, food preferences, beer preferences, beauty preferences are not static and frozen. I have a close friend who is not drop-dead gorgeous on a first look, but keep looking at her and, over time, you are just drawn more and more to her face. It's a beauty that takes time to emerge, and when it hits you, you're deeply enthralled because you keep searching and studying her beauty. Yet, if you put her picture in a mag, maybe I wouldn't pick it out. This woman's beauty increases over time, and this is different from the "now that I'm with her/him, of course, he/she's good looking" effect.
So the point: should we blind audition for 1 month? 3 months? 6 months? ..... Time is critical here.
Ultimately, I reject the idea of auditioning a single component other than a source component. Speakers and amplifiers have to be auditioned as a team. And teams, we all know, often combine in ways that are more than the sum of their parts or less than the sum of their parts.
to Jeff Jones, hey, it's OK if someone wants to test something; it's just, let's be honest about the very real limits of the tests! ... This reminds me of modern presidential polling ... pollsters call 1,000 people around the USA, get hangups, etc., and eventually come up with a number of people voting for Bush vs. Kerry. And the polls have a margin of error of 3 to 5 percentage points .... So the poll one day says Bush 50, Kerry 50; this really could mean (figuring in the margin of error) Bush 55, Kerry 45 or Kerry 55, Bush 45. The poll is basically useless. The way out of this is that pollsters take samples every day as the campaign advances, and there are so many different polling agencies ... so it's only under conditions of repeated polltaking by multiple and antagonistic entities that we get any confidence that yes, the 2004 Presidential race is neck and neck. When we see polls over 10 straight days, taken by 10 different entities, clustering around Bush 50, Kerry 50, and clustering over a period of weeks, ONLY then can we have some confidence. But even so, voter turnout is always the X factor, and most pollsters admit that's the one they can never nail down. A higher than usual turnout among Democrats or Republicans or evangelicals or blacks or whomever will make the poll results pretty much invalid.
I just don't think the tests can get us the information we're looking for--without doing something like the equivalent of daily tracking by multiple entities ... And by the way, in polling, all these entities have an incentive to get the numbers right because they will make more money and win more acclaim. The polls by the candidates themselves have this incentive in a big way. But where's the equivalent incentive for audio?
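For what it's worth, the margin-of-error figure the pollsters quote comes from a one-line formula; here's a quick sketch, purely illustrative and assuming a simple random sample (which real polls, with all those hangups, only approximate):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error (as a proportion) for a simple random sample."""
    return z * sqrt(p * (1 - p) / n)

print(round(100 * margin_of_error(1000), 1))  # ~3.1 points for a 1,000-person poll
print(round(100 * margin_of_error(25), 1))    # ~19.6 points for a 25-listener panel
```

The 25-listener line is only there to echo the panel sizes mentioned earlier in this thread.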
I frankly celebrate the wonderful elusive complicated complexity and quirkiness of listening to audio.
I'm quite willing to enjoy that.
Sooner or later someone is gonna start advocating db testing for cars...yikes!
Put it another way, double blind testing cannot work IMO in audio precisely because there is no "absolute" definition of "what sounds better" to begin with (with apologies to HP). How do we define it in the first place? Measurements are fine, but only in terms of allowing us to figure out what aspects of sound (jitter, frequency range, channel separation, etc.) contribute to what we are hearing...but in the end it's a VALUE judgement. Some like tubes, some like SS. Some need gigantic floorstanders for frequency extension, some prefer mini-monitor accuracy & imaging. Some prefer transparency, air, etc.; others image solidity, warmth, etc. The evaluators who comment after a DBT are including their own subjective value judgements as well. You cannot answer a question that is ill defined in the first place. In this light someone else, perhaps half joking, said what about cars, wine, literature, whatever....and in a way he is right on the money: in all of these pursuits value judgements occur.

If we are to just focus on aspects of sound and make no value judgement...i.e., just want to know, say, the level of jitter of one against the other, etc....then we can just measure for the most part; no need for DBT again.

The one aspect where I think DBT can perhaps be used is not in comparing one brand vs. another, but within a brand and the same model: for example, the manufacturer keeps the exact same system set up and then tests a new model: one has more jitter than the other...then one has upsampling switched on, the other doesn't, etc....This way one may try to investigate what matters most, a priority schema if you will, to audiophiles or the public at large, and then devise products, or even an array of options on the same product, to maximize revenue or provide tailor-made solutions for various segments. Obviously this is more of a marketing strategy than an answer to the "holy grail".
With regard to the two amps that were indistinguishable in a DBT...even if I accept the results as is, and don't impose my own sense that "I could have heard the difference unlike those participating," and even if I accept the results as valid, that yes, one could not tell the difference....it still doesn't tell me much: all it has shown me is that these two amps are indistinguishable in a particular system setup (especially speakers!), in a particular room, etc...so adding that piece of information to a review, for example, doesn't help me unless I have the exact same setup sans the amps. Not arguing reviewers are perfect either...far from it...they are inherently subjective and bound by their own experiences and equip as we all are...but at least he or she can frame their observations in context which allows me as a potential buyer what to look for, what to investigate, what things I may need to consider etc.
the illusion is the false reality. Kinda by definition
That is correct, semantically. It's also kinda philosophical.
IMO we should distinguish between semantics and philosophical extrapolations and the simple PRACTICAL application of DBT in our (restricted) context.

BTW, I also suggest that certain things CAN be indicative of performance or INFLUENCE things, in OUR context, such as:
* measurements -- as long as we measure what correlates to what we're looking for (i.e. we would have to determine in advance which measurement indicates what aspect, in terms of perceived sound; little has been done there)
* wires for example -- because they link two electrical circuits, active / passive & combinations thereof
* active components: their circuit design, power supplies, input & output stages, components used... influence the distortion levels AND how well these components interact with the load. Change the load (what the output stage "sees") and things change electrically; if we change something in the system, we've modified the system "circuit" fer pete's sake. Things may also change in the audible range...

...etc.

So, maybe we are discussing whether it's worth setting up dbt to help notice differences in the audible spectrum?
Or whether perhaps dbt is not the most efficient/reliable method of doing so in this particular context?
Or, perhaps, discussion is a way of communicating -- a marvellous, human activity that we all need. And the subject of dbt allows us to do just that -- so what we really want to do is to talk regardless, and dbt offers us just that opportunity, whether or not it is a panacea.
I go for the latter -- my take of course! Cheers:)
Pabelson, thanks for yr kind words, but the quotes you make refer to another poster -- or are they there to illustrate yr previous points?

Qualia sez:
other things being equal, a beautiful soprano actually *sounds* better in the typical soprano role
Good point! Matter of fact, I read s/where that a nice-looking piece of equip was invariably "heard" to sound "better" than itself unsighted. Amazing!
Wattsboss:

I *do* like your analogy of the beauty that grows on you. I've had that experience, as well as its opposite -- the superficial beauty that fades quickly (or immediately upon conquest). True. Typically, it's because facial expressions take on a representational character; they come to stand for the moods and traits of the person. And in the case of a good-to-the-core person, that goodness starts to shine through. In the hot-bitchy type I usually go for, the nastiness gets associated with what I previously thought was cute.

Anyway, it may be that the beauty of an audio system takes time to appreciate fully. But distinguishing between looks doesn't take time, even if the full evaluation of those looks does. Maybe the analogy here is identical twins whom no one can tell apart initially, but whose family and close friends can... immediately.

After all, I'm not sure I could distinguish the sounds of two violins immediately, in the hands of a skilled violinist. Each violin makes a wide range of sounds, and I'm not sure what's due to the violin and what's due to the violinist. Yet one violin might be $1K and the other $10K, because violinists themselves can immediately hear the difference. Maybe it's like this with audio. But I have no reason to think so, given the studies I've read, in which audiophiles who are familiar with the equipment do no better than non-audiophiles (who are also familiar with the equipment).

Also: there is a long-term in-home disguised cable experiment going on right now. It has a few more months. We'll see how that goes.
Not arguing reviewers are perfect either...far from it...they are inherently subjective and bound by their own experiences and equip as we all are...but at least he or she can frame their observations in context which allows me as a potential buyer what to look for, what to investigate, what things I may need to consider etc.

As part of that context, wouldn't you like to know whether this reviewer can actually hear a difference between the product he is reviewing and some reference? And if he can't, what does that tell you about his review?
Pabelson, no, I have no interest in whether a reviewer can hear a difference between components in a DBT using the usual same/different format. Since I don't think it is valid, I would rather continue my present procedure of finding whose reviews prove on target in my estimation. Frankly, I don't think there are enough DBT proponents out there to make a magazine using them viable.
Yes, beauty can grow on you. But notice that it's not the lady who's changing. It's you. What does that tell us about long-term comparisons?

TBG: Yes, there are far too few objectivists to make a market. That's why the largest-selling magazine in the US that reviews audio equipment is that subjectivist redoubt . . . Sound & Vision.
Pabelson,

Thanks for your posts. Although I have never done blind tests, my experience with audio equipment matches the DBT reports perfectly. I simply can't detect audible differences between different cables, CD players and solid state amps provided, of course, they are of a minimum quality.

I do, however, detect large audible differences between different speakers, their placement and the room in which they are played. I also notice large differences between low powered SS amps and high powered SS amps, but only when driven at high, demanding levels.

On my Anthem AVM 20 preamp/processor I am able to play music either direct analog or through the A-to-D and then D-to-A circuit....a kind of test....and again I hear no difference as long as all the tone controls and settings are the same. So I typically keep the A-to-D and D-to-A in the circuit because it allows me to use bass management for my subwoofer...something that does need adjustment (for the room) and makes a tremendous difference when adjusted to my taste (digital filters providing great flexibility). Furthermore, I can use the digital out or the analog out from my various CD players and again I hear no audible difference provided tone control/volume is the same...although I will admit that the digital output can have less background hiss (better S/N) at extremely high levels, but this is not always audible either (setup dependent).

What does this mean for me when buying equipment?

1) Choose CD players and SS amps for their features (speaker protection, tone controls, sound processing capability, warranty, etc.). For example, I like CD changers as I don't have to mess with CD cases. As for amps, I prefer overly powerful SS amps with significant headroom in order to minimize distortion from clipping.
2) Spend most of your money on the speakers, as this is the single biggest variable and adds the most distortion in the whole setup.
3) Buy good quality (shielded) but not exorbitantly priced cables and interconnects.
4) Pay attention to which room is used for playback, what the wall/floor coverings are, and speaker placement.
With apologies to Shakespeare and all logicians:
"To DBT or not to DBT is or is not the question."

Hi Pabelson,

Your quote points us to the central point of this discussion:
"Yes, beauty can grow on you. But notice that it's not the lady who's changing. It's you. What does that tell us about long-term comparisons?"

It tells us what neuroscience has discovered. The brain is much more plastic than once believed. It is not static like electronic circuits. The brain circuitry and its chemistry change. New interneuronal connections are formed and concentrations of neurotransmitters and other brain chemicals change. So, what the brain could not distinguish one day, it may LEARN to distinguish in subsequent exposures to the experience. We have experienced this learning phenomenon as students, as professors, and as audiophiles. This is part of our growth and evolution. A double-blind test based on short-term listening sessions may not allow enough time for the brain circuitry and chemistry to reconfigure itself to discern the difference. Therefore, if a short-term double-blind test does not show a difference between two amps, it would not be correct to conclude that there was no difference between the amps, only that that particular test did not reveal a statistically significant difference. A double-blind test showing a positive difference may be useful for audiophiles, while the test showing no difference is an inconclusive statement about the amps.

Incorrect interpretations can also be made for long-term double-blind tests. The history of science shows us that even the hard sciences like physics are not immune from making incorrect interpretations. A commitment to truth and critical thinking helps purify science to better the human condition. Otherwise, our implicit assumptions may yield tautological statements similar to the very first statement in this post. Although it is logically valid, it does not contain useful information for audiophiles.

Best Regards,
John
Let's not be so selective about what neuroscience has discovered, John. It, along with psychoacoustics, has indeed discovered that it can take time to learn the sound of something, and the difference in sound between things. But they've also discovered that, once you've learned those differences, the best way to confirm that those differences are really there is through short-term listening tests that allow you to switch quickly between the two components. So why is it that a reviewer, who supposedly spends weeks "getting to know" a component, and who also owns a reference component which he also knows well, can't hear a difference between the two in such a test?

My point about how we change was aimed at the reviewer who reports differences between the component under review and something else he may have heard months before, but doesn't have now. He's claiming to do something that your neuroscientist/psychoacoustician has found to be impossible.
my new policy on Audiogon is to post my opinion and let it stand on its own merits. I no longer feel the need to respond to every competing opinion. I'll let the readers draw their own conclusions. I do, however, reserve the right to respond to criticism directed at me.

My approval of DBT is in no way an endorsement of the ABX test.

Just because you believe in DBT or ABX testing does not make you objective. DBT proponents have yet to show me where they have used it to advance the state of the art. They are as biased as anyone else. In fact, the inventor of the ABX gave this as his reason for inventing the ABX box: he was upset that audio companies could be destroyed by audio reviewers who did not know what they were talking about. Thus DBT/ABX was invented to attack the integrity of audio reviewers, not as an objective scientific tool.

The initial tests were short term, on inexperienced listeners. That is a fact. To further demonstrate their lack of objectivity, it was the proponents of DBT/ABX who, when confronted with the fact that reviewers like Michael Fremer were in fact able to match A and B to X, attacked the validity of their own test. In effect they concluded that because they knew there was no difference between amps, he must be using some trick formulated by his knowledge of the amplifiers under test.
"No one argues that amps and cables sound the same"? Nothing is further from the truth. That is exactly what they argue, calling it snake oil and making vile insults to those who design, sell, buy and review it.

Feel free to remain wedded to frequency response, distortion figures and output impedance if you like. You don't need a blind test for that because it is so easily measured. You may cleanse your palate with the occasional blind test. Ultimately you are going to have to listen. This is what all the manufacturers of good equipment do.
Perhaps because the neuroscientists/psychoacousticians don't intend their testing to deal with what most accurately replicates music, as the experimental context necessitates tight and brief controls.
Hi Pabelson,

"But they've also discovered that, once you've learned those differences, the best way to confirm that those differences are really there is through short-term listening tests that allow you to switch quickly between the two components."

Any neuroscientist who would claim he/she discovered "the best way" to confirm differences would not be very credible with me on at least two counts. First, it is the "best" amongst which collection of methods? Have ALL POSSIBLE methods been tested? Perhaps some heretofore untested method could be even better. So, the scientist overstated the result. Although such hyping occurs, it is hardly scientific. It would also lead me to question if the scientist's methods also lacked precision and other high scientific standards.

Second, to determine that this method is the "best", it must be different from the rest. But how can the neuroscientist determine this difference? By DBT, the "best" method that determines differences??? But then the neuroscientist will be using the very method he/she is attempting to validate. In other words, the neuroscientist would hang himself/herself in a logical loop of circular reasoning.

Your statement appears to be based, at least in part, on faith in neuroscience and psycho-acoustics. These are important sciences but they are not hard sciences like physics and chemistry. Compared to physics, they are sciences in infancy. Their levels of rigor, accuracy, predictability, and reliability are not yet in the same league as those for physics and chemistry. So, my level of confidence in them is not as great as what yours appears to be in your posts. It's the complexity.

The complex substratum involved in auditory perception is not yet sufficiently understood to shed light on the finer aspects. A large number of neurons form millions of possible pathways that a particular "encoded song" can travel in our brains to yield the perception of its sound and our reaction to it. The same song or piece of music produced by the same audio system a few moments later may not travel the exact same pathways in our brain and hence may produce a different experience. This variability is compounded by the non-constant chemical environment that influences our experience. (For example, the amount of endorphins available at any one time.) Emotional changes, expectations, suggestions, levels of alertness, fleeting nature of memory, etc. add to the variability. Also, the brain circuitry is not as rigidly set as it once was thought to be. It can change with experience and learning. At the current state of neuroscience, there is insufficient organization, understanding and integration of this variable milieu to shed light on the finer issues about DBT. That may be reason enough for some opponents of DBT to claim that "to DBT or not to DBT" is an irrelevant question. I, for one, am in favor of rigorous DBT and would find the positive results useful but the negative results inconclusive for reasons given in my previous post.

Best Regards,
John
Suffice it to say that Gregadd's history of DBTs is almost entirely false. They were in use long before the invention of the ABX Comparator, which was just a convenient tool to do what we already knew how to do. If you go back and look at what "reviewers like Fremer" were able to distinguish, you would find no mystery at all.

TBG owes us an explanation of how two things that sound identical can replicate music differently.
As they say in the neighborhood where I grew up..."it's on now."
Pay attention, Pabelson. I did not give a history of DBT, which I concede can be a useful tool in revealing prejudices but ignores the real issue. I gave a history of the ABX Comparator. In fact, DBT and ABX arrived in the audio press simultaneously.
You are quite correct; Mr. Fremer's results are in fact not a mystery. He proved he could hear the difference.
If you want to know my principal source for the history of ABX, it was primarily The Audio Critic, written and published by one of the most serious advocates of DBT/ABX, Peter Aczel.
Both Mr. Fremer and I suffered personal attacks in the letters column. It appears Mr. Fremer was not qualified to criticize DBT/ABX because of his sloppy writing style. I, of course, was unworthy because I was a DC trial lawyer.
Like you, when confronted with Mr. Fremer's test results, Mr. Aczel refused to acknowledge that Fremer could match A and B to X in a test designed by others, where frequency response, distortion, output levels, etc. were all accounted for. He tried to wiggle his way out but was unable.
I of course dared to challenge Mr. Aczel to put his money where his mouth was. He was using a top of the line Boulder amp. I offered to trade him his Boulder for the generic Radio Shack amp of his own choice, with an equalizer so he could compensate for any frequency response deviations. In fact I told him he could keep his amp and I would give him the Radio Shack amp of his choice if he promised to use it as a primary reference.
His response was that I knew he could not accept the offer because he needed a top quality amp for his tests. He then accused me of a cheap trick. He said this trick might work on a DC jury but not on him. This transpired in the late '70s or early '80s. Harry Pearson published the first letter in The Absolute Sound.
Pabelson, while I may make mistakes, I don't indulge in falsehoods.
Pabelson, you owe us an explanation of how two amps that replicate music differently can sound identical in the restrictions of DBTesting.
Gregadd: If you were talking about the response of a single objectivist, you should have named him right up front. Instead, you tagged all objectivists as dishonest, based on your interactions with one man. That's an understandable error, but it's an error.

There have been dozens of published DBTs of amps. Some have been positive, some have not. Fremer claims to have done a positive test. So what? He ain't the first, and won't be the last.

I'll pose to you the question I've posed to others: Shouldn't a reviewer, before he reviews an amp, confirm that he really can hear a difference between this amp and his reference amp when he doesn't know which is playing? Ever wonder why none of them do this?
TBG: Two amps that reproduce music differently enough to be heard will NOT sound identical in a DBT. But how do we know that two amps reproduce music differently? You say they do, but how do we know you are right?

Let me pose the question a bit differently. Here we have two amps that are not distinguishable in a level-matched, quick-switching ABX test, generally regarded in scientific circles as the gold standard for determining audible differences. A subjectivist claims that these two amps reproduce music differently. How would he prove that they do? Whatever he does, he has to use a blind test, because a sighted test can prove nothing about audibility. That's settled science. So what kind of test should he use?
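If it helps to see the mechanics, here is the bare skeleton of such a test as a toy simulation; it is only a sketch of mine, the level matching and instant switching that a real comparator handles are assumed away, and the p_hear numbers are invented:

```python
import random

def simulate_abx(n_trials, p_hear=0.0):
    """One simulated ABX run: on each trial X is secretly A or B; a listener who
    truly hears a difference names X correctly with probability p_hear and
    guesses otherwise (p_hear=0.0 models two units that sound identical)."""
    hits = 0
    for _ in range(n_trials):
        x = random.choice(['A', 'B'])     # hidden, randomized assignment
        answer = x if random.random() < p_hear else random.choice(['A', 'B'])
        hits += (answer == x)
    return hits  # score against chance with an exact binomial test

# With p_hear = 0.5, 16 trials average 12 of 16 correct (p ~ 0.04 against chance);
# with p_hear = 0.0 they hover around 8 of 16, which nobody can call a difference.
```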
The standoff between Pabelson and Tbg reminds me of the stalemate between the external-world skeptic and the dogmatist.

Skeptic: You don't know that you're not a brain in a vat of nutrients, being stimulated by a computer simulation, carefully monitored by a team of scientists, to think you're in a real, concrete world... the world you *think* you're in. Since you don't know you're not a brain-in-a-vat, you don't know anything mundane about the external world, e.g., that you have two hands.

Dogmatist: I know I have two hands! If I know I have two hands, then I know I am not a handless brain-in-a-vat. Therefore, I know I am not a handless brain-in-a-vat.

One man's modus ponens is another man's modus tollens, as the saying goes.

(For non-logicians, modus ponens is: If P then Q. P. Therefore Q. Modus tollens is If P then Q. Not-Q. Therefore, not-P.)

Pabelson: DBT shows no audible difference between cables, therefore there is no audible difference.

Tbg: There is an audible difference between cables, therefore, DBT is flawed.

Logic alone (formal logic) cannot settle the dispute, any more than logic can settle the skeptic/dogmatist dispute.

But in this case, it's odd to think of Tbg's favored cables being a/b'ed with cheapos, without his being able to tell the difference, and then, only when told the true identity of the cables, his insistence that there *is* a perceivable difference. Very odd.

Here's a question for the doubters of DBT-ing. Given that there are perceptual biases at work (expectation, confirmation, endowment effect, etc.) how would one test for such biases? That is, what *would* count as two components sounding the same?

Suppose you have two amps that are identical except one of them has a beetle put inside and the beetle runs around, I don't know, defecating in there. And then reviewers praise the beetle effect: "Widened the soundstage by meters! You don't need golden ears to hear this one!" How would you go about evaluating the beetle effect?
As I have said at least five times, your statement that the ABX test is "generally regarded in scientific circles as the gold standard for determining audible differences" is not true. But neither of us will ever convince the other, so why don't we just drop it. I can accept your statement that those advocating it would not be numerous enough to justify a magazine, so it is a moot point.
Interesting discussion but I think the main point or lesson of DBT tests is being missed through the discussion of semantics and detail.

DBT tests are significant for audiophiles because they show that, for the handful of those people and equipment tested, differences cannot be audibly detected between some types of equipment, at least not easily, even when people are trying. DBT's show that it is typically hard to discern differences with good quality SS amps, CD players and a variety of quality cables.

This raises questions about reviewers abilities to hear strong differences between certain types of equipment, but it does not conclusively prove that they cannot hear these differences (unless they were DBT tested with the very equipment under review).

It also suggests that some types of equipment are either less critical (cables) or of universally high engineering quality (many amps and CD players); in these cases, different equipment choices are unlikely to make a significant difference to the sound that is heard. DBT tests also tell audiophiles that some things are a big factor...speakers, speaker placement and tube vs SS amps, for example. In essence, DBT's confirm what is probably a gut feel for any serious listener who has played around with a variety of equipment over the years.

However, it remains possible that some people out there might be able to hear a difference....for differences there surely are...however minute and undetectable to those tested so far. So those with "golden ears" can keep on searching for nirvana, everywhere and anywhere, including but not limited to fancy power line conditioners and other tweaks.

If DBT test result reports do not conclusively prove that a cheap amp is just as good as a very expensive one to your own ears...then exactly what use are they????

I suggest they simply offer some guidance as to where an audiophile might spend a higher proportion of their effort/money in improving the sound of their system. They also suggest that the "emperor may have no clothes" in some cases....so, for certain types of equipment, be a little wary of exorbitant prices and rave claims by audio salesmen/reviewers!
Pabelson: we've said the same thing... what *would* count as a test of audible difference if not dbt? How would we ever know a beetle in the box wouldn't make an audible difference? Or what if we were simply to change the faceplate on an amp. Nothing more. Would that change the audible sound? How would you know? What if the reviewers rave?

Shadorne: I said much the same earlier in this thread. Why anyone, whether or not they think DBT is the *final* word, would ignore DBT as a way of determining where to spend their own money (speakers, room treatment first, then other stuff) is beyond me. In other words, I really don't understand someone who would spend more on power cords, conditioners, and interconnects than speakers, given the DBT results. And there are plenty such people!
Shadorne:

Another way of making your point is this. Even if ABX tests do not reveal all audible differences, somehow they do reveal *degrees* of difference. Components that ABX as different, and clearly so, are different to a *greater* degree than components that are indistinguishable under ABX conditions. Therefore, they are more deserving of audiophile evaluation. Likewise, ABX-distinguishable gear that is perceived as clearly better in DBT-ing is more deserving of audiophile cash than gear that is not perceived as clearly better in DBT-ing.

This is independent of whether or not there is some perceivable difference between components that are ABX indistinguishable. (Although I still can't understand how that could be.)

Yet, ABX opponents seem to ignore this more modest lesson. They reject ABX as a way of ultimately distinguishing components, and therefore decide it is unworthy as a reviewer tool at all, even in deciding where to drop their cash. Why?
Shadorne, I understand your moderate position. Please take no offense when I simply say that I strongly suspect that DBT is an invalid test of what people hear. I am not really concerned, either, that some, myself included, cannot hear differences in the typical same-versus-different format so commonly used in DBT. People do hear differences when the double blind testing is just a "which do you prefer" among amps A, B, and C. I don't really have much trust in many reviewers and don't need their inability to hear differences in same/different tests to be convinced.

I absolutely concur that we need to be wary of exorbitantly high priced equipment and rave reviews and claims by salesmen and reviewers. But we should equally be ready to hear true quality in some more expensive equipment. Quality parts cost money, and research and design work has to be paid for. Often enough I have heard expensive gear that truly is excellent in my opinion and with which I remain thrilled. My Reimyo PAT777 amp and Shindo Labs turntable are but two examples. I also have a relatively inexpensive line stage, phono stage, and universal player that are at least the equals of much more expensive equipment, again in my opinion.

I once heard a $350,000 amp at CES. I listened with no intention of ever buying one. It was the best sounding amp I ever heard. The Stereophile reviewer also loved it, but its measurements looked bad and so they dismissed it. The objectivists ranted that Stereophile should not have even reviewed it. I ranted that they should have heeded their ears rather than their inadequate instruments. I still would not consider buying it, largely because I just cannot afford it.
Let me try this one last time.
Just because you believe in DBT/ABX doesn't make you objective. Every so-called objectivist uses DBT/ABX to prove what they already believe. When the test fails to prove what they already believe, i.e., that all amps sound the same except for some easily measured and compensated-for parameters, they deem the results statistically insignificant. To me that is bias, which is what we are talking about.
DBT has only two purposes: to eliminate personal bias and the placebo effect. When you have done that, you are just getting started.
The real question is whether the component under consideration simulates real music. As crazy as it may make you, only the human ear can evaluate that.

Note that I never conceded that A/B testing of any kind was significant. Buying A because it is better than B is a Madison Avenue trick! I was lucky enough to be taught that. It has saved me a lot of money.
Yes, Qualia, we are asking the same question. It's the same question that subjectivists have been asked for years, and they don't have an answer, so they have to stoop to insulting people's intellectual integrity, as Gregadd has just done yet again. Why do they bother?
Qualia, you say, "Why anyone, whether or not they think DBT is the *final* word, would ignore DBT as a way of determining where to spend their own money (speakers, room treatment first, then other stuff) is beyond me. " It is totally beyond me why anyone would have such distrust of what they hear to rely on DBT. If you wish say that I just choose to dump cash even when there are no differences. Basically, I find DBT invalid and have to otherwise proceed hoping that I can hear a side by side comparison of what I am interested in. On occasion I have been able to bring the desired components into my own home and do a comparison, some times I can rely on the ears of others I trust, one being a reviewer, one a distributor, and one two manufacturers, but most just audiophiles; and sometimes I just take a flyer, such as with the RealityCheck cdr burner. As I have repeatedly said, this is not a matter of rejecting science, it is a matter of rejecting a methodology as it obviously lacks face or conceptual validity. Also as with automobiles and wine, I do not base my buying decisions on double blind tests.
It is totally beyond me why anyone would have such distrust of what they hear as to rely on DBT.

Apparently so. The explanation is simple: If you understand what scientists have learned about human hearing perception over the course of decades, then you will understand why we shouldn't always trust what we hear, and why in these cases listening blind is far more reliable than listening when you know what you're listening to. I suspect that you don't want to understand this, because it will upset the beliefs you've acquired over the years.

Now, there's nothing wrong with not knowing (or not accepting) this. After all, you don't have to understand the principles behind an internal combustion engine to buy a car. And if you can afford a multi-thousand-dollar audio system, it doesn't really matter. You'll probably get good sound regardless.

But if you can't afford that kind of an audio system, it can matter a lot.
I don't understand all the talk about "flawed methodology". If the methodology is flawed, make a suggestion as to how to improve it. In other words, for those who dispute the validity of DBT, please suggest a test that you would find convincing and yet would still control for the same factors (primarily listener bias) that DBT is designed to control for. Would you be convinced if a reviewer did a one month test of disputed component A in his or her own home, followed by one month with disputed component B? What about comparing a one month test of equipment with the reported price ranges and labeling reversed? What would it take to convince you?

Don't do what my friend did. He agreed to participate in a double-blind test that we discussed with him in advance. Only when the test didn't show what he expected to find did he question the methodology. So agree on the methodology first, then live with the results.

I mean this seriously. A DBT won't change anyone's mind if the testers are not convinced in advance that the test will measure something. So please help to design an objective test that you AGREE IN ADVANCE will work.

If you believe that there is no such test, then you should question your own assumptions about the validity of the scientific method in general.
Okay, objectivists, one more try. I have participated in same/different DBTs and found that I could not hear differences. I have also participated in double blind tests that merely selected which preamp sounded best. In this case differences were obvious and most agreed on which preamp we preferred. I valued neither kind of testing, but the latter was more fun.

I am engaged in a social science and teach research methods at the graduate and undergraduate levels, so I am not anti-science. But there is good science and bad. More importantly, there is the question of whether the concepts in the hypothesis are tested by the variables in the data. I am merely stating that I am unconvinced that questions such as whether there are differences among amps in their sound are validly assessed by the short-term same/different methodology commonly associated with DBTs.

A methodology that fails to find differences among amps, wire, etc. that are heard by so many, even in double blind circumstances, is not convincing. It may soothe those who cannot afford more expensive equipment, who can dismiss those who buy more expensive equipment as just impressed with faceplates or bells and whistles or sold by hype, but it does not prove their delusional behavior.

I don't mind people keying their behavior on the most common "no difference" findings of DBTs, but the objectivists' feelings of superiority based on bad science are unjustified and likely to convince very few.

I really have failed in my first posting to suggest why DBTesting has failed to catch hold and why so many of us couldn't care less that it has. No amount of casting aspersions on subjectivists for being unscientific will convince us, and obviously no amount of patience in presenting my perspective will convince you. So why don't we just drop the issue and get back to enjoying life?
I used to think wires made no difference. Shoot, if I go back far enough in time, I didn't believe there were differences in amps. Consumer Reports' DBT articles agreed with me. I absolutely heard a difference in front ends and speakers, so that's where all my money went.

A while back, an audio buddy brought over his new Shunyata power cords. We did a DBT, as best we could, and heard no difference between the expensive cord and the cheap stock one. I had a so-so front end and a solid state amp running ribbon speakers.

The shocker came when we all went over to the place of my buddy with the cords and easily heard the Shunyata PCs' superiority. He has clean sounding TacT gear.

When I switched out my SS amp for a "digital" one, we ran another DBT. The PCs made a huge difference now. The same went for all wires. Everything left its imprint on the end sound.

We noticed similar results in OTL systems.

My conclusion is, with "golden" systems I've heard, engineered to squeeze out the last distortion free musical morsel, one can discern small differences in all component rolling.

Maybe there is so much noise in lesser systems, despite what THD measurements say, that small wire and amp differences are smeared over and can't be heard.

In my case, the more I peeled off signal junk, the more I learned what devices produce said junk.
For the record, I am not opposed to rigorous DB tests; they can provide useful information. However, I do NOT have a high level of confidence in definitive interpretations of a negative result of a short-term DBT involving 2 components that may have subtle differences. As noted in my previous posts, the underlying complexity has not been unravelled yet.

I'll try one last time to hint at the complexity involved. In wine tasting, if you taste two samples one after the other, you should rinse your mouth with water to minimize the influence of the "after taste" of the first sample on the second one. If you look at a bright yellow object and then close your eyes, you will see an "after image" of a complementary color. As long as that "after image" persists, it is a "noise" that may influence some subtle subsequent visual experiences. Our brain circuitry and chemistry are not like electronic circuitry. They do not start and stop with the stimulus, and they have their own variable "noise floor". The "after effect" that persists may mix with the subsequent stimuli. This added "noise" may smear the more subtle characteristics. A SHORT-TERM DBT may not allow enough time for the "after effect" of the previous sample to subside. That "noise" in the neuro-biological environment may smear SUBTLE differences.

Those of you with a high level of confidence or faith in the negative results of short-term DBTs have yet to address this and other complexity issues. Hopefully, these issues will be sufficiently addressed as neuroscience and psychoacoustics develop. The reason a tremendous amount of research is still going on is that there is a lot that is not yet known. At least not enough is known for me to be very confident.

In the meantime, a rigorous DBT, among other things, should: 1) provide sufficient time between samples; 2) reduce the room effects that may smear differences; 3) make sure the participants pass a comprehensive hearing test, demonstrating that they can hear frequencies across the audible range and can perceive dynamic gradations; 4) make sure the test material includes a full spectrum of frequencies and a large variety of harmonic textures and dynamic shadings; 5) match sound levels, preferably without adding any other components to the signal path that might smear differences; etc. After all, a meta-statistical analysis of a lot of flawed DBTs is not good science.
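
To give a sense of how little a short session can establish either way, here is a minimal sketch in Python (my own toy arithmetic, not part of any published protocol) of the exact binomial calculation behind a same/different or ABX score. With only a handful of trials, even a decent hit rate is statistically inconclusive:

from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Chance of scoring at least `correct` out of `trials` by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A short session with few trials has very little statistical power:
print(abx_p_value(7, 10))    # ~0.17: 7 of 10 correct is quite compatible with pure guessing
print(abx_p_value(70, 100))  # far below 0.001: the same hit rate, but with many more trials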
Puremusic: Psychoacoustics and neuroscience are already way ahead of you. In fact, the kinds of things that get argued about in audio circles aren't even being researched anymore, because those questions were settled long ago.

Just to take one example, you insist on "sufficient time between samples." The opposite is true in the case of hearing. Our ability to pick out subtle differences in sound deteriorates rapidly with time--even a couple of seconds of delay can make it impossible to identify a difference that would be readily apparent if you could switch instantly between the two sources. (Think about it for a second--how long would a species survive in the wild if it couldn't immediately notice changes in its sonic environment?)
Puremusic, that's a good start on coming up with a test you would find satisfying. What would the "other things" you mention be? Would any of the other "subjectivists" in the crowd care to propose changes to the acceptable methodology? What would you find convincing?
An explanation of why we pick up auditory differences closely spaced in time but not those spaced out over time:

The auditory system works like most of our perceptual systems, by detecting differences and similarities, rather than absolute values. What we detect, for the most part, are differences from a norm or differences within a scene itself (synchronically). The norm gets set contextually, by relevant background cues. This is more evolutionarily advantageous than detecting absolute qualities, because the range of difference we can represent is much smaller than the range of possible absolute value differences. By setting a base rate relevant to the situation and representing only sameness and difference from the base rate, one can represent differences across the whole spectrum of absolute values, without using the informational space to encode for each value separately.

For instance, we can detect light in incredibly small amounts -- only a few photons -- and also at the level of millions of photons striking the retina, but we can't come close to representing that kind of variation in absolute terms. We don't have enough hardware. What does our visual system do? Well, the retina fires at a base rate, which adjusts to the prevailing lighting condition. Below that is seen as darker, above that is seen as lighter. A great heuristic.
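
If a software analogy helps, here is a minimal sketch in Python (my own toy illustration, not a model from the vision literature) of that base-rate strategy: keep a slowly adapting baseline and transmit only a small, range-limited difference from it.

def encode(levels, adapt=0.2, max_delta=10.0):
    baseline, deltas = levels[0], []
    for level in levels:
        diff = level - baseline
        deltas.append(max(-max_delta, min(max_delta, diff)))  # small, range-limited "difference" signal
        baseline += adapt * diff                              # baseline drifts toward the prevailing level
    return deltas

print(encode([5, 6, 4, 7]))                                  # a dim room: a few units of light
print(encode([5_000_000, 5_000_006, 4_999_995, 5_000_003]))  # bright daylight: millions of units

The same narrow difference channel serves both scenes, because the baseline carries the absolute level.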

As it gets completely dark, you don't see black, but what is called "brain grey", because there is no absolute variation from the background norm. You see almost the same color in full lighting when covering both eyes with ping pong balls, to diffuse the light into a uniform field. With no differences detected, the field goes to brain grey.

Ask yourself why the television screen looks grey when it's not on, but black when you're watching a wide-screen movie. Black is a contrast color and true black only exists in the presence of contrast. Same for brown and olive and rust.

Same for happiness, actually. The psych/econ literature on happiness shows that most traumatic or sought-after events are mere blips on the happiness meter, as we simply shift base rates in response, adjusting to the new conditions. Happiness is primarily a measure of immediate changes, bumps above base rate. So minor things, like good weather and people saying a friendly hello, are more tightly correlated with happiness than major conditions like having the job or the car you've been wanting.

Think about pitch. We can tell whether pitch is moving, but only the lucky few have any sense of absolute pitch... and this is usually a skill developed with a lot of feedback and practice. Why? Because it's more useful and economical to encode relative changes than absolute values.

Far from needing to cleanse the auditory taste of one note from your mind before playing another, you should play them immediately back to back for comparison. Perhaps you can switch the order around to eliminate after-effects.

By the way... wine-lovers *do* take blind taste tests. And experts can readily identify ingredients in wine, as well as many other objectively verifiable qualities. So it is perhaps not the best analogy for audiophiles who cannot do the same, and won't deign to try.
Hi Pabelson,

You did not address some critical issues I raised in my posts. In particular, that the "after effects" of sensory experience can combine with subsequent stimuli to smear differences. The "after effects" result when the brain circuits don't start and stop with the stimuli. If you continue to evade by brushing aside the issues, then there is no reason for me to continue with this thread.

Best Regards,
John
Citations, please, Pabelson. I don't follow this literature any longer, but your merely saying "we know" is not convincing.
Qualia8,

"Far from cleansing the auditory taste of one note from one's mind and then playing another, you need to play them immediately back to back for comparison purposes. Perhaps you can switch the order around to eliminate after-effects"

Switching the order around doesn't eliminate after-effects; it only replaces one "smeared" event with another, possibly different from the first. For example, if you follow a yellow image with a red one, the complementary after-image of yellow (violet) will "combine" with red. If you switch the order and show red first, followed by yellow, then the complementary after-image of red (green) will "combine" with yellow.

Best Regards,
John
But if these "after effects" mattered, John, then we'd see listening test results showing that putting gaps between samples improved subjects' sensitivity to differences. I don't know of any such test results. Do you?
Qualia8,

There is a vast array of specializations among the neurons in the brain. Some, as you pointed out, detect differences; others, sameness; yet others, change or motion or timing, etc. Ignoring that complexity may lead both sides of this discussion to over-simplification at best and to closed-mindedness at worst.

With that in mind, let me add the flip side to my previous post to you. The after-effects may not only smear differences; they may also distort sameness. Take, for example, the two abstract amorphous paintings containing a rich array of colors in my living room. Everyone who looks at either one reports the same phenomena: the colors change, the amorphous shapes change, and those shapes move. Now, we know the painting remains the same. The changes are the result of the brain's processing. It appears the after-images of the various colors "combine" with the direct stimuli to produce a change in the perception, which in turn forms its own after-images that "combine" with the subsequent direct stimuli, and so on. What follows is a sequence of illusory changes which create a dynamic that is not there.

This perceptual phenomenon of after-images has been studied, but it has not been eliminated. The temptation to reduce its effect by taking micro-second intervals of music automatically prejudices the methodology against perceiving differences that require longer intervals; for example, decay and rhythm.

The debate will probably go on. In the meantime, it's good to have a discussion that produces more illumination than heat.

Enjoy the Music,
John
Qualia, yes, there is some minor DB testing in wine, but as in audio, no one pays much attention to it. As in audio, taste rather than DBT rules the buying decision. Please understand that I see nothing wrong with your making decisions based on this methodology, but I do resent those of your school calling others "anti-science" or fools.
TBG: So who called you a fool? Who called you "anti-science"? Citations, please.
Hi Phredd2,

You asked for some additional elements for rigorous methodology. In addition to the acuity tests I mentioned previously, participants should pass reasonable memory tests. Otherwise, their inability to distinguish 2 amps may not be a statement about the amps but about the participants. It is fine with me if an audiophile wants to listen privately just to see if he/she likes or prefers a component. But this is not acceptable for rigorous testing. Therefore, participants should be able to demonstrate their critical listening skills. If they aren't accustomed to listening consciously for nuances in harmonic textures, changes in micro-dynamics, phrasings, ambience, decay, etc., then they may miss subtle differences in how 2 amps reproduce the different musical elements.

"After-effects", as pointed out in my previous posts, are inherent to our perceptual mechanisms and brain circuitry/chemistry and may smear differences between 2 components in a short-term DBT. Consequently, a negative result of a short-term DBT may have an interpretation other than "no difference in the amps". Allowing enough time for the "after-effects" to subside, is one way to reduce their effects. However, this may add to some degradation of memory, as pointed out in one of the posts above; but that just re-inforces my contention that the underlying complexity has not been unravelled enough yet to make definite determinations. Please see my exchanges with Qualia8 for additional comments.

Great Listening,
John