Clever Little Clock - high-end audio insanity?


Guys, seriously, can someone please explain to me how the Clever Little Clock (http://www.machinadynamica.com/machina41.htm) actually improves the sound inside the listening room?
audioari1
Again, I don't disagree with the point everybody makes about auditioning and testing. But I still say an experience like the one Audioari1 relates about the $10K preamp vs. the $200 one -- if true and valid (meaning if this actually happened, and if the test was done well) -- is a valuable reminder to any audiophile not just of the limitations of A/B testing (which I think audiophiles sometimes tend to overblow, while ignoring the equally significant foibles of long-term auditioning), but also of the quite real limitations of what we're actually doing in high-end audio.

But I'm getting a little off track here. There is most definitely a way to test the CLC that doesn't raise the possibility of criticisms like the ones you guys are mentioning (and I already thumbnailed it somewhere here before). All you would need are, say, three outwardly identical clocks: one would be an actual CLC, with its supposed "proprietary technology" and "special" batteries, while the other two would be the same model of clock, unmodified except for having stickers identical to the CLC's placed on their fronts, and fitted with "regular" (but same-brand) batteries. The test administrator would need some kind of identifying mark to tell the CLC apart; I'd suggest maybe tiny pieces of tape placed inside the battery compartments of the two stock clocks only.

Then simply leave all three clocks with an audiophile who maintains he can hear a positive effect from the CLC, to audition however he pleases, at his leisure (with the understanding, of course, that he wouldn't open up the clocks or otherwise try to figure out which is which through non-auditory means, and the proviso that he removes the two clocks not currently being auditioned from the listening environment in accordance with Machina Dynamica's guidelines). When he's finished and has indicated his preference, the administrator would remove the three clocks and note which one he chose, then bring them back mixed up and run the whole thing over again (without, of course, letting the subject know the running results while the test is still in progress).

If, after maybe 10 times around with this routine, the subject couldn't correctly identify the CLC significantly more frequently than 1/3 of the time, I don't think that audiophile could dispute its lack of audible effect. And if he could identify it reliably (and hadn't cheated), no one could argue that it isn't probably really doing something after all. (I think the single best candidate to run this test with would be Mr. Kait, were it not for the fact that he would have an infinite incentive to cheat, and the means to easily do so!)
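For what it's worth, here's the back-of-the-envelope arithmetic behind "significantly more frequently than 1/3 of the time" -- just a little Python sketch of my own, assuming 10 independent rounds and pure guessing at 1-in-3 (the "10 times" figure and the three-clock setup come from the protocol above; the numbers aren't part of anyone's actual procedure):

```python
from math import comb

# Rough sketch: with 3 outwardly identical clocks, a pure guesser picks the
# real CLC with probability p = 1/3 on each of n = 10 independent rounds.
n, p = 10, 1 / 3

def prob_at_least(k, n, p):
    """Chance of k or more correct picks out of n by luck alone (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

for k in range(n + 1):
    print(f"{k:2d} or more correct by luck alone: {prob_at_least(k, n, p):.3f}")

# Under these assumptions, 7 or more correct picks out of 10 has only about a
# 2% chance of happening by guessing, while 4 or 5 correct is well within what
# luck alone produces -- so that's roughly where "significantly more than 1/3"
# would start to mean something.
```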

I'm not advocating going through this kind of crap for every choice an audiophile makes (I've stated above why the CLC [and "Intelligent Chip"] deserve a higher level of skeptical scrutiny); I'm just saying that in principle it's hard to criticize or dismiss this test (or at least its negative results, as they would apply to that one listener). And yes, I've stipulated before that this whole debate is likely nothing but great for Mr. Kait's business -- while it lasts (meaning the business, and the debate! ;^)
And BTW, about that preamp shootout, I once lassoed my girlfriend into doing the same kind of blind test. She's not an audiophile and couldn't care less, but she was easily able to immediately and consistently discern at least some of the sonic differences between the $3K and $6.5K solid-state preamps I was A/B-ing (volume-matched, of course). To my slight chagrin she preferred the former even though I preferred the latter (not that I told her so until we were done, although it's possible she could've picked it up from my body language, since the test wasn't double-blind). But I agreed 100% with her descriptive assessment of what she heard, and could see why she might've preferred it with those particular test recordings. In fact, she blew me away by stating the overall situation much more cogently and concisely than I had been able to form it in my own mind. When I told her why I liked the more expensive one, she blithely dropped something like: oh sure, she could tell you could hear much more through that one, but that was exactly why she'd found it less pleasant to listen to (I had played CDs). Just amazing, aren't they, fellas?
This is evidence of how flawed double-blind testing is. Despite the effort people put into giving so much value to blind testing, it is inherently flawed. The advice above to listen over a period of time is a good one. Oftentimes the differences are very clear; other times it has more to do with musicality than sonics. Musicality is tapping your foot, and only time can identify this important aspect of our hobby. Too often we rely on quick comparisons, when this is never how we actually listen to music. Those looking to acquire equipment might put great value in blind testing. Those who just want to enjoy music must take the time to discern the attributes of a system.
Audioari1,

"For some reason, rapid A/B switching doesn't allow the brain to make adjustments quickly enough."

Perhaps there is an analogous phenomenon for aural experience that exists for our visual experience. Namely, if you look at a colorful object for a while and then close your eyes, you see an "after-image" of the complementary color that lingers for a while. While it lingers, the after-image color interacts and mixes with your subsequent visual experience (with eyes open). The brain is not able to instantaneously wipe itself clean between two successive visual experiences. I have two paintings that demonstrate this phenomenon dramatically. Anyone who has looked at them for a few seconds reports the same thing: colors blend, new colors and forms emerge, and there is movement of the nebulous forms. If our brain reacts in a similar dynamic way to aural experience, that may explain in part why rapid A/B switching is not an appropriate methodology for testing audio components. If the "after-image" of A lingers in the brain and mixes with the experience of B, we may get a more homogeneous result that clouds differences.
Puremusic: That might seem a reasonable hypothesis, but I doubt it's actually true -- listening to a piece of music once and then again is not analogous to staring at a static image until it's 'burned' on your retina. If it were, we would not only have trouble hearing changing sounds such as music, we'd also have trouble seeing changing images within a constant environment, which as far as I know is exactly the opposite of how we actually respond. Maybe a better analogy would be listening to a static sine-wave tone for minutes, although I don't know this. Anyway, I believe the ways the ear and the eye operate as sensors, and how the brain processes each, are too substantially different for such analogies to hold much water. The other problem with that conjecture, for me, is that I personally find A/B testing more often highlights than obscures subtle differences. Of course, I'm doing this by myself in my own system, which means it's not blind, so you could always object that I'm fooling myself.