How to assess and quantify system improvements and changes?


In one of the currently popular threads, a number of members (all of whom I respect) have made significant claims regarding system improvement.

For example: a recent post mentions "a double or triple improvement in my system." In a conversation a few days ago, two audio buddies quantified a DAC change as a 10% improvement (by one) and a 30% improvement (by the other).

This kind of quantifying, given the nature of our pursuit, occurs throughout our discussions in the various threads.

I'm starting this thread to see if we can mine the collective and come up with guidelines and outcomes that are reproducible, relevant, and reliable for comparisons and discussions between audiophiles.

Thanks!
david_ten

There's a fundamental problem here, as there's no benchmark that any of the judgments can be keyed to.  These are subjective judgments being made by individuals about unique combinations of components in unique listening spaces.  Try codifying that!

From a different perspective: replace a small, inexpensive 2-way with a large, high-quality floorstander.  More bass, right?  And better sound generally.  How much more bass?  How much does the increased bass enhance the overall listening experience?  (These are two different judgments.)

Then replace some throw-away 39-cent ICs with really expensive ICs.  Is there much of a change?  Where?  In what areas do you hear it, and how do you quantify it?  Compare experiments 1 and 2.  Does the change in 2 equal that in 1?

Most improvements are incremental. People exaggerate, often without knowing they're doing it, to make a point.

A few years ago I dropped a pair of $200 speakers into a system that I'd previously been running with $5K speakers.  What struck me most was how decent, relatively speaking, they sounded.  No, not as good as the $5K pair, but, as everyone always points out, the $5K pair wasn't anywhere near 25x as good either, despite being 25x the price.

What I will say is that reviewers' hyperbole sets people up to hear more significant improvements than they actually achieve when they bring the component home.  Then the search for mitigating circumstances begins...

One other question is: why do you care? It seems the only problem with the inability to quantify the level of 'WOW' is that we want information we can use for ourselves, constantly gleaning tidbits in the hope of applying them to our own systems. Otherwise, who would care?
To me the main way to do some quantifying is to look for multiple claims from different sources. If several different places have different folks making 'some' sort of improvement claim, then I would say OK, there seems to be something there. (Even so, the only way to know is to try it for yourself.)
If all I see is one thread (even though it has 2,000-plus posts, like some here) touting some product as miraculous, I pretty much doubt it. (I.e., if it really WAS that good, plenty of others elsewhere who are NOT connected would also be saying something.)
So what can one do to sort out the claims and the levels of claims being made? Not much. Accept it as part of 'The Human Condition'.
Art is not science, let alone exact science. Or, if you wish, it might be a science that is beyond our comprehension. So yes, the human condition, and a condition we can live with, I think.
I make changes in equipment and equipment positioning, and the result comes down to... better (AAAH) or worse (DOH!). Percentages are for bank loans.
My problem with trying to quantify this sort of thing is that, in my experience, the process is very nonlinear.

By this I mean it goes something like ...
  1. "My systems sounds great, can't imagine it getting much better (diminishing returns etc etc)"
  2. Introduce component X or tweak Y
  3. "wow, I never paid attention to/I was not aware of that (coloration, timing error, gross soundstage aberration etc) before - it's amazing how much better the system sounds without it"
  4. Continue tweaking to improve this dimension
  5. Return to step 1 and repeat
So each pass through step 3 can feel like a very large change, followed by a series of smaller incremental ones, and the sense of a large change is driven less by the specific quality of the tweak or new component that highlighted the flaw than by the fact that the flaw became highlighted at all.

Said another way, we train ourselves to appreciate (and to some extent compensate for) the shortcomings in our systems; we "listen around" them, and it's only when the need to do this is removed that we realize how much mental effort we were putting into that audio error correction. Ultimately, then, the best test of any change is whether it makes it easier to listen into the music and get to the intent of the artist.