Above, I said I wasn't going to try to go into detail about *why* 'naturally'-recorded live acoustic music might make superior audition material. But it seems to me that one of the main reasons should be specifically addressed at this point. From the comments so far, I observe that most of what we're talking about seems to boil down to questions of tonal balance, and the timbral signature of any individual instrument's (or voice's) unique harmonic structure. All of this is valid, but let's not overlook the issues of phase and time.
Only in minimally-mic'ed live recordings do we get a good portion of the original phase and time relationship information preserved in the document. Despite our lack of familiarity with the actual instruments and venue used, our ears can still make use of the phase and time coherence captured. This translates - provided our systems can maintain and transmit the information mostly unscathed - into a better comprehension of spatial relationships and transient events.
In typically multi-mic'ed, multi-tracked studio recordings, where there might be no one original performance captured live, this information either doesn't exist in a relational sense (as in the case of purely electronic 'instruments'), or is distorted, or is in conflict between the various elements in the cut, or is artificially manipulated in the mix, or very likely is a combination of all of the above. The result is a playback performance containing no coherent spatial or transient unity to reproduce, which yields a muddled message no matter how we might try to configure our systems for convincing effect.
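To make the multi-mic point concrete, here is a minimal sketch (hypothetical numbers, illustrative only) of why naively summing two mic feeds at different distances smears a transient: a single click arrives at each mic after a delay proportional to its distance, so the mix turns one sharp event into staggered copies, and the original arrival-time information is gone.

```python
# Illustrative sketch: a single 'click' captured by two mics at
# different distances, then naively summed into one mix channel.

SAMPLE_RATE = 48_000      # samples per second (assumed)
SPEED_OF_SOUND = 343.0    # metres per second, in air

def mic_feed(distance_m, length=512):
    """Impulse (a 'click') as captured by one mic at the given distance.

    The click lands at a sample offset set by its acoustic travel time.
    """
    delay = round(distance_m / SPEED_OF_SOUND * SAMPLE_RATE)
    feed = [0.0] * length
    feed[delay] = 1.0
    return feed

close = mic_feed(0.5)                        # spot mic, 0.5 m from the source
far = mic_feed(3.0)                          # room mic, 3.0 m away
mix = [a + b for a, b in zip(close, far)]    # naive two-mic sum

# Each individual feed keeps the click as one event; the mix spreads
# it across two arrival times, about 7 ms apart here, so the single
# original transient no longer exists as one coherent event.
print(sum(1 for s in close if s != 0.0))   # 1 event in the single-mic feed
print(sum(1 for s in mix if s != 0.0))     # 2 events in the mix
```

A real mix compounds this across many mics, pan pots, and added reverb, which is the sense in which the time relationships end up "in conflict between the various elements."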
So, only if program material succeeds in capturing some of this original performance integrity which we would hear live (no matter what the venue, or where we were inside it), will a recording be able to illuminate much about what our systems might be doing to transmit or corrupt it. This is especially valuable for assessing transducer performance, and for investigating speaker/room set-up possibilities, but can be helpful for listening to the spatial and temporal linearity of any device in the playback chain. No, you still won't be able to know exactly what the original performance sounded like, but you'll still be able to infer more about what your system is doing, because your ear/brain can detect and interpret a coherent signal, and therefore recognizes compromises to or absence of same.