Upsampling: Truth vs. Marketing


Has anyone done a blind A/B test of the upsampling capabilities of a player? If so, what was the result?

The reason I ask is that all the players and converters that do support upsampling go from 44.1 to 192, and that is just plain wrong.

This adds a huge amount of interpolation error to the conversion and should sound like crap by comparison.
I understand why manufacturers don't go to the logical 176.4kHz: once again, they would have to write more software.

All in all, I would like to hear from users who think their player sounds better playing Redbook (44.1) upsampled to 192. I have never come across a sample rate converter chip that does this well sonically, and if one exists, then it is truly a silver bullet. Then again... 44.1 should only be upsampled to 88.2 or 176.4, unless you can first go to many GHz and then downsample to 192, and even then you will have interpolation errors.
izsakmixer
Mathematically, there is no difference between upsampling and oversampling. Upsampling is basically a marketing term, and it is NOT coincidental that it was conjured up during the Redbook lull prior to the DVD-A format agreements. Really, what is so special about 96kHz or 192kHz? Why not 88.2kHz or 176.4kHz? For that matter, why not 352.8kHz or 705.6kHz? The choice of resampling a 44.1kHz signal to 96kHz or 192kHz is entirely about piggy-backing on the new high-res formats for marketing purposes. In fact, there is potential for loss of information by resampling asymmetrically rather than by integer multiples.
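
You can see the asymmetry with a few lines of arithmetic. This is a purely illustrative Python sketch (nothing to do with any particular chip): 88.2 and 176.4 are clean integer multiples of 44.1, while 96 and 192 force a fractional ratio, meaning the converter must compute new sample values that never line up with the originals.

    # Illustrative only: resampling ratios from 44.1kHz to common targets.
    from fractions import Fraction

    for target in (88200, 176400, 96000, 192000):
        ratio = Fraction(target, 44100)
        kind = "integer multiple" if ratio.denominator == 1 else "fractional"
        print(f"44100 -> {target}: ratio {ratio} ({kind})")

    # 44100 -> 88200:  ratio 2        (integer multiple)
    # 44100 -> 176400: ratio 4        (integer multiple)
    # 44100 -> 96000:  ratio 320/147  (fractional)
    # 44100 -> 192000: ratio 640/147  (fractional)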

Please refer to Charles Hansen (Ayre), Madrigal, Jeff Kalt (Resolution Audio), Wadia, or Theta. All have made multiple statements that upsampling is nothing more than a marketing tool. Maybe it's good for high end in this sense... certainly high-end Redbook CD sales jumped after the "upsampling" boom. Magazine reviewers seemed eager to turn a blind eye, since their livelihood depended on a healthy high-end market. Waiting 2-5 years for decent universal players certainly wasn't attractive, nor was reviewing the latest $20k Redbook CD player when the general consensus at the time was that even bad high-res would blow away great Redbook.
Philips used 4x oversampling in their first CD players so that they could achieve 16-bit accuracy from a 14-bit D/A. At that time, the 16-bit D/As used by Sony were lousy, but the 14-bit units that Philips used were good. The really cool part of the story is that Philips didn't tell Sony what they were up to until it was too late for Sony to respond, and the Philips players ran circles around the Sony ones.

In Sean's explanation, the second set of 20 dots in set B should not be random. Those dots should lie somewhere between the two dots adjacent to them.

Here is my explanation.

Assume there is a smoothly varying analog waveform with values at uniform time spacing, as follows. (Actually, there are an infinite number of in-between points.)

..0.. 1.. 2.. 3.. 4.. 5.. 6.. 7.. 8.. 9.. etc

If the waveform is sampled at a frequency 1/4 that of the example (44.1kHz, perhaps), the data will look like the following:

..0.......... 3.......... 6...........9..... THIS IS ALL THERE IS ON THE DISC.

A D/A reading this data, at however high a frequency, will output an analog "staircase" voltage as follows:

..000000000000333333333333666666666666999999999

But suppose we read the digital data four times faster than it is really changing, add the four values up, and divide by 4.

First point:    (0+0+0+3)/4 = 0.75
Second point:   (0+0+3+3)/4 = 1.5
Third point:    (0+3+3+3)/4 = 2.25
Fourth point:   (3+3+3+3)/4 = 3.0
Fifth point:    (3+3+3+6)/4 = 3.75
Sixth point:    (3+3+6+6)/4 = 4.5
Seventh point:  (3+6+6+6)/4 = 5.25
Eighth point:   (6+6+6+6)/4 = 6.0
...and so on

Again we have a staircase that only approximates the instantaneous analog voltage generated by the microphone when the music was recorded and digitized, but the steps of this staircase are much smaller than those of the staircase obtained when the digital data stream from the disc is processed only at the rate at which it was digitized. The smaller steps mean that the staircase stays closer to the original analog ramping signal.

Note also that we are now quantized at 0.25 instead of 1, which is the quantization of the data stream obtained from the disc. A factor of 4. That's like 2 bits of additional resolution. That's how Philips got 16-bit performance from a 14-bit D/A.
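
For anyone who wants to play with the arithmetic, here is a minimal Python sketch (assumes numpy; illustrative only) of the scheme described above: hold each disc sample for four clocks, then apply a 4-point running average. The numbers it prints match the points listed.

    # Sketch of the 4x read-and-average scheme described above.
    import numpy as np

    disc = np.array([0, 3, 6, 9])        # the samples actually on the disc
    staircase = np.repeat(disc, 4)       # D/A read 4x faster: 0,0,0,0,3,3,3,3,...
    smoothed = np.convolve(staircase, np.ones(4) / 4, mode="valid")

    print(smoothed)  # [0. 0.75 1.5 2.25 3. 3.75 4.5 5.25 6. 6.75 7.5 8.25 9.]
    # Steps are now multiples of 0.25 instead of 1: a factor of 4,
    # i.e. log2(4) = 2 extra bits of resolution.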
The term "error correction" applies to a scheme where redundant data is combined with the information in such a way that a decoding algorithm can recover the original information WITHOUT ANY LOSS, provided that the number of transmission errors, and their distribution in time, does not exceed what the encoding algorithm is designed to deal with. This is not a "band-aid" for poor transmission; it is a way to make it possible to run the hardware at much higher bandwidth, because errors can be allowed to occur.

"Interpolation" is not "Error Correction". Interpolation is what you can do if the errors do exceed what your algorithm is designed to deal with. Depending on what the signal is doing at the time that transmission glitches occur interpolation may or may not result in significant error in recovery of the information.
Thanks for the feedback, Sean. Putting your original and second posts together, I see what you were trying to say.
El said: "In Sean's explanation the second set of 20 dots in set B should not be random. Those dots should lie somewhere between the two dots adjacent to them".

By placing the "extra" dots (sampling points) mid-way between the previously adjoined dots, the end result would look MUCH smoother and far more predictable. While this "could" be the case when playing back sine waves of varying amplitude and duration, music is anything but "sinusoidal" by nature. There are very rapid peaks and dips that take place, sometimes completely changing the direction the signal was headed just a split second earlier. These peaks and dips can switch randomly back and forth across the "zero line", or they can remain above or below the "zero line" for extended periods of time. On top of that, these waveforms may not be symmetrical at all, i.e. much bigger peaks on the positive side than dips on the negative side, or vice-versa. It is for this reason that "industry standard test tones" aren't quite as revealing as we would like when it comes to how a component performs during normal use reproducing musical waveforms. This is why several different types of tests have to be used in order to obtain any type of meaningful relationship between test bench performance and real world performance.

If music were more like a sine wave, i.e. with predictable amplitudes, polarities and durations, error correction algorithms could be much simpler and far more accurate. However, musical notes are anything but predictable in terms of amplitude, polarity, duration or pattern. As such, the potential to read an error from anything but a perfect disc is not only high, but the potential for further errors to take place when data is lost and the machine is trying to "fill in the blanks" becomes even higher.

Somewhere in one of the old IARs (International Audio Review), Moncrieff covered quite a bit about the flaws in how "Redbook" CD was designed and how its "error correction" and/or "interpolation" techniques were far from all-encompassing. Then again, this was all new technology at the time, so they were kind of winging it as they went along. As such, the potential for a newer, much better digitally based format is definitely there, especially if we learn from past mistakes and take advantage of the more recent technology that we have.

Germanboxer: As far as certain manufacturers supporting / slagging specific design attributes goes, did anyone ever expect a manufacturer to support a design / type of product that they themselves didn't already take advantage of? Would you expect a company that didn't use upsampling to say that upsampling was superior, or a company that did use upsampling to say that the technology they were using was a poor choice?

Bombay: Glad that you were able to see where I was coming from after further explanation. Hopefully, others can follow along here too.

As a side note, read the description of this DAC as listed on Agon. You'll see that the designer not only played with various types of filtering, but gave the end user the option to accommodate their personal preferences / system compatibility at the flip of a switch. Bear in mind that this unit was out long before Philips came out with their SACD 1000, which also gave users options for various filter shaping, cut-off frequencies, etc... Sean