Question for recording artists/engineers


Let's say you have a jazz band that wants to sell CDs of their music with the best sound quality they can achieve, either at the lowest outsourced cost or do-it-yourself. If they want to do a just-in-time style of manufacturing for their CDs, how can they improve things?

Currently they are recording at 48 kHz in Pro Tools, mastering in Sonic Solutions at Air Show Mastering, and then burning top-of-the-line blank CDs (Taiyo Yuden) with a Microboards Orbit II duplicator. This has produced average-sounding CDs, but we want to do better.

What would you engineers do to improve this so it gets closer to audiophile quality? Would you recommend a different mastering house, different blank CDs, or a different duplicator? Or would you just bite the bullet on cost and go directly to a full-scale manufacturer? We are trying not to have too much money tied up in inventory.

If this is the wrong place to post this question, please suggest another message board to post it on.

Thank you for your feedback and assistance.
lngbruno
If you throw out every other sample of an 88.2 kHz signal, you will have a 44.1 kHz data set, but with all the frequencies above 22.05 kHz aliased into the audio band. The low-pass filter is used to attenuate signal energy above 22.05 kHz (Nyquist) before reducing the sample rate.
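
To put a number on that, here's a rough Python sketch (my own illustration using numpy/scipy, not anything from an actual converter): a 30 kHz tone sampled at 88.2 kHz, downsampled the naive way versus with an anti-alias filter first. The naive version folds the 30 kHz tone down to |44.1 kHz - 30 kHz| = 14.1 kHz, squarely in the audio band.

    import numpy as np
    from scipy.signal import decimate

    fs_in = 88200          # original sample rate (Hz)
    fs_out = 44100         # target sample rate (Hz)
    t = np.arange(fs_in) / fs_in          # 1 second of signal
    tone = np.sin(2 * np.pi * 30000 * t)  # 30 kHz tone, above the 22.05 kHz output Nyquist

    # Naive approach: throw out every other sample. The 30 kHz tone aliases to 14.1 kHz.
    naive = tone[::2]

    # Proper approach: low-pass below 22.05 kHz first, then keep every other sample.
    # scipy's decimate() applies an anti-aliasing filter before downsampling.
    filtered = decimate(tone, 2, ftype='fir')

    def band_energy(x, fs, f_lo=13000, f_hi=15000):
        """Energy near 14.1 kHz, where the aliased tone lands."""
        spectrum = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x), 1 / fs)
        return spectrum[(freqs >= f_lo) & (freqs <= f_hi)].sum()

    print("energy near 14.1 kHz, naive decimation:    %.3e" % band_energy(naive, fs_out))
    print("energy near 14.1 kHz, filtered decimation: %.3e" % band_energy(filtered, fs_out))
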

Not just archival; it actually sounds better to record at higher sample rates and then downconvert to Red Book. The current thinking about high resolution is that it sounds better than Red Book mainly because it reduces the distortions caused by filtering, especially steep brick-wall low-pass filters. When you start with an 88.2/96 kHz, 24-bit signal, you still need a steep filter at the downconversion stage, but there are steps you can take in the filter design to roll the filter off more gently, keep ripple very low, and dither the 24-bit signal down to 16 bits. I can recommend papers if you're interested.
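
On the "dither the 24-bit signal down to 16 bits" step, here's a bare-bones Python sketch of TPDF dithering (my own simplification; mastering-grade dithers add noise shaping on top of this):

    import numpy as np

    def dither_to_16_bit(samples_24bit):
        """Requantize 24-bit integer samples to 16 bits with TPDF dither.

        The basic idea: add +/-1 LSB of triangular noise before rounding so that
        the quantization error becomes benign noise instead of correlated distortion.
        """
        rng = np.random.default_rng(0)
        scale = 2 ** 8  # 24-bit -> 16-bit means dividing by 2^8
        # TPDF dither: sum of two independent uniform sources spans +/-1 LSB at 16-bit
        dither = (rng.uniform(-0.5, 0.5, len(samples_24bit))
                  + rng.uniform(-0.5, 0.5, len(samples_24bit)))
        dithered = samples_24bit / scale + dither
        return np.clip(np.round(dithered), -32768, 32767).astype(np.int16)

    # Example: a quiet 1 kHz tone expressed as 24-bit integers
    fs = 44100
    t = np.arange(fs) / fs
    tone_24 = np.round(np.sin(2 * np.pi * 1000 * t) * 200 * 256)  # low-level signal
    print(dither_to_16_bit(tone_24)[:10])
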
Thanks for the info! You obviously know your stuff. Are you saying that 88.2k has no advantage at all over 96k? Wouldn't it still be a simpler conversion? I'll do some listening.
Flex - I don't know the answer to this myself, but if there were no advantage to using the simplest possible algorithm (i.e., 'throw out every other sample') as opposed to something more complex, then why do you suppose we have inherited standards that represent frequency doubling (48 kHz to 96 kHz to 192 kHz, 44.1 kHz to 88.2 kHz) and tripling (44.1 kHz to 132.3 kHz)? Are these just remnants of a time when computing power was more precious?
All of the downconversions from 96k and 88.2k use the same algorithm; it's just that the non-integer conversions are computationally more expensive. 96k->48k and 88.2k->44.1k are single-phase filters, while 96k->44.1k and 88.2k->48k are multiphase filters of 147 and 80 phases respectively. This means that 147 or 80 sets of filter coefficients have to be stored instead of just one set, and the math has to be written to rotate regularly through all of the phases. Multiphase filters are correspondingly more software- and memory-intensive to implement than single-phase filters, and can be much harder to do in real time. That is a good reason for consumer manufacturers to stay away from them; professional equipment usually has more software horsepower. As for 88.2 kHz having an advantage over 96 kHz: most current players can play 48 kHz as well as 44.1 kHz, so there seems to be no compelling reason to stay with multiples of 44.1 kHz (assuming a DVD release format).
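
For anyone curious where the 147 and 80 come from, here's a small Python sketch (mine, using scipy, not anything out of real converter firmware): reduce the rate ratio to lowest terms, and the interpolation factor is the number of filter phases. scipy.signal.resample_poly is one implementation of this kind of multiphase (polyphase) filter.

    from math import gcd
    import numpy as np
    from scipy.signal import resample_poly

    def conversion_phases(fs_in, fs_out):
        """Reduce the rate ratio to lowest terms; the 'up' factor is the phase count."""
        g = gcd(fs_in, fs_out)
        return fs_out // g, fs_in // g   # (interpolate by, decimate by)

    for fs_in, fs_out in [(96000, 48000), (88200, 44100), (96000, 44100), (88200, 48000)]:
        up, down = conversion_phases(fs_in, fs_out)
        print(f"{fs_in} -> {fs_out}: interpolate by {up}, decimate by {down} -> {up} filter phase(s)")

    # Example conversion: resample_poly does the anti-alias filtering and the
    # phase rotation internally.
    x_96k = np.random.randn(96000)              # 1 second of placeholder audio at 96 kHz
    x_44k1 = resample_poly(x_96k, up=147, down=320)
    print(len(x_44k1))                          # 44100 samples
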

Frequency doubling is not related to compute power. The most important reason for it is clocking. All equipment, consumer or pro, has to process both new and old formats using the same system clocks in its hardware and software operations, and system clocks can usually be divided or multiplied by factors of 2 fairly easily. Second, pro equipment needs to maintain compatibility with previous audio and video recording frequencies in order to deal with archival as well as new material. Ease of sample rate conversion is a factor but probably a distant 3rd in comparison to the first two.

'Throwing out every other sample' is something even the worst software writers know better than to do. You should listen to this kind of aliasing sometime to understand why it's wrong.
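
If you want a quick way to hear it, this Python sketch (my own, purely for ear training) renders a sine sweep both ways; in the naive version, the part of the sweep above 22.05 kHz comes right back down into the audio band as a falling "mirror" sweep.

    import numpy as np
    from scipy.signal import chirp, decimate
    from scipy.io import wavfile

    fs = 88200
    t = np.arange(5 * fs) / fs
    # Sweep from 1 kHz to 40 kHz; everything above 22.05 kHz should vanish after a
    # proper downconversion, but folds back into the audio band if you just drop samples.
    sweep = 0.5 * chirp(t, f0=1000, f1=40000, t1=t[-1], method='linear')

    wavfile.write('naive_44k1.wav', 44100, np.int16(sweep[::2] * 32767))
    wavfile.write('filtered_44k1.wav', 44100,
                  np.int16(decimate(sweep, 2, ftype='fir') * 32767))
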
Thanks for the tutorial (I wasn't necessarily being literal about the 'throwing away' part). What I'm still curious about is this: how did we end up with two standards as close together as 44.1 kHz and 48 kHz? I understand the reason for picking a frequency in this area, just not how we got to both of these...