Question for recording artists/engineers


Let's say you have a jazz band that wants to sell CDs of their music with the best sound quality they can achieve, either at the lowest outsourced cost or do-it-yourself. If one wants to do just-in-time manufacturing of their CD, how can they improve things?

Currently they are recording at 48kHz in Pro Tools, mastered in Sonic Solutions by Air Show Mastering, and then they burn top-of-the-line CDs (Taiyo Yuden) with a Microboards Orbit II duplicator. This has produced average CDs, but we want to do better.

What would you engineers do to improve this so it gets closer to audiophile quality? Would you recommend using a different mastering house, different CDs, or a different duplicator? Or would you just bite the money bullet and go directly to a full-scale manufacturer? We are trying not to have too much money tied up in inventory.

If this is the wrong place to post this question, please suggest another message board to post it on.

Thank you for your feedback and assistance.
lngbruno
Lngbruno,
Is this jazz band recording material in a studio, or recording live performances? Why record at 48kHz only to downsample to the 44.1kHz Red Book standard for producing CDs?

Your Pro Tools/Sonic Solutions setup sure ain't the problem, so I would want to take a good hard look at how the data is getting laid down. Duplication is only going to be as good as the original bit stream.

I have made a number of live recordings and subscribe to the minimalist mic technique: one high-quality matched pair > Grace Design Lunatec V2 preamp > Apogee AD-1000 analog-to-digital converter > Tascam DA-P1 DAT deck.

My recordings were very natural sounding, with a good soundstage and plenty of depth... If I had access to the Pro Tools/Sonic Solutions rig, WHOA!
Your question seems to focus on the rear end of the process, when the greatest sonic gains are made at the front end. There are several keys to getting a high quality recording. A great recording engineer is probably at the top of the list. Right below that are a good recording room, great microphones, great mic preamps and great A/D converters. Pro Tools can produce high quality results, but stay away from all but their latest generation of converters. Also, why record at 48kHz? The sonic advantage over 44.1kHz is minimal, and it requires an added sample rate conversion stage. Either go Pro Tools HD (88.2 to 192kHz), which will offer a substantial sonic benefit, or stay at 44.1kHz throughout.

You might want to consider posting on the following forum:

http://www.musicgearnetwork.com/cgi-bin/ultimatebb.cgi

Place your post in the George Massenburg or Roger Nichols section. Best of luck.
My friend who built his own recording studio uses the following a lot as a great resource for these types of questions:

http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&q=rec.audio.pro
Slipknot1 - the band is recording in the studio. 48kHz is what Air Show Mastering uses with the HDCD format. Could the speed of the duplicator have a negative effect on the sound quality? I know it does have a negative effect on consumer recorders and PC drives.

Thanks again for the input on mic techniques.
You guys should direct this thread to Tom Wright at audio forest.com. He has 104 gold records and builds and designs speakers, cables, etc. He is an ornery cuss but a great guy.
Check out Mapleshade records and the way Pierre Sprey records the artists on his label. These recordings are the most dynamic, exciting recordings I have ever heard, and I collect DCC Gold discs, MFSL Gold Discs, Sheffield Labs, AudioQuest, Reference Recordings, and, now, Super Audio and DVD-Audio. The Mapleshade recordings just kill them all. Whatever Pierre is doing is the RIGHT thing to do. I know it is live-to-two-track, but there is way more to his methods. He uses custom made gear and cables, as well as gear he modified. You just need to listen to hear that his way is better. I have purchased about half of his catalog. There are many titles that I would have not purchased, were it not for the fact that they were on Mapleshade. So, do what he is doing and you will sound real and dynamic!
Onhwy61 is spot on. As an engineer I used to work with once told me "You can't polish a turd". The most important element in the chain is the mic, its placement relative to instruments and the room. I've used ADAT and 2" analogue and both can sound either lousy or great depending on whether the mic'ing is done correctly or not. I have never heard any amount of "aural exciters", EQs or any such tool create a good sound from a mediocre source.
I've had the best results from Neumann and AKG mics, and it seems that you get what you pay for... expect $1000 and up for a really good mic.
Some informative answers, except the nature of the question makes it seem to me that this material has already been recorded and will not be redone. In that case you are limited to either remastering, or, if that seems to have been performed competently the first time, remixing beforehand as well. The quality of the mixdown engineer and facilities is absolutely crucial, and even a surprising amount of 'turd-polishing' (if needed) can be achieved by a great pairing here. It'll cost, but not nearly as much as starting over with rerecording. A worthwhile fact-finding mission might be to take the studio 2-track master (plus a mastered CD) to a prospective remix studio or two, and see what the band and you - and especially the resident engineer - think of what you've got to work with. I would do this particularly if the sound you all remember from the recording and mixing process in the studio seemed much better than what you ultimately hear when you listen to the finished product at home.
Thanks indeed. Yes, my original post does indicate the material in question was already recorded, but I will attempt to revisit that project. We just recorded the latest CD (which we are now finishing for a spring release) into Pro Tools HD at 96kHz, and I used my Avalon tube preamp with a Neumann M 147 tube mic (which I didn't have for the last CD; I had a different Neumann at that studio, but it wasn't the tube version) and a Manley Stereo Variable Mu compressor on my horn, and the sonic difference is dramatic at 96kHz. Everything sparkles, and the Rhodes and synth pads don't decay as quickly. To my ear there is a pronounced difference at 96kHz, but what you are saying about the front end and what gear is used makes all the difference in the world. I also A/B'd the sonic difference between using an Apogee Rosetta A/D converter at 96kHz into Pro Tools versus slaving the Pro Tools converters to an Aardsync clock at 96kHz, and experienced a dramatic increase in sonic purity as well. Man, there is a ton of stuff to learn, and I am still just trying to get my horn to play!

Thanks again for all your input. You folks are great and that is why I really value your comments, because without them, I am much more prone to repeating mistakes without even realizing it. For me music is the only game in town.

Happy listening.
Minimalism will always gain you a lot, assuming the gear left in the signal chain is good. I routinely use a battery-powered Crown SASS-P that I've heavily modified, incorporating a built-in, handmade, minimalist battery-powered mic pre feeding a modified Alesis Masterlink through a one-meter pair of Mapleshade/Insound ribbon interconnects, sampling at 88.2kHz/24-bit and later reduced to Red Book standard. All editing is done on the Masterlink. The sound becomes very dependent on the room and mic/musician placement. Sound is superb. Price is very reasonable.
Again, 96k is a waste because of the reconversion. 88.2k is just halved into 44.1k without a total reconversion. You gain the higher resolution for recording, signal processing (if any), editing and archiving, with no downside.
That's not correct, Piedpiper. 88.2k is not just halved to 44.1k. Conversions from 96k and from 88.2k to 44.1k both need a low-pass filter and a rate conversion algorithm. Also, the conversion from 96k to 44.1k involves no losses relative to 88.2k to 44.1k. It just needs the right high quality conversion algorithm, which is present in the Sonic Solutions workstation.
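If it helps to see that concretely, here's a rough sketch in Python (assuming numpy/scipy; the helper names are just mine, and this is purely an illustration of the idea, not anything Sonic Solutions actually runs). Both conversions go through the same low-pass-filter-plus-resample machinery; the 96k case just needs a non-integer ratio.

    # Sketch only: downconversion to 44.1kHz with scipy's polyphase resampler.
    # Not the Sonic Solutions algorithm, just the same general idea:
    # a low-pass (anti-alias) filter combined with the rate change.
    import numpy as np
    from math import gcd
    from scipy.signal import resample_poly

    def to_44k1(x, fs_in, fs_out=44100):
        """Downconvert x from fs_in to fs_out."""
        g = gcd(fs_in, fs_out)
        up, down = fs_out // g, fs_in // g
        # resample_poly low-pass filters as it changes the rate,
        # even in the "simple" 88.2k -> 44.1k case (up=1, down=2).
        return resample_poly(x, up, down)

    def tone(freq, fs, seconds=1.0):
        t = np.arange(int(fs * seconds)) / fs
        return 0.5 * np.sin(2 * np.pi * freq * t)

    y_882 = to_44k1(tone(1000, 88200), 88200)  # ratio reduces to 1/2
    y_96k = to_44k1(tone(1000, 96000), 96000)  # ratio reduces to 147/320
    print(len(y_882), len(y_96k))              # both 44100 samples (one second)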
Thanks for the input. The low-pass filter makes sense, but why wouldn't they just throw out every other sample? Why go with the higher sample rate then? Archival only?
If you throw out every other sample of an 88.2kHz signal, you will have a 44.1kHz data set, but with all the frequencies over 22.05kHz aliased into the audio band. The low-pass filter is used to attenuate signal energy above 22.05kHz (the Nyquist frequency) before reducing the sample rate.
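Here's a little numpy sketch of that effect (my own illustration, assuming numpy/scipy; nothing to do with any particular workstation): a 30kHz tone recorded at 88.2kHz folds down to 14.1kHz, right in the audio band, if you simply take every other sample, while filtered decimation attenuates it instead.

    # Sketch: what "throwing out every other sample" does to content above Nyquist.
    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 88200, 44100
    t = np.arange(fs_in) / fs_in                 # one second
    x = np.sin(2 * np.pi * 30000 * t)            # 30kHz tone: legal at 88.2k, inaudible

    naive = x[::2]                               # no filter: aliases to 44100 - 30000 = 14100 Hz
    proper = resample_poly(x, 1, 2)              # low-pass filtered first, tone is attenuated

    def strongest(sig, fs):
        """Return (frequency, level in dB) of the largest spectral peak."""
        spec = np.abs(np.fft.rfft(sig)) / len(sig)
        k = int(np.argmax(spec))
        return k * fs / len(sig), 20 * np.log10(spec[k] + 1e-12)

    print(strongest(naive, fs_out))    # ~14100 Hz, near full level: the audible alias
    print(strongest(proper, fs_out))   # whatever leaks through sits tens of dB lower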

Not just archival; it actually sounds better to record at higher frequencies and then downconvert to Red Book. Current belief about high resolution says it sounds better than Red Book mainly because of the reduction of the distortions caused by filtering, especially steep brick-wall low-pass filters. When you start with an 88.2/96kHz/24-bit signal, you still need a steep filter at the downconvert stage, but there are steps in the filter design that can be taken to roll the filter off more gently, keep ripple very low, and dither the 24-bit signal down to 16-bit. I can recommend papers if you're interested.
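On the 24-bit-to-16-bit step, the usual approach is TPDF (triangular) dither ahead of the word-length reduction. A very rough sketch, assuming float samples in the -1 to 1 range and skipping noise shaping entirely (the function name is just mine):

    # Sketch: reducing 24-bit-resolution audio (held as floats) to 16-bit with TPDF dither.
    import numpy as np

    def dither_to_16bit(x, seed=0):
        """x: float samples in [-1.0, 1.0). Returns int16 samples."""
        rng = np.random.default_rng(seed)
        lsb = 1.0 / 32768.0                          # one 16-bit step, in float terms
        tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb
        y = np.clip(x + tpdf, -1.0, 1.0 - lsb)       # add dither, stay in range
        return np.round(y * 32767.0).astype(np.int16)

    fs = 44100
    t = np.arange(fs) / fs
    quiet = 1e-5 * np.sin(2 * np.pi * 1000 * t)      # ~-100 dBFS, below one 16-bit step
    print(np.count_nonzero(dither_to_16bit(quiet)))  # nonzero: the tone survives as dithered noise
    print(np.count_nonzero(np.round(quiet * 32767).astype(np.int16)))  # 0: plain truncation erases it

The dithered version carries the very quiet tone as low-level noise rather than flat silence, which is the whole point of dithering before truncation.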
Thanks for the info! You obviously know your stuff. Are you saying that 88.2k has no advantage at all over 96k? Would it not still be a simpler conversion? I'll do some listening.
Flex - I don't know the answer to this myself, but if there were no advantage to using the simplest possible algorithm (i.e., "throw out every other sample") as opposed to something more complex, then why do you suppose we have inherited standards that represent frequency doubling (48kHz to 96kHz to 192kHz, 44.1kHz to 88.2kHz) and tripling (44.1kHz to 132.3kHz)? Are these just remnants of a time when computing power was more precious?
All of the downconversions from 96k and 88.2k use the same algorithm; it's just that the non-integer conversions are computationally greater. 96k->48k and 88.2k->44.1k are single-phase filters, while 96k->44.1k and 88.2k->48k are multiphase filters of 147 and 80 phases respectively. This means that 147 or 80 sets of filter coefficients have to be stored instead of just one set, and the math has to be written to rotate regularly through all of the phases. Multiphase filters are correspondingly more software and memory intensive to implement than single-phase filters, and can be much harder to do in real time. This is a good reason for consumer manufacturers to stay away from them. Professional equipment usually has more software horsepower. As far as 88.2kHz having an advantage over 96kHz goes, most current players can play at 48kHz as well as 44.1kHz, so there seems to be no compelling reason to stay with multiples of 44.1kHz (assuming a DVD release format).
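For anyone curious where the 147 and 80 come from: reduce the input/output rate ratio by its greatest common divisor, and the interpolation ("up") factor that is left is the number of filter phases you have to carry. A quick check in Python (just the arithmetic, not an implementation of the filters themselves):

    # Phase counts for polyphase sample rate conversion.
    from math import gcd

    def phases(fs_in, fs_out):
        # After reducing fs_out/fs_in by the gcd, the "up" factor is the phase count.
        return fs_out // gcd(fs_in, fs_out)

    for fs_in, fs_out in [(96000, 48000), (88200, 44100), (96000, 44100), (88200, 48000)]:
        print(f"{fs_in} -> {fs_out}: {phases(fs_in, fs_out)} phase(s)")

    # Output:
    # 96000 -> 48000: 1 phase(s)
    # 88200 -> 44100: 1 phase(s)
    # 96000 -> 44100: 147 phase(s)
    # 88200 -> 48000: 80 phase(s)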

Frequency doubling is not related to compute power. The most important reason for it is clocking. All equipment, consumer or pro, has to process both new and old formats using the same system clocks in its hardware and software operations, and system clocks can usually be divided or multiplied by factors of 2 fairly easily. Second, pro equipment needs to maintain compatibility with previous audio and video recording frequencies in order to deal with archival as well as new material. Ease of sample rate conversion is a factor but probably a distant 3rd in comparison to the first two.

'Throwing out every other sample' is something even the worst of software writers knows better than to do. You should listen to this kind of aliasing sometime in order to know why it's wrong.
Thanks for the tutorial (I wasn't necessarily being literal about the 'throwing away' part). What I'm still curious about is this: how did we end up with two standards as close together as 44.1kHz and 48kHz? I understand the reason for picking a frequency in this area, just not how we got to both of these...
In the not-so-distant past, 44.1kHz was the consumer standard and 48kHz the professional standard. High sampling rates were something a few engineers experimented with, but they were nothing like an accepted standard as recently as ~10 years ago.
Thanks for taking a stab, Flex, but the reply only reiterates the question... anybody else have an insight?
Zaikesman, this may be closer to what you are looking for.

44.1kHz came about because of its relationship to NTSC and PAL TV line rates. Early digital audio was recorded using versions of video recorders, and the audio frequency had to be related to the horizontal video frequency so that both video and audio frequencies could be derived from the same master clock. 44.1kHz was the original PCM-F1 format, which I believe was adopted first in Japan and ultimately became the compact disc standard.

The use of 48kHz is based on its compatibility with TV and movie frame rates (50Hz, 60Hz) and with the 32kHz PCM rate used for broadcast. 48kHz has simple ratio relationships with all of the above and therefore makes it easier to set up time code for studio sync. Looking at an article on 48kHz, the author mentions your original idea of sample rate conversion as a primary reason for the concern with integer frequency relationships in the early days of audio.
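The arithmetic behind both numbers, for anyone who wants to check it (a back-of-the-envelope sketch; the 44.1kHz lines are the commonly cited PCM video adaptor derivation):

    # Where the two rates come from, numerically.

    # 44.1kHz: 3 samples stored per usable video line on the PCM-to-videotape adaptors.
    ntsc = 60 * 245 * 3      # 60 fields/s * 245 usable lines/field * 3 samples/line
    pal = 50 * 294 * 3       # 50 fields/s * 294 usable lines/field * 3 samples/line
    print(ntsc, pal)         # 44100 44100 -- the same rate falls out of both TV systems

    # 48kHz: simple relationships to TV/film rates and the 32kHz broadcast rate.
    print(48000 / 50, 48000 / 60)   # 960.0, 800.0 samples per frame -- integer counts
    print(48000 / 32000)            # 1.5 -- a simple 3:2 ratio against 32kHz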