Will An Attenuator Help Reduce This Hiss?


i've got a little bit of tweeter 'HASSSHHHH' that i'm looking to reduce. as soon as i turn on the amp and preamp it becomes audible. placing the preamp into standby, mute, or on an unused input does nothing to change the level. changing the volume of the preamp does nothing to change the level. the only thing i've found that changes it is fully powering down the preamp - this eliminates it entirely.

i've been advised to insert a line level attenuator (endler, goldenjack, etc) between the amp and preamp at the amp inputs to bleed off some gain and reduce this noise.

will an attenuator reduce the 'HASSSHHHH' sound, given that reducing the level, or even muting the preamp, doesn't change it at all?

thanks for any inputs,
Scott
Hi Al, one of the things that causes confusion is the difference between PVCs and TVCs, and the various configurations of each and their interactions with the preamp, cables and amplifier.

The result is that there is no single solution that fits every situation; get the match wrong and you can end up with loss of bass, brightness, loss of dynamics, etc. One has to be careful!
If it were me, I'd shy away from buying an attenuator until going over the entire system architecture from top to bottom and, as thoroughly as possible, giving it the full measure of attention to impedance and gain matching that (from what you've described) was evidently never successfully done to start with. Review, and replace, any components whose impedance or gain relationships are clearly mismatched, so that you can get back to square one and won't be eternally confronted with buying expensive bandaids to put on something that needs surgery, knowwhaddamean, Verne?? I really think this is the best way forward for anyone in this situation, and you'll get better performance for it, to boot!
A two-resistor attenuator like the one Al describes seems simple and elegant, but there is another serious side-effect besides the issue of excessively loading the source -- it's the fact that raising the source impedance driving the amplifier will introduce much more thermal noise (a.k.a. "Johnson noise") at the same time you're trying to attenuate the noise coming from the preamp.

Consider typical values of a preamp with a 150-ohm output impedance, an amp with a 50k input impedance, and a 12-dB attenuator. Even with a perfect, noiseless source and preamp, the absolute lowest noise level that can occur at the amplifier input is about -133 dBV (the Johnson noise of a 150-ohm source in a 20 kHz bandwidth). This sounds like an incredibly low number, but the noise performance of a well-designed conventional solid-state amp (and even a few exceptional tube units!) can get within a handful of dB of this level (equivalent input noise, or EIN).
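If anyone wants to check that figure, here's a rough sketch of the Johnson-noise arithmetic (the temperature and bandwidth are assumptions on my part, chosen as ordinary room-temperature, 20 kHz values):

```python
import math

def johnson_noise_dbv(r_ohms, bandwidth_hz=20e3, temp_k=295.0):
    """Thermal (Johnson) noise of a resistance, in dBV: v = sqrt(4*k*T*R*B)."""
    k = 1.380649e-23  # Boltzmann constant, J/K
    v_rms = math.sqrt(4 * k * temp_k * r_ohms * bandwidth_hz)
    return 20 * math.log10(v_rms)

print(johnson_noise_dbv(150))  # ~ -133 dBV: the floor set by a 150-ohm source alone
```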

So let's add the attenuator, and keep the 50k loading on the preamp the same . . . That would be about a 36k series resistor and a 16k shunt, making the amplifier's source impedance about 9.7k. The resulting input noise is now about -115 dBV . . . a whopping 18dB worse. And even though it's still a 50k input impedance, the preamp must still put out four times the output current, because it has to put out four times the voltage it did without the attenuator.

Scaling down the impedances seems like a good idea at first, but it doesn't really help as much as one might think. If we decide that the preamp's okay with a 12k load, we can change the values to 9.1k and 3.3k, and the amp then sees about 2.5k . . . input noise is now about -121 dBV. 6dB better is certainly nothing to sneeze at, but don't forget the preamp must now deliver sixteen times the current (4x from lower loading, 4x from increased output voltage) . . . which should NOT be taken lightly from a distortion standpoint. And the Johnson noise is still 12dB higher than with no attenuator.
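To show where the numbers in both versions come from, here's a rough sketch of the divider math; exactly which resistances get lumped together, plus the temperature and bandwidth, are my assumptions, but the results land within a dB or so of the figures above:

```python
import math

K_BOLTZ = 1.380649e-23  # J/K

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

def johnson_dbv(r_ohms, bandwidth_hz=20e3, temp_k=295.0):
    return 20 * math.log10(math.sqrt(4 * K_BOLTZ * temp_k * r_ohms * bandwidth_hz))

def l_pad(r_series, r_shunt, r_preamp_out=150.0, r_amp_in=50e3):
    """Series/shunt attenuator between preamp and amp: loading on the preamp,
    attenuation, and the thermal-noise floor at the amp's input node."""
    load_on_preamp = r_series + parallel(r_shunt, r_amp_in)
    attenuation_db = 20 * math.log10(parallel(r_shunt, r_amp_in) / load_on_preamp)
    r_node = parallel(r_series + r_preamp_out, r_shunt, r_amp_in)  # seen at the amp input
    return load_on_preamp, attenuation_db, r_node, johnson_dbv(r_node)

print(l_pad(36e3, 16e3))    # ~50k load, ~-12 dB, ~9k at the amp input, ~-115 dBV
print(l_pad(9.1e3, 3.3e3))  # ~12k load, ~-12 dB, ~2.3k at the amp input, ~-121 dBV
```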

All of these figures assume the electronics are completely noiseless . . . in the real world, noise performance will (definitely, NOT probably) be worse. So the only time you get a net improvement in noise performance is if the driving source is itself both pretty noisy (completely overwhelming the Johnson noise), and at the same time doesn't mind being heavily loaded . . . . an example would be a typical 1970s pro broadcast console. But in consumer/high-end . . . It'll probably make things worse all the way around.

Atmasphere does make a good point regarding a transformer volume control . . . In fact, a Jensen JT-10KB-D 4:1 input transformer will solve the above example nicely. 12dB attenuation, 47k-ish input impedance, 250-ohm-ish secondary source impedance, excellent noise figure. They also sound fantastic . . . I've used them in several designs.
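For reference, the basic ratio arithmetic behind a 4:1 transformer used this way (the winding-resistance remark is my assumption about where the rest of the ~250-ohm figure comes from):

```python
import math

turns_ratio = 4.0                               # 4:1 step-down
attenuation_db = 20 * math.log10(turns_ratio)   # ~12 dB of voltage attenuation
impedance_ratio = turns_ratio ** 2              # impedances transform by the square: 16:1
reflected_source = 150.0 / impedance_ratio      # a 150-ohm preamp reflects to ~9 ohms;
                                                # winding resistance presumably makes up
                                                # most of the quoted 250-ohm-ish figure
print(attenuation_db, impedance_ratio, reflected_source)
```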
Hi Kirk,

Thanks for the characteristically knowledgeable and thoughtful analysis.

However, while I recognize that the high source impedance presented by the resistive attenuators may significantly degrade amplifier noise performance in terms of the numbers, assuming it is excellent to start with, is that degradation really going to be audible? For unbalanced inputs, amplifier sensitivity will typically be in the rough ballpark of 0 dBV (i.e., 1 volt or so). So -115 dBV into the amplifier would result in a noise level out of the amplifier that is 115 dB below full power. Let's say that full power corresponds to an SPL at the listening position in the vicinity of 110 dB. In that situation -115 dBV at the amplifier inputs would result in an SPL at the listening position of -5 dB, surely not audible. And that is without A-weighting. And I would expect that overall upstream noise would be considerably higher than that as well.
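Spelling out that arithmetic, with the 110 dB figure being just an assumed round number:

```python
noise_at_amp_input_dbv = -115.0  # from Kirk's attenuator example
amp_sensitivity_dbv = 0.0        # ~1 V in for full output (typical unbalanced input)
full_power_spl_db = 110.0        # assumed SPL at the listening position at full power

noise_below_full_power_db = amp_sensitivity_dbv - noise_at_amp_input_dbv  # 115 dB down
noise_spl_db = full_power_spl_db - noise_below_full_power_db              # -5 dB SPL
print(noise_below_full_power_db, noise_spl_db)
```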

Concerning the distortion effects that may result from the increases in voltage and current that have to be provided by the preamp if a resistive divider is used, yes that is certainly an effect that can occur. But isn't it generally considered to be sonically preferable for the preamp's volume control to be operated at higher points within its range, rather than at lower points, to minimize the sonic effects of the volume control mechanism itself? It seems to me that the overall effect on preamp sonics resulting from inserting a resistive attenuator would reflect a net balance of multiple effects, that in any given case may net out unpredictably for the better or for the worse or without significant difference.

In any event, thanks again for the good inputs, which raise points that wouldn't usually be thought of. Best regards,

-- Al
Let's say that full power corresponds to an SPL at the listening position in the vicinity of 110 dB. In that situation -115 dBV at the amplifier inputs would result in an SPL at the listening position of -5 dB, surely not audible.
It's important to remember that the noise voltage from every noise mechanism from every part of every piece of electronics adds up, in the fashion of the square root of the sum of the squares. For the simplest analysis of a single opamp gain stage, there are seven:
1. Johnson noise from source impedance of non-inverting input
2. Non-inverting input's input noise voltage
3. Non-inverting input's input noise current, times its source impedance
4, 5, 6. Same as 1, 2, & 3 for the inverting input
7. Output "build-out" resistor.
The noise voltages from 1 thru 6 are multiplied by the circuit's noise gain (usually simplified to be the circuit's closed-loop voltage gain), and then added together (square-root-of-the-sum-of-the-squares). Any one of these sources by themselves is a pretty small number, but every single one of them (from every stage) contributes to the final result.
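Here's a sketch of that bookkeeping; every value below (noise densities, impedances, gain) is purely illustrative, not taken from any particular part:

```python
import math

K_BOLTZ = 1.380649e-23  # J/K
BW = 20e3               # bandwidth, Hz

def johnson_vrms(r_ohms, temp_k=295.0):
    return math.sqrt(4 * K_BOLTZ * temp_k * r_ohms * BW)

def rss(*voltages):
    """Uncorrelated noise voltages add as the square root of the sum of the squares."""
    return math.sqrt(sum(v * v for v in voltages))

# Hypothetical single-opamp gain stage, following the seven-item list above
r_source_ni = 1e3              # source impedance at the non-inverting input
r_source_inv = 2e3             # effective source impedance at the inverting input
r_buildout = 100.0             # output "build-out" resistor
en = 5e-9 * math.sqrt(BW)      # input noise voltage, 5 nV/rtHz
i_n = 1e-12 * math.sqrt(BW)    # input noise current, 1 pA/rtHz
noise_gain = 4.0               # simplified to the closed-loop voltage gain

input_referred = rss(
    johnson_vrms(r_source_ni),   # 1. Johnson noise of the non-inverting source impedance
    en,                          # 2. input noise voltage (non-inverting)
    i_n * r_source_ni,           # 3. input noise current x its source impedance
    johnson_vrms(r_source_inv),  # 4, 5, 6. the same three mechanisms, inverting side
    en,
    i_n * r_source_inv,
)
output_noise = rss(input_referred * noise_gain,  # items 1-6 get multiplied by the noise gain
                   johnson_vrms(r_buildout))     # 7. build-out resistor adds at the output
print(round(20 * math.log10(output_noise), 1), "dBV at the stage output")
```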

In the real world, optimising noise performance usually boils down to choosing the input device(s) and their operating parameters so that their voltage noise/current noise characteristics fit the input source impedance, then going through everything else and shaving it down a few dB at a time . . . and when it's done properly, it's the noise of the preceding device or transducer that dominates.

So with my previous attenuator example, it's not the absolute number that matters . . . it's the fact that we've taken the previously insignificant noise mechanism of amplifier source impedance and turned it into a huge one. If we use the common expression of Noise Figure (NF) . . . (the difference between an ideal, noiseless amplifier and the real one for a given source impedance), sticking the attenuator in the back can change a good amp's NF (for a 150-ohm source) from, say, 6dB to 24dB! I think in the majority of cases this will be instantly noticeable in a quiet room with no source playing and one's ear near the speaker. But at the very least it seems awfully ham-handed to instantly nullify all the hard engineering work it takes to build a low-noise power amplifier.
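For what it's worth, the size of that jump tracks the rise in the thermal floor; a quick check using the earlier figures (the exact NF numbers obviously depend on the particular amp, so take the 6-to-24 example as illustrative):

```python
import math

# Raising the amp's source impedance from 150 ohms to ~9.7k lifts the Johnson-noise
# floor by 10*log10(9700/150) ~= 18 dB, which is the order of the NF degradation above.
print(10 * math.log10(9.7e3 / 150.0))
```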
But isn't it generally considered to be sonically preferable for the preamp's volume control to be operated at higher points within its range, rather than at lower points, to minimize the sonic effects of the volume control mechanism itself?
In general, I would say no, and definitely not from a noise perspective. The possible exception is if the volume control is operated in such a low range that channel balance and wiper contact resistance become an issue. But in a well-designed conventional (input-pot followed by active stage) line preamp, the Johnson noise from the volume control is the dominant noise source. And its output impedance increases as the volume is turned up, until it reaches its maximum value at the -6dB setting (plus whatever electronic gain follows).

Also, keep in mind that all the noise we've discussed is "post-fader" . . . that is, it's unattenuated by the volume control. So from a noise standpoint, the best way to reduce a conventional preamp's gain by passive, resistive means (assuming 12dB, and keeping the input/output impedances the same) is to reduce the value of the volume pot to one-quarter of what it was, and insert a series resistor at the input to bring the impedance back up. Now, all of the series resistor's noise is attenuated along with the signal, and the active gain stage sees a lower source impedance to boot. But if it's the active stage that's noisy . . . This of course won't help.
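A rough sketch of that rearrangement, using an assumed 50k pot for illustration (cut it to 12.5k and add a 37.5k series resistor: the input impedance stays 50k, there's an extra 12 dB of passive loss ahead of the pot, the series resistor's noise is divided down along with the signal, and the worst-case source impedance at the wiper drops a bit as well):

```python
import math

K_BOLTZ = 1.380649e-23  # J/K

def parallel(a, b):
    return a * b / (a + b)

def johnson_dbv(r_ohms, bandwidth_hz=20e3, temp_k=295.0):
    return 20 * math.log10(math.sqrt(4 * K_BOLTZ * temp_k * r_ohms * bandwidth_hz))

def wiper_source_z(r_series, r_pot, a):
    """Source impedance at the pot's wiper (fed from a low-impedance source):
    (series resistor + top section) in parallel with the bottom section.
    'a' is the wiper position, 0 = full mute, 1 = wide open."""
    return parallel(r_series + (1 - a) * r_pot, a * r_pot)

# Assumed example: original 50k pot vs. 37.5k series resistor + 12.5k pot
for label, r_series, r_pot in (("50k pot, no series R", 0.0, 50e3),
                               ("37.5k R + 12.5k pot ", 37.5e3, 12.5e3)):
    z = max(wiper_source_z(r_series, r_pot, a / 100) for a in range(1, 101))
    print(f"{label}: worst-case wiper impedance ~{z/1e3:.1f}k, "
          f"Johnson noise ~{johnson_dbv(z):.0f} dBV")
```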