Hey, audio professional… yes, you… picture this scenario!

You’ve been working quietly at your boss’s studio. You know, the daily drudge: setting up microphones, assembling drum kits, getting the vocal mic ready for the latest singer dreaming of stardom. Hey, you even pull double duty as the “lunch grunt.” Still, by watching, imitating, and learning, you’ve managed to cobble together a good mix of the recorded instruments. Nice and clear, not clouded and muddy. You can hear each instrument distinctly. Good for you. Now, how exactly is this done? Or more to the point, how do you do it? The mechanics, the math… you know, the step-by-step methods? If you can’t answer, don’t be embarrassed. Read on!

Frequency Fundamentals

Now, let’s get back to our scenario. Your boss calls you in and announces that the studio has picked up a major artist. Let’s call her “Jill.” Your boss’s best friend in Nashville referred her to your quiet town, where she can cut her latest opus in peace, away from the shutterbugs. Your boss’s friend has faith in him, and so everyone is on board: the label, the band, and our starlet “Jill.”

Naturally, your boss expects your “A” game. No problem. Jill’s musical director, who also doubles as her guitar player, flies in a couple of weeks ahead of schedule to handle pre-production duties with you. You are handed lists of songs, instruments to be recorded for each song, possible additional instruments for “sweetening,” and other details concerning the placement of sounds and instruments in the sonic field. You discuss the best way to blend Jill’s voice with a clean electric guitar. You discuss her foray into the world of heavy metal: she wants to cut a new song with a dense, drop-tuned, heavily distorted guitar sound. And not just one layer… she wants three of those guitar sounds. Guitar man fills you in on the type of bass sound they want. Then come the drums. They have a very specific sound in mind, one that will work with those death-drone guitars as well as with the opposite extreme: Jill’s tender piano duet with her cousin, the contralto.


Alright, these are demanding professionals. They know what they want musically. Now let’s go back to the original question. How exactly is this done? Or more to the point, how do you do it? The mechanics, the math…you know, the step by step methods?

How do you process that guitar so that it doesn’t interfere with Jill’s articulation? That distorted guitar takes up a lot of space. Jill’s voice also takes up a lot of space. How do you strike sonic peace between two opposing sound sources fighting for dominion? And that’s only two instruments. How do you blend in the bass player who uses that nice heavy “marble” sound? Balance all of this with a ten-piece drum kit? Don’t forget the miscellaneous percussion. Oh, and don’t forget to blend in all of the keyboards and pianos. Piano? Lots of sonic real estate required there!

Well? How do you pull it off? How do you fit everything in AND make it sound clear, AND make it sound natural… oh, AND most important of all, how do you make it sound pleasing to the ear? After all, this music is being recorded for people, right?

If you answered proper EQing: congratulate yourself! You are correct. Alright, what do you EQ? Which frequencies? Which Hz do you cut, boost, or alter to correct that overly booming guitar? How do you EQ Jill’s voice to bring out clarity without sibilance? Which frequencies need adjusting? See a pattern developing?
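To make that cut/boost language concrete: EQ gain is expressed in decibels, and every decibel change corresponds to a linear amplitude factor of 10^(dB/20). Here is a minimal Python sketch (the function name `db_to_amplitude` is my own, not a standard API) showing what a “3 dB cut” actually does to a signal’s level:

```python
def db_to_amplitude(db: float) -> float:
    """Convert a gain change in decibels to a linear amplitude factor."""
    return 10 ** (db / 20)

# A 3 dB cut scales the amplitude to roughly 71% of the original;
# a 6 dB boost roughly doubles it; 0 dB leaves it untouched.
print(round(db_to_amplitude(-3), 2))  # 0.71
print(round(db_to_amplitude(6), 2))   # 2.0
print(db_to_amplitude(0))             # 1.0
```

This is why small-looking dB moves matter so much in a mix: the scale is logarithmic, so a few dB at the right frequency changes the balance noticeably.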


As an audio recording engineer/producer, it isn’t enough just to know proper recording levels, or signal-to-noise ratios, or which microphone is best for vocals as opposed to violin or lap steel. You need to know and recognize the exact frequencies of all of the sounds you record and mix. Musicians know frequencies in the form of pitches. You need that knowledge too!
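The bridge between the musician’s pitches and the engineer’s Hz is simple math: in twelve-tone equal temperament, each semitone multiplies frequency by 2^(1/12), anchored at A4 = 440 Hz. A quick sketch, using MIDI note numbers (69 = A4) as the pitch scale:

```python
def midi_to_hz(midi_note: int, a4_hz: float = 440.0) -> float:
    """Equal-temperament pitch-to-frequency: MIDI note 69 is A4 (440 Hz)."""
    return a4_hz * 2 ** ((midi_note - 69) / 12)

print(round(midi_to_hz(69), 2))  # 440.0  -> A4, the tuning reference
print(round(midi_to_hz(60), 2))  # 261.63 -> middle C
print(round(midi_to_hz(40), 2))  # 82.41  -> low E on a standard-tuned guitar
```

Knowing this mapping lets you translate “the low E string is muddy” directly into “look around 82 Hz and its overtones.”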

Where is Jill’s low-midrange trouble spot? Is it 250Hz? 325Hz? Don’t know? Well, how can you reduce that overly wooly sound in her voice? How about that death guitar? Is it really prominent at 700Hz? 1000Hz? Or is it rumbling a mess at 185Hz? If you don’t know, how can you move it out of the way of the main vocal while still keeping it in the mix?
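One way to stop guessing where a sound’s energy sits is to measure it. The sketch below uses the Goertzel algorithm to compare a signal’s power at a few candidate frequencies. The test signal is a hypothetical stand-in for that death guitar (a pure 185 Hz sine); in practice you would feed in samples from your actual recorded track:

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Signal power at (the DFT bin nearest) one target frequency."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest DFT bin index
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Hypothetical "death guitar" stand-in: one second of a 185 Hz sine at 8 kHz.
fs = 8000
signal = [math.sin(2 * math.pi * 185 * t / fs) for t in range(fs)]

candidates = [185, 700, 1000]
powers = {f: goertzel_power(signal, fs, f) for f in candidates}
dominant = max(powers, key=powers.get)
print(dominant)  # 185 -- the energy sits at 185 Hz, not 700 or 1000
```

Goertzel is handy here because it probes just the frequencies you care about; a full FFT would give you the whole spectrum at once, which is what most analyzer plugins do.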

Audio engineering is more than just “panning,” the physical placement of sound in the mix. It is intimate knowledge of each sound’s frequency content that allows careful manipulation. This isn’t just musicians’ knowledge… this is for YOU!

Of course, with our scenario above, a “throw it against the wall” trial-and-error method could work. But be realistic: that kind of experimentation would waste an enormous amount of time, not to mention budget, which would lead to Jill and company taking their business elsewhere. That would probably cost you your job! This information is not optional. As a pro, you need it.

So, what is frequency? What is a Hertz (Hz)? What is a sound wave? And the big one… how do I learn all of this?
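As a first taste before you step that way: frequency is simply cycles per second, measured in Hertz, and two useful numbers fall straight out of it — the period (seconds per cycle) and, using the speed of sound in air (roughly 343 m/s at room temperature), the physical wavelength. A tiny sketch:

```python
SPEED_OF_SOUND = 343.0  # metres per second in air, at roughly 20 degrees C

def period_s(freq_hz: float) -> float:
    """One full cycle of a freq_hz wave takes 1/freq_hz seconds."""
    return 1.0 / freq_hz

def wavelength_m(freq_hz: float) -> float:
    """Physical length of one cycle travelling through air."""
    return SPEED_OF_SOUND / freq_hz

# A4 (440 Hz): about 2.3 milliseconds per cycle, about 0.78 m long in air.
print(round(period_s(440) * 1000, 2))  # 2.27
print(round(wavelength_m(440), 2))     # 0.78
```

That wavelength figure is one reason low frequencies behave so differently in a room: a 55 Hz bass note is over six metres long.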

Well, step this way, and hopefully you can write a successful ending to our little “scenario.”

This way, please…

Whether you’re a musician or an audio pro, you need to know what’s going on with the frequencies in your sound. We’ll be diving into Fotios’ series on frequency training next week here at EasyEarTraining.com – be sure to come back and check it out. You can subscribe by RSS or sign up for email notifications to make sure you never miss a post.