EQ With Your Ears (and Eyes) Using the First EQ Modeled on Human Hearing
“EQ with your ears, not your eyes.”
When it comes to mixing, you’ve probably heard this advice before. Or you might have seen a digital/plugin EQ advertised as having a particularly “musical” or “analog” flavor, often with a skeuomorphic (hardware-like) GUI to match. Maybe there are even virtual wooden endcheeks!
This seems like obvious advice, but it can be hard to follow: it’s often said that around 90 percent of the information transferred to the human brain is visual, so we all contend with a natural bias toward what we see.
The Problem with EQing with Your Eyes
First, let’s understand why it’s usually not good to “EQ with your eyes.” Many plugins have extremely sophisticated metering and UI representations of what a particular band or filter is doing. Depending on the advice you’ve heard, or even just the look of an EQ, you may be nudged toward being cautious about how much you boost or attenuate and about how narrow or wide you make your bands.
For example, here are settings that produce roughly the same audible result, first as they appear in Pro Tools’ EQ III and then as they appear in Pro Tools’ Pultec EQ emulation:
Whoa, that's way too much boost!
Whew, now I feel more comfortable.
There are likely differences between these EQs beyond the UI, but you can see how EQing “with your ears” versus “with your eyes” might change the decisions you make.
Critical Bands, Critical Knowledge
Closing your eyes and turning the knobs might improve things, but we still get most of our information from our eyes, so wouldn’t it be nice if an EQ had a UI and DSP based on how our ears actually work?
Wait, how do our ears work?
Enter critical bands. Let legendary Prince producer and psychoacoustics researcher Susan Rogers explain the concept:
Critical Bands and Auditory Filters
Essentially, your ears don’t work like the EQ plots and FFT displays we’re used to looking at; those are based on analog circuits and mathematics. Instead, think of your ears as a bank of filters. Sounds that land within the same filter band can clash and make everything sound muddy. This is called masking.
One of the main jobs of EQ at the mix stage is to avoid this masking, but it’s hard to do, and our tools aren’t helping. Even if you can find where masking is occurring, it’s hard to tell how wide or deep the filter should be, because what you see in the UI doesn’t match what your ears hear.
Auditory filter bank model of the human ear
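To make the masking idea concrete, here is a minimal Python sketch. It is not EQuivocate’s DSP: it uses the Glasberg-Moore ERB approximation (one common model of auditory filter width), and the function names `erb_rate` and `may_mask` are just illustrative. The point is that “in the same critical band” is something you can estimate with a formula rather than eyeball on an FFT:

```python
import math

def erb_rate(f_hz: float) -> float:
    """Glasberg-Moore ERB-rate scale: maps a frequency in Hz onto a
    perceptual axis where a distance of 1.0 is roughly one auditory filter."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

def may_mask(f1_hz: float, f2_hz: float) -> bool:
    """Rough check: components less than about one ERB apart fall into the
    same auditory filter and are likely to mask or blur each other."""
    return abs(erb_rate(f1_hz) - erb_rate(f2_hz)) < 1.0

# A 200 Hz tone and a 230 Hz tone share a critical band and tend to clash,
# while 200 Hz and 400 Hz are several filters apart.
print(may_mask(200.0, 230.0))  # True
print(may_mask(200.0, 400.0))  # False
```

Notice that the comparison happens on a perceptual axis, not in Hz: a 30 Hz gap fills most of a critical band down low but only a small fraction of one up high, which is why thinking purely in Hz (or in what an FFT display shows) can mislead you about what will actually clash.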
How can we make this easier? Don’t EQ “with your ears,” EQ for your ears. For mixing, you need an EQ built with human auditory filters (a.k.a. critical bands) in mind. This is where EQuivocate comes in.
How to EQ Using Critical Bands
The auditory filter bank in EQuivocate
EQuivocate (along with Elevate, Punctuate, and Saturate) is designed to boost and cut using these critical bands. By default, the filter frequencies and shapes are based on the Mel scale, a perceptual pitch scale from the same body of early hearing research as Harvey Fletcher’s critical bands and the Fletcher-Munson curves. Rather than being spaced according to the slopes that electrical circuits happen to produce (like vintage shelving and parametric EQs), or by octaves (like most graphic EQs), EQuivocate’s bands are based on human perception: how the ear and the brain receive and perceive sound.
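As a rough illustration of what “spaced on the Mel scale” means, here is a short Python sketch. It uses the common 2595 * log10(1 + f/700) mel formula; the 26-band count and the 20 Hz to 20 kHz range are assumptions for the example, not EQuivocate’s published layout:

```python
import math

def hz_to_mel(f_hz: float) -> float:
    """Common mel-scale formula: equal steps in mel are roughly equal
    steps in perceived pitch."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m: float) -> float:
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_centers(f_low: float, f_high: float, n_bands: int) -> list:
    """Band centers spaced evenly in mel, i.e. evenly in perceived pitch,
    rather than evenly in Hz or in octaves."""
    lo, hi = hz_to_mel(f_low), hz_to_mel(f_high)
    step = (hi - lo) / (n_bands - 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands)]

centers = mel_band_centers(20.0, 20000.0, 26)
print([round(c) for c in centers])
# Spacing grows from roughly 100 Hz between the lowest centers to a few
# thousand Hz between the highest ones, mirroring the ear's resolution.
```

Evenly spaced points on the mel axis land close together in the lows and far apart in the highs, which is the non-uniform resolution your auditory filters actually have; octave-spaced or circuit-derived bands don’t line up with that grid.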
In practice, this means you can EQ with your eyes and your ears together, confident that nothing is going to muddy up your mix. Because the filters are spaced and shaped to match your hearing, EQuivocate gives you the right resolution at every frequency: boosts and cuts are never too narrow or too sharp, and they always sound natural.
Get EQuivocate free as a Pro Tools Inner Circle member
Dan Gillespie
Dan Gillespie founded Newfangled Audio to develop previously unheard audio technologies. Before that he led DSP and plug-in development at Eventide and has published papers, received patents, and released products based on his many audio technology innovations.