Mixing Music Basics For Composers
by Tim Juliano
I come from a family of engineers. My father is an electrical engineer, my older brother is a software engineer and music engineer, and my younger brother is a mechanical engineer. All three of them are also musicians. Even though I didn't follow suit and become an engineer, their presence and skill sets really helped me understand the gaps in knowledge (or lack of care) I generally find when I talk to composers about mixing. There are exceptions: occasionally you find a rare composer who is extremely talented at mixing as well as composing, and of course there are polar opposites who are just all-around terrible at both. Setting those two extremes aside, most composers fall into three categories on the mixing spectrum.
On the far left side we have producers/engineers whose music has amazing production value but almost no compositional value or depth. On the far right side we have composers who can compose really well but are lacking in production value. Finally, we have those down the middle who are adequate at both.
This article is mainly for the composer who has little to no knowledge of mixing music. I'm focusing on the novice mixing composer for two reasons:
1) Producer/mixers and composer/mixers will most likely already know what I'm about to share.
2) Producer/mixers and composer/mixers have the upper hand, because as of today, production usually wins out over composition, whether the music itself is good, bad, or indifferent.
The reason production wins over composition is that there's an industry-standard expectation for everything to sound amazing, regardless of whether it's a good piece of music or not. Just listen to the radio: there are plenty of forgettable songs with amazing production value.
I'm not trying to diminish production value, because it's one hundred percent important. I'm also not trying to turn a purely composer-minded individual into an amazing mixing engineer. If you don't have the passion to learn the ins and outs of audio engineering, you really won't learn it. It will just be a drag on your life!
What I do hope to do is point out the major missteps that I hear and give solutions, so that your demos will get you gigs. Essentially, if you are an incredible composer and can pull off a decent production, you'll have a better chance of getting a job composing. Then you can budget for an engineer to really help your music shine, and you'll also be able to better communicate your wishes to the engineer during the final mixdown. Okay, let's get to it.
The very first issue I come across in poor mixes from composers is the lack of panning. On an actual mixing console there is a knob labeled "Pan," usually above the volume fader. "Pan" is short for panoramic potentiometer (pan pot). The pan pot directs a mono signal left, center, or right across a 180-degree sound field.
Basically, if the pan pot is stationed down the center at 90 degrees, the audio comes out equally between the left and right channels. At 0 degrees the signal comes entirely out of the left channel; at 180 degrees it comes entirely out of the right. Every incremental degree in between shifts the balance between the two sides.
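To make the geometry concrete, here is a small sketch (not from the article) of one common pan law, the constant-power law, using the 0–180 degree field described above. The function name and the exact curve are illustrative choices; real consoles and DAWs vary in the law they use.

```python
import numpy as np

def pan_gains(angle_deg):
    """Constant-power pan law over the 180-degree field described above:
    0 = hard left, 90 = dead center, 180 = hard right.
    (One common law; consoles differ in the exact curve.)"""
    theta = np.radians(angle_deg) / 2.0   # map 0..180 degrees to 0..90
    return np.cos(theta), np.sin(theta)   # (left gain, right gain)

left, right = pan_gains(0)     # hard left: all signal in the left channel
left, right = pan_gains(90)    # center: equal gain (~0.707) in both channels
```

The point of the cosine/sine pair is that total acoustic power stays constant as you sweep, so an instrument doesn't get louder or quieter just because you moved it.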
The pan is an underutilized knob, but it is important for two main reasons:
1) Panning a signal/instrument can separate it from other instruments, allowing it to be heard better.
2) Panning a signal/instrument helps recreate how the human ear hears audio in the real world, which contributes to a more natural and appealing mix.
In the real world, when you're enjoying music, instrumentalists do not line up one behind the other in a straight row. Why, then, would you mix them like that in your compositions? Instruments are usually arranged by their tone and volume: essentially, the louder, more penetrating instruments go behind the ones that aren't. Below is a diagram of a general orchestral layout. The direction or degree you set your instrument's pan to has already been decided; really, your only job is to finesse the degree to taste.
Every genre has a pretty standard expected placement of instruments. In pop/rock music, for example, guitars are usually stage left, drums upstage center, bass stage right, and vocals downstage center.
When you're panning instruments within a mix, it's best to pan them from the audience's perspective. Imagine you are in a front-row center seat looking at the stage: where are the instruments in relation to you? When you ask yourself that question, you'll be amazed at the spatiality and depth you can create with a simple pan of an instrument.
One thing to keep in mind when you're panning virtual instruments is whether your sample library has already panned the instruments for you. East West sample libraries, for instance, come with their instruments pre-panned, usually according to each instrument's standard placement on a stage. Plenty of other libraries do this as well, so it's best to check whether the pan is already set or whether you need to change it.
The next thing on my list is EQ. Many people do not realize that equalizing frequencies by raising or lowering their amplitudes cannot make a poorly recorded piece of audio sound good. If you didn't have a knowledgeable person using good gear to record the instrument in the first place, an EQ cannot make your audio sound better.
My older brother, the software engineer, has a term for poorly thought-out processes: garbage in, garbage out. In his case he's referring to poorly written computer code that won't be useful or work even if it has a nice graphical user interface. The same applies to recording anything. If your audio sounds like crap after it's been recorded, adding EQ won't help it. In fact, it can make it worse.
So the first step to EQ'ing properly is having a decently recorded piece of audio to begin with. This is easily achieved by one person recording one live instrument. (I recommend that larger groups needing to be recorded simultaneously be handled by a professional studio. You alone will not suffice: you'll be stressed, and you'll be really unpopular with the people you're recording.)
These days there are many inexpensive, great-sounding pieces of equipment that help you achieve really good live recordings. Most audio professionals and enthusiasts already have computers and audio interfaces. In my humble opinion, all you need in addition to any decent interface is one mic pre, like a Golden Age Pre-73, and one good mic, like an SM81. This mic pre and microphone are not the only game in town; they're just products I think do a great job for the money. There's also no rule that you have to buy equipment brand new: you can easily find these items used on eBay as well.
Now that we have a good piece of audio recorded, how do I EQ it? I like to take the simple approach: less is definitely more. Specifically, I think cutting or lowering frequencies is much more effective than raising them. First and foremost: cut unnecessary low frequencies.
The human ear hears down to roughly 20 Hz at its lowest; unless you are an enigma, you will not hear below that. However, many instruments and tones produce frequencies and amplitudes down in that bottom range and below it. Consequently you have all this extra noise adding unwanted volume to your mix. Everything then starts to compete for space to be heard, and it just ends up sounding very muddy. There is an argument that you can feel those lowest frequencies and that this adds a dimension to your mix, but most of the time people do not have a bass or subwoofer speaker capable of playing them, so it would just be lost anyway.
What I do is apply a high-pass EQ. A high-pass filter does exactly what the name says: it lets higher frequencies pass while cutting out lower ones. All you have to do is set the filter so everything below roughly 30 Hz is rolled off. If you have a spectrum analyzer (some are built into EQs), you'll be able to see the frequency range of the instrument. Insert one on your channel and you'll see whether you can set the high-pass filter higher than 30 Hz. The instrument may only reach down to, let's say, 80 Hz, and by cutting the frequencies below that you can potentially increase the overall headroom of your mix.
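For readers who like to see the idea in code, here is a minimal sketch (my own illustration, not from the article) of a 30 Hz high-pass filter using SciPy. The cutoff, filter order, and test tones are assumptions chosen to mirror the example in the text: a 10 Hz "rumble" that no speaker needs to reproduce, and a 440 Hz note that should pass through untouched.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100  # sample rate in Hz

# 2nd-order Butterworth high-pass at 30 Hz (a gentle slope; real EQs
# often offer steeper 12-24 dB/octave filters).
sos = butter(2, 30, btype="highpass", fs=fs, output="sos")

t = np.arange(fs) / fs                  # one second of audio
rumble = np.sin(2 * np.pi * 10 * t)     # inaudible sub-30 Hz energy
note = np.sin(2 * np.pi * 440 * t)      # a musical A4

# Zero-phase filtering: the rumble is stripped, the note passes through.
clean = sosfiltfilt(sos, rumble + note)
```

The rumble contributed just as much level as the note before filtering; removing it is exactly the headroom gain the article describes.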
This technique applies to sample libraries as well. I put a high-pass filter on every single one of my tracks, live audio or sample, and adjust accordingly. This significantly increases the clarity of my mixes and gives me more room to raise the volume of the tracks that need it.
I don't recommend raising the frequency gain of instruments, live or sampled, if you are not practiced with EQs. You will absolutely make your mixes sound overly bright or piercing. It takes time to learn when and when not to boost a frequency. Start with cutting: more often than not, cutting the lower frequencies lets the higher frequencies stand out a bit more anyway. With time and practice, you'll start to hear when it's necessary to raise the amplitude of a frequency range.
The next thing I'd like to address is reverb. What many novice audio engineers do not realize is that reverb is essentially a dense series of very, very fast delays. Think of standing at the Grand Canyon and yelling hello: you hear an echo as the sound hits the canyon walls and bounces back at you, repeating itself.
The difference between a delay and a reverb is that with reverb the reflections arrive so close together that you cannot discern them as separate repeats, yet you still get a sense of the space the signal is in.
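The "reverb is many fast delays" idea can be sketched with the simplest echo-generating structure, a feedback comb filter. This toy example is my own illustration, not something from the article; classic algorithmic reverbs (Schroeder-style designs) run several of these in parallel with different delay times so the individual repeats smear into a wash.

```python
import numpy as np

def comb(x, delay, g):
    """Feedback comb filter: every sample is echoed `delay` samples
    later at `g` times its level, that echo is echoed again, and so on.
    A long delay sounds like a discrete echo; many short, overlapping
    delays sound like reverb."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - delay] if n >= delay else 0.0)
    return y

# Feed in a single click: echoes appear at 3, 6, 9, ... samples,
# each half the level of the last (g = 0.5).
impulse = np.zeros(10)
impulse[0] = 1.0
tail = comb(impulse, 3, 0.5)  # [1, 0, 0, 0.5, 0, 0, 0.25, 0, 0, 0.125]
```

Shrink the delay from canyon-sized to room-sized and stack a few of these, and the Grand Canyon echo turns into the sense of space we call reverb.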
My reason for talking about the difference between delay and reverb is that quite often a poor mixer does not realize they are applying the reverb effect artificially. What I mean is that unless you are intentionally trying to create an unnatural sound, a good mix's goal is to recreate a sound as it occurs naturally. When you're applying reverb, you need to think about how the instruments would sound in a live environment.
For example, if you are mixing an orchestral track, do you want it to sound like an orchestra playing in a large concert hall, a scoring stage, an outdoor venue? This will help you determine how much reverb to apply. If you don't have an idea in mind, you may put too much reverb on your mix, and it's like having multiple delays piling up on each other. Consequently this starts to dull the mix and wash out clarity. When applying reverb, think about the space the instrument would be playing in for this composition and adjust for that. If you're not confident adjusting reverbs, it's always better to err on the side of less. Your mixes may sound more intimate and close in perspective with less, but that's far more acceptable than an indiscernible mess.
Okay, the last bit of business is compressors/limiters. I'm not going to go too in-depth into compressors here, as they could fill a whole book in their own right. A compressor, in its simplest definition, is an automatic volume control. Its basic function is to even out dynamics in a recording by making loud amplitudes softer and soft amplitudes louder. It's very easy to screw up audio with compressors, either by distorting it or by changing the tone into something undesirable. For this reason I recommend that novice mixers don't use them. I'm sure plenty of people disagree, but I really think the only thing novice mixers should start out with is a limiter.
A limiter can be thought of as an extreme compressor. It has many uses, but if you're just getting into mixing, the only place you'll want to use one is on the master output of your mixer. What the limiter does is "limit" the level that makes it into the final mixdown. This is necessary because even after you adjust volumes and place high-pass filters on your audio tracks, the summing of all the tracks together can still overload the output and cause the track to distort.
The limiter lets you adjust the overall gain of the mix and caps the peaks that would otherwise clip on the way out of the mixer. It achieves this through its Peak Reduction (also known as threshold) function. The most basic explanation is that the loudest peaks are the first to be quieted as you adjust the Peak Reduction; the further you push it, the more of the signal is affected. Keep in mind that it's necessary to listen and not blindly set the limiter. Use as little limiting as possible, because the harder you drive it, the more the tone of the mix is colored. Adjust it just enough that the mixdown does not distort or "peak."
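Here is a toy sketch of the core idea, written by me for illustration rather than taken from any real limiter plugin. Gain is pulled down instantly whenever a sample would exceed the threshold, then recovers slowly; real limiters add lookahead and much smarter smoothing, which is exactly the coloration the article warns about when you push them hard.

```python
import numpy as np

def peak_limiter(x, threshold, release=0.9995):
    """Simplified peak limiter sketch: instant attack, slow release.
    Output never exceeds the threshold; quiet material passes untouched."""
    gain = 1.0
    y = np.empty_like(x)
    for n, s in enumerate(x):
        needed = threshold / max(abs(s), 1e-12)
        if needed < gain:
            gain = needed                  # duck instantly below the ceiling
        y[n] = s * gain
        gain = min(1.0, gain / release)    # creep back toward unity gain
    return y

# A mix summing to 1.5x full scale is held at the 1.0 ceiling;
# a mix peaking at 0.5 passes through completely unchanged.
```

Notice that the gain change is driven by the loudest peaks first, which is why a lightly set limiter is nearly transparent while a heavily driven one reshapes the whole mix.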
Obviously, I am only touching on a small part of the infinite number of techniques and skills needed to create an amazing mix, but these basics are still important. Your job is to get your music into a presentable form that gets you noticed, or at the very least understood. If people can hear your music and at least see the potential in it, that will weigh heavily in your favor for present and future work.