Sends, clarity and phase distortion

Since I dived into mixing this year, which I'd never done much before, I've seen a lot of tips here and there. Some are good, some are bad, and almost all of them are situational.

My question is about volume shaping instead of sidechaining. I discovered Au5 this year and, even if his music doesn’t always appeal to my non-EDM-trained ears, I love this guy, the way he approaches music, and his very precise production/mixdowns.

So one of his tips is to route everything except the kick and snare to a bus, and to use a volume shaper to draw some very precise sidechaining.
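For anyone who hasn't seen the technique, the core idea can be sketched in a few lines of NumPy. This is only an illustration, not Au5's actual setup: the tempo, the curve shape, and the placeholder tone are all assumptions. A hand-drawn gain envelope re-triggers at every kick position and multiplies the bus, instead of a compressor reacting to a sidechain input.

```python
import numpy as np

SR = 44100                      # sample rate (assumed)
BPM = 128                       # tempo (assumed)
beat = int(SR * 60 / BPM)       # samples per beat

# Two beats of a sustained "pad" on the bus (placeholder sine).
bus = 0.5 * np.sin(2 * np.pi * 220 * np.arange(2 * beat) / SR)

# Hand-drawn duck shape: drop to 0 at the kick, ramp back up over the beat.
duck = np.linspace(0.0, 1.0, beat) ** 2   # the curve shape is a free choice

# Tile the envelope so it re-triggers on every beat, then multiply.
envelope = np.tile(duck, len(bus) // beat + 1)[:len(bus)]
ducked = bus * envelope
```

The point is that the ducking is drawn, not detected: it lands sample-accurately on the grid every time, regardless of what the kick is doing.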

My question is: why doesn’t routing the whole mixdown to a single bus cause issues, like phase cancellation or phase distortion? Why does this work? I’ve started diving into phase, and I learned that I EQ’d way too much; that was the main issue with my mixdowns, which lacked air partly because of this.

And superimposing the exact same sound seems like a bad idea, but I also have a friend who simply doubles his kick track for punch, so I’m not sure about these techniques.


Some of that depends on which DAW you’re using. Some DAWs will automatically compensate for even the tiniest delays and keep everything in phase, while some don’t. And you can usually turn that off, or turn different versions of it on (I’ve used this in the past to create minor phasing). If your DAW gives you trouble with this, you can get a plugin to get rid of it.

As for why submixes work, it’s because you’re just taking tracks that would get mixed together anyway and mixing them beforehand. Remember, everything gets mixed together at the master bus, or you can’t hear it. So if you don’t have any phasing issues on your master bus, why would you have any in a submix that comes before it? This is very common in cases like Au5 using a compression bus, or in other cases you might group your instruments into convenient groups so that you can very quickly change balances in the mix. For example, I always use a drum bus, because if I want to turn my drums up or down it’s a lot easier to move one fader where I want than six, and I don’t have to worry about the balance of my drum kit changing as I move it. People who mix rock would also probably have a guitars bus and a vocal bus, with all guitars and vocals respectively, so that they can quickly and easily push the vocals forward without worrying about the balance of the lead vocals against the backing vocals, or move the guitars around without having to think about the balance of the bass against the rhythm against the lead.

In all cases, these are instruments that are just being added together in groups before being added together again at the master bus, rather than all being added at the master bus. Think of it this way:

(a + b) + c = a + (b + c)

The order you do the math in doesn’t matter when you’re adding signals; you get the same result.
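That associativity claim is easy to check in NumPy (random noise stands in for the actual tracks here): summing two tracks into a submix and then into the master gives the same result, up to floating-point rounding, as summing everything at the master directly.

```python
import numpy as np

rng = np.random.default_rng(0)
kick, bass, synth = (rng.standard_normal(1000) for _ in range(3))

# Sum everything at the "master" in one go...
master_direct = kick + bass + synth

# ...or pre-mix bass and synth into a submix bus first.
submix = bass + synth
master_via_bus = kick + submix

# Identical up to floating-point rounding: addition is associative.
print(np.allclose(master_direct, master_via_bus))  # True
```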


Ah, I wasn’t very clear: I also use busses and submixes, they’re very useful.

I was worried about having all the tracks except the kick and snare sent to a send bus: that way, you basically double the whole sound, and I don’t understand how it works without phase-cancelling everything or causing issues :slight_smile: In my mind that just doesn’t check out

Oh, I see. I can’t speak for everyone else, but I don’t send the original audio to the master bus if I’m doing compression, I just send my submix. So, say all my synths go into a bus and that bus goes to the master: I don’t send the original synth channels to the master. That avoids doubling. The only place I run things in parallel is a bus like a reverb bus with a 100% wet effect on it. I don’t see how it could work any other way: if you have the dry sound running in parallel with the sidechained signal, the sidechained signal just isn’t going to be what it should be. I’ll have to try running that in parallel sometime.
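To see concretely what an accidental parallel copy does, here is a small NumPy experiment (the 1 kHz test tone and numbers are arbitrary). An identical duplicate doesn't cancel anything, it just sums 6 dB louder; the trouble starts when one copy gets delayed, because then some frequencies null out (comb filtering).

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR
dry = np.sin(2 * np.pi * 1000 * t)        # 1 kHz test tone

# Identical parallel copy: amplitudes add, +6 dB, no cancellation.
doubled = dry + dry
print(doubled.max() / dry.max())           # 2.0

# Delay one copy by half a cycle of 1 kHz (~0.5 ms) and the tone cancels.
delay = int(SR / 1000 / 2)                 # samples in half a period
delayed = np.concatenate([np.zeros(delay), dry[:-delay]])
combed = dry + delayed
print(np.abs(combed[delay:]).max())        # near 0: this frequency nulls out
```

In a real mix the delayed case is what plugin latency or sloppy routing produces, which is why the doubled-kick trick works fine as long as both copies stay perfectly aligned.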


That is exactly why I’m asking hahaha, usually I do things the way you just explained

I might ask him directly; it’s just bugging me

Yeah honestly I just bounce and freeze the stem, using Edison to render to audio…from there I just mix and apply compression to taste…I do this because the routing gets weird and complicated, and sometimes the routing maxes out my CPU when I design the sound…

Also, I rarely use stereo imaging anymore because of phase cancellation…the better way is to pan a sound slightly, but not too much…but also, if you have some bass frequencies below 50Hz, those will tend to drown out the other elements, which is why I high-pass certain things depending on the style I’m writing for…

Whenever I compress some melodic midrange synths, my problem is that they’re in the same frequency range, so I have to tune or EQ each element accordingly…sometimes my sound-designy percs also drown out the synth sounds…I’m by no means a pro, but I mix according to the space where the sound sits…that means lowering or boosting certain frequencies to fit the sound in…

If I’m not being lazy, to overcome phase issues I combine the dry and wet signals by EQing each separately, then sending both to a bus and EQing it further…but yeah, stereo imaging would be the cause of most phase issues if you don’t know how to use it…also, mono (centering the sound) is your friend…as for mixing stuff, sometimes you frequency-band-split and sometimes you don’t…it’s all about how you use your layers…and you’re right, it is situational…

So yeah: bounce and freeze your layers, both the dry and wet signals, then mix with EQ, place them properly in the stereo field, EQ, compression, and then more EQ…no shortcuts.

I totally misread the OP, lol. But precise volume automation/envelopes on the sound help give room for the other layers, and you don’t have to worry about latency issues from a VST effect or frequency overlap, because you gated the reverb/delay or had the chorus effect mixed at 20% and EQ’d it…

I.e., what do you get when you add a sine wave to a cosine wave?

Also, if you take a sine wave and invert it, what happens to the amplitude when you add the inversion to the original? It cancels out…

It’s literally applied mathematics, imo
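Both of those questions can be answered in a few lines of NumPy (frequencies and sample counts are arbitrary here): sine plus cosine gives a phase-shifted sine at √2 amplitude, not cancellation, while sine plus its inversion is exact silence.

```python
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
sine = np.sin(2 * np.pi * 5 * t)
cosine = np.cos(2 * np.pi * 5 * t)

# sin + cos: same frequency, 90 degrees apart -> a sine shifted by 45
# degrees with amplitude sqrt(2); neither silence nor double volume.
summed = sine + cosine
print(round(summed.max(), 3))   # 1.414

# sin + inverted sin: 180 degrees apart -> total cancellation.
cancelled = sine + (-sine)
print(np.abs(cancelled).max())  # 0.0
```

Which is the whole point about phase: identical signals sum louder, 90° apart they partially reinforce, 180° apart they vanish.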


My current projects, which are meant to be used in the same vein or to go together (first time I’m using this method, actually), are being sent through 4 channels (possibly 5). So let’s say my drums need “shaping” together: they’re all sent to 1 track, even if I have 5 different kits/samples running. Then bass elements, the same. “Mid” stuff, the same. And I do a lot of piano stuff, so that gets its own.

Again, take this with a grain of salt…after I’ve run those to their own tracks, from there I use return knobs to add in more elements. My thinking, at least for me: I’m basically condensing projects into 4 tracks, plus an additional number, which I’ll explain. All to balance openness against contrast and restriction.

From the 4, or unlikely 5, so far…I then send the main tracks to a new audio track, and any of the returns I’ve added knob-wise to their own track. This way I can turn those up and down independently, and the main lines the same. I also have multiple points to level things that are harsh, loud, soft, etc.

My goal is to limit myself while also giving myself more opportunity to fix things. Additionally, I can add effects to give cohesion to these tracks, in a full-release nature.

At least that’s my end goal. It might not work, but in my 10+ years writing stuff, it’s something I’ve never really attempted either.


It CAN cause phase problems. However, the timings are usually lined up perfectly (as mentioned, in a DAW with PDC), so the phases will usually line up and the signals will just sum to a higher value.
If a delay is introduced to one of the signals, or if the phase of one of the signals is affected, then it will cause phase problems.

Try it out: create a track with a send and a mix bus, then place an all-pass filter on the send.
The all-pass filter will appear to do nothing on its own, but what it actually does is shift the phase of the frequencies above the cutoff by up to 180 degrees, so when the signals are mixed, it will act like a low-pass filter.
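That experiment can also be sketched in NumPy. This is a plain first-order all-pass (the coefficient formula is the standard bilinear-transform one), with an assumed 1 kHz cutoff and two arbitrary test tones: below the cutoff, dry + send nearly doubles; well above it, the ~180° shift makes the sum almost cancel, i.e. a low-pass.

```python
import numpy as np

SR = 44100
fc = 1000.0                                # all-pass 90-degree point (assumed)
tanv = np.tan(np.pi * fc / SR)
c = (tanv - 1) / (tanv + 1)                # first-order all-pass coefficient

def allpass(x):
    # y[n] = c*x[n] + x[n-1] - c*y[n-1]: unity gain, frequency-dependent phase
    y = np.zeros_like(x)
    x1 = y1 = 0.0
    for n, xn in enumerate(x):
        y[n] = c * xn + x1 - c * y1
        x1, y1 = xn, y[n]
    return y

t = np.arange(SR) / SR
low = np.sin(2 * np.pi * 100 * t)          # well below fc: ~0 degree shift
high = np.sin(2 * np.pi * 15000 * t)       # well above fc: ~180 degree shift

# "Send" each tone through the all-pass, then sum with the dry signal.
low_mix = low + allpass(low)               # sums to nearly double
high_mix = high + allpass(high)            # mostly cancels: a low-pass!
print(np.abs(low_mix[SR // 10:]).max())    # ~2.0
print(np.abs(high_mix[SR // 10:]).max())   # close to 0
```

The slicing just skips the filter's start-up transient; in a DAW you'd hear the same thing as a dull, filtered version of the dry track even though the send "does nothing" when soloed.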