Why are synths so difficult to mix (or, rather, the complexity of sound)

Really great video.
It’s not like there’s anything in here that will blow your mind. Most of us know this information already.

It’s a really good meditation, though, to put your mind into a good place creatively for your next piece of work.
He walks through the topic so eloquently, one piece at a time, and then there’s the pure enjoyment of watching science and art converge as he builds a pad sample from scratch, walking us through the considerations and reasoning behind everything he’s doing.



Yeah, I saw that, and I had no idea that synths have a reputation for being difficult to mix. To me, having worked almost entirely (99%) with digitally generated synthesis and the occasional sample, it’s mixing acoustic instruments that’s hard.

You mean to tell me that the instrument is going to generate some resonances that don’t change with pitch or change without relation to it? That the notes are going to change in volume independent of how hard they’re played because the instrument is optimized to produce certain frequencies? And it just might produce sound below the fundamental of the note I’m playing? All of that sounds so completely foreign to me I know I’d struggle to mix any acoustic recording. I mean, I’m sure I could learn, I learned how to mix synths after all, but it seems more like a matter of perspective to me than to just say “synths are more difficult to mix than physical instruments”.

And I know that’s not 100% what the video’s about, but I can’t help but have some issues with the statement the video is built on.

The difference is the context.

Are synths harder to mix with other purely digital sounds?

Do synths present a challenge when you’re trying to mix them with analogue sound?

His axiomatic environment is the second case, not the first, which you can see as he works on adding a digital synth backing pad to a field-sampled piano tone.

Rock bands from the ’70s found this out real fast when their organists switched to Moogs, and those weren’t even digital yet. The Moog was just able to concentrate the signal much more tightly than anyone else’s gear, and it took work to mix the two different worlds.

I’ve run up against this on the rare occasion when I plop guitar playing into the mix. The difficulty of balancing the mix immediately shoots up.


I relate to this so much. Play guitar, play synths, have never been able to get the two to play nice on the same track. I assumed it was something about the playing style/timing of live guitar vs. quantized synth sequences, but it does make sense if it’s actually more of an EQ/frequency density thing. Maybe that’s also why bass guitar fits in a bit better (in my experience) - stronger fundamental, fewer higher harmonics, so easier to isolate and build space in a mix?

I’ve been thinking of ways around it, and one idea I haven’t tested, but have kind of conceptually mapped out in my head to try at some point, is to take the synth line, export it, push it through a speaker, and record it back in.

At that point, the synth track becomes more “acoustic” in the sense that it’s traveled through the air and come back through a mic, and the hope is that this would make it easier to mix with something like a guitar.

The downside is that you’d have to go back through and do cleanup work on the synth to mitigate any artifacts, just like you do for guitar, and you’d need to learn to treat it with a whole panel of compressor, delay, etc., just like you normally do with guitars.

Basically what I’m saying here is that you export the purely digital sound out into the real world and suck it back in via mic, and in so doing move yourself back to the 1970s, where at least the issue was mostly the sheer volume of the concentrated power, rather than a purity lacking any scattering trying to be mixed with things that have messier profiles from being part of the real world.
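If you want to prototype this idea without ever setting up a speaker and mic, a common digital stand-in is to convolve the dry synth with an impulse response (IR) captured from a speaker-plus-room path. This is my own sketch of that substitute technique, not anything from the video; the toy decaying-noise IR here is a placeholder for a real recorded one:

```python
import numpy as np

def reamp_offline(synth, impulse_response):
    """Approximate the speaker -> air -> mic path by convolving the
    dry synth with a captured impulse response, then normalizing so
    the 're-amped' track doesn't clip."""
    wet = np.convolve(synth, impulse_response)[: len(synth)]
    return wet / np.max(np.abs(wet))

# Toy stand-in data: a 440 Hz synth tone and a short decaying-noise IR.
# In practice you'd load an IR recorded through your actual speaker + mic.
sr = 44100
t = np.arange(sr) / sr
synth = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
ir = rng.standard_normal(2048) * np.exp(-np.linspace(0, 8, 2048))

wet = reamp_offline(synth, ir)
```

The convolution smears the clean tone with the IR’s reflections and coloration, which is exactly the “scattering” being described, minus the artifacts a real mic chain would add.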

There are clear downsides to this, don’t get me wrong. You’re openly downgrading the quality of that synth by doing it (though, to be honest, I don’t think the common ear will be able to tell much at all, provided you have reasonable speakers and a mic).

Conversely…now that I think of it…I wonder if there’s a VST out there that adds harmonic resonances and overtones to sounds, because that would do much the same thing without a lot of this hassle.
I mean, pipe it through an overdrive and then clean the overdrive back up on the other end (synth > overdrive > harmonic resonance generator > overdrive cleanup) and you’ve pretty much got a profile that should be well scattered instead of isolated.

To be honest, this is why I use flangers so much.
I don’t even turn them up that high a lot of the time; it’s just on, most often barely on.
But just piping through one with the rate turned all the way down, or nearly, allows a bit more breath in the spectrum and the L/R space.
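For anyone curious what a “barely on” flanger is doing under the hood: it’s a short delay line, swept by a very slow LFO, mixed quietly under the dry signal. With the rate near zero it behaves almost like a static comb filter that gently thins the spectrum. Here’s a minimal mono sketch of that idea (no feedback, no fractional-delay interpolation, hypothetical parameter choices):

```python
import numpy as np

def gentle_flanger(x, sr, rate_hz=0.05, depth_ms=2.0, mix=0.15):
    """Barely-on flanger: sum the dry signal with a quiet copy of
    itself delayed by 0..depth_ms, where the delay is swept by a
    very slow sine LFO (rate_hz)."""
    n = np.arange(len(x))
    # LFO sweeps the delay between 0 and depth_ms milliseconds.
    delay = (depth_ms / 1000 * sr) * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * n / sr))
    idx = np.clip(n - delay.astype(int), 0, None)
    return (1 - mix) * x + mix * x[idx]

sr = 44100
t = np.arange(sr) / sr
pad = np.sin(2 * np.pi * 220 * t) + 0.3 * np.sin(2 * np.pi * 660 * t)
out = gentle_flanger(pad, sr)
```

At low `mix` and near-zero `rate_hz` the effect is subtle comb filtering rather than an obvious jet-plane sweep, which matches the “barely on” use described above; run it on left and right with slightly different delays and you get the L/R widening too.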



I’ve thought about, and heard of, people “amping” and recording synths. In fact, you mentioned ’70s music with synths, and I wonder if anyone worked that way. Anyway, just +1’ing the idea. I am certainly no mix wizard.


I was definitely reminded of you when I was watching him work in his studio.

I have a simple mic and don’t have any speakers worth a damn at the moment, so I can’t give it a shot.

