Besides being my new discovery that I’ve totally fallen in love with: maaan, how do you achieve such a clean mix??
I’m not even hearing reverbs or delays here, but it’s deep and full. And clean.
PS This guy is using Reason!
Reason actually has very good compressors (or effects with compressors on) and limiters.
I switched from FL Studio, and I tried for so fucking long to get a thick, full sound out of the stock compressors and limiters in FL. I managed to get it pretty good, but something still sounded thin about the mix as a whole, and I’d just get nasty distortion and obviously audible compression when trying to compensate with EQ using those plugins. If I wanted a thick, full sound, the mix would just be really quiet.
I decided to give the free trial of Reason 10 a go on Saturday, and man am I so glad I did. Within 4 hours I had a basic 2 min tune that sounded 10x better than anything I’d managed to get out of FL studio’s stock tools.
The whole analogue feel of the software is really good too, and the tools, with their flexibility and intuitive nature, are perfect.
I downloaded Pro Tools to give it a go before Reason, and fuck, it has a horrible interface, not intuitive at all. I haven’t needed to refer to a single manual since using Reason 10. Because of the physical analogue-style interface, you just use your common sense and existing knowledge of how the effects work and you’re all good.
Maybe it’s not the compressors/limiters in FL at fault, but rather me, and perhaps Reason’s tools are opinionated to sound better? Who knows. But from the videos/interviews I’ve watched with people who own actual hardware (dreaming that one day I’ll have that too), they all seem to use the compressors that are known for sounding really good, and then pick the right compressor for the job to get the sound they want under the given circumstances.
So I think, really, just hunt for good tools and plugins that sound good, and use those.
Also, the Stereo Imager in Reason just widens sounds really well. I don’t know why, but it just seems to do it better than the tools FL had to offer.
Also, personally, I don’t like how obviously audible the compression is when the kick drum hits in the track you posted.
Honestly, as someone that’s been producing music for a long while now (a short while in comparison to some members here, actually): a cleaner mix is decided by your instruments, levels, octaves, and direction.
I’ll say this: beyond figuring out the exact notes to play, you can, and should, learn some EQ methods. I think a lot of us learn to play what sounds good. That’s pretty simple. But fitting what already sounded good into the things you’re working on can be difficult, whether it’s stuff that simply becomes too busy or something that shares too many of the same frequencies.
I’ll simply mention that, in some cases, layering sound can be as simple as EQ, such as rolling off the high or low end, or pitting (cutting into) the mid portion. Visually, making a bowl.
I was thinking about this and I wanted to add some more.
As @bbb said, the actual sounds you’re using are important too. Filter out unneeded frequencies, but also know that frequencies of a sound that aren’t necessarily clearly audible in the mix can still contribute to its depth and warmth in the bigger picture.
But you’d be surprised at how many sounds with similar frequencies can be mixed together if you employ a number of mixing strategies to help separate them. I’ve not mastered it, but I listen to a lot of music that’s quite complex in its arrangement, with many different sounds, yet produced quite well and very clear; Shpongle being my main example. Also, mixing together sounds of similar frequencies so they merge into a new sound can result in some interesting little details.
But also, phasing, stereo separation and panning.
As well as creating a horizontal sound stage with left and right, the stereo width of the sounds should also be considered. Drums typically sound good with a closer to mono body, with the high-mid/high range a little widened. Some pads you might want nice and wide, while the main melody is closer to the centre. But that does depend on the rest of the mix and the other sounds.
Too many sounds out too wide, or too many sounds too mono, can sound bad and blur the mix.
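To make the mono-vs-wide idea concrete: stereo width is usually adjusted by splitting a signal into mid (L+R) and side (L−R) components and scaling only the side. A minimal sketch in Python with NumPy (the function name and defaults are mine, not from any DAW):

```python
import numpy as np

def adjust_width(left, right, width=1.0):
    """Mid/side width control: width=0 collapses to mono,
    width=1 leaves the signal unchanged, width>1 widens it."""
    mid = (left + right) / 2.0   # the shared, centred content
    side = (left - right) / 2.0  # the stereo difference
    side = side * width          # scale only the difference
    return mid + side, mid - side
```

Drums would sit near width 0 with maybe the highs widened, pads above 1, as described above. Pushing width too high hollows out the mid, which is exactly the “blurred mix” failure mode.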
So, consider what each sound needs.
There have been occasions where a nice stereo phaser adding some glistening left/right details was exactly what the mix needed for a particular sound. Sometimes a light flanger has done that too. Using an effect to get this can, at the same time, add another dimension of texture to the sound itself. Though I’ve found that not all phasers and flangers have the same stereo phasing effects, even if they all sound similar overall, of course.
@bbb - I totally agree. This mix sounds clean in large part due to the selection of instruments in the composition.
When producing, consider where your instrumentation is sitting in the frequency spectrum and, eventually, the stereo field.
I’m producing a release for an artist who is more of a songwriter, despite him building arrangements in Ableton for those songs. He sends over the sessions and I tend to use his production as a road map but replace all of the elements so that all of them sit in spots frequency-wise to not fight one another in the mix.
I get a lot of “I wanted it to be heavy so I doubled the bass with this other low-end sound”, etc. Rather than explaining, I just get him material that I’ve produced for the same tune, and most of the time, rather than splitting hairs over which we use, it’s “oh, yeah, that’s much better”.
I digress, but the point is: a well-thought-out composition/production is going to be easier to mix.
A good clean mix starts with your sound choices, and how they’re processed.
If you process everything extremely loud while intending to have a clean, crisp, nice mix which will be mastered, it will be ruined from the start.
It’s all about getting nice solid levels from the start. Some people say aim for -18 dB while processing; others say otherwise. Some say always bear in mind your project’s entire spectrum when designing the audio, to ensure all frequency ranges are covered, while others still think it’s based on the gear you have. While all of these may hold some weight in the desired outcome, it all goes back to getting a clean source sound from the start. If you mic up an amp poorly and record tons of feedback, delays, and so on, you’ll be left cleaning things up when you start to process, and that carries the same overlaid effects on to the mix stage. Get it right from the start!
Moreover, recently, newer ways to measure PLR (peak-to-loudness ratio) have been developed to measure audio dynamics and indicate if amplification or reduction may be needed. Tools aren’t so important, imo, in home studios, as the “artist” will lose creative focus and do more engineering versus focusing on creating. Just keep your levels, tracks and gains low, and when you get to the mix stage, turn up the volume on the speakers, not the tracks.
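A rough way to put numbers on that: true PLR per the loudness standards compares peak level to LUFS loudness, but a simple crest factor (peak minus RMS, in dB) already gives a quick feel for how dynamic a track still is. A sketch, not any standard meter:

```python
import numpy as np

def peak_dbfs(x):
    """Sample peak level in dBFS."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    """Average (RMS) level in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def crest_factor_db(x):
    """Peak minus RMS: a crude stand-in for PLR.
    Low values suggest heavy compression/limiting."""
    return peak_dbfs(x) - rms_dbfs(x)

# An uncompressed sine sits around 3 dB; a heavily squashed
# master can fall well below that gap relative to its loudness.
sine = np.sin(2 * np.pi * np.arange(48000) / 100.0)
print(round(crest_factor_db(sine), 2))  # ~3.01
```

The point of the original advice stands: if the raw tracks keep a healthy gap between peak and average level, the mix stage has dynamics left to work with.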
It’s not the size of the tool, it’s the fool who uses it.
Just my 2 stones’ worth.
@TvMcC - I think the tool does make a bigger difference than you say.
I used FL Studio for about a year, watched videos on proper compressor use, and tried countless combinations, for many hours on end, of compressors (both of the stock ones FL offers), EQs, distortion (just a tad to add some other harmonics to thicken the sound), as well as mixing drums into a single channel and tuning compression there to fix the peaks caused by the sum of the drums. And I found it exceedingly difficult to make the entire track sound good, clear, thick and well rounded.
However, when I switched to Reason, the compressors just seemed miles better; they make the sound thicker and fuller. Likewise the ‘Pulveriser’, which has a built-in compressor, distortion and filter - that thing makes any sound thick and full if you dial back the default values. I got sounds to sound thick and full in no time. Reason’s limiter has this soft-clip thing, and you can’t hear the distortion from the compressors or limiters when you ramp things up, unless you get silly with them.
And I was able to get a mix as loud and full as the pro stuff I usually listen to.
Maybe I am the fool, but a difference that stark was completely unexpected for me, as I was already convinced it was my engineering shortcomings resulting in a poor sound. I say poor - I got it pretty good in FL, but you could hear the distortion from the limiters and compression; otherwise it would be very quiet.
So, considering that, in your opinion, do you think Reason’s tools are opinionated to sound better, while FL Studio’s tools are clinical and don’t colour the sound? And that the clinical nature of FL’s tools is the reason why my mixes sounded poor, as the sound is left fully in the control of the user?
I was watching a video on YouTube about how to use compressors a good while back, and I found one specific to FL Studio, and the guy said there that he thinks FL’s compressors aren’t that good, and that he uses a 3rd-party plugin. At the time, I thought perhaps it was just him, since he wasn’t some well-known artist, just some guy like us with a decent SoundCloud following and a Bandcamp page - his mixes did sound very good though.
But now I’m more inclined to agree that the tool does matter as much as the fool who uses it.
I would also argue it is completely possible to turn a shit sound into a good sound. Chaining a series of effects to morph and shape the sound into something completely different is, in an abstract sense, a bit like building a custom granular synthesizer. The input into the effects chain is like the oscillator, and the effects chain is like a series of filters on the synthesizer itself.
Using compression, EQ, distortion, delay, filters, ring mod, phasers, flangers, etc. - all of the effects at your disposal - along with automation, you can craft cool sounds out of a shit snare drum from a 15-year-old sample pack. Just use whatever effect transforms the sound in the way needed for the current step, and all kinds of weird and cool sounds come out of that. I’ve been doing that a lot recently and getting good results, though in Reason the outcome is even better, for the reasons already explained above.
The number one thing that will prevent a clean mix is using too many effects. Digital and analog processors both have their way of smudging the original signal and making it less clear.
It’s been said a million times, but less is more, and pick good sounds. Only use all the techniques you’re wanting to when they’re really necessary.
^^^^^^This so hard.
There are a few exceptions but using effects to turn some garbage into ice cream only happens every so often.
oops meant to quote @mnkvolcno
I agree, less is more. The basic rule of mixing is don’t have two things trying to play the same frequency in the same place at the same time. So sound selection is really where mixing starts, and EQ helps you get rid of what you don’t need in a given sound or mitigate where you do have two things playing the same frequencies. Just boost one sound slightly and take the other sound down in the same region and the louder sound will now come through pretty clearly.
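The boost-one/cut-the-other trick is just a peaking EQ applied in opposite directions at the same frequency. A sketch using the standard RBJ audio-EQ-cookbook peaking biquad (SciPy does the filtering; the function name and defaults are mine):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=1.0):
    """Boost (gain_db > 0) or cut (gain_db < 0) around f0 Hz,
    using the RBJ audio-EQ-cookbook peaking biquad."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return lfilter(b / a[0], a / a[0], x)

# e.g. give the lead +2 dB at 2 kHz and the pad -2 dB in the same spot:
# lead = peaking_eq(lead, 44100, 2000, +2.0)
# pad  = peaking_eq(pad,  44100, 2000, -2.0)
```

Small, opposite moves like the commented example are usually enough for the louder sound to come through, as described above.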
All those more advanced tricks are only there for when you have something that you just can’t fix without them. Take mid/side EQ, for example. In all the mixing and mastering I’ve done over the past few years, I’ve needed mid/side processing, in any fashion, few enough times to count on one hand, and only one of those was in a mix, to fix a bad sample.
For my part, my mixes tend to go into a light distortion, then a filtering and corrective EQ (where I pull out any resonances and run a high-pass filter on most sounds), then a broad tone-shaping EQ, and then into one reverb bus for the whole mix. Some sounds don’t even get the distortion or an extra EQ stage if they don’t need it. Occasionally I’ll add a per-channel effect, but not too often (I use effects in the patches on my plugins as part of my sound design process). Notice there’s no compression anywhere in my chain (except on the drum bus). Even my kick-ducking I accomplish with actual volume automation rather than compression. I’d avoid compression as much as you can, even if you know what you’re doing with it, because you might not think any individual instance is adding too much distortion, but 10 or 12 channels later, it can start to add up.
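For what it’s worth, ducking by volume automation amounts to drawing a gain curve that dips at each kick and ramps back up. A rough sketch (all parameter names and defaults here are made up for illustration):

```python
import numpy as np

def duck_envelope(n_samples, kick_onsets, sr=44100,
                  depth_db=-6.0, attack=0.005, release=0.25):
    """Gain curve that dips by depth_db at each kick onset,
    then ramps back to unity -- automation, not a compressor."""
    gain = np.ones(n_samples)
    dip = 10 ** (depth_db / 20.0)  # linear gain at full duck
    a = int(attack * sr)           # samples to ramp down
    r = int(release * sr)          # samples to recover
    for onset in kick_onsets:
        down_end = min(onset + a, n_samples)
        gain[onset:down_end] = np.linspace(1.0, dip, down_end - onset)
        up_end = min(down_end + r, n_samples)
        gain[down_end:up_end] = np.linspace(dip, 1.0, up_end - down_end)
    return gain

# multiply the bass/pad channel by this envelope, sample for sample
```

Unlike a sidechain compressor, the curve is fully deterministic, so it adds no programme-dependent distortion of its own.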
I only mention my mixing habits because my mixes have been called clean to a fault on more than one occasion, so I figured it might point you in the right direction.
You are correct, but I have found methods of fixing that up. I regularly chain many effects (five on average; many chains go up to 10 or sometimes more) to get interesting sounds. And yes, the problem of blurriness does crop up, but that blurriness is often caused by stereo phasing effects and destructive interference patterns softening the sound up too much. When making complex sounds involving lots of automated effects, at various stages I’ll compress the sound if it needs it, I’ll adjust the stereo image and decrease stereo separation if it’s starting to get a bit wide, and I’ll EQ at different stages as needed. If something’s not sounding right, I’ll track down the cause and adjust, or inject another plugin, to fix it.
I’ll do whatever is necessary to the sound at each stage so that, at each stage, it sounds good. I’ll mess around for 30 mins and find a good combination of effects that makes a detailed sound with many dimensions of movement.
For example, in this track, at 4:05 - that sound was created with 15+ (can’t recall the exact number) chained effects/compressors/EQs.
I’ve been at it all day with mix fatigue, so it may not be perfectly mixed, but it certainly doesn’t sound bad at all:
https://www.reverbnation.com/psyberspace/song/30344131-wip-wubba-lubba-dub
And during that, at different stages, I fixed whatever problems were caused by a previous plugin.
Lots of good tips here, thanks a lot to all of you!
The choice of sounds is definitely a great point. I wonder, though, if there’s a point in what @psyber is writing - I mean the choice of tools used (compressors, EQs, etc).
So another thing would be: let’s assume we have a good choice of sounds. Let’s assume we still need to apply EQs or maybe compressors. Do you hear large differences between stock DAW tools and external plugins or between different tools built into different DAWs?
I’ve always been using mostly stock Ableton EQ and stock Ableton compressors. They do the job. Can you do the same job better with other tools?
I don’t know for sure. I’ve only used FL and Reason properly. But it was black and white between the two for me.
It’s possible that each compressor has a unique sound, and you either like its sound or you don’t. Or you don’t care too much. But if you don’t like how it sounds, and you try to compensate, and then you’re just fighting against it, perhaps that’s where the issue arises.
And so maybe Ableton’s compressor is very good and/or you just like it, hence why there’s no problem for you. I’ve heard good things about its built-in effects, but I’ve not tried them.
I’m not certain but it’s a thought.
Assuming your sounds are sorted and you just want to talk tools: I haven’t used Ableton’s tools, but I’ve heard they’re some of the best stock tools around. If you were going to try some 3rd-party stuff, I’d go to either iZotope Neutron (I use Alloy, which they later upgraded into Neutron) or the FabFilter suite. If anybody is going to have a genuine improvement over the stock tools, it’d be those companies; they just put together the best code for EQ and all your other mixing needs.
Having gone from FL Studio’s Parametric EQ to Alloy, I can now go back to the Parametric EQ and do everything I do in Alloy. BUT (and that’s a big but, see?), I don’t think I could have taught myself how to mix as quickly and easily in FL Studio’s stock plugins as I did in iZotope Alloy. For me, the biggest bonus with Alloy is the workflow, because you can quickly solo a band or give yourself a high-Q band to sweep the spectrum with. My favorite is that you can sweep the spectrum with a solo’d band, then double-click to drop an EQ band right at the frequency and volume your cursor is at. That makes it much faster to make some of the less obvious EQ tweaks, when all you need to do is change the Q of the band and maybe the gain. The UI is also just bigger and nicer by default, so I can see what I’m doing better than in my stock plugins. I can’t say how much of that would apply to Ableton’s plugins.
The tools play a part in the sound being ‘sorted’ in the first place. So I think it does matter a lot.
It goes full circle.
And I think there’d be more companies than those two making a really good compressor.
And to be honest, I never had an issue with most of FL’s other tools. But their phaser had a nasty habit of causing constructive interference, and therefore spikes in amplitude, and I had to compress after the phaser almost universally when it was used on any sound with a decent-sized dominant frequency range. The phaser did have better stereo phasing effects and flexibility than Reason’s stock phaser, but Reason supports VSTs nowadays, so that’s not really an issue. I didn’t like FL’s workflow either, but that’s not really relevant.
@psyber Yes, the tools can make a huge difference to the sounds achieved.
If I sit down to mix at an SSL desk, run everything through Otari tape machines, then send said reels off to a mate who sits in a top-notch room to master said work, our goal is never to play the loudness war! It’s all about working to get a clean, soft, dynamic mix, so it can be mastered at industry-standard levels.
I often mix sessions for broadcast here:
http://oneunionrecording.com/wp/studios/
From my working knowledge of mixing both for records and broadcast (music/television/film): the gear can help get warm sounds, but if the person who created the project has little to no knowledge of how to properly use the tools available to them, the entire project is shit!
Gear means nothing if you don’t really know what you’re looking to achieve.
As I stated in my 1st post, it starts from inception, not from twiddling potentiometers to make things happen. Take Phil Spector’s “wall of sound”, for example: cleanly recorded instruments with distinct sounds, and many of them - just big sounds.
You can learn all the over-compression, limiters, and tools you want to get an over-saturated sound, but you will not have a song that will be talked about in years to come. Keep it simple, and try to mix organic, or rawer.
For a clear example, listen to a typical modern mix and compare it to a classic record. Bet you’ll hear all of the instruments pretty clearly on the classic one, and no distortion, over-compression, etc. …But once again…to each their own.
Really don’t mean to come off as a know-it-all or anything, but gear only works as well as the trained ear using it.
I am certified with Manley compressors, am a Pro Tools certified trainer and a Logic certified trainer, as well as an engineer with other certificates from audio equipment and development companies, and so on…but that means nothing when it comes down to mixing poorly recorded stems that come my way.
Always start from an organic place, so when you get to the mix stage it will be easier…once again, just my 2 stone.
Being a fan of Iglooghost, I think it’s just good design. If we could all pull it off to this level, there’d be no Iglooghost to care about.
EQ and compression are obviously a part of this equation, but I think his choice of samples, placement / arrangement, and no doubt synthesis goes a much longer way than using a few of the tools that we all have in our toolbox. He’s doing all the things, and if you ever get to his level write the rest of us an insider’s guide!
Mixing begins with sound sources and composition. The sooner you get a vision for those things in a piece, the easier it is to realize, whatever type of mix you’re interested in.