Do you think different DAWs sound different?


Yes, indeed :sweat_smile:

I still think that the person changing has something to do with it. The sound isn’t changing, the person twiddling the knob is - but we misidentify the source of the perceived change as something coming from outside, when the change is actually (only) happening in the mind.


Absolutely not. You can even test that yourself by rendering the same (neutral) sound from each DAW and then phase-cancelling them.




A lot of people on Gearslutz swear by the sound of Harrison Mixbus. I downloaded a copy when they did one of their sales and I paid I think 29,- for it, but sadly it has just been taking up space on my hard drive…


My understanding is that Mixbus used to be special when it was the only game in town that could do inter-channel plugin communication to simulate a mixing desk in front of you, where the channels push and pull on each other. Inter-channel plugin communication from VST to VST is still rare, but it’s happening from third parties now, so it doesn’t need to be a DAW feature so much any more. Airwindows Console5 is, I think, their latest, and it really does a good job of light saturation and non-linear summing on busses, where what’s happening on one channel can have an effect on all the others.

Brief primer on inter-channel communication: most console emulators simulate a single channel in isolation, so when you overload the adjacent channel, nothing happens. With inter-channel plugin communication, the overload of one bus can saturate the others as well, etc.
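A toy sketch of that primer, assuming nothing about any real plugin's math (the function names and the `tanh` saturation curve are made up for illustration): the isolated approach saturates each channel alone, while the inter-channel approach saturates the *summed* bus, so one overloaded channel changes what happens to everything else on the bus.

```python
import numpy as np

def isolated_channel_sat(ch):
    """Typical single-channel console emu: each channel is
    saturated alone, blind to its neighbours."""
    return np.tanh(ch)

def interchannel_bus(channels):
    """Toy inter-channel model: saturate the sum, so pushing one
    channel into overload drags the whole bus (and every channel
    on it) further into the non-linearity."""
    bus = np.sum(channels, axis=0)
    return np.tanh(bus)  # non-linear summing: channels interact

quiet = np.full(100, 0.1)  # well-behaved channel
hot   = np.full(100, 2.0)  # overloaded channel

# Isolated: the quiet channel's contribution ignores `hot` entirely.
iso = isolated_channel_sat(quiet) + isolated_channel_sat(hot)

# Shared bus: the hot channel squashes the quiet one's contribution,
# because the summing curve is already near its ceiling.
shared = interchannel_bus([quiet, hot])
```

The point of the sketch is only the interaction: on the shared bus, the quiet channel's marginal contribution shrinks whenever another channel is running hot, which a per-channel emulation can never reproduce.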


All DAWs are identical. It’s all about the processing you apply. Effects and different instruments will sound different, different limiters do different things… Some of that stuff has its own unique perks. But if you’re messing with audio files only, there’s zero difference. Bounce the same audio in each DAW, invert the phase of one bounce, then play it at the same time as a bounce from a different DAW. No sound? The audio files are identical. Repeat this with every DAW to find “that” difference, which doesn’t even exist.
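That null test can be sketched in a few lines, assuming two 16-bit mono PCM WAV renders of the same project from two different DAWs (the filenames and helper functions here are placeholders, not any DAW's API):

```python
import wave
import numpy as np

def read_wav(path):
    """Read a 16-bit mono PCM wav into a float numpy array."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return np.frombuffer(frames, dtype=np.int16).astype(np.float64)

def null_test(path_a, path_b):
    """Polarity-invert one render, sum with the other, and report
    the loudest sample of the residual. 0.0 means a perfect null:
    the two DAWs wrote bit-identical audio."""
    a, b = read_wav(path_a), read_wav(path_b)
    n = min(len(a), len(b))
    residual = a[:n] - b[:n]
    return np.max(np.abs(residual))

# print(null_test("daw_a.wav", "daw_b.wav"))  # 0.0 => identical renders
```

Any residual above zero would be the "DAW sound" people are arguing about; with plain audio playback and no processing, it nulls.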


Slate software does this, I’m pretty sure.


Probably the most forward-looking example is iZotope, with the masking meter from channel to channel and the Visual Mixer on the master outs of a mix, not using it to do better analogue saturation at all. I have no idea what Slate does, but I think there is a lot of unexplored potential in this kind of thing.


Then how come my pink theme makes everything sound squishier?


Yes, this. Whether they sound different or not is entirely irrelevant if you’re working with the DAW that suits your style and workflow.


I found a solution and I think it works great!

So I use ReWire.

At the same time I can use the power of Reaper and have the amazing sound of FL Studio. Nice! :v::kissing_smiling_eyes::space_invader:


Most people do the opposite because they sound the same and it’s easier to mix in Reaper than FL, but I think you know how to have fun in the studio and that’s what counts :panda_face:


That perfectly describes the effect. No two listens are the same… even by the same person listening to the same track on the same equipment. Add to that someone asking “…do you hear the difference?” and the human brain will find something different.

It’s the same reason humans can be tricked by magic… suggestion and misdirection :sunglasses:.


Doh!!! :exploding_head: I’ve done this so many times that I’m slowly (too slowly!) learning to push settings hard to clearly hear the difference before tweaking to taste.

Funny… but I’ve always done a version of this “push hard” when learning to perform a new piece… especially with another performer… I will intentionally exaggerate different aspects of the piece… above and below where it should be. It really helps to zero in on a good clear dynamic range of color, emotion, tempo and of course volume.

It’s also why my tag line and personal Mantra/Motto is…

Reality is one thing… but perception is everything… :sunglasses:


I have a dilemma that’s kind of related to this that someone smarter than me might know about.

I just set up my studio-making stuff on my work PC because my living situation just got a little cramped and I’m having to consolidate temporarily, and things are sounding extremely different through ASIO when I’m looking for low-latency instrument tracking. So when I mix appropriately, my renders are all coming out entirely different than they actually sound unless I flip back to non-ASIO methods of listening in Live. Even FL’s ASIO produces the ‘real’ sound, so the problem is obviously ASIO4ALL. But I still don’t understand why this would happen.

I’ve never had my DAW sound different before, but the difference is pretty extreme. Any ideas? I’ve messed with buffer size and just about every other setting within ASIO4ALL to no avail.

Also, tracking with FL’s ASIO (it works with Live, thanks bros) produces way too much latency to pull off a basic rhythm riff, so that’s not even an option. DAW sounds, how do they work?


I’m just guessing but I would assume that it’s a combination of the algorithm being used to gauge the quality of the sound being produced and the actual hardware components of the pc…
I’m just totally guessing because I really have no fucking clue…


I don’t think so. For example, if you load a wav file into FL Studio and duplicate it into Ableton, there is absolutely no difference in its tonal integrity. Where the night-and-day difference lies is in the methods of processing, I think.

As for VST vs. hardware: abso-fucking-lutely. Sure, Arturia VSTs sound just like their hardware counterparts through meticulous engineering, but like anything synthetic, it cannot replicate the feeling.


Who cares at all? Sound is sound. :smiley::smiley::grin::grin::grin::grin::grin::sunglasses:


Could it be headroom? Maybe FL Studio’s ASIO does its math in 32-bit float and ASIO4ALL only does 24-bit? Then anything over 0 dBFS clips instead of just moving the mantissa, as it would in 32-bit float.
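A quick illustration of that headroom hypothesis (purely a sketch; whether ASIO4ALL actually truncates this way is the guess being made above): in a 32-bit float path, samples over full scale pass through untouched and can be pulled back down later, while a fixed-point-style path hard-clips anything over 0 dBFS on the spot.

```python
import numpy as np

# Full scale (0 dBFS) is 1.0; the middle two samples are "overs".
signal = np.array([0.5, 1.5, -1.2, 0.25])

# 32-bit float path: overs survive, headroom is effectively
# unlimited until the final output stage.
as_float = signal.astype(np.float32)

# Fixed-point-style path: overs are flattened to full scale,
# which is audible distortion you can't undo downstream.
as_clipped = np.clip(signal, -1.0, 1.0)

print(as_float)    # overs preserved
print(as_clipped)  # overs flattened to +/-1.0
```

If the two playback paths really did differ like this, any mix with inter-plugin overs would sound noticeably different through one than the other, which matches the symptom described.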


That’s possible, since the only thing that does it is ASIO4ALL. I wonder if I can capture the weirdness if I record the master internally. To the bat cave

Edit: just as I thought, this can’t be recreated on renders or internal resampling. Outputting the audio to something might work