Speed up your music production by using a workflow

I have written a blog post about my music production workflow. I use it to keep a clear goal in mind when I am making beats, or producing music in general.
The post explains each step of the workflow in detail. It also includes tips for idea generation, mixing, and mastering.

Check it out and leave a comment.

Music production workflow - Speed up your music making!

Do you already have a workflow that you trust? Share it - what does your workflow look like?

All the best


This is pretty cool, man. Thanks for sharing.

I find it interesting how different producers approach beat making, and music in general. Some people (like yourself, from what I gather from this workflow) are really good at quantifying each step and laying it out in a workflow to go through the motions of making a track, constantly giving yourself a “does this sound good?” QA/QC step as you move through it, eventually finishing the composition as prepped tracks before doing the full mixdown and mastering.

I could see how this would be really helpful for beginners to look at, or for those people who are just simply lost in direction.

My first response to this is… why is speeding up your music production important, and how does this workflow really aid in that? I see the initial loop being “do I like this?”, returning in 30-60 minute intervals - and that could go on forever, for some people. In a lot of ways, I think producers think too far ahead in scope, and in a sense that limits them - but again, everyone works differently; that’s why I find this a fascinating approach.

I’ve always considered myself an “exploratory” producer. I like to start off either with drums, or a single instrument… and see where it goes… chaining inspiration in a micromanaged sense. “Ok, I have a synth that is cool. Let’s add some drums. This is neat, I think (this) would go good with this… then experiment” repeat.

Then again, I also break a lot of standard operating procedure in the studio. I like to do my mixdown as I go along, making constant minor adjustments, so in the end I can master extremely fast and with ease… and I find that helps speed up production on the back end.

But that’s my two cents. Thanks for posting :slight_smile:


Hi Nostromer,
thank you for your comment on the post. I am happy that you like it.

Maybe “speed up” is not the right term to use here, because it is not speeding up the process that is important. The important thing, in my opinion, is to finish tracks. When I started using a workflow, it made me very aware of the different steps I needed to take, to end up with a finished track or beat.

Before I started to use a workflow for my music production, I often found myself spending a lot of time on music that I would never finish. This was often because I started focusing on the mix of the beat before the idea was finished, and therefore lost the creative flow that I needed to finish the beat.

It is true that the idea generation intervals can go on forever. But I find that the best music I make is the music that comes naturally. If I find that I am pushing myself to finish an idea, it is often better to leave it and start working on something new. The idea can then be worked on at another time.

But, as you say, everyone works differently. The purpose of this workflow is to show one way of getting from an idea to a finished, mixed, and mastered production. If a person already has a method that works, then… “if it ain’t broke, don’t fix it” :slight_smile:

Again, thank you for your very good comment.

All the best


Great stuff! Really handy to see how other people go about finishing.

Not totally on topic, but with regards to speeding up your workflow, this dude is great - a really, really fast worker.

He works insanely fast and has some great little tips for speeding things up. It’s from an Ableton Live perspective, but these could be applied to other DAWs. There are another two videos of him talking about the same sort of stuff.

As for me, I tend to sample myself a lot, so I might spend the day writing drum parts at different BPMs, then pads or synth patches, and so on. Then I sample myself and try to get things down as fast as possible. I find that the more I can write when I get in the zone, the easier and more enjoyable the later stages of mixing become. It helps me not listen to my own track too much and lose objectivity.


I find the tracks I finish are usually the ones where I get a solid, full track arrangement of the basic elements down in the first session. Maybe two. If arranging and writing the track takes more than that, it tends not to get finished.


If I were to graph my workflow the top would be the same as yours.

The bottom would be:

Arrange and edit… maybe that is implied in your ‘needs more work.’ Half my music, even most of the parts that are electronic, is played live. I get some good takes, but I usually hold off on fine editing until I am arranging, because I often find that some tiny ‘mistake’ I thought I had made, some subtle phrasing, can’t be heard in the mix.

Get my busses in order. I color my tracks here. I also double-check that synths or drum machines are going through an amp sim, if I didn’t record them through a real amp. Most things already have saturation by the time I start thinking they are ready to arrange and mix, since I still use hardware saturation on most things.

Spatialization. Now that I have my busses, I decide on the virtual room the song is in. I make one to three auxes for variations of a reverb - far, near, and sometimes near left and near right. I send tracks and busses to the verbs, deciding where they sit in the room. I usually draw a picture here because it helps me think about the milliseconds per meter.

High pass …what should be high passed.

Eq pockets…push things into their eq pockets.

Compression and levels… I usually do these together. I tend to put a compressor on everything. Some may be very subtle, 2:1 or whatever, and I may even decide something is too compressed - fuzzy guitar is a common culprit - and I’ll flip it to being an expander. But yeah, as this affects levels, I put these together.

Review eq pockets

Take a break… I prefer not to do my own mastering. I’m not bad at mastering or mixing, imho, but I think it is best to do one or the other. If that particular project doesn’t have that luxury, I at least have to forget about the nuances of the mix and come back.


Getting pissed about something and remixing

Mastering take 2…usually have it by then.


Thank you, I will check the videos out.

Making your own samples can really speed up your workflow and keep your “own” sort of sound. I have sometimes made a whole loop or song and then used that as a sample. This can really jumpstart an idea and give you that “Everything is fresh” vibe.

You are spot on! Arrangement is truly king. Focusing on moving on from that initial 4-8 bar loop, which so many of us start from, should be top priority - the magic is in the arrangement.

Thank you for the details on your workflow. I will probably add a section on reverb to the workflow at some point. This subject has been a bit neglected in the workflow post, unfortunately.


On the subject of natural, not special effect, reverbs… While I am definitely not trying to say this should be done as a rule, I personally like a mix to simulate a virtual room. It is the way we are used to hearing things in real life, so I simulate it, or simulate it on the parts not recorded in a real room.

This is from a blog post I did for a local studio about natural reverb techniques.

2D Sound in a 1D System

Most humans live in three-dimensional space - up/down, left/right, front/back. Typical audio systems have two channels, left and right - effectively one dimension of hearing: think of our ears as two inputs on a left/right axis.

If we only have one-dimensional hearing, how can we hear in two dimensions? If you were in a completely anechoic space, a space with no reflections, it would be much harder. (Interestingly, artificial rooms like this can make people anxious and even hallucinate - google ‘Anechoic Orfield Labs’ for an example.)

In natural spaces, however, we do perceive more than one dimension. Left/right is easy. Tap something to your left, the sound reaches your left ear before your right ear. Your brain makes a calculation based on this.
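That left-ear-first calculation can be sketched as a rough interaural time difference. The ear spacing and speed-of-sound figures below are approximations I am supplying for illustration, not numbers from the post:

```python
import math

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at room temperature, approx.
EAR_SPACING_M = 0.18         # rough distance between the ears (assumption)

def interaural_time_difference_ms(angle_deg):
    """Approximate extra travel time to the far ear, in milliseconds,
    for a source at angle_deg from straight ahead (0 = front, 90 = fully
    to one side), using a simple straight-line path-length model."""
    extra_path = EAR_SPACING_M * math.sin(math.radians(angle_deg))
    return extra_path / SPEED_OF_SOUND_M_S * 1000.0

# A source fully to one side arrives roughly half a millisecond
# earlier at the near ear - a tiny cue, but the brain uses it.
print(interaural_time_difference_ms(90))
```

Even this crude model shows why the cue collapses for centered or distant sources: the angle, and therefore the timing difference, shrinks toward zero.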

Near/far is more complicated. In short, we perceive it through environmental clues.

If you have ever looked at a reverb pedal or plugin, you may know the terms pre-delay, damping, mix, and time. But let’s examine where those come from. Many people turn these knobs without being clear on how they relate to the perception of distance. Understanding this can help you use these effects yourself, or listen critically when working with an engineer.

First, a brief history of reverb.

Humans have long enjoyed the natural reverb characteristics of caves, cathedrals, public baths, and dungeons. Our first artificial ‘analog’ reverb units used springs or plates. Spring reverbs are still quite popular in guitar amps; think of the iconic surf reverb sound. Digital reverbs have come much closer to simulating natural spaces.

I tend to think of using reverb in two ways - to simulate a natural room or as a special effect. As a special effect, I might use a huge wash of reverb on guitar in place of a synthesizer pad. Or I might use a reverse reverb on a track - a sound that doesn’t occur naturally in our time-space dimension.

With special effects, of course, there are absolutely no rules. In simulating a natural space, some understanding is helpful.

Before we get to effects, consider how you will record things. If you have a nice-sounding room, you might want to put the microphones farther away to capture more of the room’s reverb. On the other hand, you might want to close-mic things for isolation, or because some elements, e.g. a synth, might be overdubbed directly. In that case, you might want the freedom to add similar reverb to each element to glue the mix later. Granted, you can have some room mics on the drums and dial in reverb on the overdubs that matches it, but this takes skill and practice and is hard to get perfect.

With that background, here are the key parameters that help us perceive distance, our second D:

The farther something is from you, the smaller the difference is between the left and right signals. Imagine a 20-meter-long room. You are facing the drums. The low tom is on the left, the hi-hat on the right. If you are at the far end, the angles to your ears may only be a few degrees. If you move the drums to a meter in front of you, they may be at 45-degree angles to your ears, making it much more obvious which is to the left and which is to the right.

So anything intended to be far away should be panned closer to the center. Close items can be panned anywhere; those are simply perceived as being to your left or right.
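The narrowing of those angles with distance is plain geometry. Here is a small sketch using the hypothetical room and drum positions from the example above (the 2-meter kit width is my own assumption):

```python
import math

def angular_spread_deg(source_separation_m, listener_distance_m):
    """Angle subtended at the listener by two sources that are
    source_separation_m apart, centered straight ahead at
    listener_distance_m."""
    half_angle = math.atan2(source_separation_m / 2.0, listener_distance_m)
    return math.degrees(2.0 * half_angle)

# A 2 m wide drum kit, heard from the far end of a 20 m room,
# spans only a few degrees...
print(round(angular_spread_deg(2.0, 20.0), 1))  # ≈ 5.7 degrees

# ...but from 1 m away it spans a full 90 degrees.
print(round(angular_spread_deg(2.0, 1.0), 1))   # 90.0 degrees
```

This is why wide panning on a supposedly distant source reads as unnatural: at that distance the real-world angle could never be that wide.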

Imagine again you are in the back of a room, a singer is in the middle. When they sing, the sound of their voice will go straight to your head. That is the ‘dry’ signal. Their voice is also radiating in multiple directions, off the walls, the ceiling, and the bass player’s vinyl pants.

The pre-delay parameter adjusts the delay between the dry signal hitting you and the wall or ceiling reflections. If the singer is farther back in the room, some of the reflections would hit you much closer in time to the dry signal. So if a guitar amp is intended to be at the far wall, you might use a pre-delay of near zero.

If you are trying to match overdubs or a specific hypothetical room, note that three milliseconds is about 1 meter. You might look at your plugins or hardware at this point, ones you are familiar with. That is probably pretty low on the dial. A lot of reverb effects are meant to go well beyond studio or even concert hall sized spaces.
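That 3 ms-per-meter rule of thumb falls straight out of the speed of sound. A tiny sketch (the speed-of-sound constant is an approximation for air at room temperature):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air

def predelay_ms(extra_reflection_path_m):
    """Pre-delay in milliseconds for a reflection that travels
    extra_reflection_path_m farther than the direct sound."""
    return extra_reflection_path_m / SPEED_OF_SOUND_M_S * 1000.0

# The rule of thumb from the post: one extra meter of path
# is roughly three milliseconds of pre-delay.
print(round(predelay_ms(1.0), 2))   # ≈ 2.92 ms
print(round(predelay_ms(10.0), 1))  # ≈ 29.2 ms for a 10 m longer path
```

So a pre-delay dial set to, say, 60 ms already implies reflections traveling roughly 20 meters farther than the dry signal - well beyond most studio-sized rooms, as the post notes.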

Reverb Mix or Wet/Dry - Far-away sources tend to have more reverb overall, as there is more opportunity for reflections on the way from the source to you. In some scenarios the reverb can even mask the dry signal - remember the way it sounded when you locked your accordion player in the basement? Raise the reverb mix to move items farther away.

Equalization or ‘Damping’ - High frequencies get absorbed more easily than low frequencies, so things that are farther away should have a high-frequency cut, i.e. more damping. It is difficult to give a default starting point here. I may use anything from 1000 Hz to 6000 Hz, but that is as much subjective taste as near/far positioning, since different surfaces absorb frequencies at different rates. I also often set it a bit differently for the near left and near right, because many real rooms are filled with different junk that changes this.

At the other end of the frequency spectrum, the window of what gets cut tends to be tighter. A roll-off up to 200 Hz is not unusual.

Decay Time or Room Size - Things tend to be perceived as farther away if the virtual room is larger. This one is more obvious, but note that if you are trying to glue your mix, having a long decay on some tracks and a short one on others will sound unnatural. A slight variation just makes the room sound irregularly shaped. And remembering from pre-delay that three milliseconds is about a meter, note that even big concert halls may have only around 2 seconds of decay.

My advice, having hopefully given you some understanding of these psychoacoustics, is to pay attention to these factors in your natural environment.


That’s one hell of a post. Pretty helpful stuff to think about. I’ve never played my tunes live - and never really thought too much about how production reverb could be affected by different venue sizes and acoustics.

Awesome man, really fantastic input. :slight_smile: :beers: