Mixing/Mastering Loudness Equalizing Tool (anyone interested?) - calling all engineers!


#21

Getting Orban set up, calibrated, and clicking away is definitely one of the harder steps.

Oh, also, I didn’t mention that Audacity cuts off at roughly 4.5 minutes, so if the first ~4.5 minutes aren’t a good representation of the song’s typical form, highlight a ~4.5-minute stretch that is and run that for the spectrograph.

Cheers,
Jayson


#22

Alright got it done:

  • doesn’t work if you open the spreadsheets in Open Office Calc (works in Libre).
  • it is possible to unknowingly screw up so that the Orban won’t save the data for whatever reason.
  • I’m too lazy to study everything from A to Z to figure out whether the calculation was done properly, so I’ll just drop the file here
  • Audacity notified me it uses the first ~2 and a half minutes, not 4.5
  • used 24bit WAV of this:

#23

Hmm. Yeah, it’s not 4.5; it should be 237.8 sec at 44100 Hz, which works out to roughly 4 minutes.

Increasing the rate above 44100 would lower the time, since Audacity analyzes 10,485,760 samples at whatever rate you choose.
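If it helps, here’s that sample-count math as a quick sketch (the 10,485,760 figure is the fixed window described above):

```python
# Sketch of the duration math: the spectrum analysis reads a fixed number
# of samples, so the analyzed time shrinks as the sample rate rises.
SAMPLES = 10_485_760

def analyzed_seconds(sample_rate_hz: int) -> float:
    """Seconds of audio covered by the fixed sample window."""
    return SAMPLES / sample_rate_hz

print(round(analyzed_seconds(44_100), 1))  # ≈ 237.8 s, roughly 4 minutes
print(round(analyzed_seconds(96_000), 1))  # ≈ 109.2 s at a higher rate
```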

I cannot speak for the integrity of the excel doc in other software. I know all commas would need to become semicolons for open office, at least, and several functions would likely not work.

I tried to open your file, but it wouldn’t load. I’ll try on my main laptop when I get a chance later.

On the other hand, the point of this isn’t just to suck in one song and get some data; there are way easier tools for that. You use this to balance two or more songs against each other: follow the instructions for calculating the levels of each song relative to the one song you pick as the reference to balance all the others by, opening each song in a different Excel file.

It can be used to just read information of one song only, but like I said, there are definitely plenty of vst tools out there that deliver most of that information much more easily if that’s all someone wants.

Thanks for giving it a spin.

Cheers,
Jayson


#24

I have yet to get Orban to spit out data successfully. I’m going to try a reinstall and see if that helps. I have audacity already, so I’m hoping for no issues there. And I have real excel, so I expect things to go smoothly when I get that far.


#25

@White_Noise

I’m curious if you search your computer for “Log 10” (or 11 if it’s the 11th where you’re at), if anything pops up.

IIRC the naming convention is: Log day month year 24hour.minute.
EDIT: I just checked, I was close. Here’s an example: “Log 6 Feb 2019 21.54.49.csv”

I’m wondering if it’s dumping into a default dir and ignoring the dir you tell it. I’ve had that issue before.

Look in the default folder as well, just in case, which is: My Documents / Orban Audio Loudness Meter
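If digging through folders by hand gets old, here’s a little sketch for hunting down stray logs, assuming the “Log day month year 24hour.minute” naming convention above (the default-folder path in the comment is just an example of where to point it):

```python
# Hunt down Orban logs by the "Log <day> <Mon> <year> <HH.MM.SS>.csv"
# naming pattern described above, searching a folder recursively.
from pathlib import Path

def find_orban_logs(root: Path) -> list[Path]:
    """Return all CSV logs under root matching the Orban naming pattern."""
    return sorted(root.rglob("Log *.csv"))

# e.g. find_orban_logs(Path.home() / "Documents" / "Orban Audio Loudness Meter")
```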

Cheers,
Jayson


#26

lol

Wow, facepalm
OK, so I just redownloaded and it looks like the newest update flipped things around on me a tad. It used to be that you could tell it where to plop a log file. Now it looks like it ALWAYS dumps to the My Documents/Orban Loudness Meter folder, and the folder selection spot ONLY interacts with the FOLDER WATCH feature for the ANALYSIS tab of the program.

OI…yeah, so I misled everyone because of that!
I’m sorry.

Cheers,
Jayson


#27

OK, @Ag_U,

I got to my laptop and loaded your data into Excel. So, like I said before, this calculator is mainly at its best when you use it to balance two or more songs so they can be played together nicely (like setting up an entire album or a set of songs).

That said, it does have some nice ability to dig into an audio file and point out potential problems it might be facing.

Now, you had your file set to supply raw for LKFS and none for peaks, which is fine (the instructions indeed say to start there), but I punched it up by 5 dB, for reasons explained below.

Here’s a snapshot of it:

Now, I’ve set it to a boost of 5 LKFS (after equalization) and an average peak limit of -3 (which this track doesn’t have any chance of hitting, I just do this by default of habit), but really, the issue that I see is in the DNR and the RAW (original) version’s peaks.

There are over 100 peaks above 0 dB going on, and the LKFS is really high at just under -7.
The DNR is basically equal to the amount of room sitting between the average LKFS and the 0 limit: approximately 7.

And you can see this in the LKFS/LUFS histogram. There’s basically one value that’s spiked for LKFS averages and then another single value that’s spiked for peaks. It’s basically -5 to -4 and -1 to 0.

There’s little variation otherwise. This means that when this is uploaded to streaming services, it’s going to get hit hard on those streaming service limiters.

Which we can see when we load this song onto Loudness Penalty: Analyzer: https://www.loudnesspenalty.com/

Basically, this track would need to drop by at least around 7.5 dB to avoid (most of) those heavy penalties from the streaming services. But in doing so, because of how tightly packed the average-LKFS-to-peak range (DNR) is, the perceived level would suffer: to fit without penalty, the track has to be turned down to the point where the highest peak sits around -6 dB.

This is because most streaming services aim for something around -14 to -15 LKFS ranges, and a DNR of around 9 to 12 (ish…this is where a lot of variation comes in…no one’s really on the same page).

How they react is different too. Some will just clip the top and shove it through, which really hurts because the head just gets cut off of the song. Others will compress it until it fits, which also hurts because now the DNR is even smaller than it already was before. And others will drop the volume until it fits; probably the nicest approach, but it’s still a bummer because it’s now quieter than the song was intended.
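The turn-down arithmetic itself is simple; here’s a sketch of it. The -14.5 LKFS target is an assumption (as noted, real services sit roughly at -14 to -15 and no one’s really on the same page):

```python
# Sketch of the streaming "penalty" arithmetic: how far a track gets
# turned down to hit a service's loudness target. The -14.5 LKFS target
# is an assumed mid-point; actual services vary.
def turn_down_db(track_lkfs: float, target_lkfs: float = -14.5) -> float:
    """Positive result = dB the service will reduce the track by."""
    return max(0.0, track_lkfs - target_lkfs)

print(turn_down_db(-7.0))   # 7.5 dB reduction for a track just under -7 LKFS
print(turn_down_db(-16.0))  # 0.0 -- already below target, no penalty
```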

The bandwidth is pretty interesting on this song. I’ve never seen a bandwidth split like this before.
It’s like looking at a lightning bolt that’s forked. I wonder if that’s where a lot of the pressed and tightly packed volume is coming from; the same mid-to-high-range frequencies have a double compound of dB pumping out, which is surely going to have an effect on the total space in the sound.

From the look of it, there’s possibly an over-crowding of sound at the top end, with less in the low end, though what is there in the low end is indeed loud. If we make these values additive, then -50 and -40 net a result of around -10 in simplified compound value since the sound is in the same frequency range and stacked on top of each other.

One thing this song has soundly done is deliver a strong and wide frequency spectrum. It’s not cutting off short like so many do these days; it allows the very top end to still exist and breathe - which is great!

So, though it is true that this calculator was mainly intended to help with balancing multiple songs to each other, it also does have the ability to tell some pretty granular detail about the fidelity of a song.

Cheers,
Jayson


#28

Thx for the reply, it was an interesting read. More funny spectrograph art:

This tune:

And…

More KP (sorry, I don’t possess any dynamic music in lossless quality :tired_face:):

This tune:

I think it’s interesting how the first Knife Party tune is -5LKFS on the drop, and “should be” reduced by 7.3dB, while these next 2 tracks by 13.9dB & 14.6dB, but their PLR is only 2dB less. I know both of them are brighter in tonal color, but the difference in perceived loudness doesn’t really sound like 6.5-7.5dB to me. :face_with_raised_eyebrow:


#29

Flip them to Amplitude 5 instead of sending Raw.

It’s saying that to get down to -19, you have to drop by that much.

If you boost that up to 5 instead of raw, you’ll see the amount to alter by change.

Cheers,
Jayson


#30

This is why I hammer the point that you have to set the same dB Increase Setting on every song you intend to balance together. Otherwise the calculator will be telling you directions for different settings. :wink:

I’ll write more about how it’s doing all of this tomorrow, but a peek into it is that -23 in the advanced settings in the middle. Everything is being dropped relative to that (in part) first, and then the dB Increase is pushing it back up after it’s been equalized. When you send raw, you’re telling it to just send the raw equalized by that -23 and DNR 18 (plus bandwidth considerations), so everything is very low to start with.

And that’s why you have to keep the same boost. If you boost differently for each song, then the equality goes off because now they are being pushed up by different levels after being equalized relative to the -23, 18, and bandwidth considerations.

I’ll write more tomorrow.

Cheers,
Jayson


#31

OK. I have a free moment, so here’s a crash course on what’s going on behind the scenes with the system.

The basic concept, as an analogy, is that if you hold two types of objects in your hands and want to know something about how they differ in their physical nature, one of the easiest ways to discern that is to throw each object against the same wall.

That wall is the equalizer to each of the objects. If one of the objects bounces back 3 meters and the other bounces back 5 meters, and you threw them at the same controlled force, then you know something about how they differ right away. One of them is more dense than the other, and/or one of them is made of more reflexive material.

Now, further, if you wanted to know if it was a matter of density versus the material’s reflexive nature, you could then take each object and place them in water. The one that displaces more volume by proportion is more dense than the other. If they differ here, then you can say that one is more dense than the other and that this is likely how they differ in bouncing off of the wall. If they don’t differ here, then you can presume that one is made of more reflexive material, and that accounts for their difference in bouncing behavior.

In like fashion, that -23 in the Advanced User Settings is the wall that songs are thrown against first, and the 18 for the DNR is like the water that each is placed into next.

We’ll get to the bandwidth in a moment; for now, we’ll just focus on these two.

So the calculations start by comparing the sample song’s LKFS against an LKFS of -23 and jotting that difference down.

So, if the song’s LKFS is -14, then it would have a difference of 9 from -23.

Next, we flip to tossing it into the water. So the song’s DNR profile is compared against the DNR of 18 and again, the difference is noted and kept in memory. So a song with a DNR of 12 would have a difference from 18 of 6.

Now, we then combine them through Mathimagical Voodoo (which…I won’t go into here) so that we don’t end up with ridiculously high values. Meaning, we don’t simply add them up and say that the value is 15. That’s WAY too heavy-handed of an adjustment, and would produce terrible results.

Instead, again, some voodoo goes on and they are each chopped down by the same factor (4 is that factor actually) and we get 2.25 + 1.5 = 3.75.

Now we take the original LKFS of -14 and LOWER it by the amount we just arrived at, which gives us -17.75.
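The steps so far can be sketched like this (numbers straight from the walkthrough above; the function name is just mine):

```python
# Worked sketch of the LKFS/DNR step: take the differences from the -23
# LKFS "wall" and the 18 DNR "water", scale each down by the factor of 4,
# and lower the song's LKFS by the sum.
REF_LKFS = -23.0
REF_DNR = 18.0
FACTOR = 4.0

def lkfs_dnr_correction(song_lkfs: float, song_dnr: float) -> float:
    lkfs_diff = abs(song_lkfs - REF_LKFS) / FACTOR  # e.g. 9 / 4 = 2.25
    dnr_diff = abs(song_dnr - REF_DNR) / FACTOR     # e.g. 6 / 4 = 1.5
    return song_lkfs - (lkfs_diff + dnr_diff)

print(lkfs_dnr_correction(-14.0, 12.0))  # -17.75, matching the text
```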

Woooh-oh! We’re half way theeEere! (sorry, couldn’t resist).

The LKFS and DNR are effectively done. Now for the bandwidth.

The bandwidth is like looking at the size of the object instead of its response/behavior and density.

Sometimes, the sheer size of an object might be the reason that something behaves differently than another, and not what it’s made of or its density.

So, to get that picture, we look at the Audacity data, the bandwidth. This is where things get really hairy in the math, so I’ll do my best to keep it simple and detailed at the same time (scratches head).

The way you size something up is by checking its height and width; then you can get a sense of its surface area by checking the radius and employing Pi (well…for round objects anyway).

Likewise we’re going to check the total bandwidth range, then we’re going to check the dB range of those frequencies. This gives us our height and width bit. Width is the bandwidth range, and height is the dB range.
Now we need to “check against the radius and employ Pi”…ish.
To do that, we need to create a simple ratio, like rise over run, or a TV’s aspect ratio. We just want one simple number: the ratio of Bandwidth Range to dB Range. So we divide the first by the second.
Bam. Now we have a huge number. Eegads!

No problem, we’ll get back to that in a moment.
Firstly, there’s a bit of a pit-stop that doesn’t fit the analogy here, because we also toss the DNR back into the mix. Why? Well, because the spread of the LKFS range (which is the DNR) is important to the weight of the frequency range. If we took only the frequency information and ignored how tightly packed things are by dynamic LKFS range, we would run into an imbalance.

So we take that very large Eegads value and multiply it by the DNR value. GAAAH! Now that giant number is even bigger!

No worries. We’re going to go on to the last part of the analogy and get the surface area in “per square inch” style - that is, we’re going to tell how big by how long it is per the density involved (DNR).
We actually have all of this already, but the number is just vastly oversized, so we need to scale it back down.

To scale it back down, we divide by 1,000*Pi.
Pi to keep it in proportion, and 1,000 because that’s the order of magnitude we jumped up by from our starting point.

So, for the bandwidth of the song that’s loaded in the document by default, the result is 3.9, from a starting position of 21963 as the Bandwidth range, and a dB range of 22.4 (btw, the dB range is what’s called the “AFB” in the calculations and math parts).
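That chain can be sketched as follows. One assumption flagged up front: the default song’s DNR isn’t quoted in this walkthrough, so the 12.5 below is my back-calculated guess; with it, the stated numbers land near the quoted 3.9:

```python
import math

# Sketch of the original bandwidth correction as described: bandwidth
# range over dB range (the "AFB"), weighted by DNR, then scaled back
# down by 1000 * Pi. The DNR of 12.5 is an assumption, not a quoted value.
def bandwidth_correction(bw_range_hz: float, afb_db: float, dnr: float) -> float:
    return (bw_range_hz / afb_db) * dnr / (1000 * math.pi)

print(round(bandwidth_correction(21963, 22.4, 12.5), 1))  # ≈ 3.9
```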

Woohoo!

So, we take that -17.75 that we have from the LKFS and DNR correction (simply called LKFS correction) and lower it even further by this as well. This gives us -21.65.

So, relative to -23 and 18, this song spits out as -21.5ish.
A different song would spit out differently.

NOW I bump it up by the amount I want until I see a value for the LKFS in the ADJUSTED OUTPUT that I want. So I click on the dB INCREASE SETTING and bump it up. Let’s say I want -14 because I liked where this song was at (we’ll get back to why I would just return it to where it started in a moment).

So I set the INCREASE SETTING to 7. Bam. I’m pretty much back to where I started (we’ll ignore any issues with PEAKS that might happen for the moment, because if you saw issues with PEAKS highlighting in red, then you can choose to walk it down…maybe to 6…and see if that removes the issue and settle for 6 as the solution).

Now back to why I return it back to what it started at.
ONLY the first song that I do returns to normal IF I WAS HAPPY with the setting (if not, then I would adjust it if, for example, I thought it was too loud).

Now that I’ve set my FIRST SONG to 7, I then go to my second song, load it up and I MUST SET IT TO 7.
If I don’t, then it won’t work right for a good balance when the two are played together.

Let’s say the second song starts out at -7 LKFS, with a DNR of 7 and a bandwidth of 21920, and I set the dB INCREASE SETTING to 7 so that it’s paired with the first song. Then I get a return of -13 LKFS, and to accomplish this, I reduce it by 6 dB in my DAW and re-render it out.

The first song, since I was happy with it, doesn’t change. I changed it by +/-0 dB so I didn’t need to re-render it from the DAW.
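The balancing arithmetic from that example, sketched out (the calculator supplies the adjusted output; the -13 here is the quoted result, not something this snippet derives):

```python
# Sketch of the final balancing step: once the calculator gives an
# adjusted-output LKFS for a song, the DAW gain change is simply the
# difference from where the song currently sits.
def daw_gain_change(current_lkfs: float, adjusted_lkfs: float) -> float:
    """Negative = turn the render down by that many dB."""
    return adjusted_lkfs - current_lkfs

print(daw_gain_change(-7.0, -13.0))   # -6.0 dB, as in the example above
print(daw_gain_change(-14.0, -14.0))  # 0.0 dB: first song left untouched
```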

So now, if I play them next to each other, they sound like this: https://www.dropbox.com/s/9264nvcgfkw5ih6/loudness%20calculator%20example.mp3?dl=0

Now, the thing that remains untested, and which I need feedback on, is whether it works for lots of variations on this. It might be that the LKFS considerations need to be more heavily weighted than they currently are (that’s a possibility). So basically, I have faith in the concepts of the calculations and the approach, but the weighting might need to be adjusted to work properly in broader use than just my own songs.

Cheers,
Jayson


#32

Actually, you know what…I just tinkered and made a refinement after writing that up.

The above works alright, but it’s even better if you make the adjustments in the following screenshot.
Go to the CALCULATIONS TAB and the 4 in PINK above FACTOR OF 4 that is to the right of DNR DIFFERENCE…CHANGE THAT TO 2.5 instead of 4 (as seen highlighted here in red boxes).

If you do this, then you seem to get an improved performance between very compressed songs and less compressed songs.

Try it out and let me know what you guys think.

Here’s what those two from above sound like with this adjustment.
https://www.dropbox.com/s/g1drksffbjl3gws/loudness%20calculator%20example%202.mp3?

Cheers,
Jayson


#33

Hmm…almost but not quite there (I only had some simple headphones when I tested that, but later checked on varying speakers/phones).

More tinkering to do to nicely nail that down. I’ll keep fiddling and report back once I get somewhere.

Edit: I take that back; that does seem to work pretty well. Though it looks like there may be a need for low-end weighting…more to tinker with, possibly.

Cheers,
Jayson


#34

You were correct about it dumping all logs to the default folder. I’m going to be looking at this again in the next day or so when I have time.


#35

Cool.

Hey when you do, @White_Noise, try this one out (I’m toying with some revisions to the way some of the core calculations work to try to close the gap on some differences).
https://www.dropbox.com/s/2dl8kjz21h2w0n5/Loudness%20Calculator%202.2.xlsx?dl=1

This 2.2 version, so far to me, seems better at balancing very different sound profiles together.

I did this one on it just now and I think it seems pretty good.

Cheers,
Jayson


#36

Oh…derp. Might help if there’s a comparison to what it was originally, right?
facepalm

Here’s a sample of what it was like before equalizing:

I think the difference between the equalized and non-equalized is pretty drastic (for the better)!

And here’s a comparison of the audio file of each of these files…there’s a pretty radical difference.

Cheers,
Jayson


#37

Okey doke, I’ll get this downloaded tonight before I start crunching numbers. I’m going to be using a selection of tracks from “Breakfast in America” by Supertramp because it is, for me, the definition of a well-produced album, with loads of dynamic range (for pop music at least): lots of quiet and loud parts, fadeouts to the noise floor on most songs. I’m really going to be giving your math a workout with these, I hope. I pulled the loudness results from Orban this morning and am going to get the frequency data tonight. For reference, I believe I’m using tracks 1, 2, 5, 8, and 9, off the top of my head.


#38

I’m kinda ranting here, but I feel the loudness calculations should pretty much ignore frequencies below 80Hz. That’s my 2 cents about the entire debate. Looking at waveform on the right, I wonder how it would look & sound like with a 48dB/oct linear-phase highpass at 80Hz-100Hz. For an average person, loudness is 90% about upper bass, mids, and highs. In that KP tune, the sub bass is the loudest instrument (in terms of RMS), while being just a sine wave below 50Hz.

Btw, this is sort of an issue even within the loud genres where everything is already loud, but people still want an extra inch, so they reduce the level of the sub and use sound design methods/mixing to emphasize the 80-200Hz range. An example of this is using distinctive saturation on 808 bass. It sounds weaker on a festival rig, but on any basic consumer gear, those higher harmonics make the kick louder.

I’ll do more testing later with the new settings.


#39

I found another error on my part in the documentation. (you may or may not need to note this @White_Noise)
I said to set Audacity to 512. That’s incorrect; it should have stated 1024. 512 technically works, but:

  1. You have to first delete everything on the Audacity Input sheet because otherwise you’ll end up with part of the original data still left behind (this is what happened to you @Ag_U…I kept picking at it and figured that out, and chased back how that happened and discovered this error on my part).
  2. It will screw up the axis display on the spectrograph (the axis will read to 300 if it’s not 512 rows of excel data, and at 512 in Audacity, this will render 256 rows of data…which is where I confused myself because 1024 produces 512 rows of data…counting headers).

So I’ve got a couple things to clean up in the manual.
It should be Audacity setting at 1024; not 512.

Audacity picks up scanning at 86.13291 Hz, so it doesn’t even appear to pick up below that range.

That said.

If you want, though: it’s not programmed in, but you would basically flip to the Audacity Calculator, type 80 into cell C1, then click on cell A2, delete its contents, and replace it with the following:

=IF(AND('Audacity Data Input'!A2<>"",'Audacity Data Input'!B2>$E$1,'Audacity Data Input'!A2>$C$1),'Audacity Data Input'!A2,"")

Then:

  1. Click on A2 so that it’s selected, but you’re not in the formula editor anymore.
  2. Press CTRL+DOWN ARROW. This will shoot you to the bottom of the columns’ filled calculations.
  3. Press CTRL+SHIFT+UP ARROW. This will select everything in the column, including A2, down to row 2000.
  4. Press CTRL+D. This will auto-fill down using A2 as the reference point to fill A3 through A2000 with.
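In spreadsheet-free terms, here’s a sketch of the filter that formula implements. The -100 dB floor mirrors what’s described further down; the exact threshold cell values in the workbook are assumptions on my part:

```python
# Sketch of the Excel formula above: keep a frequency row only if it's
# present, its dB level is above the floor (the E1 threshold), and the
# frequency is above the cutoff typed into C1 (80 Hz here).
def keep_frequency(freq_hz, level_db, cutoff_hz=80.0, floor_db=-100.0):
    if freq_hz is None:
        return None  # blank input row stays blank
    return freq_hz if (level_db > floor_db and freq_hz > cutoff_hz) else None

print(keep_frequency(86.13, -55.0))   # 86.13 -- survives both thresholds
print(keep_frequency(43.07, -55.0))   # None  -- below the 80 Hz cutoff
print(keep_frequency(200.0, -100.0))  # None  -- at the dB floor
```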

I don’t think it’s needed though, because of the Audacity tool. Granted, the math itself doesn’t bother with that consideration, so it’s a valid point to consider.

Having said that, in a perfect world, I would not clip anything out like that.
Instead, what I would do is feed the <80 Hz content into the formula so that the song is penalized for having it there, because all it’s doing is bloating space and cramming the sound. No one can technically hear it, but it’s having an effect. In fact, if you take that to a very large venue with a massive sound system…that sub-80 Hz will definitely be noticed. It may or may not be heard, but it will definitely hit the sound, and some folks may even feel it (and it won’t be a good kind of feel for most).

The song on the right is one of mine, so I can say that it has a bit of float a tad below the 80 Hz range because I do that on purpose, but it has a hard knee at 40 Hz. Basically, I taper off from 80 Hz to 40 Hz rather than dropping off faster at 80 Hz. I’m not entirely happy with this approach, but it does alright.
Soon here, I’ll be stepping my toes into the areas of low/high passes and compressors in finer detail, as I’ve mostly used them lightly and focused more on level balance and equalizing. I know compression and passing decently, but I’ve not spent the time really exploring my personal palette with them yet (which, in fact, I just started last night…on this very song).


For the Calculation, though, everything below 86 Hz is being ignored because of Audacity, and this only factors into the Bandwidth section of the equation, not the LKFS/DNR section. Those have a threshold that ignores samples below -23 (because that’s the BS.1770 LKFS integrated-loudness approach: if a song starts at -23, then by the time you re-balance down to -23 it would be pointless, and if a sample moment is already at -23, it doesn’t need adjustment. You can see this in action because if you remove the -23 cut-off, the Average LKFS from the calculator won’t line up with other LKFS loudness tools).


So because of how Audacity works, the same frequencies will be reported between two songs (since you’ll use the same Audacity setting on each). You’ll notice a repeated pattern in the frequencies that are sampled.

The only variability will be the amplitude between the two songs for those frequencies (with anything at or below -100 dB being ignored).

So what makes the difference is that -100. If a frequency sample came in at -100 (effectively not used - or at the very least, you can pretty much guarantee these are not used), then that will change the bandwidth range of the sampled song.

Then it grabs all of the dB levels for all surviving frequencies.

And the main focus of this calculation segment is to figure out if the sound is narrow or wide and react accordingly. In my original approach (the one written out above), I would penalize the wider song over the more narrow song, all other factors being equal. However, after a variety of testing, Loudness Calculator 2.2 flipped that idea around.

My original thinking was that the dB ranges wouldn’t be as big of a factor and would be relatively equal in most cases (this was because, I now realize, of the songs I was sampling…a lot of 90’s rock music and my own songs, which follow more of a 90’s-style profile), and that, assuming that equality, the biggest difference would be that the wider song would appear louder than the narrow song due to having more information packed in per dB/frequency sample range.

Nice idea in theory, but it rested on an axiom that doesn’t hold up: dB values are not relatively going to be equal unless your sample group is from a specific moment in time in a genre of music. Otherwise, it’ll be all over the place.

So in Loudness Calculator 2.2, I reversed it and got better results (without needing to change the DNR Factor from 4 to 2.5…that was somewhat closer, but approaching the bandwidth issue seems to be performing better than offloading it to the DNR for the work).

Now it drops the 1000*Pi approach and instead takes the Bandwidth-to-dB-Range ratio, skips the DNR integration, divides by 1,000 to bring it down to scale, and then subtracts that from 20 (20 being like the 18 for DNR: a wall to throw against that is unlikely to result in 0).

This produces a value that seems to perform much better for balancing across widely ranging compression approaches.
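Here’s my own sketch of that 2.2 change as just described. How the resulting term then feeds back into the final LKFS isn’t spelled out above, so treat this as the correction term only, not the whole 2.2 pipeline:

```python
# Hedged sketch of the 2.2 bandwidth correction: bandwidth range over dB
# range, no DNR weighting, scaled down by 1,000, subtracted from the
# 20 "wall". Numbers below are the default song's quoted figures.
def bandwidth_correction_v22(bw_range_hz: float, afb_db: float) -> float:
    return 20.0 - (bw_range_hz / afb_db) / 1000.0

print(round(bandwidth_correction_v22(21963, 22.4), 2))  # ≈ 19.02 for the default song
```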

I’m going to be doing some further testing on this: going to take a Jazz sample, 70’s Funk, 80s/90s Punk Rock, and a modern EDM sample and see if it can handle them all.

Cheers,
Jayson


#40

OK, I completed the test. Overall, it went decently.
The only real strong problem point that I noticed was that very old songs (in this case, Duke Ellington’s “Take the A Train”) get hit in a way that doesn’t help them out.
This is because they tend to have very low DNR because the recording quality back then was terrible. I don’t know that I’ll be able to remedy this issue (possibly; I’ll keep thinking about it), but otherwise, it performed pretty well.

Songs used, in order:

  1. Duke Ellington-Take the A Train
  2. Kool And The Gang-Jungle Boogie
  3. Ramones-I Wanna Be Sedated
  4. Green Day-Basket Case
  5. SKRILLEX-Bangarang feat Sirah

They were all balanced to the Ramones (it looked like the best middle ground between all the samples).

Here’s the resulting audio file:

If you want to load it up directly or download it, use copy/paste on the following:
Link: https://www.dropbox.com/s/lt3tt9tjzswwtt5/606%20-%20Test%20-%20Duke%20Ellington-Take%20the%20A%20Train--Kool%20And%20The%20Gang-Jungle%20Boogie--Ramones-I%20Wanna%20Be%20Sedated--Green%20Day-Basket%20Case--SKRILLEX-Bangarang%20feat%20Sirah%20(online-audio-converter.com).mp3

And here’s what it looks like (the red band references the meat of Duke Ellington at the beginning and compares that against the rest, while the yellow band references the peaks).

Cheers,
Jayson