Stop Compressing For Loudness


#21

Sure thing. :slight_smile:

No idea. They didn’t make their data directly available, but instead gave instructions on connecting to the Spotify API and accessing the data yourself…which I’m not really interested in doing.

You can if you want to via:

I found another set while I was looking that has over 200,000 songs from Spotify, but when I downloaded it, for whatever reason, they didn’t pull the year of the songs, which makes it contextually useless.
Well, OK, they pulled genre and that was their main focus, but I still can’t understand why you wouldn’t pull the year. It’s just…why?

Cheers,
Jayson


#22

I don’t think it matters whether you compress or not. If you make a proper track, it will sound loud enough on YouTube.


#23

All that matters is how correctly your track is done.

If you don’t know WTF is going on, no compression will help you or hurt you.


#24

You’re talking as if an entire industry of Mastering Engineers don’t know what’s going on, or that an entire industry of Mastering Engineers aren’t seriously struggling with this issue as a primary hot topic and have been for years now.

There’s a very big difference between “I don’t care about this” and “This does not matter”.

This very much matters. The topic assumes a person knows what they’re doing in the mix and has at least some decent capability in mastering; otherwise, this topic shouldn’t even be touched to begin with, as they should be spending their time learning much more basic aspects of mixing.

Also, “correct” or “proper” are final grades composed of smaller key indicators of the song’s properties and aspects. One of those should be the loudness value and how you achieved it. If that didn’t matter, then hardly anyone would be hiring mastering engineers (or bothering @White_Noise with mastering IDMF albums), as that is a big part of their job - getting the loudness to the right value for the track in the context of the album, given everything else about it (this isn’t the only aspect by any stretch, but it’s a pretty important one).

Cheers,
Jayson


#25

OK @metaside

I finally found a direct dataset to work from so I could drill in how I wanted to.
Now, unfortunately, for whatever reason, this individual did capture genre and year, but pulled them as separate data sets and didn’t preserve the ID tag between the two. From what I can tell, it doesn’t even look like the same data set, or even a subset. No clue why they pulled the data in such a weird way.

Anyway, at the very least, I can deliver the year break down.
It does indeed pretty much follow the #1s chart above. The data set is 170,000 songs from Spotify.
What’s not clear is how many of the older songs are remasters vs originals, and what year those remasters were done. But that’s not a huge issue as that would likely be a small difference when accounted for.

Anyway, here’s the results.
Red = above -14.
Blue = at or below -14.
Orange line = -14 LUFS.
Green line = average (mean) for all data.
Yellow lines = average (mean) for the 5 year period.
First black vertical line = the first demarcation from a -14 center.
Second black vertical line = when we basically said “Fuck it! YOLO!” and just started smashing the ceiling on everything.
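The red/blue split and the green line above amount to a simple classification pass over per-song loudness values. Here’s a minimal sketch of that logic; the numbers are invented for illustration and aren’t from the actual dataset:

```python
# Classify songs against the -14 LUFS streaming reference point,
# the way the red/blue dots in the chart are split.
# All (year, lufs) values below are made up for demonstration.
THRESHOLD = -14.0

songs = [
    {"year": 1958, "lufs": -15.2},
    {"year": 1972, "lufs": -13.1},
    {"year": 1999, "lufs": -9.4},
    {"year": 2008, "lufs": -5.6},
    {"year": 2019, "lufs": -7.9},
]

# Red dots: louder than -14 LUFS. Blue dots: at or below it.
above = [s for s in songs if s["lufs"] > THRESHOLD]
at_or_below = [s for s in songs if s["lufs"] <= THRESHOLD]

# Green line: mean loudness across all data.
overall_mean = sum(s["lufs"] for s in songs) / len(songs)

print(f"{len(above)} above -14, {len(at_or_below)} at/below, "
      f"mean {overall_mean:.2f} LUFS")
```

With a real dataset you’d run the same pass over all 170,000 rows; the chart is just this classification plotted per year.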

As you can see, this pretty much follows the other chart. At this moment, we’re starting a potential downward trend after peaking our ceiling-smashing between 2005 and 2015, when things were just ridiculously over-cranked.

Here’s the raw dataset if you want it for yourself.

Cheers,
Jayson


#26

So I did some more analysis of the data set since I have it.
I was curious what the concentration of data looked like.

In a way, this concept is the statistics variation of LUFS. You’re basically asking what the “body” of the data actually looks like in much the same way that LUFS is measuring audio - same basic concept, but different math involved.

So I broke things into quartiles and looked at the upper quartile and lower quartile.
You can think of the upper quartile as the top of the meat and potatoes of a pile of data samples.
You can think of the lower quartile as the bottom of the meat and potatoes of a pile of data samples.

Then you take the median - that is, the middle value of the data, which sits between the two quartiles (not to be confused with the mean/average).

Next, to build the body and find out how concentrated the song’s LUFS are, you determine the range from the upper quartile to the lower quartile. So if your upper is -5 and your lower is -10, then your range is 5 LU.

The wider that spread is, the more freedom a batch of music (a 5 year period of time) has.
Conversely, the tighter this spread is, the more strongly the songs of that period follow its average.
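The quartile math above can be sketched with the Python standard library; the loudness values here are invented placeholders standing in for one 5 year period of the dataset:

```python
import statistics

# Hypothetical integrated-loudness values (LUFS) for songs in one
# five-year period -- illustrative numbers, not the real dataset.
lufs = [-4.8, -5.5, -6.1, -6.9, -7.2, -7.8, -8.4, -9.0, -9.7, -10.5, -11.3]

# statistics.quantiles with n=4 returns the three quartile cut points
# in ascending order: lower quartile (Q1), median (Q2), upper quartile (Q3).
q1, median, q3 = statistics.quantiles(lufs, n=4)

# The spread from upper to lower quartile is the "body" of the data:
# the middle 50% of songs. Since LUFS are negative, Q3 (closer to 0)
# is the louder boundary; the range Q3 - Q1 comes out in positive LU.
spread = q3 - q1

print(f"lower quartile: {q1:.2f} LUFS")
print(f"median:         {median:.2f} LUFS")
print(f"upper quartile: {q3:.2f} LUFS")
print(f"spread:         {spread:.2f} LU")
```

So with an upper quartile of -5 and a lower of -10, as in the example above, this spread would come out to 5 LU.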

What I find interesting is that as we’ve moved ever louder, we’ve also moved ever narrower on deviating from the average. By the time we get to the 2005-2015 era, it’s as loud as it’s ever been in history, and as narrow in diversity as it’s ever been in history. We’re currently still that narrow. The average simply dropped by roughly half a decibel - essentially returning to the early 2000’s values.

Here’s what it looks like when you look at this.

Keep in mind that this is not calculating anything to do with -14 LUFS. This is simply checking what the range is in a period of time relative to that period’s average LUFS.
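The 5 year bucketing described here can be sketched by binning years before averaging. The (year, LUFS) pairs below are invented placeholders, not rows from the actual dataset:

```python
from collections import defaultdict
from statistics import mean

# Bin songs into 5-year periods, then compute each period's mean
# loudness - the per-period average the chart's lines are built from.
# All values are made up for demonstration.
songs = [
    (2003, -8.0), (2004, -7.5), (2006, -6.0), (2007, -5.5),
    (2008, -5.0), (2009, -6.5), (2011, -5.8), (2013, -6.2),
]

periods = defaultdict(list)
for year, lufs in songs:
    start = (year // 5) * 5          # e.g. 2007 falls in the 2005-2009 bin
    periods[start].append(lufs)

period_means = {}
for start in sorted(periods):
    period_means[start] = mean(periods[start])
    print(f"{start}-{start + 4}: mean {period_means[start]:.2f} LUFS, "
          f"n={len(periods[start])}")
```

The quartile spread from the previous post would then be computed per bin in the same loop, relative to that bin’s own average.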

So it’s really impressive not just how crunched everything is, but how uniformly crunched it all is.
In terms of loudness, there’s very little diversity.

Also of note: Spotify’s “danceability” metric was also provided, and coupled with this pattern is an overall increase in “dancy” music. Most of the music today is what I would call active music rather than passive music. Conversely, most of the music in the 40’s and 50’s was passive.
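Spotify reports danceability as a score from 0.0 to 1.0, so the dancy/not-dancy split comes down to a cutoff. The 0.5 threshold and all track values below are assumptions for illustration, not what the dataset actually uses:

```python
# Split tracks into "dancy" vs "passive" using Spotify's danceability
# score (0.0 to 1.0). The 0.5 cutoff is an assumed threshold; the
# track names and values are invented for demonstration.
DANCY_CUTOFF = 0.5

tracks = [
    {"name": "A", "danceability": 0.81, "lufs": -6.0},
    {"name": "B", "danceability": 0.32, "lufs": -13.5},
    {"name": "C", "danceability": 0.67, "lufs": -7.2},
    {"name": "D", "danceability": 0.45, "lufs": -11.0},
]

dancy = [t for t in tracks if t["danceability"] >= DANCY_CUTOFF]
passive = [t for t in tracks if t["danceability"] < DANCY_CUTOFF]

print("dancy:  ", [t["name"] for t in dancy])
print("passive:", [t["name"] for t in passive])
```

Each group can then be run through the same per-period loudness and quartile analysis separately, which is how the red/blue dot split works.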

What I find really interesting is that the 20’s throw everything out the door. They have a huge range, are way below -14 LUFS, and have a huge lean towards dancy music. We’re definitely much more refined and interested in a very uniform audio experience now compared to then.

Also of interesting note: it’s approximately every 30 years (give or take 5 to 10 years) that our audio preference culture changes course strongly regarding loudness…well…actually, in most properties overall.
The same can be found with danciness.

Here’s the updated charting showing all of this.
I removed the average of all data because things are already busy. So now the green line is the average of the period. The yellow lines are the Upper, Middle, and Lower boundaries of the quartiles.

Red dots = dancy
Blue dots = not very dancy

Cheers,
Jayson


#27

I have a question about spotify’s data - are these the number one songs played on spotify for each time period or are they the number one songs available on spotify for each time period? I think that could skew the data somewhat. For instance, there could be a billboard #1 hit from the 40s that’s on spotify, but nobody listens to anymore because it just isn’t relevant (maybe it was an election spoof song or something). In that case, this data may not accurately track popular taste in music (and therefore loudness) over time, but only our modern preference of loudness through time. You dig?


#28

The first graph’s data points are the #1 songs from the UK Official Charts, as listed according to Spotify.

From the graph’s source page.

Step 2: Getting the songs in the playlist
The first step is to get the song IDs of all the number ones in the UK, dating back to 1952. The Official Charts Company playlist has about 1,300 different songs in it.

The second and third graphs, which I made, have nothing to do with number 1’s. That’s just 170,000 songs, pulled by Yamaç Eren Ay and arranged by me based on when they were made.

Cheers,
Jayson


#29

I formalized this presentation on my blog.
http://jaysonabalos.com/2020/08/24/loudness-through-history/

Cheers,
Jayson


#30

I updated the article.
http://jaysonabalos.com/2020/08/24/loudness-through-history/

There’s a really interesting effect that became visible when I sliced up the data first by dancy and non-dancy music, and then by whether songs were louder than the 5 year average or quieter.

Firstly, dancy songs show a negative correlation (meaning an increase in one correlates with a decrease in the other) between how loud songs in a 5 year period are and how big the loudness range of the bulk of songs is, but only when the songs are louder than the 5 year average.
Conversely, non-dancy songs only show this correlation when the songs are quieter than the 5 year average.
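A correlation check like this boils down to a Pearson coefficient over (period mean, period spread) pairs. Here’s a minimal sketch of the mechanics; the per-period numbers are invented to show the shape of the calculation, not the actual result:

```python
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One (mean LUFS, quartile spread in LU) pair per 5-year period.
# Invented values: loudness rises toward 0 while the spread narrows,
# which is the negative-correlation pattern described above.
period_mean_lufs = [-11.0, -9.5, -8.0, -6.5, -5.5]
period_spread_lu = [6.0, 5.0, 4.0, 3.0, 2.5]

r = pearson(period_mean_lufs, period_spread_lu)
print(f"correlation: {r:.3f}")
```

In the real analysis you’d build those pairs separately for each slice (dancy vs non-dancy, above vs below the period average) and compare the resulting coefficients.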

Secondly, and this was surprising: non-dancy music over the past 5 years has not only radically dropped in loudness, but the range of the bulk of the quieter non-dancy songs has grown larger than any range in recorded history. We’re talking a 14 dB range!

What this tells me is that there’s more exploration taking place over the past 5 years. People are more often willing to try production levels that previously haven’t been so common. Specifically, there’s a ton more quiet music being produced.

This kind of flies in the face of the constant complaining about everything being loud.
Maybe the Billboard top 100 might be, but the vast majority of music is really digging down low.

The more dancy music is also dropping a bit, but it’s doing it more uniformly. There’s less diversity in the ranges here.

Now, that’s probably because they tend to be louder - we’re talking an average of -5 LUFS for the loud songs, so there’s very little room. And the range between the averages of the louder and quieter dancy songs is only 7 dB, as opposed to 12 dB for non-dancy music (both over the past 5 years).

Overall, pretty fascinating stuff!

Cheers,
Jayson