How to estimate meta balance level

This was the question Decency asked on Discord a while back.

The idea is simple: find a method to evaluate how “diverse” and “balanced” the meta is in a report (i.e. a tournament or a patch). But how do you do that? And what might it show you?

art by Guashineen

Russian version

I had been experimenting with this topic for some time before TI, and decided to explore it further after completing the builds feature. Estimating the balance level also went hand in hand with the idea of a new team hero diversity formula.

But to solve this riddle, you need to ask another question first.

What is ideal balance?

And what can you consider the ideal diversity?

To be completely honest, I didn’t think about this at all for a while, but it’s the most important thing to decide on.

Ideal meta balance is when every available hero has been picked and banned, and the pick and ban counts are equal for every hero.

How to evaluate balance?

There were a couple of ideas regarding this even before the “ideal meta” was described, but my first real approach was based on calculating the maximum deviation from the median value. While it seemed like a good starting point, it didn’t really describe the meta that well.

Unexpectedly, there was a better approach hidden in plain sight.

There is a thing called quartiles: the values that split a data set into equal parts of 25%. In an ideal universe, with an ideal data set of random data, between the 1st and 3rd quartiles you get two blocks holding 25% of the data each, or 50% of the data in total. But here’s the catch: we don’t live in an ideal world!

So let’s take the number of hero picks. If we just take the values of the quartiles, we might find that some values in the first and last blocks are equal to these border values. So we end up with more than 50% of the values between the 1st and the 3rd quartiles.

If we take the ideal balance, it will have 100% of the values in this range, since they are all identical. And the closer our data set is to the ideal balance, the more values end up in this range.
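A minimal sketch of what computing such a “Core” could look like. The exact quartile convention isn’t stated in the post, so the `inclusive` method of `statistics.quantiles` is my assumption:

```python
import statistics

def core(values):
    """Fraction of values lying within the inclusive [Q1, Q3] range.

    For a perfectly uniform data set every value equals the quartile
    bounds, so everything falls inside the range and the Core is 1.0.
    """
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    inside = [v for v in values if q1 <= v <= q3]
    return len(inside) / len(values)

# A perfectly balanced meta: every hero picked the same number of times.
print(core([10, 10, 10, 10, 10]))  # 1.0

# A skewed meta: one hero dominates the picks.
print(core([1, 1, 2, 3, 40]))      # 0.8
```

Note how the skewed data set still scores 0.8 because the boundary values themselves count as “inside” — exactly the effect described above.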

I am an amateur and don’t really know all the right terminology, so I’m not quite sure what to call this. But let’s call this value the “Core”.

How to calculate it?

There are some important details. First of all, the number of picks/bans can differ between reports, and there may also be problems when calculating the Core as the total number of matches grows. For this I decided to use the numbers of picks and bans relative to the median, which you might have seen as the “MP” and “MB” columns in the “Picks and bans” section. Rounding up (and pinning our Q1 value at 1), we get a good descriptor that works for calculating the Core.
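A sketch of that normalization, assuming the rounding is a plain ceiling on the count-to-median ratio (the post doesn’t spell out the exact rounding rule):

```python
import math
import statistics

def relative_to_median(counts):
    """Pick (or ban) counts expressed relative to the median count,
    rounded up — roughly what the "MP" / "MB" columns show.
    Dividing by the median makes reports of different sizes comparable."""
    med = statistics.median(counts)
    return [math.ceil(c / med) for c in counts]

print(relative_to_median([4, 8, 8, 12, 40]))  # [1, 1, 1, 2, 5]
```

With this scaling a median-popularity hero always lands at 1, so the Q1 bound can be fixed at 1 regardless of how many matches the report contains.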

Second, there are three different values to work with: the picks Core, the bans Core and the winrate Core. And while picks and bans are relatively straightforward, winrate requires a more clever approach. For this, instead of the [ Q1, Q3 ] range I went with [ m − 0.5·M, m + 0.5·M ], where m is the median winrate in the data set, and M is the difference between the median and the maximum winrate (although it would arguably be better to take the maximum unsigned deviation instead).
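Sketching the winrate variant under the same assumptions as before, with the [ m − 0.5·M, m + 0.5·M ] range substituted for the quartiles:

```python
import statistics

def winrate_core(winrates):
    """Core for winrates, using [m - 0.5*M, m + 0.5*M] instead of [Q1, Q3],
    where m is the median winrate and M = max(winrates) - m."""
    m = statistics.median(winrates)
    M = max(winrates) - m
    lo, hi = m - 0.5 * M, m + 0.5 * M
    inside = [w for w in winrates if lo <= w <= hi]
    return len(inside) / len(winrates)

# Median 50, max 60, so the accepted band is [45, 55].
print(winrate_core([45.0, 48.0, 50.0, 52.0, 60.0]))  # 0.8
```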

At first I wanted a single number to describe the balance level for every report, but decided to keep all three numbers and add an average balance rank instead. This makes it easier to see the whole picture: how balanced the meta was in terms of hero popularity, winrates and bans. The average rank was calculated as a simple mean at first, but I’ve since updated the formula slightly. Now winrate balance matters most, then picks, then bans.
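The updated weights aren’t published, so as a purely hypothetical illustration of “winrate matters most, then picks, then bans” (the (0.5, 0.3, 0.2) split is my placeholder, not the real formula):

```python
def average_balance(winrate_core, picks_core, bans_core,
                    weights=(0.5, 0.3, 0.2)):
    """Weighted mean of the three balance values. The weights here are a
    placeholder that merely reflects the stated ordering:
    winrate > picks > bans."""
    w = weights
    return w[0] * winrate_core + w[1] * picks_core + w[2] * bans_core

print(average_balance(0.8, 0.7, 0.6))
```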

What about team diversity?

It’s a bit more complicated.

The approach is rather similar, but now we have two values.

The first is the Core, i.e. how “balanced” the team’s picks are, or how many heroes were actually played (not counting outliers).

The second number is the ratio of the number of heroes the team picked to the theoretical maximum. In other words: how many heroes the team could have picked over that number of matches if every player had played a unique hero in every match.
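That second number can be sketched directly; the only fixed assumption is Dota’s five picks per team per match:

```python
def pick_diversity(unique_heroes_picked, matches_played, picks_per_match=5):
    """Ratio of distinct heroes a team picked to the theoretical maximum:
    all five players picking a never-before-used hero every match."""
    theoretical_max = matches_played * picks_per_match
    return unique_heroes_picked / theoretical_max

# A team that used 40 distinct heroes across 10 matches (max would be 50).
print(pick_diversity(40, 10))  # 0.8
```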

Both numbers represent different things, so the end result is calculated as follows

(where Total is the total number of heroes available)

You can check out all the results in my stats hub.

The best examples are, in my opinion, the last week at Immortal Rank report, the Immortal Rank reports in general, and all the reports covering the last DPC season.

Interesting observations

It’s a bit hard to observe anything specific here, but comparing reports is interesting. In any case, there are some notable nuances.

First of all, it’s important to note that the balance is estimated for the meta of a report, meaning the “balance” may differ between tournaments. You should also note that the picks and contest rate balance numbers are the least representative here, since they are based on what people tend to favor, and people tend to be biased and wrong.

This is best seen when looking at the meta for the last week at Immortal Rank, specifically when comparing Europe and China.

Europe has a better winrate balance, but its picks and contest rate numbers are skewed.

China’s situation in terms of winrates is worse (though it changes from day to day), but its picks and contest rate balance numbers are pretty high. This is because people in China tend to play Random Draft in ranked matches, so the distribution is much more even there.

And while Europe and China have more or less similar balance numbers, the average balance across all regions gets as low as 68.6.

I also made two tables with the balance values: one for The International series and one for Immortal Rank reports (starting with 7.07).

The values in the tables below differ slightly from the final ones (because of the changes to the calculation of the average balance metric), but the relative positions of the reports are pretty much unchanged, so they still give a more or less decent picture.

TI1 takes first place, but mostly because it was the first: there weren’t many heroes available in the game yet, and there weren’t many matches played.

As for TI10, many people are biased towards it since it was the most recent TI, but it’s still one of the most balanced TIs and patches, along with TI9 (which is often confused with TI8). The most balanced tournament in the series, however, is probably TI7.

There’s also an interesting situation happening with the ranked reports.

The first thing you may notice is patch 7.19: it was the patch of The International 2018 (the 2nd-worst TI), and it does indeed seem to be one of the worst-balanced patches.

Patch 7.29c seems to be the most balanced, and its sibling 7.29d is the most balanced of the three longest recent patches (7.27d, 7.29d, 7.30e). Patch 7.30 is very close to them, as is 7.30d (the TI10 patch), but 7.30e dropped rather low. The reason for this is probably that the meta of the patch had been “figured out”.

Generally speaking, the first versions of patches (the a/b letters, or even no letter at all) tend to rank higher in “balance” than their later siblings.

It’s also interesting to note that the most balanced patches are mostly the recently released ones.

Closing words

Well, that was a fun little experiment. It doesn’t have much practical use, but it was fun!

Now you can use beautiful numbers to complain about bad Dota patches.

And that’s about it!

And here’s a mandatory reminder about my VK (ru), Telegram (ru), Twitter and discord. And the donations page, it really helps a lot. See ya!

Writing code and stories. https://spectral.gg/
