I'd like some opinions on something I've been working on, if y'all don't mind.
Here's one of my secret projects:
https://docs.google.com/spreadsheets/d/1_IAFcQB-EnN1jC_hQ6zuGAHBx-9kD7syEMn0sbV8Ibo/edit?usp=sharing
I went through all the sets in Shine's Top 8 and recorded the opponent's percent and the user's percent for every kill. My original goal was to compare the approximate percent ranges at which characters tend to kill, so I averaged the percents at which each character killed their opponents. After doing that, it became clear the data wasn't painting an accurate picture: using only the percents at which characters kill says nothing about when they fail to kill. Take Pikachu as an example. Looking only at his kills says nothing about the fact that ZeRo's Diddy survived to 199%, never getting killed, during Grand Finals, and an occurrence like that matters when talking about a character's ability to kill.

So I then recorded the peak damage of every character's opponent in every individual game. Often an opponent reached their peak damage at the moment they were killed, but there were some key exceptions (like ESAM's and ZeRo's set during Grand Finals). I combined the data from both categories (if a datum appeared in both, I only counted it once) and averaged them, calling the result the opponent's survivability: roughly how long we can expect an opponent to live against a given character. The logic seemed sound to me: using an opponent's percent both when they were killed and when they weren't gives an approximate percent we can expect them to survive until, which in turn gives us something somewhat concrete for comparing characters' killing abilities.

Then again, I could be missing something, which is why I'm sharing this with all of you. Input from other people would help ensure this isn't a misguided approach.
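For anyone curious how the numbers get combined, here's a minimal sketch of the survivability calculation in Python. The sample percents and the dedup rule (treating a peak that matches a kill percent as the same event) are my assumptions based on the description above, not the exact spreadsheet formulas:

```python
def survivability(kill_percents, peak_percents):
    """Average all kill percents plus any game-peak percents not already
    represented by a kill, so no datum is counted twice."""
    # Assumption: a peak equal to some kill percent is the same event
    extra_peaks = [p for p in peak_percents if p not in kill_percents]
    combined = list(kill_percents) + extra_peaks
    return sum(combined) / len(combined)

# Hypothetical numbers: three games ending in kills, plus one game where
# the opponent peaked at 199% without dying (the ZeRo's-Diddy situation).
kills = [112, 134, 120]
peaks = [112, 134, 120, 199]
print(round(survivability(kills, peaks), 1))  # → 141.2
```

Note how the 199% game pulls the average up even though no kill happened there, which is exactly the effect the kill-only average was missing.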