Foxy
Smash Master
Okay, guys. I apologize; I've been traveling around the country seeing doctors for the past couple of weeks and haven't had time to read the arguments in this thread until now, but I've caught up. I wish I had gotten here sooner.
I have responses to about everything.
Some people have asked, why am I on the panel?
Back in the old panel, Mike and I were added during the same period and began participating. From that point on I kept spreadsheets of tournament results, rankings, and players in the state, trying to maintain extensive statistical data. I've continued that ever since, and every rankings period I provide full, analyzed results for the entire period for us to discuss.
Since my first period on the panel, I've been essentially the only member who keeps tracking results and statistics in between individual rankings, and I provide everything needed for every discussion. PP, having led a number of rankings with me participating, I suppose saw that as meaning I was the major contributor and decided to make me lead of the panel.
I want to explain how the rankings work and why the past period's rankings had issues. First, we do rankings 100% from results. Previous periods are given zero weight, with exceptions for difficult cases, ties in results, or a lack of data. After complete results are (ideally) put together by me, we discuss how to weigh and view all of the intertwined results and come up with a final list. The system has two flaws. First, despite being exclusively data-oriented, every ranking position seems more or less convincing depending on the pathos/ethos/logos of the member presenting the idea, so some positions may be wrong simply because they got longer and more persuasive discussion (human flaws ruin everything). Second, it is sometimes difficult to get complete results in a reasonable time frame, and that was part of the problem this time.
Now, I want to discuss this period. Regarding me supposedly "leaving out information" about DJ... I want to clear that up. I did mention the loss to DJ at the same tournament where I lost to Cam. However, we deal with the ENTIRE PERIOD. We have all of the data, but the only sensible way to look at it is over the full period: your results against a certain player across that whole time frame. From the beginning of the discussion we are concerned with full-period results; dwelling too much on individual tournaments (unless they're regional or national) puts distracting mental weight on that one piece of data. Anyway, no more rambling. My point is that DJ went 1-2 against me over the period, so when it was said that he only beat Cam "in the whole period," that was true: he did not beat me over the period. He did beat me at one specific tournament, but I countered that later.
The other mistakes are due to these three things, as far as I can think right now, in order of importance:
1. We had uncertain information about the records from the Duke tournament. This was the biggest problem, and one of the reasons Theo is lower than he should be (if I remember correctly, he did well there). We couldn't get our hands on full results from it, so we had to go with some word-of-mouth personal records to partially include the tournament. I'll be the first to admit I should have held off on discussing the rankings until I got the Tio file from DJ, but since Mike was pressed to get the rankings done and it's hard to get all of us together at once, we chose to go with what we had.
2. None of the panel members have any bias, but when discussing rankings, all data can be presented persuasively. Not that we are pushing for ourselves or for our favorites, but with data this (usually) convoluted, when someone lands on a sensible-seeming interpretation, it's easy to go too far down that road even if it's inaccurate. I'm not really speaking of the panel here, just of the nature of people in general, and of how language and communication can corrupt statistics. Point 1 is far more significant, but I'm throwing this out there for those who haven't given it much thought.
3. Honestly, we could use another unbiased and motivated participant. Three should be fine, but it's really 98% me, and despite my statistical focus, everything still carries my personal views when I present it, which does no good. I gather and analyze all of the data, and then we discuss it. Karn is rarely part of the discussions; we just give him our list, he raises his issues, and we work it out. Not that he doesn't do his job (he does), but we usually can't get hold of him while we're discussing. Mike is more involved, discussing the data and opinions I document at greater length, but it's still usually from my perspective. I feel that if we had another panelist who could balance and share the statistical perspective, we could lose a lot of the human bias that comes from me initiating most things. I think DJ would be good for this role, and if people think it's a good idea, I'd be really happy to add him.