
For example, to avoid skewed results, IMDB uses a special vote-weighting formula for its Top 250 list:

weighted rating (WR) = (v ÷ (v + m)) × R + (m ÷ (v + m)) × C

I don't find this very intuitive, and on top of that they have to exclude non-regular voters.
Another problem is that the old rating system influences the voters themselves. For example, they look at a movie's rating on IMDB, something like 60000 votes averaging 7.6, and think: "Hey, this movie doesn't deserve 7.6, it should be lower." So they don't rate the movie at the score they actually think it deserves. Instead they vote 1, to drag the aggregated result as far as possible toward the score they have in mind. This problem affects every rating system in which users see other voters' choices before casting their own, and on websites that is usually the case.
So this is my new system: instead of averaging raw votes, standardize each user's votes first (subtract that user's mean vote and divide by their standard deviation), then average those standardized scores per movie.

If you ever took a course on variance analysis, this comes naturally to mind. Apparently it doesn't for the creators of today's voting systems, who just compute the raw average.
What will it change? First, you won't have to puzzle over different rating scales anymore: you'll have a good sense of what -0.612 means in contrast to +0.997. Second, spotting the current result before voting won't distort the outcome nearly as much, because you can no longer overexpress your opinion by voting at the extreme. An extreme vote only carries extra weight if you have cast many more moderate votes, and that takes time and consideration.
So the math is easy and is already used heavily in analysing surveys and similar data. From the user's perspective the voting wouldn't change, so they would hardly notice a difference. Only the results would be much more interesting, because fake votes are stripped of their power.
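The idea above can be sketched in a few lines of Python. This is a minimal illustration, not a production implementation: it assumes each user has rated at least two items with some spread, and the example votes are made up. Note how the tactical all-or-nothing voter ends up with no more pull than a moderate one:

```python
import statistics

def standardized_scores(votes_by_user):
    """Sketch of per-user standardization: convert each user's raw votes
    into z-scores (subtract that user's mean, divide by their standard
    deviation), then average the z-scores each item received."""
    item_scores = {}
    for user, votes in votes_by_user.items():
        ratings = list(votes.values())
        if len(ratings) < 2:
            continue  # need at least two votes to estimate a spread
        mean = statistics.mean(ratings)
        stdev = statistics.stdev(ratings)
        if stdev == 0:
            continue  # user rates everything identically: no information
        for item, r in votes.items():
            item_scores.setdefault(item, []).append((r - mean) / stdev)
    return {item: statistics.mean(zs) for item, zs in item_scores.items()}

# Made-up example: bob votes at the extremes to push results around,
# but after standardization his "1" counts like any mild thumbs-down.
votes = {
    "alice": {"movie_a": 8, "movie_b": 6, "movie_c": 7},
    "bob":   {"movie_a": 10, "movie_b": 1, "movie_c": 1},
}
print(standardized_scores(votes))
```

Because every user's votes are rescaled to their own spread, voting 1 instead of 5 only shifts the result by a bounded amount, which is exactly why prevote result spotting loses its power.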


This post originally appeared here