Do you love coding? Ever thought you could earn big money from it? Think you're that good at it? The Netflix Prize is an open competition for the best collaborative filtering algorithm that predicts user ratings for films, based on previous ratings. The competition is held by Netflix, an online DVD-rental service, and is open to anyone (with some exceptions). The grand prize of $1,000,000 is reserved for the entry that bests Netflix's own rating-prediction algorithm by 10%.

Here's what they have to say:

"Netflix is all about connecting people to the movies they love. To help customers find those movies, we’ve developed our world-class movie recommendation system: CinematchSM. Its job is to predict whether someone will enjoy a movie based on how much they liked or disliked other movies. We use those predictions to make personal movie recommendations based on each customer’s unique tastes. And while Cinematch is doing pretty well, it can always be made better.

Now there are a lot of interesting alternative approaches to how Cinematch works that we haven’t tried. Some are described in the literature, some aren’t. We’re curious whether any of these can beat Cinematch by making better predictions. Because, frankly, if there is a much better approach it could make a big difference to our customers and our business. So, we thought we’d make a contest out of finding the answer.

It’s "easy" really. We provide you with a lot of anonymous rating data, and a prediction accuracy bar that is 10% better than what Cinematch can do on the same training data set. (Accuracy is a measurement of how closely predicted ratings of movies match subsequent actual ratings.) If you develop a system that we judge most beats that bar on the qualifying test set we provide, you get serious money and the bragging rights. But (and you knew there would be a catch, right?) only if you share your method with us and describe to the world how you did it and why it works.

Serious money demands a serious bar. We suspect the 10% improvement is pretty tough, but we also think there is a good chance it can be achieved. It may take months; it might take years. So to keep things interesting, in addition to the Grand Prize, we’re also offering a $50,000 Progress Prize each year the contest runs. It goes to the team whose system we judge shows the most improvement over the previous year’s best accuracy bar on the same qualifying test set. No improvement, no prize. And like the Grand Prize, to win you’ll need to share your method with us and describe it for the world."

The contest began on October 2, 2006 and was set to run through at least October 2, 2011, or until someone achieved an improvement of over 10%.

Status of the Contest

Nobody managed the full 10% in 2007 or 2008, but this year team BellKor’s Pragmatic Chaos (composed of engineers from AT&T, Commendo, Pragmatic Theory, and Yahoo Research) broke the 10% threshold and guaranteed that someone would win the prize. That wasn’t the end, though: once the threshold was broken, every team got a 30-day buffer period to try to surpass BellKor’s achievement. Today was the last day of the contest, and the Netflix Prize closed in dramatic fashion as BellKor and another team, The Ensemble, duked it out to see who could obtain the best score. So, which team just became millionaires? Unfortunately, the answer is: we don’t know yet.
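Before getting to the leaderboard drama, a quick aside on what that "accuracy bar" actually measures. The contest scores submissions by root mean squared error (RMSE) between predicted and actual star ratings, and the 10% bar means an RMSE at least 10% lower than Cinematch's. Here's a minimal sketch of the arithmetic; the 0.9514 figure is Cinematch's commonly cited score on the public "quiz" subset, and the ratings below are made up for illustration:

```python
import math

# Assumption: Cinematch's widely reported RMSE on the public "quiz" subset.
CINEMATCH_RMSE = 0.9514

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual star ratings."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

def improvement(candidate_rmse, baseline=CINEMATCH_RMSE):
    """Percentage improvement over the baseline; >= 10.0 clears the grand prize bar."""
    return 100.0 * (baseline - candidate_rmse) / baseline

# Toy, made-up predictions on the 1-5 star scale
# (real submissions predict roughly 2.8 million held-out ratings):
predicted = [3.8, 2.1, 4.5, 3.0]
actual    = [4,   2,   5,   3]

print(f"toy RMSE: {rmse(predicted, actual):.4f}")
print(f"10% bar:  RMSE <= {CINEMATCH_RMse * 0.90:.4f}" if False else
      f"10% bar:  RMSE <= {CINEMATCH_RMSE * 0.90:.4f}")
```

On the real data, shaving each hundredth of a point off the RMSE gets dramatically harder, which is why the 10% bar took nearly three years to clear.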
Now, the leaderboard clearly shows The Ensemble with the best score: a 10.10% improvement over Netflix’s movie recommendation algorithm, versus BellKor’s 10.09%. However, the prize is based on two scores, not just one. The other set, the “test” subset, isn’t public, and the winner on that side won’t be revealed until the grand prize is awarded in the next few weeks. Let’s see who wins...

References:
http://www.netflixprize.com//rules
http://en.wikipedia.org/wiki/Netflix_Prize
http://mashable.com/2009/07/26/netflix-prize-winner-is/
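To see just how razor-thin that margin is, you can convert the public leaderboard percentages back into RMSE. A quick back-of-the-envelope sketch, again assuming the commonly cited 0.9514 Cinematch quiz-set baseline:

```python
# Back-of-the-envelope: turn leaderboard improvement percentages into RMSE,
# assuming Cinematch's 0.9514 quiz-set baseline (an outside figure, not from the rules page).
CINEMATCH_RMSE = 0.9514

for team, pct in [("The Ensemble", 10.10), ("BellKor's Pragmatic Chaos", 10.09)]:
    print(f"{team}: RMSE ~ {CINEMATCH_RMSE * (1 - pct / 100):.4f}")

# The Ensemble: RMSE ~ 0.8553
# BellKor's Pragmatic Chaos: RMSE ~ 0.8554
# i.e. roughly 0.0001 RMSE separates the two teams on the public quiz set.
```

With a gap that small on the quiz set, the hidden test set really could go either way.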
Nice one, buddy. I didn't know about this one. What if all the G4EF members team up for next year? :rofl:
lol, but I doubt we'd win even then; the fact that it took three years to get past the 10% threshold shows how difficult it must have been...
$1 million is not a small amount. Many people in India would not earn that much working their whole life as a software professional. Some may, though.
To be more accurate, it's about 4.5 crores (at roughly ₹45 to the dollar), which is huge!! Especially considering they worked on the algo for at most three years. But the downside is that only one team wins this amount; the rest have nothing in hand after working so hard.