Measuring the Performance of Tennis Prediction Models

With the recent buzz about Elo rankings in tennis, both at FiveThirtyEight and here at Tennis Abstract, comes the ability to forecast the results of tennis matches. It’s not far-fetched to ask which of these models performs better and, even more interesting, how they fare compared to other ‘models’, such as the official ATP ranking system or the betting markets.

For this admittedly limited investigation, we collected the (implied) forecasts of five models for the 2016 US Open: FiveThirtyEight, Tennis Abstract, Riles, the official ATP rankings, and the Pinnacle betting market. The first three models are based on Elo. To infer forecasts from the ATP rankings, we use a specific formula1, and for Pinnacle, one of the biggest tennis bookmakers, we calculate the implied probabilities from the offered odds (minus the overround)2.
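
For concreteness, both conversions can be sketched in a few lines of Python. The function names and sample numbers are my own; e = 0.85 is the constant quoted in footnote 1, and the proportional overround removal shown here is only the simplest of several common de-margining methods.

```python
# Sketch of the two conversions described above. Function names and sample
# numbers are illustrative; e = 0.85 is the constant from footnote 1.

def atp_win_prob(points_a, points_b, e=0.85):
    """P(A beats B) inferred from the players' ATP ranking points."""
    return points_a ** e / (points_a ** e + points_b ** e)

def implied_probs(odds_a, odds_b):
    """De-margined win probabilities from two-way decimal betting odds,
    removing the overround proportionally (one of several common methods)."""
    raw_a, raw_b = 1 / odds_a, 1 / odds_b
    overround = raw_a + raw_b          # > 1 because of the bookmaker margin
    return raw_a / overround, raw_b / overround

# A player with twice his opponent's ranking points:
print(round(atp_win_prob(4000, 2000), 3))                # 0.643
# Hypothetical decimal odds of 1.50 / 2.80:
print([round(p, 3) for p in implied_probs(1.50, 2.80)])  # [0.651, 0.349]
```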

Next, we simply compare each model’s forecasts with reality, asking: if player A was predicted to be the winner (P(A) > 0.5), did he actually win the match? Doing that for each match and each model (ignoring retirements and walkovers), we get the following results.

Model		% correct
Pinnacle	76.92%
538		75.21%
TA		74.36%
ATP		72.65%
Riles		70.09%
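
The comparison behind this table can be sketched in a few lines. The data format here is hypothetical: a list of predicted probabilities for player A, paired with the actual outcomes.

```python
# Minimal sketch: how often did the predicted favorite actually win?
# `forecasts` is a hypothetical list of (p_a, a_won) pairs.

def pct_correct(forecasts):
    """Fraction of matches in which the model's favorite won."""
    hits = sum(1 for p_a, a_won in forecasts if (p_a > 0.5) == a_won)
    return hits / len(forecasts)

sample = [(0.75, True), (0.55, False), (0.30, False), (0.62, True)]
print(round(pct_correct(sample), 2))  # 0.75
```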

What we see here is the percentage of predictions that turned out to be right. The betting model (based on Pinnacle’s odds) comes out on top, followed by the Elo models of FiveThirtyEight and Tennis Abstract. Interestingly, the Elo model of Riles is outperformed by the predictions inferred from the ATP rankings. Since an Elo model has several parameters that can be tweaked, Riles may still have some room for improvement.

However, just looking at the percentage of correctly called matches does not tell the whole story. There are more granular metrics for investigating the performance of a prediction model. Calibration, for instance, captures the ability of a model to provide forecast probabilities that are close to the true probabilities: in an ideal model, 70% forecasts should come true in exactly 70% of cases. Resolution measures how much the forecasts differ from the overall average. The rationale is that simply forecasting the overall base rate for every match yields a reasonably well-calibrated set of predictions, but it is far less useful than a method that achieves the same calibration while taking current circumstances into account. In other words, the more extreme (and still correct) the forecasts are, the better.

In the following table, we sort the set of predictions into probability bins and show what percentage of the predictions in each bin were correct. This also enables us to calculate Calibration and Resolution for each model.

Model    50-59%  60-69%  70-79%  80-89%  90-100% Cal  Res   Brier
538      53%     61%     85%     80%     91%     .003 .082  .171
TA       56%     75%     78%     74%     90%     .003 .072  .182
Riles    56%     86%     81%     63%     67%     .017 .056  .211
ATP      50%     73%     77%     84%     100%    .003 .068  .185
Pinnacle 52%     91%     71%     77%     95%     .015 .093  .172

As we can see, the predictions are not always in line with what the corresponding bin would suggest. Some of these deviations, for instance the fact that only 67% of the Riles model’s 90-100% forecasts were correct, can be explained by small sample size (only three forecasts in that case). However, there are two interesting cases (marked in bold) where the sample size is larger and which caught my interest. Both the Riles and Pinnacle models are strongly (and statistically significantly) underconfident in their 60-69% predictions. In other words, these probabilities should have been higher, because the forecasts actually came true 86% and 91% of the time, respectively.3 For betting aficionados, the fact that Pinnacle underestimates the favorites here may be especially interesting, because it could reveal some value, as punters would say. For the Riles model, this would be a possible starting point for tweaking.

The last three columns show Calibration (lower is better), Resolution (higher is better), and the Brier score (lower is better). The Brier score combines Calibration and Resolution (plus the uncertainty of the outcomes) into a single score measuring the accuracy of a set of predictions. The models of FiveThirtyEight and Pinnacle perform essentially equally well on this subset of data. Then there is a slight gap before the Tennis Abstract model and the ATP ranking model come in third and fourth, respectively. The Riles model performs worst in terms of both Calibration and Resolution, hence ranking fifth in this analysis.
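
These quantities are related by the Murphy decomposition of the Brier score: Brier = Calibration - Resolution + Uncertainty, which holds exactly when all forecasts inside a bin are identical. A sketch, with illustrative bin edges rather than the exact binning used above:

```python
# Sketch of the Murphy decomposition of the Brier score. Bin edges and
# variable names are illustrative, not the article's exact setup.

from collections import defaultdict

def brier_decomposition(forecasts, outcomes, n_bins=5):
    """Return (brier, calibration, resolution) for binary outcomes (0/1)."""
    n = len(forecasts)
    brier = sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / n
    base_rate = sum(outcomes) / n            # overall rate of wins

    bins = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        k = min(int(f * n_bins), n_bins - 1)  # e.g. 0.72 -> bin 3 of 0..4
        bins[k].append((f, o))

    cal = res = 0.0
    for members in bins.values():
        n_k = len(members)
        f_k = sum(f for f, _ in members) / n_k   # mean forecast in the bin
        o_k = sum(o for _, o in members) / n_k   # observed frequency in bin
        cal += n_k * (f_k - o_k) ** 2            # calibration: lower is better
        res += n_k * (o_k - base_rate) ** 2      # resolution: higher is better
    return brier, cal / n, res / n
```

For example, with ten identical 70% forecasts of which seven came true, calibration and resolution are both zero and the Brier score equals the uncertainty term 0.7 × 0.3 = 0.21.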

To conclude, I would like to show a common way of graphically displaying a set of predictions. The reliability diagram compares the observed rate of correct forecasts with the forecast probability (similar to the table above).

The closer one of the colored lines is to the black line, the more reliable the forecasts are. If a forecast line is above the black line, the forecasts are underconfident; in the opposite case, they are overconfident. Given that we investigated only one tournament and therefore had to work with a small sample (117 predictions), the big swings in the graph are to be expected. Still, we can see that the model based on the ATP rankings does a good job of avoiding overconfidence, even though it is known to be outperformed by Elo in terms of prediction accuracy.

To sum up, this analysis shows how different predictive models for tennis can be meaningfully compared with one another. Moreover, I hope I was able to highlight some of the areas where each model is strong and where it is weak. Obviously, this investigation could go into much more detail, for example by comparing how well the models do for different kinds of players (e.g., by ranking), on different surfaces, and so on. That is something I will save for later. For now, I’ll try to adjust my sleeping patterns to the schedule of play at the Australian Open, and I hope you can do the same.

This is a guest article by me, Peter Wetz. I am a computer scientist interested in racket sports and data analytics based in Vienna, Austria.


1. P(a) = a^e / (a^e + b^e), where a is player A’s ranking points, b is player B’s ranking points, and e is a constant. We use e = 0.85 for ATP men’s singles.

2. The betting market is not really a model in itself; the bookmakers’ goal is simply to balance their book. This means that the odds more or less reflect the wisdom of the crowd, which makes them a very good predictor.

3. As an example, one instance where Pinnacle was underconfident and all other models were more confident is the R32 encounter between Ivo Karlovic and Jared Donaldson. Pinnacle’s implied probability for Karlovic to win was 64%. The other models (except the also-underconfident Riles model) gave 72% (ATP ranking), 75% (FiveThirtyEight), and 82% (Tennis Abstract). As it turns out, Karlovic won in straight sets. One factor at play here might be that, this being the US Open, American bettors were more likely to be confident about the US player Jared Donaldson and hence place bets on him. To balance the book, Pinnacle would then lower the odds on Donaldson, resulting in higher odds (and a lower implied probability) for Karlovic.

Why Novak Djokovic is Still Number One

Two weeks ago, Andy Murray took over the ATP #1 ranking from Novak Djokovic. Yesterday, he defeated Djokovic in their first meeting since June, securing his place at the top of the year-end ranking table. Murray has been outstanding in the second half of this season, winning all but three of his matches since the Roland Garros final, and he capped the year in style, beating four top-five players to claim the title at the World Tour Finals.

Despite all that, Murray is not the best player in the world. That title still belongs to Djokovic. Since June, Murray has closed the gap, establishing himself as part of what we might call the “Big Two,” but he hasn’t quite ousted his rival. There’s no question that over this period, Murray has played better–that sort of thing is occasionally debatable, but this season it’s just historical fact–but identifying the best player implies something more predictive, and it’s much more difficult to determine by simply looking over a list of recent results.

The ATP rankings generally do a good job of telling us which players are better than others. But the official system has two major problems: It ignores opponent quality, and it artificially limits its scope to the last 52 weeks. Pundits and fans tend to have different problems: They often give too much credit to opponent quality (“He beat Djokovic, so now he’s number one!”) and exhibit an even more extreme recency bias (“He’s looked unbeatable this week!”).

Two systems that avoid these issues–Elo and Jrank–both place Djokovic comfortably ahead of Murray. These algorithms handle the details of recent matches and opponent quality differently from each other, but what they share in common is more important: They consider opponent quality and they don’t use an arbitrary time cutoff like the ATP ranking system does.

Here’s how the three methods would forecast a Djokovic-Murray match, were it held today:

  • ATP: Murray favored, 51.6% chance of winning
  • Elo: Djokovic favored, 61.6% chance of winning
  • Jrank: Djokovic favored, 57.0% chance of winning

Betting markets favored Djokovic by a margin of slightly more than 60/40 yesterday, though bettors probably gave him some of that edge because they thought Murray would be fatigued after his marathon match on Saturday.

As I wrote last week, Elo doesn’t deny that Murray has had a tremendous half-season. Instead, it gives him less credit than the official algorithm does for victories over lesser opponents (such as John Isner in the Paris Masters final), and it recognizes that he started his current run of form at an enormous disadvantage. With his title in London, Murray reached a new peak Elo rating, but it still isn’t enough to overtake Djokovic.

Even though Elo still prefers Novak by a healthy margin, it reflects how much the situation at the top of the ranking list has changed. At the beginning of 2016, Elo gave Djokovic a 76.5% chance of winning a head-to-head against Murray, and that probability rose as high as 81% in April. It fell below 70% after the Olympics, and the gap is now the smallest it has been since February 2011.

Last week illustrates how difficult it will be for Murray to take over the #1 Elo ranking. The pre-tournament Elo difference of 91 points between the two players has shrunk by only 8%, to 84 points. Murray’s win yesterday was worth a bit more than a measly seven points, but Djokovic had several opportunities to nudge his rating upwards in his first four matches as well. Despite some of Novak’s head-scratching losses this fall, he still wins most of his matches–some of them against very good players–slowing the decline of his Elo rating.

Of course, Elo is just a measuring stick–like any ranking system, it doesn’t tell us what’s really happening on court. It’s possible that Murray has made a significant (and semi-permanent) leap forward or that Djokovic has taken a major step back. On the other hand, streaks happen even without such leaps, and they always end. The smart money is usually on small, gradual changes to the status quo, and Elo gives us a way to measure those changes.

For Elo to rate Murray ahead of Djokovic, it will probably require several more months of these gradual changes. The only faster alternative is for Djokovic to start losing more matches to the likes of Jiri Vesely and Sam Querrey. When faced with dramatic evidence, Elo makes more dramatic changes. While Djokovic has occasionally provided that evidence this season, he has usually offered enough proof–like four wins at the World Tour Finals–to comfortably maintain his position at the top.

Factchecking the History of the ATP Number One With Elo

As I wrote at The Economist this week, Andy Murray might sit atop the ATP rankings, but he probably isn’t the best player in tennis right now. That honor still belongs to Novak Djokovic, who comes in higher on the Elo ranking list, which uses an algorithm that is more predictive of match outcomes than the ATP table.

This isn’t the first time Elo has disagreed with the official rankings over the name at the top. Of the 26 men to have reached the ATP number one ranking, only 18 also became number one on the Elo list. A 19th player, Guillermo Coria, was briefly Elo #1 despite never achieving the same feat on the ATP rankings.

Four of the remaining eight players–Murray, Patrick Rafter, Marcelo Rios, and John Newcombe–climbed as high as #2 in the Elo rankings, while the last four–Thomas Muster, Carlos Moya, Marat Safin, and Yevgeny Kafelnikov–only got as high as #3. Moya and Kafelnikov are extreme cases of the rankings mismatch, as neither player spent even a single full season inside the Elo top five.

By any measure, though, Murray has spent a lot of time close to the top spot. What makes his current ascent to the #1 spot so odd is that in the past, Elo thought he was much closer. Despite his outstanding play over the last several months, there is still a 100-point Elo gap between him and Djokovic. That’s a lot of space: Most of the field at the WTA Finals in Singapore this year was within a little more than a 100-point range.

January 2010 was the Brit’s best shot. At the end of 2009, Murray, Djokovic, and Roger Federer were tightly packed at the top of the Elo leaderboard. In December, Murray was #3, but he trailed Fed–and the #1 position–by only 25 points. In January, Novak took over the top spot, and Murray closed to within 16 points–a small enough margin that one big upset could make the difference. Altogether, Murray has spent 63 weeks within 100 points of the Elo top spot, none of those since August 2013.

For most of the intervening three-plus years, Djokovic has been steadily setting himself apart from the pack. He reached his career Elo peak in April of this season, opening up a lead of almost 200 points over Federer, who was then #2, and 250 points over Murray. Since Roland Garros, Murray has closed the gap somewhat, but his lack of opportunities against highly-rated players has slowed his climb.

If Murray defeats Djokovic in the final this week in London, it will make the debate more interesting, not to mention secure the year-end ATP #1 ranking for the Brit. But it won’t affect the Elo standings. When two players have such lengthy track records, one match doesn’t come close to eliminating a 100-point gap. Novak will end the season as Elo #1, and he is well-positioned to maintain that position well into 2017.

Elina Svitolina and Multiple #1 Upsets

Last week in Beijing, Elina Svitolina beat new WTA #1 Angelique Kerber. It was the first time the Ukrainian defeated Kerber this season, but it wasn’t her first 2016 triumph over a player ranked #1. At the Rio Olympics in August, Svitolina upset then-top-ranked Serena Williams.

It’s unusual for a player to face two (or more) different #1-ranked opponents in the same season. Since 1985, it has happened 136 times on the WTA tour and 148 times on the ATP tour. That’s less than five times per season per tour.

Of course, it’s much less common to upset multiple #1-ranked opponents, as Svitolina did. This was only the 16th time a woman did so (again, since 1985), while it has happened on the men’s side 18 times.

Here is a full list of WTA player-seasons that featured defeats of more than one top-ranked player:

Year  Player               Upsets                      
2016  Elina Svitolina      Kerber; Serena              
2010  Samantha Stosur      Serena; Wozniacki           
2009  Venus Williams       Serena; Safina              
2008  Dinara Safina        Henin; Sharapova; Jankovic  
2006  Justine Henin        Davenport; Mauresmo         
2003  Justine Henin        Serena; Clijsters           
2002  Kim Clijsters        Serena; Venus               
2002  Serena Williams      Capriati; Venus             
2001  Lindsay Davenport    Capriati; Hingis            
1999  Amelie Mauresmo      Hingis; Davenport           
1999  Venus Williams       Davenport; Hingis           
1997  Amanda Coetzer       Hingis; Graf                
1996  Jana Novotna         Graf; Seles                 
1996  Kimiko Date Krumm    Graf; Seles                 
1991  Martina Navratilova  Graf; Seles                 
1991  Gabriela Sabatini    Graf; Seles

It’s quite an accomplished list. As we might expect, there’s a lot of overlap between the players who achieved these upsets and past and future #1-ranked players. The real standouts here are Justine Henin and Venus Williams, who managed the feat twice, and Dinara Safina, who faced three different #1s in 2008, going undefeated against them.

Here are the men who beat multiple #1s in the same season:

Year  Player                 Upsets             
2013  Juan Martin Del Potro  Nadal; Djokovic    
2012  Andy Murray            Federer; Djokovic  
2011  David Ferrer           Nadal; Djokovic    
2011  Jo Wilfried Tsonga     Nadal; Djokovic    
2010  Marcos Baghdatis       Nadal; Federer     
2009  Juan Martin Del Potro  Nadal; Federer     
2008  Andy Murray            Nadal; Federer     
2008  Gilles Simon           Nadal; Federer     
2003  Rainer Schuettler      Roddick; Agassi    
2003  Fernando Gonzalez      Hewitt; Agassi     
2001  Greg Rusedski          Safin; Kuerten     
2001  Max Mirnyi             Safin; Kuerten     
1995  Michael Chang          Agassi; Sampras    
1992  Richard Krajicek       Courier; Edberg    
1991  Guy Forget             Edberg; Becker     
1991  Andrei Cherkasov       Edberg; Becker     
1990  Boris Becker           Lendl; Edberg      
1988  Boris Becker           Wilander; Lendl

This list isn’t quite as impressive, though it does capture several very good players at their best.  It also highlights the world-beating potential of Max Mirnyi, who–despite never reaching the top 15 himself–finished the 2001 season with a 3-1 record against ATP #1s.

The rarity of facing multiple #1s in the same season–let alone beating them–stops us from drawing any meaningful conclusions about what Svitolina’s feat indicates for her future. At the very least, however, it reminds us of the Ukrainian’s potential as a future star, and puts her among some very good historical company.

How Elo Solves the Olympics Ranking Points Conundrum

Last week’s Olympic tennis tournament had superstars, it had drama, and it had tears, but it didn’t have ranking points. Surprise medalists Monica Puig and Juan Martin del Potro scored huge triumphs for themselves and their countries, yet they still languish at 35th and 141st in their respective tour’s rankings.

The official ATP and WTA rankings have always represented a collection of compromises, as they try to accomplish dual goals of rewarding certain behaviors (like showing up for high-profile events) and identifying the best players for entry in upcoming tournaments. Stripping the Olympics of ranking points altogether was an even weirder compromise than usual. Four years ago in London, some points were awarded and almost all the top players on both tours showed up, even though many of them could’ve won more points playing elsewhere.

For most players, the chance at Olympic gold was enough. The level of competition was quite high, so while the ATP and WTA tours treat the tournament in Rio as a mere exhibition, those of us who want to measure player ability and make forecasts must factor Olympics results into our calculations.

Elo, a rating system originally designed for chess that I’ve been using for tennis for the past year, is an excellent tool to use to integrate Rio results with the rest of this season’s wins and losses. Broadly speaking, it awards points to match winners and subtracts points from losers. Beating a top player is worth many more points than beating a lower-rated one. There is no penalty for not playing–for example, Stan Wawrinka‘s and Simona Halep‘s ratings are unchanged from a week ago.

Unlike the ATP and WTA ranking systems, which award points based on the level of tournament and round, Elo is context-neutral. Del Potro’s Elo rating improved quite a bit thanks to his first-round upset of Novak Djokovic–the same amount it would have increased if he had beaten Djokovic in, say, the Toronto final.
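
The mechanism can be sketched in a few lines. The K-factor of 32 here is a conventional chess default, not necessarily the value any of the tennis implementations uses; real systems typically vary K with matches played.

```python
# A minimal Elo update, as a sketch of the mechanism described above.
# K = 32 is a conventional placeholder, not a tennis-specific value.

def expected(rating_a, rating_b):
    """Pre-match win probability for player A under Elo."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, a_won, k=32):
    """Return both players' ratings after the match."""
    exp_a = expected(rating_a, rating_b)
    delta = k * ((1 if a_won else 0) - exp_a)  # winner gains, loser loses
    return rating_a + delta, rating_b - delta

# An upset moves far more points than an expected result:
new_underdog, new_favorite = update(1900, 2100, a_won=True)
```

Note that the points are zero-sum: whatever the winner gains, the loser gives up, which is why not playing leaves a rating unchanged.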

Many fans object to this, on the reasonable assumption that context matters. It certainly seems like the Wimbledon final should count for more than, say, a Monte Carlo quarterfinal, even if the same player defeats the same opponent in both matches.

However, results matter for ranking systems, too. A good rating system will do two things: predict winners correctly more often than other systems, and give more accurate degrees of confidence for those predictions. (For example, in a sample of 100 matches in which the system gives one player a 70% chance of winning, the favorite should win 70 times.) Elo, with its ignorance of context, predicts more winners and gives more accurate forecast certainties than any other system I’m aware of.

For one thing, it wipes the floor with the official rankings. While it’s possible that tweaking Elo with context-aware details would improve the results even more, the improvement would likely be minor compared to the massive gap between Elo’s accuracy and that of the ATP and WTA algorithms.

Relying on a context-neutral system is perfect for tennis. Instead of altering the ranking system with every change in tournament format, we can always rate players the same way, using only their wins, losses, and opponents. In the case of the Olympics, it doesn’t matter which players participate, or what anyone thinks about the overall level of play. If you defeat a trio of top players, as Puig did, your rating skyrockets. Simple as that.

Two weeks ago, Puig was ranked 49th among WTA players by Elo–several places lower than her WTA ranking of 37. After beating Garbine Muguruza, Petra Kvitova, and Angelique Kerber, her Elo ranking jumped to 22nd. While it’s tough, intuitively, to know just how much weight to assign to such an outlier of a result, her Elo rating just outside the top 20 seems much more plausible than Puig’s effectively unchanged WTA ranking in the mid-30s.

Del Potro is another interesting test case, as his injury-riddled career presents difficulties for any rating system. According to the ATP algorithm, he is still outside the top 100 in the world–a common predicament for once-elite players who don’t immediately return to winning ways.

Elo has the opposite problem with players who miss a lot of time due to injury. When a player doesn’t compete, Elo assumes his level doesn’t change. That’s clearly wrong, and it has cast a lot of doubt over del Potro’s place in the Elo rankings this season. The more matches he plays, the more his rating will reflect his current ability, but his #10 position in the pre-Olympics Elo rankings seemed overly influenced by his former greatness.

(A more sophisticated Elo-based system, Glicko, was created in part to improve ratings for competitors with few recent results. I’ve tinkered with Glicko quite a bit in hopes of more accurately measuring the current levels of players like Delpo, but so far, the system as a whole hasn’t come close to matching Elo’s accuracy while also addressing the problem of long layoffs. For what it’s worth, Glicko ranked del Potro around #16 before the Olympics.)

Del Potro’s success in Rio boosted him three places in the Elo rankings, up to #7. While that still owes something to the lingering influence of his pre-injury results, it’s the first time his post-injury Elo rating comes close to passing the smell test.

You can see the full current lists elsewhere on the site: here are ATP Elo ratings and WTA Elo ratings.

Any rating system is only as good as the assumptions and data that go into it. The official ATP and WTA ranking systems have long suffered from improvised assumptions and conflicting goals. When an important event like the Olympics is excluded altogether, the data is incomplete as well. Now as much as ever, Elo shines as an alternative method. In addition to a more predictive algorithm, Elo can give Rio results the weight they deserve.

The Case for Novak Djokovic … and Roger Federer … and Rafael Nadal

By winning the US Open last weekend and increasing his career total to ten Grand Slams, Novak Djokovic has pushed himself even further into conversations about the greatest of all time. At the very least, his 2015 season is shaping up to be one of the best in tennis history.

A recent FiveThirtyEight article introduced Elo ratings into the debate, showing that Djokovic’s career peak–achieved earlier this year at the French Open–is the highest of anyone’s, just above 2007 Roger Federer and 1980 Bjorn Borg. In implementing my own Elo ratings, I’ve discovered just how close those peaks are.

Here are my results for the top 15 peaks of all time [1]:

Player                 Year   Elo  
Novak Djokovic         2015  2525  
Roger Federer          2007  2524  
Bjorn Borg             1980  2519  
John McEnroe           1985  2496  
Rafael Nadal           2013  2489  
Ivan Lendl             1986  2458  
Andy Murray            2009  2388  
Jimmy Connors          1979  2384  
Boris Becker           1990  2383  
Pete Sampras           1994  2376  
Andre Agassi           1995  2355  
Mats Wilander          1984  2355  
Juan Martin del Potro  2009  2352  
Stefan Edberg          1988  2346  
Guillermo Vilas        1978  2325

A one-point gap is effectively nothing: It means that peak Djokovic would have a 50.1% chance of beating peak Federer. The 35-point gap separating Novak from peak Rafael Nadal is considerably more meaningful, implying that the better player has a 55% chance of winning.
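
Those percentages follow from the standard Elo logistic curve, under which a rating gap of d points gives the stronger player a win probability of 1 / (1 + 10^(-d/400)):

```python
# Win probability for the higher-rated player, given an Elo gap of d points.

def win_prob_from_gap(d):
    """Chance the higher-rated player wins under the standard Elo curve."""
    return 1 / (1 + 10 ** (-d / 400))

print(round(win_prob_from_gap(1) * 100, 1))   # 50.1 (Djokovic vs. Federer)
print(round(win_prob_from_gap(35) * 100, 1))  # 55.0 (Djokovic vs. Nadal)
```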

Surface-specific Elo

If we limit our scope to hard-court matches, Djokovic is still a very strong contender, but Fed’s 2007 peak is clearly the best of all time:

Player          Year  Hard Ct Elo  
Roger Federer   2007         2453  
Novak Djokovic  2014         2418  
Ivan Lendl      1989         2370  
Pete Sampras    1997         2356  
Rafael Nadal    2014         2342  
John McEnroe    1986         2332  
Andy Murray     2009         2330  
Andre Agassi    1995         2326  
Stefan Edberg   1987         2285  
Lleyton Hewitt  2002         2262

Ivan Lendl and Pete Sampras make much better showings on this list than on the overall ranking. Still, they are far behind Fed and Novak–the roughly 100-point difference between peak Fed and peak Pete is equivalent to a 64% probability that the higher-rated player would win.

On clay, I’ll give you three guesses who tops the list–and your first two guesses don’t count. It isn’t even close:

Player           Year  Clay Ct Elo  
Rafael Nadal     2009         2550  
Bjorn Borg       1982         2475  
Novak Djokovic   2015         2421  
Ivan Lendl       1988         2408  
Mats Wilander    1984         2386  
Roger Federer    2009         2343  
Jose Luis Clerc  1981         2318  
Guillermo Vilas  1982         2316  
Thomas Muster    1996         2313  
Jimmy Connors    1980         2307

Borg was great, but Nadal is in another league entirely. Though Djokovic has pushed Nadal out of many greatest-of-all-time debates–at least for the time being–there’s little doubt that Rafa is the greatest clay court player of all time, and likely the most dominant player in tennis history on any single surface.

Djokovic is well back of both Nadal and Borg, but in his favor, he’s the only player ranked in the top three for both major surfaces.

The survivor

As the second graph in the 538 article shows, Federer stands out as the greatest player of all time at his age. Most players have retired long before their 34th birthday, and even those who stick around aren’t usually contesting Grand Slam finals. In fact, Federer’s Elo rating of 2393 after his US Open semifinal win against Stanislas Wawrinka last week would rank as the sixth-highest peak of all time, behind Lendl and just ahead of Andy Murray.

Here are the top ten Elo peaks for players over 34:

Player         Age   34+ Elo  
Roger Federer  34.1     2393  
Jimmy Connors  34.1     2234  
Andre Agassi   35.3     2207  
Rod Laver      36.6     2207  
Ken Rosewall   37.4     2195  
Tommy Haas     35.3     2111  
Arthur Ashe    35.7     2107  
Ivan Lendl     34.1     2054  
Andres Gimeno  35.0     2035  
Mark Cox       34.0     2014

The 160-point gap between Federer and Jimmy Connors implies that 34-year-old Fed would win about 70% of the time against 34-year-old Connors. No one has ever sustained this level of play–or anything close to it–for this long.

At the risk of belaboring the point, similar arguments can be made for 33-year-old Fed, all the way to 30-year-old Fed. At almost any stage in the last four years, Federer has been better than any player in history at that age [2].  Djokovic has matched many of Roger’s career accomplishments so far, especially on clay, but it would be truly remarkable if he maintained a similar level of play through the end of the decade.

Current Elo ratings

While it’s not really germane to today’s subject, I’ve got the numbers, so let’s take a look at the current ATP Elo ratings. Since Elo is new to most tennis fans, I’ve included columns to indicate each player’s chances of beating Djokovic and of beating the current #10, Milos Raonic, based on their rating. As a general rule, a 100-point gap translates to a 64% chance of winning for the favorite, a 200-point gap implies 76%, and a 500-point gap is equivalent to 95%.

Rank  Player                  Elo  Vs #1  Vs #10  
1     Novak Djokovic         2511      -     91%  
2     Roger Federer          2386    33%     84%  
3     Andy Murray            2332    26%     79%  
4     Kei Nishikori          2256    19%     71%  
5     Rafael Nadal           2256    19%     71%  
6     Stan Wawrinka          2186    13%     62%  
7     David Ferrer           2159    12%     58%  
8     Tomas Berdych          2148    11%     56%  
9     Richard Gasquet        2128    10%     54%  
10    Milos Raonic           2103     9%       -  
11    Gael Monfils           2084     8%     47%  
12    Jo-Wilfried Tsonga     2083     8%     47%  
13    Marin Cilic            2081     8%     47%  
14    Kevin Anderson         2074     7%     46%  
15    John Isner             2035     6%     40%  
16    David Goffin           2027     6%     39%  
17    Grigor Dimitrov        2021     6%     38%  
18    Gilles Simon           2005     5%     36%  
19    Jack Sock              1994     5%     35%  
20    Roberto Bautista Agut  1986     5%     34%  
21    Philipp Kohlschreiber  1982     5%     33%  
22    Tommy Robredo          1963     4%     31%  
23    Feliciano Lopez        1955     4%     30%  
24    Nick Kyrgios           1951     4%     29%  
25    Ivo Karlovic           1949     4%     29%  
26    Jeremy Chardy          1940     4%     28%  
27    Alexandr Dolgopolov    1940     4%     28%  
28    Bernard Tomic          1936     4%     28%  
29    Fernando Verdasco      1932     3%     27%  
30    Fabio Fognini          1925     3%     26%


How Elo Rates US Open Finalists Flavia Pennetta and Roberta Vinci

Among the many good things that have happened to Flavia Pennetta and Roberta Vinci after reaching the final of this year’s US Open, both enjoyed huge leaps in Monday’s official WTA rankings. Pennetta rose from 26th to 8th, and Vinci jumped from 43rd to 19th.

Such large changes in rankings are always a little suspicious and expose the weakness of systems that award points based on round achieved. A lucky draw or one incredible outlier of a match doesn’t mean that a player is suddenly massively better than she was a couple of weeks ago.

To put it another way: As they are, the official rankings do a decent job of representing how a player has performed. What they don’t do so well is represent how well someone is playing, or the closely related issue of how well she will play.

For that, we can turn to Elo ratings, which Carl Bialik and Benjamin Morris used at the beginning of the US Open to compare Serena Williams to other all-time greats [1]. Elo awards points based on opponent quality, not the importance of the tournament or round. As such, the system provides a better estimate of the current skill level of each player than the official rankings do.
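The mechanics behind this are simple: after each match, the winner takes points from the loser in proportion to how surprising the result was, so beating a strong opponent moves a rating far more than beating a weak one. A minimal sketch of the standard update, assuming a fixed K-factor of 32 for illustration (real tennis Elo implementations tune K differently):

```python
def elo_update(winner_elo, loser_elo, k=32):
    """Return updated (winner, loser) ratings after one match.
    The winner gains k * (1 - expected), the loser loses the same amount,
    so a big upset transfers many more points than an expected result."""
    expected_win = 1.0 / (1.0 + 10 ** ((loser_elo - winner_elo) / 400.0))
    delta = k * (1.0 - expected_win)
    return winner_elo + delta, loser_elo - delta
```

Note that the tournament and round never enter the calculation: a win over a 2400-rated opponent in a small event is worth more than a win over a 1900-rated opponent in a Slam final.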

Sure enough, Elo agrees with my hypothesis that Pennetta didn't suddenly become the 8th best player in the world. Instead, she rose to 17th, just behind Garbine Muguruza (another Slam finalist overestimated by the rankings) and ahead of Elina Svitolina. Vinci didn't really return to the top 20, either: Elo places her 34th, between Camila Giorgi and Barbora Strycova.

While her official ranking of 8th is Pennetta’s career high, Elo disagrees again. The system claims that Pennetta peaked during the US Open six years ago, after a strong summer that involved semifinal-or-better showings in four straight tournaments, plus a fourth-round win over Vera Zvonareva in New York. She’s more than 100 points below that career-high level, equivalent to the present gap between her and 7th-Elo-rated Angelique Kerber.

The current Elo rankings hold plenty of surprises like this, having little in common with the official rankings:

Rank  Player                 Elo  
1     Serena Williams       2460  
2     Maria Sharapova       2298  
3     Victoria Azarenka     2221  
4     Simona Halep          2204  
5     Petra Kvitova         2174  
6     Belinda Bencic        2144  
7     Angelique Kerber      2130  
8     Venus Williams        2126  
9     Caroline Wozniacki    2095  
10    Lucie Safarova        2084

Rank  Player                 Elo   
11    Ana Ivanovic          2078  
12    Carla Suarez Navarro  2062  
13    Agnieszka Radwanska   2054  
14    Timea Bacsinszky      2041  
15    Sloane Stephens       2031  
16    Garbine Muguruza      2031  
17    Flavia Pennetta       2030  
18    Elina Svitolina       2023  
19    Madison Keys          2019  
20    Jelena Jankovic       2016

While Victoria Azarenka is still nearly 200 points shy of her peak, Elo gives her credit for the extremely tough draws that have met her return from injury. Another player rated much higher here than in the WTA rankings is Belinda Bencic, whose defeat of Serena launched her into the top ten.

The oldest final

Pennetta and Vinci are both unusually old for Slam finalists, not to mention players who reached that milestone for the first time. Elo doesn’t consider them among the very best players active today, but next to other 32- and 33-year-olds in WTA history, they compare very well indeed.

Among players 33 or older, Pennetta’s current rating is sixth best in the last thirty-plus years [2]. As the all-time list shows, that puts her in extraordinarily good company:

Rank  Player                Age   Elo  
1     Martina Navratilova  33.4  2527  
2     Serena Williams      33.9  2480  
3     Chris Evert          33.4  2412  
4     Venus Williams       33.3  2175  
5     Nathalie Tauziat     33.9  2088  
6     Flavia Pennetta      33.5  2030  
7     Wendy Turnbull       33.1  2018  
8     Conchita Martinez    33.3  2014

In the 32-and-over category, Vinci stands out as well. Her lower rating, combined with the somewhat larger pool of players who remained competitive to that age, means that she holds 24th place in this age group. For a player who has never cracked the top ten, 24th of all time is an impressive accomplishment.

Keep an eye out for more Elo-based analysis here. Soon, I’ll be able to post and update Elo ratings on Tennis Abstract and, once a few more kinks are worked out, use them to improve the WTA tournament forecasts on the site as well.
