Some players are much better returners than others. Many players are such good returners that everyone knows it, agrees upon it, and changes their game accordingly. This much, I suspect, we can all agree on.
How far does that go? When players are altering their service tactics and changing their risk calculations based on the man on the other side of the net, does the effect show up in the numbers? Do players double fault more or less depending on their opponent?
Put another way: Do some players consistently induce more double faults than others?
The conventional wisdom, to the extent the issue is raised, is yes. When a server faces a strong returner, like Andy Murray or Gilles Simon, it’s not unusual to hear a commentator explain that the server is under more pressure, and when a second serve misses the box, the returner often gets the credit.
Credit where credit isn’t due
In the last 52 weeks, Jeremy Chardy’s opponents have hit double faults on 4.3% of their service points, the highest rate of anyone in the top 50. At the other extreme, Simon’s opponents doubled only 2.8% of the time, with Novak Djokovic and Rafael Nadal just ahead of him at 2.9% and 3.0%, respectively.
The conventional wisdom isn’t off to a good start.
But the simple numbers are misleading–as the simple numbers so often are. Djokovic and Nadal, going deep into tournaments almost every week, play tougher opponents. Djokovic’s median opponent over the last year was ranked 21st, while Chardy’s was outside the top 50. While it isn’t always true that higher-ranked opponents hit fewer double faults, it’s certainly something worth taking into consideration. So even though Chardy has certainly benefited from some poorly aimed second serves, it may not be accurate to say he has benefited the most–he might have simply faced a schedule full of would-be Fernando Verdascos.
Looking now at the most recent full season, 2012, it turns out that Djokovic did face the players least likely to double fault. Based on their season-long rates, his opponents would have been expected to DF on 2.9% of points, while Filippo Volandri’s would have been expected to do so on 3.9%. These are minor differences when compared to all points played, but they are enormous when attempting to measure the returner’s impact on DF rate. Djokovic actually “induced” double faults on 3.0% of points and Volandri on 3.9%, so once their opponents are taken into account, neither player had much effect, at least as far as double faulting is concerned.
This approach allows us to express each player’s opponents’ DF rate in a more meaningful way, relative to an “expected” DF rate. Volandri benefited from 1% more doubles than expected, Chardy enjoyed a whopping 39% more than expected, and–to illustrate the other extreme–Simon received 31% fewer doubles than his opponents would be predicted to suffer through.
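To make the adjustment concrete, here is a minimal sketch of the calculation. The numbers and the points-weighted averaging scheme are illustrative assumptions, not the actual dataset or methodology behind the figures above:

```python
# Sketch of an opponent-adjusted double fault rate. All numbers are made up.

def expected_df_rate(opponent_matches):
    """Points-weighted average of each opponent's overall DF rate.

    opponent_matches: list of (opponent_overall_df_rate, service_points_faced)
    """
    total_points = sum(pts for _, pts in opponent_matches)
    expected_dfs = sum(rate * pts for rate, pts in opponent_matches)
    return expected_dfs / total_points

def df_rate_vs_expected(actual_rate, expected_rate):
    """Percent more (or fewer) double faults than the schedule would predict."""
    return (actual_rate / expected_rate - 1) * 100

# Hypothetical returner: his opponents normally DF about 3.1% of the time,
# but against him they double faulted on 4.3% of service points.
schedule = [(0.035, 400), (0.028, 350), (0.031, 500)]
exp = expected_df_rate(schedule)
print(round(df_rate_vs_expected(0.043, exp), 1))  # prints 36.8
```

A raw DF rate of 4.3% looks identical no matter who served it up; the adjusted figure separates a returner who genuinely unsettles servers from one who simply drew a double-fault-prone schedule.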
You can’t always get what you want
One thing is clear by now. Regardless of your method and its sophistication, some players got a lot more free return points in 2012 than others. But is it a skill?
If it is a skill, we would expect the same players to top the leaderboard from one year to the next. Or, at least, the same players would “induce” more double faults than expected from one year to the next.
They don’t. I found 1405 consecutive pairs of “player-years” since 1991 with at least 30 matches against tour-level regulars in each season. Then I compared their adjusted opponents’ double fault rate in year one with the rate in year two. The correlation is positive, but very weak: r = 0.13.
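For anyone who wants to replicate this kind of repeatability check, a Pearson correlation between year-one and year-two adjusted rates is all it takes. This is a sketch with invented player-year pairs, not the 1405-pair dataset described above:

```python
# Year-to-year repeatability check via Pearson correlation.
# The six player-year pairs below are invented for illustration.
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Adjusted opponents' DF rate vs. expectation (+0.39 = 39% more than expected)
year1 = [0.39, -0.31, 0.15, -0.18, 0.25, 0.01]
year2 = [0.09, -0.05, -0.15, 0.10, 0.02, 0.25]
print(round(pearson_r(year1, year2), 2))  # prints 0.02 -- almost pure noise
```

An r near zero means knowing a player’s year-one surplus tells you almost nothing about year two, which is exactly the signature of luck rather than skill.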
Nadal, one player who we would expect to have an effect on his opponents, makes for a good illustration. In the last nine years, he has had six seasons in which he received fewer doubles than expected, three with more. In 2011, it was 15% fewer than expected; last year, it was 9% more. Murray has fluctuated between -18% and +25%. Lots of noise, very little signal.
There may be a very small number of players who affect the rate of double faults (positively or negatively) consistently over the course of their careers, but far more of the variation between players is attributable to luck. Let’s hope Chardy hasn’t built a new game plan around his ability to induce double faults.
The value of negative results
Regular readers of the blog shouldn’t be surprised to plow through 600 words just to reach a conclusion of “nothing to see here.” Sorry about that. Positive findings are always more fun. Plus, they give you more interesting things to talk about at cocktail parties.
Despite the lack of excitement, there are two reasons to persist in publishing (and, on your end, understanding) negative findings.
First, negative results indicate when journalists and commentators are selling us a bill of goods. We all like stories, and commentators make their living “explaining” causal connections. Sometimes they’re just making things up as they go along. “That’s bad luck” is a common explanation when a would-be winner clips the net cord, but rarely otherwise. However, there’s a lot more luck in sport than these obvious instances. We’re smarter, more rational fans when we understand this.
(Though I don’t know if being smarter or rational helps us enjoy the sport more. Sorry about that, too.)
Second, negative results can have predictive value. If a player has benefited or suffered from an extreme opponents’ double-fault rate (or tiebreak percentage), and we also know that the stat shows little year-to-year correlation, we can expect it to return to normal next year. In Chardy’s case, we can predict that he won’t get as many free return points, so he won’t continue to win quite as many return points, and his overall results might suffer. Admittedly, in the case of this statistic, regression to the mean would have only a tiny effect on something like winning percentage or ATP ranking.
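As a rough illustration of that prediction, the standard shrinkage estimate scales this year’s deviation from average by the year-to-year correlation. Using the r = 0.13 found above, and assuming a league-average baseline of zero surplus (an assumption for this sketch), Chardy’s +39% would be expected to shrink to about +5%:

```python
# Regression to the mean: shrink this year's deviation from the league
# average by the year-to-year correlation r. Baseline of 0.0 is assumed.

def regressed_prediction(observed, league_mean, r):
    """Best simple guess for next year's value given correlation r."""
    return league_mean + r * (observed - league_mean)

# Chardy-style case: +39% vs. expected this year, with r = 0.13
print(round(regressed_prediction(0.39, 0.0, 0.13), 3))  # prints 0.051
```

In other words, nearly all of an extreme surplus evaporates when the underlying stat barely repeats from one season to the next.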
So at Heavy Topspin, negative results are here to stay. More importantly, we can all stop trying to figure out how Jeremy Chardy is inducing all those double faults.