Benchmarks for Shot-by-Shot Analysis

In my post last week, I outlined what the error stats of the future may look like. A wide range of advanced stats across different sports, from baseball to ice hockey–and increasingly in tennis–follow the same general algorithm:

  1. Classify events (shots, opportunities, whatever) into categories;
  2. Establish expected levels of performance–often league-average–for each category;
  3. Compare players (or specific games or tournaments) to those expected levels.
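In code, the three steps boil down to a pair of tallies and a comparison. Here's a minimal Python sketch, using a made-up shot schema of (player, category, error flag) tuples rather than any real dataset's format:

```python
from collections import defaultdict

def benchmark(shots, player):
    """Compare one player's unforced-error rate to the field's,
    category by category. Returns {category: (player_rate, expected_rate)}.
    The (player, category, is_error) schema is illustrative only."""
    field = defaultdict(lambda: [0, 0])   # category -> [errors, attempts]
    mine = defaultdict(lambda: [0, 0])
    for who, cat, is_error in shots:
        # Step 1: the category is the classification;
        # Step 2: the field-wide tally gives the expected level
        field[cat][0] += is_error
        field[cat][1] += 1
        if who == player:
            mine[cat][0] += is_error
            mine[cat][1] += 1
    # Step 3: player rate vs. league-average rate, per category
    return {cat: (mine[cat][0] / mine[cat][1], errs / n)
            for cat, (errs, n) in field.items() if mine[cat][1]}

# Toy data: isner misses one of two; the four-shot field misses one of four
shots = [
    ("djokovic", "fh_crosscourt", 0), ("djokovic", "fh_crosscourt", 0),
    ("isner", "fh_crosscourt", 1), ("isner", "fh_crosscourt", 0),
]
# benchmark(shots, "isner") -> {"fh_crosscourt": (0.5, 0.25)}
```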

The first step is, by far, the most complex. Classification depends in large part on available data. In baseball, for example, the earliest fielding metrics of this type had little more to work with than the number of balls in play. Now, batted balls can be categorized by exact location, launch angle, speed off the bat, and more. Having more data doesn’t necessarily make the task any simpler, as there are so many potential classification methods one could use.

The same will be true in tennis, eventually, when Hawkeye data (or something similar) is publicly available. For now, those of us relying on public datasets still have plenty to work with, particularly the 1.6 million shots logged as part of the Match Charting Project.*

*The Match Charting Project is a crowd-sourced effort to track professional matches. Please help us improve tennis analytics by contributing to this one-of-a-kind dataset. Click here to find out how to get started.

The shot-coding method I adopted for the Match Charting Project makes step one of the algorithm relatively straightforward. MCP data classifies shots in two primary ways: type (forehand, backhand, backhand slice, forehand volley, etc.) and direction (down the middle, or to the right or left corner). While this approach omits many details (depth, speed, spin, etc.), it’s about as much data as we can expect a human coder to track in real-time.

For example, we could use the MCP data to find the ATP tour-average rate of unforced errors when a player tries to hit a cross-court forehand, then compare everyone on tour to that benchmark. Tour average is 10%, Novak Djokovic's unforced error rate is 7%, and John Isner's is 17%. Of course, that isn't the whole picture when comparing the effectiveness of cross-court forehands: While the average ATPer hits 7% of his cross-court forehands for winners, Djokovic's rate is only 6% compared to Isner's 16%.

However, it’s necessary to take a wider perspective. Instead of shots, I believe it will be more valuable to investigate shot opportunities. That is, instead of asking what happens when a player is in position to hit a specific shot, we should be figuring out what happens when the player is presented with a chance to hit a shot in a certain part of the court.

This is particularly important if we want to get beyond the misleading distinction between forced and unforced errors. (As well as the line between errors and an opponent’s winners, which lie on the same continuum–winners are simply shots that were too good to allow a player to make a forced error.) In the Isner/Djokovic example above, our denominator was “forehands in a certain part of the court that the player had a reasonable chance of putting back in play”–that is, successful forehands plus forehand unforced errors. We aren’t comparing apples to apples here: Given the exact same opportunities, Djokovic is going to reach more balls, perhaps making unforced errors where we would call Isner’s mistakes forced errors.

Outcomes of opportunities

Let me clarify exactly what I mean by shot opportunities. They are defined by what a player’s opponent does, regardless of how the player himself manages to respond–or if he manages to get a racket on the ball at all. For instance, assuming a matchup between right-handers, here is a cross-court forehand:

illustration of a shot opportunity

Player A, at the top of the diagram, is hitting the shot, presenting player B with a shot opportunity. Here is one way of classifying the outcomes that could ensue, together with the abbreviations I’ll use for each in the charts below:

  • player B fails to reach the ball, resulting in a winner for player A (vs W)
  • player B reaches the ball, but commits a forced error (FE)
  • player B commits an unforced error (UFE)
  • player B puts the ball back in play, but goes on to lose the point (ip-L)
  • player B puts the ball back in play, presents player A with a “makeable” shot, and goes on to win the point (ip-W)
  • player B causes player A to commit a forced error (ind FE)
  • player B hits a winner (W)
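To make the classification concrete, here's one way the list above might be coded. The record format is hypothetical, not the MCP's actual shot encoding:

```python
def classify_opportunity(opp):
    """Map one shot opportunity (from player B's perspective) to the
    outcome labels above. `opp` is an illustrative dict, not MCP data."""
    if not opp["reached"]:
        return "vs W"             # player A's shot was a clean winner
    if opp["error"] == "forced":
        return "FE"
    if opp["error"] == "unforced":
        return "UFE"
    # the ball went back in play
    if opp["winner"]:
        return "W"
    if opp["induced_fe"]:
        return "ind FE"
    return "ip-W" if opp["won_point"] else "ip-L"

return_in_play = {"reached": True, "error": None, "winner": False,
                  "induced_fe": False, "won_point": True}
# classify_opportunity(return_in_play) -> "ip-W"
```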

As always, for any given denominator, we could devise different categories, perhaps combining forced and unforced errors into one, or further classifying the “in play” categories to identify whether the player is setting himself up to quickly end the point. We might also look at different categories altogether, like shot selection.

In any case, the categories above give us a good general idea of how players respond to different opportunities, and how those opportunities differ from each other. The following chart shows–to adopt the language of the example above–player B’s outcomes based on player A’s shots, categorized only by shot type:

Outcomes of opportunities by shot type

The outcomes are stacked from worst to best. At the bottom is the percentage of opponent winners (vs W)–opportunities where the player we’re interested in didn’t even make contact with the ball. At the top is the percentage of winners (W) that our player hit in response to the opportunity. As we’d expect, forehands present the most difficult opportunities: 5.7% of them go for winners and another 4.6% result in forced errors. Players are able to convert those opportunities into points won only 42.3% of the time, compared to 46.3% when facing a backhand, 52.5% when facing a backhand slice (or chip), and 56.3% when facing a forehand slice.
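The "points won" figures simply collapse the stacked categories into favorable and unfavorable groups. A quick sketch: only the 5.7% winner share and 4.6% forced-error share are quoted above; the remaining proportions are illustrative, chosen to sum to one.

```python
def points_won(dist):
    """Share of opportunities the responding player goes on to win:
    his own winners, the forced errors he induces, and the in-play
    balls that eventually become his points."""
    return dist["W"] + dist["ind FE"] + dist["ip-W"]

# Forehand opportunities; only "vs W" and "FE" come from the text
fh = {"vs W": 0.057, "FE": 0.046, "UFE": 0.100, "ip-L": 0.374,
      "ip-W": 0.300, "ind FE": 0.060, "W": 0.063}
# points_won(fh) comes to 0.423, matching the 42.3% above
```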

The above chart is based on about 374,000 shots: All the baseline opportunities that arose (that is, excluding serves, which need to be treated separately) in over 1,000 logged matches between two righties. Of course, there are plenty of important variables to further distinguish those shots, beyond simply categorizing by shot type. Here are the outcomes of shot opportunities at various stages of the rally when the player’s opponent hits a forehand:

Outcomes of forehand responses based on number of shots

The leftmost column can be seen as the results of “opportunities to hit a third shot”–that is, outcomes when the serve return is a forehand. Once again, the numbers are in line with what we would expect: The best time to hit a winner off a forehand is on the third shot–the “serve-plus-one” tactic. We can see that in another way in the next column, representing opportunities to hit a fourth shot. If your opponent hits a forehand in play for his serve-plus-one shot, there’s a 10% chance you won’t even be able to get a racket on it. The average player’s chances of winning the point from that position are only 38.4%.

Beyond the 3rd and 4th shot, I’ve divided opportunities into those faced by the server (5th shot, 7th shot, and so on) and those faced by the returner (6th, 8th, etc.). As you can see, by the 5th shot, there isn’t much of a difference, at least not when facing a forehand.

Let’s look at one more chart: Outcomes of opportunities when the opponent hits a forehand in various directions. (Again, we’re only looking at righty-righty matchups.)

Outcomes of forehand responses based on shot direction

There’s very little difference between the two corners, and it’s clear that it’s more difficult to make good use of a shot opportunity in either corner than it is from the middle. It’s interesting to note here that, when faced with a forehand that lands in play–regardless of where it is aimed–the average player has less than a 50% chance of winning the point. This is a counterintuitive instance of selection bias that crops up occasionally in tennis analytics: Because a significant percentage of shots are errors, the player who just placed a shot in the court has a temporary advantage.

Next steps

If you’re wondering what the point of all of this is, I understand. (And I appreciate you getting this far despite your reservations.) Until we drill down to much more specific situations–and maybe even then–these tour averages are no more than curiosities. It doesn’t exactly turn the analytics world upside down to show that forehands are more effective than backhand slices, or that hitting to the corners is more effective than hitting down the middle.

These averages are ultimately only tools to better quantify the accomplishments of specific players. As I continue to explore this type of algorithm, combined with the growing Match Charting Project dataset, we’ll learn a lot more about the characteristics of the world’s best players, and what makes some so much more effective than others.

The Match Charting Project, 2017 Update

2016 was a great year for the Match Charting Project (MCP), my crowdsourced effort to improve the state of tennis statistics. Many new contributors joined the project, the data played a part in more research than ever, and best of all, we added over 1,000 new matches to the database.

For those who don’t know, the MCP is a volunteer effort from dozens of devoted tennis fans to collect shot-by-shot data for professional matches. The resulting data is vastly more detailed than anything else available to the public. You can find an extremely in-depth report on every match in the database–for example, here’s the 2016 Singapore final–as well as an equally detailed report on every player with more than one charted match. Here’s Andy Murray.

In 2016, we:

  • added 1,145 new matches to the database, more than in any previous year;
  • charted more WTA than ATP matches, bringing women’s tennis to near parity in the project;
  • nearly completed the set of charted Grand Slam finals back to 1980;
  • filled in the gaps to have at least one charted match of every member of the ATP top 200, and 198 of the WTA top 200;
  • reached double digits in charted matches for every player in the ATP top 49 (sorry, Florian Mayer, we’re working on it!) and the WTA top 58;
  • logged over 174,000 points and nearly 700,000 shots.

I believe 2017 can be even better. To make that happen, we could really use your help. As with most projects of this nature, a small number of contributors do the bulk of the work, and the MCP is no different–Isaac and Edo both charted more than 200 matches last year.

There are plenty of reasons to contribute: It will make you a more knowledgeable tennis fan, it will help add to the sum of human knowledge, and it can even be fun. Click here to find out how to get started.

I’m proud of the work we’ve done so far, and I hope that the first 2,700 matches are only the beginning.

Shot-by-Shot Stats for 261 Grand Slam Finals (and More?)

One of my favorite subsets of the Match Charting Project is the ongoing effort–in huge part thanks to Edo–to chart all Grand Slam finals, men’s and women’s, back to 1980. We’re getting really close, with a total of 261 Slam finals charted, including:

  • every men’s Wimbledon and US Open final all the way back to 1980;
  • every men’s Slam final since 1989 Wimbledon;
  • every women’s Slam final back to 2001, with a single exception.

The Match Charting Project gathers and standardizes data that, for many of these matches, simply didn’t exist before. These recaps give us shot-by-shot breakdowns of historically important matches, allowing us to quantify how the game has changed–at least at the very highest level–over the last 35 years. A couple of months ago, I did one small project using this data to approximate surface speed changes–that’s just the tip of the iceberg in terms of what you can do with this data. (The dataset is also publicly available, so have fun!)

We’ve got about 30 Slam finals left to chart, and you might be able to help. As always, we are actively looking for new contributors to the project to chart matches (here’s how to get started, and why you should, and you don’t have to chart Slam finals!), but right now, I have a different request.

We’ve scoured the internet, from YouTube to Youku to torrent trackers, to find video for all of these matches. While I don’t expect any of you to have the 1980 Teacher-Warwick Australian Open final sitting around on your hard drive, I’ve got higher hopes for some of the more recent matches we’re missing.

If you have full (or nearly full) video for any of these matches, or you know of a (preferably free) source where we can find them, please–please, please!–drop me a line. Once we have the video, Edo or I will do the rest, and the project will become even more valuable.

There are several more finals from the 1980s that we’re still looking for. Here’s the complete list.

Thanks for your help!

The Grass is Slowing: Another Look at Surface Speed Convergence

A few years ago, I posted one of my most-read and most-debated articles, called The Mirage of Surface Speed Convergence. Using the ATP’s data on ace rates and breaks of serve going back to 1991, it argued that surface speeds aren’t really converging, at least to the extent we can measure them with those two tools.

One of the most frequent complaints was that I was looking at the wrong data–surface speed should really be quantified by rally length, spin rate, or any number of other things. As is so often the case with tennis analytics, we have only so much choice in the matter. At the time, I was using all the data that existed.

Thanks to the Match Charting Project–with a particular tip of the cap to Edo Salvati–a lot more data is available now. We have shot-by-shot stats for 223 Grand Slam finals, including over three-fourths of Slam finals back to 1980. While we’ll never be able to measure anything like ITF Court Pace Rating for surfaces thirty years in the past, this shot-by-shot data allows us to get closer to the truth of the matter.

Sure enough, when we take a look at a simple (but until recently, unavailable) metric such as rally length, we find that the sport’s major surfaces are playing a lot more similarly than they used to. The first graph shows a five-year rolling average* for the rally length in the men’s finals of each Grand Slam from 1985 to 2015:


* since some matches are missing, the five-year rolling averages each represent the mean of anywhere from two to five Slam finals.
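For the curious, the gap-tolerant rolling average from the footnote might look like this in Python, where the input maps each year with a charted final to that final's average rally length:

```python
def rolling_mean(series, window=5):
    """Five-year rolling average that tolerates missing finals:
    each point averages whatever years exist in the trailing window
    (two to five of them, per the footnote above)."""
    out = {}
    for year in sorted(series):
        vals = [series[y] for y in range(year - window + 1, year + 1)
                if y in series]
        out[year] = sum(vals) / len(vals)
    return out

# Toy series with a missing year: 1989's value averages 1985, 1987, 1989
series = {1985: 2.0, 1987: 2.2, 1989: 2.4}
```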

Over the last decade and a half, the hard-court and grass-court slams have crept steadily upward, with average rally lengths now similar to those at Roland Garros, traditionally the slowest of the four Grand Slam surfaces. The movement is most dramatic on the Wimbledon grass, which for many years saw an average rally length of a mere two shots.

For all the advantages of rally length and shot-by-shot data, there’s one massive limitation to this analysis: It doesn’t control for player. (My older analysis, with more limited data per match, but for many more matches, was able to control for player.) Pete Sampras contributed to 15 of our data points, but none on clay. Andres Gomez makes an appearance, but only at Roland Garros. Until we have shot-by-shot data on multiple surfaces for more of these players, there’s not much we can do to control for this severe case of selection bias.

So we’re left with something of a chicken-and-egg problem. Back in the early ’90s, when Roland Garros finals averaged almost six shots per point and Wimbledon finals averaged barely two, how much of the difference was due to the surface itself, and how much to the fact that certain players reached the final? The surface itself certainly doesn’t account for everything–in 1988, Mats Wilander and Ivan Lendl averaged over seven shots per point at the US Open, and in 2002, David Nalbandian and Lleyton Hewitt topped 5.5 shots per point at Wimbledon.

Still, outliers and selection bias aside, the rally length convergence we see in the graph above reflects a real phenomenon, even if it is amplified by the bias. After all, players who prefer short points win more matches on grass because grass lends itself to short points, and in an earlier era, “short points” meant something more extreme than it does today.

The same graph for women’s Grand Slam finals shows some convergence, though not as much:


Part of the reason the convergence is more muted is that there’s less selection bias: The all-surface dominance of a few players–Chris Evert, Martina Navratilova, and Steffi Graf–means that, if only by historical accident, the women’s finals are less skewed by who happened to reach them.

We still need a lot more data before we can make confident statements about surface speeds in 20th-century tennis. (You can help us get there by charting some matches!) But as we gather more information, we’re able to better illustrate how the surfaces have become less unique over the years.

Two New Ways to Chart Tennis Matches

Readers of this site are probably already aware of the Match Charting Project, my effort to coordinate volunteer contributions to build a massive shot-by-shot database of professional tennis. If this is the first you’ve heard of it, I encourage you to check out the detailed match- and player-level data we’ve gathered already.

In the last week, two developers have released GUIs to make charting easier and more engaging. When I first started the project, I put together an Excel spreadsheet that tracks all the user input and keeps score. I’ve used that spreadsheet for the hundreds of matches I’ve charted, but I recognize that it’s not the most intuitive system for some people.

The first new interface is thanks to Stephanie Kovalchik, who writes the tennis blog On the T. (And who has contributed to the MCP in the past.) Her GUI is entirely click-based, which means you don’t have to learn the various letter- and number-codes that are required for the traditional MCP spreadsheet.


While it’s web-based, it has some of the look and feel of a modern handheld app. It’s probably the easiest way to get started contributing to the project.

(Which reminds me, Brian Hrebec wrote an Android app for the project almost two years ago, and I haven’t given it the attention it deserves. It also makes getting started relatively easy, especially if you’d like to chart on an Android device.)

The second new interface is thanks to Charles Allen, of Tennis Visuals. Also web-based, his app requires that you use the same letter- and number-based codes as the original spreadsheet, but sweetens the deal with live visualizations that update after each point:


With four ways to chart matches and add to the Match Charting Project database, there are even fewer excuses not to contribute. If you’re still not convinced, I have even more reasons for you to consider. And if you’re ready to jump in, just click over to one of the new GUIs, or click here for my Quick Start guide.


What Happens After an Unsuccessful First Serve Challenge?

A lot of first serves miss, so every player has a well-established routine between the first and second serve. So much so that, traditionally, if something disrupts that routine, the receiver may grant the server another first serve.

Hawkeye has changed all that. If the server doubts the line call, he or she may challenge it. That results in a lengthy wait, usually some crowd noise, and a general wreckage of that between-serves routine.

The conventional wisdom seems to be that the long pause is harmful to the server: that if the challenge fails, the server is less likely to put the second serve in the box. And if the second serve does go in, it’s weaker than average, so the server is less likely to win the point.

My analysis of over 200 first-serve challenges casts doubt on the conventional wisdom. It’s another triumph for the null hypothesis, the only force in tennis as dominant as Novak Djokovic.

As I’ve charted matches for the Match Charting Project, I’ve noted each challenge, the type of challenge, and whether it was successful. I’ve accumulated 116 ATP and 89 WTA instances in which a player unsuccessfully challenged the call on his own first serve. For each of these challenges, I also calculated some match-level stats for that server: how often s/he made the second serve, and how often s/he won second serve points.

Of the 116 unsuccessful ATP challenges, players made 106 of their second serves. Based on their overall rates in those matches, we’d expect them to make 106.6 of them. They won exactly half–58–of those points, and their performance in those matches suggests that they “should” have won 58.2 of them.

In other words, players are recovering from the disruption and performing almost exactly as they normally do.

For WTAers, it’s a similar story. Players made 77 of their 89 second serves. If they landed second serves at the same rate they did in the rest of those matches, they’d have made 77.1. They won 38 of the 89 points, compared to an expected 40 points. That last difference, of five percent, is the only one that is more than a rounding error. Even if the effect is real–which is doubtful, given the conflicting ATP number and the small sample size–it’s a small one.
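The "expected" figures in both paragraphs come from summing each server's own match-level rates over the challenges. A sketch, with a hypothetical record format (the post's actual numbers come from the MCP charts):

```python
def expected_totals(challenges):
    """Aggregate post-challenge second-serve results against each
    server's own match-level rates. Each record holds what happened
    after one failed first-serve challenge, plus that server's
    second-serve make rate and win rate over the whole match."""
    made = sum(c["made"] for c in challenges)
    exp_made = sum(c["match_make_rate"] for c in challenges)
    won = sum(c["won"] for c in challenges)
    exp_won = sum(c["match_win_rate"] for c in challenges)
    return made, exp_made, won, exp_won

# Two toy challenges: both second serves landed, one point won
challenges = [
    {"made": 1, "won": 0, "match_make_rate": 0.90, "match_win_rate": 0.50},
    {"made": 1, "won": 1, "match_make_rate": 0.92, "match_win_rate": 0.55},
]
```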

Of course, the potential benefit of challenging the call on your first serve is big: If you’re right, you either win the point or get another first serve. Of the challenges I’ve tracked, men were successful on 38% of their first-serve challenges, and women on 32%.

There’s no evidence here that players are harmed by appealing to Hawkeye on their own first serves. Apart from the small risk of running out of challenges, it’s all upside. Tennis pros adore routine, but in this case, they perform just as well when the routine is disrupted.

Match Charting Project February Update

At the beginning of the year, I announced an ambitious goal: to double the number of matches in the Match Charting Project dataset. That’s a target of 1,617 new matches in 2016–about 135 per month, or 4.5 per day.

So far, so good! In January, ten contributors combined to add 162 new matches to the total. Our biggest heroes were Edo, with 35 matches, including many Grand Slam finals; Isaac, with 33; and Edged, whose 22 included some of the dramatic late-round men’s matches from Melbourne.

As we close in on the 1,800-match mark, I’m excited to announce a new addition to the stats and reports available on Tennis Abstract. Now, for every player with at least two charted matches in the database, there’s a dedicated player page with hundreds of aggregate data points for that player.

Here’s Novak Djokovic’s page, and here’s Angelique Kerber’s. I’m still working on integrating these pages into the rest of Tennis Abstract, but for now, you’ll be able to access them by clicking on the match totals next to every player’s name on the Match Charting home page.

These pages each feature four charts, which compare the player’s typical rally length, shot selection, winner types, and unforced error types to tour average. The other links on each page take you to tables very similar to those on the MCP match reports. Move your cursor over any rate to see the relevant tour average, as well as that player’s rates on each surface.

I hope you like this new addition, which owes so much to the amazing efforts of so many volunteer charters.

I hope, too, that you’ll be inspired to contribute to the project as well. When you’re ready to try your hand at charting, start here. As always, the more matches we have, the more valuable the project becomes.