Benchmarks for Shot-by-Shot Analysis

In my post last week, I outlined what the error stats of the future may look like. A wide range of advanced stats across different sports, from baseball to ice hockey–and increasingly in tennis–follow the same general algorithm:

  1. Classify events (shots, opportunities, whatever) into categories;
  2. Establish expected levels of performance–often league-average–for each category;
  3. Compare players (or specific games or tournaments) to those expected levels.

The first step is, by far, the most complex. Classification depends in large part on available data. In baseball, for example, the earliest fielding metrics of this type had little more to work with than the number of balls in play. Now, batted balls can be categorized by exact location, launch angle, speed off the bat, and more. Having more data doesn’t necessarily make the task any simpler, as there are so many potential classification methods one could use.

The same will be true in tennis, eventually, when Hawkeye data (or something similar) is publicly available. For now, those of us relying on public datasets still have plenty to work with, particularly the 1.6 million shots logged as part of the Match Charting Project.*

*The Match Charting Project is a crowd-sourced effort to track professional matches. Please help us improve tennis analytics by contributing to this one-of-a-kind dataset. Click here to find out how to get started.

The shot-coding method I adopted for the Match Charting Project makes step one of the algorithm relatively straightforward. MCP data classifies shots in two primary ways: type (forehand, backhand, backhand slice, forehand volley, etc.) and direction (down the middle, or to the right or left corner). While this approach omits many details (depth, speed, spin, etc.), it’s about as much data as we can expect a human coder to track in real-time.

For example, we could use the MCP data to find the ATP tour-average rate of unforced errors when a player tries to hit a cross-court forehand, then compare everyone on tour to that benchmark. Tour average is 10%, Novak Djokovic’s unforced error rate is 7%, and John Isner’s is 17%. Of course, that isn’t the whole picture when comparing the effectiveness of cross-court forehands: While the average ATPer hits 7% of his cross-court forehands for winners, Djokovic’s rate is only 6% compared to Isner’s 16%.
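To make the mechanics concrete, here is a minimal sketch of the three-step algorithm applied to that example. The record layout and category labels are hypothetical stand-ins for illustration, not the Match Charting Project’s actual shot codes.

```python
from collections import Counter

# A toy dataset: each record is (player, shot_type, direction, outcome).
# The field names and labels here are invented for the example.
shots = [
    ("Djokovic", "forehand", "crosscourt", "in_play"),
    ("Djokovic", "forehand", "crosscourt", "unforced_error"),
    ("Isner",    "forehand", "crosscourt", "winner"),
    # ...many more rows in a real dataset...
]

def unforced_error_rate(rows):
    """Unforced errors as a share of all shots in the sample."""
    counts = Counter(outcome for _, _, _, outcome in rows)
    total = sum(counts.values())
    return counts["unforced_error"] / total if total else 0.0

# Step 1: classify -- keep only cross-court forehands.
cc_fh = [r for r in shots if r[1] == "forehand" and r[2] == "crosscourt"]

# Step 2: establish the benchmark -- the tour-average rate for the category.
benchmark = unforced_error_rate(cc_fh)

# Step 3: compare each player to the benchmark.
for player in sorted({r[0] for r in cc_fh}):
    rate = unforced_error_rate([r for r in cc_fh if r[0] == player])
    print(f"{player}: {rate:.0%} unforced errors vs. tour average {benchmark:.0%}")
```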

However, it’s necessary to take a wider perspective. Instead of shots, I believe it will be more valuable to investigate shot opportunities. That is, instead of asking what happens when a player is in position to hit a specific shot, we should be figuring out what happens when the player is presented with a chance to hit a shot in a certain part of the court.

This is particularly important if we want to get beyond the misleading distinction between forced and unforced errors. (As well as the line between errors and an opponent’s winners, which lie on the same continuum–winners are simply shots that were too good to allow a player to make a forced error.) In the Isner/Djokovic example above, our denominator was “forehands in a certain part of the court that the player had a reasonable chance of putting back in play”–that is, successful forehands plus forehand unforced errors. We aren’t comparing apples to apples here: Given the exact same opportunities, Djokovic is going to reach more balls, perhaps making unforced errors where we would call Isner’s mistakes forced errors.

Outcomes of opportunities

Let me clarify exactly what I mean by shot opportunities. They are defined by what a player’s opponent does, regardless of how the player himself manages to respond–or if he manages to get a racket on the ball at all. For instance, assuming a matchup between right-handers, here is a cross-court forehand:

illustration of a shot opportunity

Player A, at the top of the diagram, is hitting the shot, presenting player B with a shot opportunity. Here is one way of classifying the outcomes that could ensue, together with the abbreviations I’ll use for each in the charts below:

  • player B fails to reach the ball, resulting in a winner for player A (vs W)
  • player B reaches the ball, but commits a forced error (FE)
  • player B commits an unforced error (UFE)
  • player B puts the ball back in play, but goes on to lose the point (ip-L)
  • player B puts the ball back in play, presents player A with a “makeable” shot, and goes on to win the point (ip-W)
  • player B causes player A to commit a forced error (ind FE)
  • player B hits a winner (W)
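Turning that list into code is straightforward. The sketch below is a hypothetical classifier, not the Match Charting Project’s parser; it assumes each point has already been reduced to a few simple flags.

```python
def classify_opportunity(reached_ball, shot_result, point_winner):
    """Label one shot opportunity from player B's perspective, using the
    categories listed above. The inputs are simplified flags, not the
    Match Charting Project's actual shot codes:
      reached_ball -- did B get a racket on A's shot?
      shot_result  -- B's reply: "forced_error", "unforced_error", "winner",
                      "induced_forced_error", or "in_play"
      point_winner -- "A" or "B", however the point eventually ended
    """
    if not reached_ball:
        return "vs W"            # A's shot was an outright winner
    if shot_result == "forced_error":
        return "FE"
    if shot_result == "unforced_error":
        return "UFE"
    if shot_result == "winner":
        return "W"
    if shot_result == "induced_forced_error":
        return "ind FE"
    # B put the ball back in play and the point continued.
    return "ip-W" if point_winner == "B" else "ip-L"

print(classify_opportunity(True, "in_play", "A"))   # ip-L
```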

As always, for any given denominator, we could devise different categories, perhaps combining forced and unforced errors into one, or further classifying the “in play” categories to identify whether the player is setting himself up to quickly end the point. We might also look at different categories altogether, like shot selection.

In any case, the categories above give us a good general idea of how players respond to different opportunities, and how those opportunities differ from each other. The following chart shows–to adopt the language of the example above–player B’s outcomes based on player A’s shots, categorized only by shot type:

Outcomes of opportunities by shot type

The outcomes are stacked from worst to best. At the bottom is the percentage of opponent winners (vs W)–opportunities where the player we’re interested in didn’t even make contact with the ball. At the top is the percentage of winners (W) that our player hit in response to the opportunity. As we’d expect, forehands present the most difficult opportunities: 5.7% of them go for winners and another 4.6% result in forced errors. Players are able to convert those opportunities into points won only 42.3% of the time, compared to 46.3% when facing a backhand, 52.5% when facing a backhand slice (or chip), and 56.3% when facing a forehand slice.

The above chart is based on about 374,000 shots: All the baseline opportunities that arose (that is, excluding serves, which need to be treated separately) in over 1,000 logged matches between two righties. Of course, there are plenty of important variables to further distinguish those shots, beyond simply categorizing by shot type. Here are the outcomes of shot opportunities at various stages of the rally when the player’s opponent hits a forehand:

Outcomes of forehand responses based on number of shots

The leftmost column can be seen as the results of “opportunities to hit a third shot”–that is, outcomes when the serve return is a forehand. Once again, the numbers are in line with what we would expect: The best time to hit a winner off a forehand is on the third shot–the “serve-plus-one” tactic. We can see that in another way in the next column, representing opportunities to hit a fourth shot. If your opponent hits a forehand in play for his serve-plus-one shot, there’s a 10% chance you won’t even be able to get a racket on it. The average player’s chances of winning the point from that position are only 38.4%.

Beyond the 3rd and 4th shot, I’ve divided opportunities into those faced by the server (5th shot, 7th shot, and so on) and those faced by the returner (6th, 8th, etc.). As you can see, by the 5th shot, there isn’t much of a difference, at least not when facing a forehand.

Let’s look at one more chart: Outcomes of opportunities when the opponent hits a forehand in various directions. (Again, we’re only looking at righty-righty matchups.)

Outcomes of forehand responses based on shot direction

There’s very little difference between the two corners, and it’s clear that it’s more difficult to make the most of a shot opportunity in either corner than it is from the middle. It’s interesting to note here that, when faced with a forehand that lands in play–regardless of where it is aimed–the average player has less than a 50% chance of winning the point. This is a confusing instance of selection bias that crops up occasionally in tennis analytics: Because a significant percentage of shots are errors, the player who just placed a shot in the court has a temporary advantage.

Next steps

If you’re wondering what the point of all of this is, I understand. (And I appreciate you getting this far despite your reservations.) Until we drill down to much more specific situations–and maybe even then–these tour averages are no more than curiosities. It doesn’t exactly turn the analytics world upside down to show that forehands are more effective than backhand slices, or that hitting to the corners is more effective than hitting down the middle.

These averages are ultimately only tools to better quantify the accomplishments of specific players. As I continue to explore this type of algorithm, combined with the growing Match Charting Project dataset, we’ll learn a lot more about the characteristics of the world’s best players, and what makes some so much more effective than others.

How Argentina’s Road Warriors Defied the Davis Cup Home-Court Odds

The conventional wisdom has long held that there is a home court advantage in Davis Cup. It makes sense: In almost every sport, there is a documented advantage to playing at home, and Davis Cup gives us what seem to be the most extreme home courts in tennis.

However, Argentina won this year’s competition despite playing all four of their ties on the road. After the first round this season, only one of seven hosts managed to give the home crowd a victory. Bob Bryan has some ideas as to why:

https://twitter.com/Bryanbros/status/803244964784308227

Which is it? Do players excel in front of an enthusiastic home crowd, on a surface chosen for their advantage? Or do they suffer from the distractions that Bryan cites?

To answer that question, I looked at 322 Davis Cup ties, encompassing all World Group and World Group Play-off weekends back to 2003. Of those, the home side won 196, or 60.9% of the time. So far, the conventional wisdom looks pretty good.

But we need to do more. To check whether the hosting teams were actually better, meaning that they should have won more ties regardless of venue, I used singles and doubles Elo ratings to simulate every match of every one of those ties. (In cases where the tie was decided before the fourth or fifth rubber, I simulated matches between the best available players who could have contested those matches.) Based on those simulations, the hosts “should” have won 171 of the 322 ties, or 53.1%.
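As a rough illustration of the approach (and not the exact simulation used here, which also relies on separate doubles Elo ratings and best-available lineups), a tie can be simulated from match-level win probabilities derived from Elo ratings:

```python
import random

def elo_win_prob(rating_a, rating_b):
    """Standard Elo expectation: probability that side A wins the match."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def simulate_tie(rubber_probs, trials=10_000):
    """Estimate the host's chance of winning a best-of-five tie, given the
    host's win probability in each of the five rubbers (in playing order).
    Rubbers are treated as independent; this sketch skips doubles-specific
    ratings and lineup changes for the later rubbers."""
    wins = 0
    for _ in range(trials):
        won = sum(random.random() < p for p in rubber_probs)
        if won >= 3:
            wins += 1
    return wins / trials

# Invented ratings, for illustration only.
rubbers = [
    elo_win_prob(2050, 2000),   # first singles
    elo_win_prob(1950, 1980),   # second singles
    elo_win_prob(1900, 1950),   # doubles
    elo_win_prob(2050, 1980),   # reverse singles
    elo_win_prob(1950, 2000),   # fifth rubber
]
print(f"Host wins the tie in {simulate_tie(rubbers):.1%} of simulations")
```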

The evidence in favor of home-court advantage–and against Bryan’s “distractions” theory–is strong. Home sides have won World Group ties about 15% more often than we would expect. Some of that is likely due to the hosts’ ability to choose surface. I doubt surface accounts for the whole effect, since some court types (like the medium-slow hard court in Croatia last weekend) don’t heavily favor either side, and many ties are rather lopsided regardless of surface. Teasing out the surface advantage from the more intangible home-court edge is a worthy subject to study, but I won’t be going any further in that direction today.

If distractions are a danger to hosts, we might expect to see the home court advantage erode in later rounds. Many early-round matchups are minor news events compared to semifinals and finals. (On the other hand, there were over 100 representatives of the Argentinian press in Croatia last weekend, so the effect isn’t entirely straightforward.) The following table shows how home sides have fared in each round:

Round         Ties  Home Win %  Wins/Exp  
First Round    112       58.9%      1.11  
Quarterfinal    56       60.7%      1.16  
Semifinal       28       82.1%      1.30  
Final           14       57.1%      1.14  
Play-off       112       58.9%      1.14

Aside from a blip at the semifinal level, home-court advantage is quite consistent from one round to the next. The “Wins/Exp” shows how much better the hosts fared than my simulations would have predicted; for instance, in first-round encounters, hosts won 11% more ties than expected.

There is also no meaningful difference between home court advantage on day one and day three. The hosts’ singles players win 15% more matches than my simulations would expect on day one, and 15% more on day three. The day three breakdown is intriguing: Home players win the fourth rubber 12% more often than expected, but they claim the deciding fifth rubber a whopping 23% more frequently than they would in neutral environments. However, only 91 of the 322 ties involved five live rubbers, so the extreme home advantage in the deciding match may be nothing more than a random spike.

The doubles rubber is less likely to be influenced by venue. Compared to the 15% advantage enjoyed by World Group singles players, the hosting side’s doubles pairings win only 6% more often than expected. This again raises the issue of surface: Not only are doubles results less influenced by court speed than singles results, but home sides are less likely to choose a surface based on the desire of their doubles team, if that preference clashes with the needs of their singles players.

Argentina on the road

In the sense that they never played at home or chose a surface, Argentina beat the odds in all four rounds this year. Of course, home court advantage can only take you so far; it helps to have a good squad. My simulations indicate that the Argentines had a nearly 4-in-5 chance of defeating their Polish first-round opponents on neutral ground, while Juan Martin del Potro and company had a more modest 59% chance of beating the Italians in Italy.

For the last two rounds, though, the Argentines were fighting an uphill battle. The semifinal venue in Glasgow didn’t matter much; the prospect of facing the Murray brothers meant Argentina had less than a 10% chance of advancing no matter the location. And as I wrote last week, Croatia was rightfully favored in the final. Playing yet another tie on the road simply made the task more difficult.

Once we adjust my simulations of each tie for home court advantage, it turns out that Argentina’s chances of winning the Cup this year were less than 1%, barely 1 in 200. The following table shows the last 14 winners, along with the number of ties they played at home and their chances of winning the Cup in my simulations, given which countries they ended up facing and the players who turned up for each tie:

Year  Winner  Home Ties  Win Prob  
2016  ARG             0      0.5%  
2015  GBR             3     18.9%  
2014  SUI             2     54.7%  
2013  CZE             1     10.5%  
2012  CZE             3     19.7%  
2011  ESP             2     12.2%  
2010  SRB             3     17.6%  
2009  ESP             4     44.0%  
2008  ESP             1     14.3%  
2007  USA             2     24.4%  
2006  RUS             2      1.7%  
2005  CRO             2      7.4%  
2004  ESP             3     23.8%  
2003  AUS             3     15.9%

In the time span I’ve studied, only the 2006 Russian squad managed anything close to the same season-long series of upsets. (I don’t yet have adequate doubles data to analyze earlier Davis Cup competitions.)  At the other end of the spectrum, the simulations emphasize how smoothly Switzerland swept through the bracket in 2014. A wide-open draw, together with Roger Federer, certainly helps.

It was tough going for Argentina, and the luck of the home-court draw made it tougher. Without a solid #2 singles player or an elite doubles specialist, it isn’t likely to get much easier. For all that, they’ll open the 2017 Davis Cup campaign against Italy with at least one unfamiliar weapon in their arsenal: They finally get to play a tie at home.

How To Keep Round Robin Matches Interesting, Part Two

Earlier this week, I published a deep dive into the possible outcomes of four-player round robin groups and offered an ideal schedule that would minimize the likelihood of dead rubbers on the final day. I’ve since heard from a few readers who pointed out two things:

  1. You might do better if you determined the schedule for day two after getting the results of the first two matches.
  2. Major tournaments such as the ATP and WTA Tour Finals already do this, pairing the winners of the first two matches and the losers of the first two matches on day two.

This is an appealing idea. You’re guaranteed to end the second day with one undefeated (2-0) player, two competitors at 1-1, and the last at 0-2. The two participants at 1-1 have everything to play for, and depending on day three’s schedule and tiebreak factors, the 0-2 player could still be in the running as well.

Best of all, you avoid the nightmare scenario of two undefeated players and two eliminated players, in which the final two matches are nearly meaningless.

However, this “contingent schedule” approach isn’t perfect.

Surprise, surprise

We learned in my last post that, if we set the entire schedule before play begins, the likelihood of a dead rubber on the final day is about 17% for a typical arrangement, and that choosing the optimal schedule–leaving #4 vs #1 and #3 vs #2 for the final day–drops those chances as low as 10.7%.

(These were based on a range of player skill levels equivalent to 200 points on the Elo scale. The bigger the range of player skills–for instance, the ATP finals is likely to have a group with a range well over 300–the more dramatic the differences in these numbers.)

In addition, we discovered that “dead/seed” matches–those in which one player is already eliminated and the other can only affect their semifinal seeding–are even more common. When the schedule is chosen in advance, the probability of a dead rubber or a “dead/seed” match is always near 40%.

If the day two schedule is determined by day one outcomes, the overall likelihood of these “mostly meaningless” (dead or “dead/seed”) matches drops to about 30%. That’s a major step in the right direction.

Yet there is a drawback: The chances of a dead rubber increase! With the contingent day two schedule, there is a roughly 20% chance of a completely meaningless match on day three.

Our intuition should bear this out. After day two, we are guaranteed one 2-0 player and one 0-2 player. It is somewhat likely that these two have faced each other already, but there still remains a reasonable chance they will play on day three. If they do, the 0-2 player is already eliminated–there will be two 2-1 players at the end of day three. The 2-0 player has clinched a place in the semifinals, so the most that could be at stake is a semifinal seeding.

In other words, if the “winner versus winner” schedule results in a 2-0 vs 0-2 matchup on day three, the odds are that it’s meaningless. And this schedule often does just that.

The ideal contingent schedule

If the goal is to avoid dead rubbers at all costs, the contingent schedule is not for you. You can do a better job by properly arranging the schedule in advance. However, a reasonable person might prefer the contingent schedule because it completely avoids the risk of the low-probability “nightmare scenario” that I described above, of two mostly meaningless day three matches.

Within the contingent schedule, there’s still room for optimization. If the day one slate consists of matches setting #1 against #3 and #2 against #4 (sorted by ranking), the probability of a meaningless match on day three is about average. If day one features #1 vs #2 and #3 vs #4, the odds are even higher: about a 21% chance of a dead rubber and another 11% chance of a “dead/seed” match.

That leaves us with the optimal day one schedule of #1 vs #4 and #2 vs #3. It lowers the probability of a dead rubber to 19% and the chances of a “dead/seed” match to 9.7%. Neither number represents a big difference, but given all the eyes on every match at major year-end events, it seems foolish not to make a small change in order to maximize the probability that both day three matches will matter.

How To Keep Round Robin Matches Interesting

Round robins–such as the formats used by the ATP and WTA Tour Finals–have a lot going for them. Fans are guaranteed at least three matches for every player, and competitors can recover from one (or even two) bad outings. Best of all, when compared to a knockout-style draw, it’s twice as much tennis.

On the other hand, round robins have one major drawback: They can result in meaningless matches. It’s fairly common that, after two matches, a player is guaranteed a spot in the semifinals (sometimes even a specific seed) or eliminated from contention altogether. At a high-profile event such as the Tour Finals, with sky-high ticket prices, do we really want to run the risk of dead rubbers?

I don’t claim to have the answer to that question. However, we can take a closer look at the round robin format to answer several relevant questions. What is the probability that the final day of a four-player group will include at least one dead rubber? What about the final match? And most importantly, before the event begins, can we set the schedule in such a way to minimize the likelihood of dead rubbers?

The range of possibilities

As a first step, let’s determine all of the possible outcomes of the first four matches in a four-player round robin group. For convenience, I’ll refer to the players as A, B, C, and D. The first day features two matches, A vs B and C vs D. The second day is A vs C and B vs D, leaving us with a final day of A vs D and B vs C.

Each match has four possible outcomes: the first player wins in two sets, the first player wins in three, the second wins in two, or the second wins in three. (Sets won are important because they are used as a tiebreaker when, for instance, three players win two matches apiece.) Thus, there are 4 x 4 x 4 x 4 = 256 possible arrangements of the group standings entering the final day of round robin play.

Of those 256 permutations, 32 of them (12.5%) include one dead rubber on the final day. In those cases, the other match is played only to decide semifinal seeding between the players who will advance. Another 32 of the 256 permutations involve one “almost-dead” match, between a player who has been eliminated and a player who is competing only to determine semifinal seeding.

In other words, one out of every four possible outcomes of the first two days results in a day three match that is either entirely or mostly meaningless. Later on, we’ll dig into the probability that these outcomes occur, which depends on the relative skill levels of the four players in the group.

Before we do that, let’s take a little detour to define our terms. Because of the importance of semifinal seeding, some dead rubbers are less dead than others. Further, it is frequently the case that one player in a match still has a shot at the semifinals and the other doesn’t. Altogether, from “live” to “dead,” there are six gradations:

  1. live/live — both players are competing to determine whether they survive
  2. live/seed — one player could advance or not; the other will advance, and is playing to try to earn the #1 group seed
  3. live/dead — one player is trying to survive; the other is eliminated
  4. seed/seed — both players will advance; the winner gets the #1 group seed
  5. seed/dead — one player is in the running for the #1 seed; the other is eliminated
  6. dead/dead — both players are eliminated

All else equal, the higher a match lies on that scale, the more engaging its implications for the tournament. For the remainder of this article, I’ll refer only to the “dead/dead” category as “dead rubbers,” though I will occasionally discuss the likelihood of “dead/seed” matches as well. I’ll assume that the #1 seed is always more desirable than #2 and ignore the fascinating but far-too-complex ramifications of situations in which a player might prefer the #2 spot.

The sixth match

As we’ve seen, there are many sequences of wins and losses that result in a dead rubber on day three. Once the fifth match is played, it is even more likely that the seedings have been determined, making the sixth match meaningless.

After five matches, there are 1,024 possible group standings. (256 permutations after the first four matches, multiplied by the four possible outcomes of the fifth match.) Of those, 145 (14.1%) result in a dead sixth rubber, and another 120 (11.7%) give us a “dead/seed” sixth match.

We haven’t yet determined how likely it is that we’ll arrive at the specific standings that result in dead sixth rubbers. So far, the important point is that dead rubbers on day three aren’t just flukes. In a four-player round robin, they are always a real possibility, and if there is a way to minimize their likelihood, we should jump at the chance.

Real scenarios, really dead rubbers

To figure out the likelihood of dead rubbers in practical situations, like the ATP and WTA Tour Finals, I used a hypothetical group of four players with Elo ratings spread over a 200-point range.

Why 200? This year’s Singapore field was very tightly packed, within a little bit more than 100 points, implying that the best player, Angelique Kerber, had about a 65% chance of beating the weakest, Svetlana Kuznetsova. By contrast, the ATP finalists in London are likely to be spread out over a 400-point range, giving the strongest competitor, Novak Djokovic, at least a 90% edge over the weakest.

I’ve given our hypothetical best player a rating of 2200, followed by a field of one player at 2130, one at 2060, and one at 2000. Thus, our favorite has a 60% chance of beating the #2 seed, a 69% chance of defeating the #3 seed, and a 76% chance of besting the #4 seed.
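Those percentages follow from the standard Elo expectation formula, which is easy to check:

```python
def elo_expectation(rating_a, rating_b):
    """Probability that player A beats player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

favorite = 2200
for seed, rating in [("#2", 2130), ("#3", 2060), ("#4", 2000)]:
    print(f"favorite vs {seed} ({rating}): {elo_expectation(favorite, rating):.0%}")
# Prints roughly 60%, 69%, and 76%, matching the figures above.
```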

For any random arrangement of the schedule, after the first two days of play, this group has a 17% chance of giving us a dead rubber on day three, plus a 23% chance of a “dead/seed” match on day three.

After the fifth match is contested, there is a 16% chance that the sixth match is meaningless, with an additional 12% chance that the sixth match falls into the “dead/seed” category.

The wider the range of skill levels, the higher the probability of dead rubbers. This is intuitive: The bigger the range between the top and bottom, the more likely that the best player will win their first two matches–and the more likely they will be straight-setters. Similarly, the chances are higher that the weakest player will lose theirs. The higher the probability that players go into day three with 2-0 or 0-2 records, the less likely that day three matches have an impact on the outcome of the group.

How to schedule a round robin group

A 17% chance of a dead rubber on day three is rather sad. But there is a bright spot in my analysis: By rearranging the schedule, you can raise that probability as high as 24.7% … or drop it as low as 10.7%.

Remember that our schedule looks like this:

Day one: A vs B, C vs D

Day two: A vs C, B vs D

Day three: A vs D, B vs C

We get the lowest possible chance of a day three dead rubber if we put the players on the schedule in order from weakest to strongest: A is #4, B is #3, and so on:

Day one: #4 vs #3, #2 vs #1

Day two: #4 vs #2, #3 vs #1

Day three: #4 vs #1, #3 vs #2

There is a small drawback to our optimal arrangement: It increases the odds of a “dead/seed” match. It turns out that you can only optimize so much: No matter what the arrangement of the competitors, the probability of a “dead/dead” or “dead/seed” match on day three stays about the same, between 39.7% and 41.7%. While neither type of match is desirable, we’re stuck with a certain likelihood of one or the other, and it seems safe to assume that a “dead/seed” rubber is better than a totally meaningless one.

Given how much is at stake, I hope that tournament organizers heed this advice and schedule round robin groups in order to minimize the chances of dead rubbers. The math gets a bit hairy, but the conclusions are straightforward and dramatic enough to make it clear that scheduling can make a difference. Over the course of the season, almost every tennis match matters–it would be nice if every match at the Tour Finals did, too.

(I wrote more about this, which you can read here.)

What Would Happen If the WTA Switched to Super-Tiebreaks?

It’s in the news again: Some tennis execs think that matches are too long, fans’ attention spans are too short, and the traditional format of tennis matches needs to change. Since ATP and WTA doubles have already swapped a full third set for a 10-point super-tiebreak, something similar would make for a logical proposal to cap singles match length.

Let’s dig into the numbers and see just how much time would be saved if the WTA switched from a third set to a super-tiebreak. It is tempting to use match times from doubles, but there are two problems. First, match data on doubles is woefully sparse. Second, the factors that influence match length, such as average point length and time between points, are different in doubles and singles.

Using only WTA singles data, here’s what we need to do:

  1. Determine how many matches would be affected by the switch
  2. Figure out how much time is consumed by existing third sets
  3. Estimate the length of singles super-tiebreaks
  4. Calculate the impact (measured in time saved) of the change

The issue: three-setters

Through last week’s tournaments on the WTA tour this year, I have match lengths (in minutes) for 1,915 completed singles matches. I’ve excluded Grand Slam events, since third sets at three of the four Slams can extend beyond 6-6, skewing the length of a “typical” third set.

The average length of a WTA singles match is about 97 minutes, with a range from 40 minutes up to 225 minutes. Here is a look at the distribution of match times this year:

[Histogram: distribution of WTA singles match lengths, 2016]

The most common lengths are between 70 and 90 minutes. Some executives may wish to shorten all matches–switching to no-ad games (which I’ve considered here) or a more radically different format such as Fast4–but for now, I think it’s fair to assume that those 90-minute matches are safe from tinkering.

If there is a “problem” with long matches–both for fan engagement and scheduling–it arises mostly with three-setters. About one-third of WTA matches go to a third set, and these account for nearly all of the contests that last longer than two hours. 460 matches have passed the two-hour mark this season. Of those, all but 24 required a third set.

Here is the distribution of match lengths for WTA three-setters this season:

[Histogram: distribution of match lengths for WTA three-setters, 2016]

If we simply removed all third sets, nearly all matches would finish within two hours. Of course, if we did that, we’d be left with an awful lot of ties. Instead, we’re talking about replacing third sets with something shorter.

Goodbye, third set

Third sets are a tiny bit shorter than the first and second sets in three-setters. If we count sets that go to tiebreaks as 14 games, the average number of games in a third set is 9.5, while the typical number of games in the first and second sets of a three-setter is 9.7.

Those counts are close enough that we can estimate the length of each set very simply, as one-third the length of the match. There are other considerations, such as the frequency of toilet breaks before third sets and the number of medical timeouts in different sets, but even if we did want to explore those minor issues, there is very little available data to guide us in those areas.

The length of a super-tiebreak

The typical WTA three-setter involves about 189 individual points, so we can roughly estimate that foregoing the third set saves about 63 points. How many points are added back by playing a super-tiebreak?

The math gets rather involved here, so I’ll spare you most of the details. Using the typical rate of service and return points won by each player in three-setters (58% on serve and 46% on return for the better player that day), we can use my tiebreak probability model to determine the distribution of possible outcomes, such as a final score 10-7 or 12-10.

Long story short, the average super-tiebreak would require about 19 points, less than one-third the number needed by the average third-set.

That still doesn’t quite answer our question, though. We’re interested in time savings, not point reduction. The typical WTA third set takes about 44 minutes, or about 42 seconds per point. Would a super-tiebreak be played at the same pace?

Tiebreak speed

While 10-point breakers are largely uncharted territory in singles, 7-point tiebreaks are not, and we have plenty of data on the latter. It seems reasonable to extend conclusions about 7-pointers to their 10-point cousins, since they are played under similar rules–switch servers every two points, change ends every six–and under comparable levels of increased pressure.

Using IBM’s point-by-point data from this year’s Grand Slam women’s draws, we have timestamps on about 700 points from tiebreaks. Even though the 42-seconds-per-point estimate for full sets includes changeovers, tiebreaks are played even more slowly. Including mini-changeovers within tiebreaks, points take about 54 seconds each, almost 30% longer than the traditional-set average.

The bottom line impact of third-set super-tiebreaks

As we’ve seen, the average third set takes about 44 minutes. A 19-point super-tiebreak, at 54 seconds per point, comes in at about 17 minutes, chopping more than 60% off the length of the typical third set, or about 20% off the length of the entire match.
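Here is the arithmetic behind those figures, using the assumptions laid out above: a 44-minute average third set, a 19-point breaker at 54 seconds per point, and a third set that accounts for roughly one-third of a three-setter’s length.

```python
third_set_minutes = 44                      # average WTA third set
breaker_minutes = 19 * 54 / 60              # 19 points at 54 seconds each

set_share_saved = 1 - breaker_minutes / third_set_minutes
match_share_saved = set_share_saved / 3     # third set is ~1/3 of a three-setter

print(f"super-tiebreak length: about {breaker_minutes:.0f} minutes")
print(f"share of the third set saved: {set_share_saved:.0%}")      # ~61%
print(f"share of the whole match saved: {match_share_saved:.0%}")  # ~20%
```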

If we alter this year’s WTA singles match times accordingly, reducing the length of all three-setters by one-fifth, we get some results that certain tennis executives will love. The average match time falls from 97 minutes to 89 minutes, and more importantly, far fewer matches cross the two-hour threshold.

Of the 460 matches this season over two hours in length, we would expect third-set super-tiebreaks to eliminate more than two-thirds of them, knocking the total down to 147. Here is the revised match length distribution, based on the assumptions I’ve laid out in this post:

[Histogram: revised distribution of WTA match lengths with third-set super-tiebreaks]

The biggest benefit to switching to a third-set super-tiebreak is probably related to scheduling. By massively cutting down the number of marathon matches, it’s less likely that players and fans will have to wait around for an 11:00 PM start.

Of the various proposals floating around to shorten matches–third-set super-tiebreaks, no-ad scoring, playing service lets, and Fast4–changing the third-set format strikes the best balance of shortening the longest matches without massively changing the nature of the sport.

Personally, I hope none of these changes are ever seen on a WTA or ATP singles court. After all, I like tennis and tend to rankle at proposals that result in less tennis. If something must be done, I’d prefer it involve finding new executives to replace the ones who can’t stop tinkering with the sport. But if some rule needs to be changed to shorten matches and make scheduling more TV-friendly, this is likely the easiest one to stomach.

Measuring the Clutchness of Everything

Matches are often won or lost by a player’s performance on “big points.” With a few clutch aces or un-clutch errors, it’s easy to gain a reputation as a mental giant or a choker.

Aside from the traditional break point stats, which have plenty of limitations, we don’t have a good way to measure clutch performance in tennis. There’s a lot more to this issue than counting break points won and lost, and it turns out that a lot of the work necessary to quantify clutchness is already done.

I’ve written many times about win probability in tennis. At any given point score, we can calculate the likelihood that each player will go on to win the match. Back in 2010, I borrowed a page from baseball analysts and introduced the concept of volatility, as well. (Click the link to see a visual representation of both metrics for an entire match.) Volatility, or leverage, measures the importance of each point–the difference in win probability between a player winning it or losing it.

To put it simply, the higher the leverage of a point, the more valuable it is to win. “High leverage point” is just a more technical way of saying “big point.”  To be considered clutch, a player should be winning more high-leverage points than low-leverage points. You don’t have to win a disproportionate number of high-leverage points to be a very good player–Roger Federer’s break point record is proof of that–but high-leverage points are key to being a clutch player.

(I’m not the only person to think about these issues. Stephanie wrote about this topic in December and calculated a full-year clutch metric for the 2015 ATP season.)

To make this more concrete, I calculated win probability and leverage (LEV) for every point in the Wimbledon semifinal between Federer and Milos Raonic. For the first point of the match, LEV = 2.2%. Raonic could boost his match odds to 50.7% by winning it or drop to 48.5% by losing it. The highest leverage in the match was a whopping 32.8%, when Federer (twice) had game point at 1-2 in the fifth set. The lowest leverage of the match was a mere 0.03%, when Raonic served at 40-0, down a break in the third set. The average LEV in the match was 5.7%, a rather high figure befitting such a tight match.
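Computing leverage requires nothing more than two conditional win probabilities. The snippet below uses the figures just quoted for the first point of the match; a real implementation would pull both numbers from a full win probability model.

```python
def leverage(p_if_win, p_if_lose):
    """Leverage of a point: the swing in match win probability between
    winning it and losing it."""
    return p_if_win - p_if_lose

# First point of the Federer-Raonic semifinal, from Raonic's perspective,
# using the win probabilities quoted above.
print(f"LEV = {leverage(0.507, 0.485):.1%}")   # 2.2%
```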

On average, the 166 points that Raonic won were slightly more important, with LEV = 5.85%, than Federer’s 160, at LEV = 5.62%. Without doing a lot more work with match-level leverage figures, I don’t know whether that’s a terribly meaningful difference. What is clear, though, is that certain parts of Federer’s game fell apart when he needed them most.

By Wimbledon’s official count, Federer committed nine unforced errors, not counting his five double faults, which we’ll get to in a minute. (The Match Charting Project log says Fed had 15, but that’s a discussion for another day.) There were 180 points in the match where the return was put in play, with an average LEV = 6.0%. Federer’s unforced errors, by contrast, had an average LEV nearly twice as high, at 11.0%! The typical leverage of Raonic’s unforced errors was a much less noteworthy 6.8%.

Fed’s double fault timing was even worse. Those of us who watched the fourth set don’t need a fancy metric to tell us that, but I’ll do it anyway. His five double faults had an average LEV of 13.7%. Raonic double faulted more than twice as often, but the average LEV of those points, 4.0%, means that his 11 doubles had less of an impact on the outcome of the match than Roger’s five.

Even the famous Federer forehand looks like less of a weapon when we add leverage to the mix. Fed hit 26 forehand winners, in points with average LEV = 5.1%. Raonic’s 23 forehand winners occurred during points with average LEV = 7.0%.

Taking these three stats together, it seems like Federer saved his greatness for the points that didn’t matter as much.

The bigger picture

When we look at a handful of stats from a single match, we’re not improving much on a commentator who vaguely summarizes a performance by saying that a player didn’t win enough of the big points. While it’s nice to attach concrete numbers to these things, the numbers are only worth so much without more context.

In order to gain a more meaningful understanding of this (or any) performance with leverage stats, there are many, many more questions we should be able to answer. Were Federer’s high-leverage performances typical? Does Milos often double fault on less important points? Do higher-leverage points usually result in more returns in play? How much can leverage explain the outcome of very close matches?

These questions (and dozens, if not hundreds more) signal to me that this is a fruitful field for further study. The smaller-scale numbers, like the average leverage of points ending with unforced errors, seem to have particular potential. For instance, it may be that Federer is less likely to go for a big forehand on a high-leverage point.

Despite the dangers of small samples, these metrics allow us to pinpoint what, exactly, players did at more crucial moments. Unlike some of the more simplistic stats that tennis fans are forced to rely on, leverage numbers could help us understand the situational tendencies of every player on tour, leading to a better grasp of each match as it happens.

How Much Is a Challenge Worth?

When the Hawkeye line-calling system is available, tennis players are given the right to make three incorrect challenges per set. As with any situation involving scarcity, there’s a choice to make: Take the chance of getting a call overturned, or make sure to keep your options open for later?

We’ve learned over the last several years that human line-calling is pretty darn good, so players don’t turn to Hawkeye that often. At the Australian Open this year, men challenged fewer than nine calls per match–well under three per set or, put another way, less than 1.5 challenges per player per set. Even at that low rate of fewer than once per thirty points, players are usually wrong. Only about one in three calls are overturned.

So while challenges are technically scarce, they aren’t that scarce.  It’s a rare match in which a player challenges so often and is so frequently incorrect that he runs out. That said, it does happen, and while running out of challenges is low-probability, it’s very high risk. Getting a call overturned at a crucial moment could be the difference between winning and losing a tight match. Most of the time, challenges seem worthless, but in certain circumstances, they can be very valuable indeed.

Just how valuable? That’s what I hope to figure out. To do so, we’ll need to estimate the frequency with which players miss opportunities to overturn line calls because they’ve exhausted their challenges, and we’ll need to calculate the potential impact of failing to overturn those calls.

A few notes before we go any further. The extra challenge awarded to each player at the beginning of a tiebreak would make the analysis much more daunting, so I’ve ignored both that extra challenge and points played in tiebreaks. I suspect it has little effect on the results. I’ve limited this analysis to the ATP, since men challenge more frequently and get calls overturned more often. And finally, this is a very complex, sprawling subject, so we often have to make simplifying assumptions or plug in educated guesses where data isn’t available.

Running out of challenges

The Australian Open data mentioned above is typical for ATP challenges. It is very similar to a subset of Match Charting Project data, suggesting that both challenge frequency and accuracy are about the same across the tour as they are in Melbourne.

Let’s assume that each player challenges a call roughly once every sixty points, or 1.7%. Given an approximate success rate of 30%, each player makes an incorrect challenge on about 1.2% of points and a correct challenge on 0.5% of points. Later on, I’ll introduce a different set of assumptions so we can see what different parameters do to the results.

Running out of challenges isn’t in itself a problem. We’re interested in scenarios when a player not only exhausts his challenges, but when he also misses an opportunity to overturn a call later in the set. These situations are much less common than all of those in which a player might want to contest a call, but we don’t care about the 70% of those challenges that would be wrong, as they wouldn’t have any effect on the outcome of the match.

For each possible set length, from 24-point golden sets up to 93-point marathons, I ran a Monte Carlo simulation, using the assumptions given above, to determine the probability that, in a set of that length, a player would miss a chance to overturn a later call. As noted above, I’ve excluded tiebreaks from this analysis, so I counted only the number of points up to 6-6. I also excluded all “advantage” fifth sets.

For example, the most common set length in the data set is 57 points, which occurred 647 times. In 10,000 simulations, a player missed a chance to overturn a call 0.27% of the time. The longer the set, the more likely that challenge scarcity would become an issue. In 10,000 simulations of 85-point sets, players ran out of challenges more than three times as often: In 0.92% of the simulations, a player was unable to challenge a call that would have been overturned.
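For anyone who wants to reproduce the idea, here is a minimal version of that simulation under the assumptions above: on each point, independently, there is a 1.2% chance the player makes an incorrect challenge and a 0.5% chance of an opportunity that a challenge would overturn, and only incorrect challenges count against the limit of three.

```python
import random

P_INCORRECT = 0.012   # point where the player makes an incorrect challenge
P_CORRECT = 0.005     # point where a challenge would be overturned

def missed_overturn(points, challenges=3):
    """Simulate one set of a given length; return True if the player
    exhausts his challenges and later misses a would-be overturn."""
    remaining = challenges
    for _ in range(points):
        r = random.random()
        if r < P_CORRECT:
            if remaining == 0:
                return True          # missed chance to overturn a call
            # correct challenges are not charged against the limit
        elif r < P_CORRECT + P_INCORRECT:
            if remaining > 0:
                remaining -= 1       # burned one of the three
    return False

def miss_rate(points, trials=100_000):
    return sum(missed_overturn(points) for _ in range(trials)) / trials

print(f"57-point set: {miss_rate(57):.2%}")   # roughly 0.3%
print(f"85-point set: {miss_rate(85):.2%}")   # roughly 0.9%
```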

These simulations are simple, assuming that each point is identical. Of course, players are aware of the cap on challenges, so with only one challenge remaining, they may be less likely to contest a “probably correct” call, and they would be very unlikely to use a challenge just to earn a few extra seconds of rest. Further, the fact that players sometimes use Hawkeye for a bit of a break suggests that what we might call “true” challenges–instances in which the player believes the original call was wrong–are a bit less frequent than the numbers we’re using. Ultimately, we can’t address these concerns without a more complex model and quite a bit of data we don’t have.

Back to the results. Taking every possible set length and the results of the simulation for each one, we find the average player is likely to run out of challenges and miss a chance to overturn a call roughly once every 320 sets, or 0.31% of the time. That’s not very often–for almost all players, it’s less than once per season.

The impact of (not) overturning a call

Just because such an outcome is infrequent doesn’t necessarily mean it isn’t important. If a low-probability event has a high enough impact when it does occur, it’s still worth planning for.

Toward the end of a set, when most of these missed chances would occur, points can be very important, like break point at 5-6. But other points are almost meaningless, like 40-0 in just about any game.

To estimate the impact of these missed opportunities, I ran another set of Monte Carlo simulations. (This gets a bit hairy–bear with me.) For each set length, for those cases when a player ran out of challenges, I found the average number of points at which he used his last challenge. Then, for each run of the simulation, I took a random set from the last few years of ATP data with the corresponding number of points, chose a random point between the average time that the challenges ran out and the end of the set, and measured the importance of that point.

To quantify the importance of the point, I calculated three probabilities from the perspective of the player who lost the point and, had he conserved his challenges, could have overturned it:

  1. his odds of winning the set before that point was played
  2. his odds of winning the set after that point was played (and not overturned)
  3. his odds of winning the set had the call been overturned and the point awarded to him.

(To generate these probabilities, I used my win probability code posted here with the assumption that each player wins 65% of his service points. The model treats points as independent–that is, the outcome of one point does not depend on the outcomes of previous points–which is not precisely true, but it’s close, and it makes things immensely more straightforward. Alert readers will also note that I’ve ignored the possibility of yet another call that could be overturned. However, the extremely low probability of that event convinced me to avoid the additional complexity required to model it.)

Given these numbers, we can calculate the possible effects of the challenge he couldn’t make. The difference between (2) and (3) is the effect if the call would’ve been overturned and awarded to him. The difference between (1) and (2) is the effect if the point would have been replayed. This is essentially the same concept as “leverage index” in baseball analytics.

Again, we’re missing some data–I have no idea what percentage of overturned calls result in each of those two outcomes. For today, we’ll say it’s half and half, so to boil down the effect of the missed challenge to a single number, we’ll average those two differences.

For example, let’s say we’re at five games all, and the returner wins the first point of the 11th game. The server’s odds of winning the set have decreased from 50% (at 5-all, love-all) to 43.0%. If the server got the call overturned and was awarded the point, his odds would increase to 53.8%. Thus, the win probability impact of overturning the call and taking the point is 10.8%, while the effect of forcing a replay is 7.0%. For the purposes of this simulation, we’re averaging these two numbers and using 8.9% as the win probability impact of this missed opportunity to challenge.
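Those figures can be reproduced with a simple set-level model built on the same assumptions: independent points, each server winning 65% of his service points, and a 6-6 tiebreak that, with equal serve percentages, amounts to a coin flip. The sketch below is not the win probability code linked above, just a self-contained illustration.

```python
from functools import lru_cache

P = 0.65  # each player wins 65% of his service points, as assumed above

@lru_cache(maxsize=None)
def game_prob(a, b):
    """Probability the server holds from point score (a, b); 0-15 is (0, 1)."""
    if a == b == 3:                       # deuce: closed-form solution
        return P * P / (P * P + (1 - P) * (1 - P))
    if a >= 4 and a - b >= 2:
        return 1.0
    if b >= 4 and b - a >= 2:
        return 0.0
    return P * game_prob(a + 1, b) + (1 - P) * game_prob(a, b + 1)

@lru_cache(maxsize=None)
def set_prob(ga, gb, serving):
    """Probability the player with ga games wins the set; `serving` says
    whether he serves the next game. With both players at 65%, the 6-6
    tiebreak is exactly a coin flip."""
    if ga == gb == 6:
        return 0.5
    if ga == 7 or (ga == 6 and ga - gb >= 2):
        return 1.0
    if gb == 7 or (gb == 6 and gb - ga >= 2):
        return 0.0
    hold = game_prob(0, 0)
    if serving:
        return hold * set_prob(ga + 1, gb, False) + (1 - hold) * set_prob(ga, gb + 1, False)
    return (1 - hold) * set_prob(ga + 1, gb, True) + hold * set_prob(ga, gb + 1, True)

def server_set_prob(ga, gb, a, b):
    """Set win probability for the current server at game score (ga, gb)
    and point score (a, b) in the game he is serving."""
    g = game_prob(a, b)
    # After this game, the opponent serves, so pass serving=False.
    return g * set_prob(ga + 1, gb, False) + (1 - g) * set_prob(ga, gb + 1, False)

before     = server_set_prob(5, 5, 0, 0)   # 50.0%
after_loss = server_set_prob(5, 5, 0, 1)   # ~43.0%
overturned = server_set_prob(5, 5, 1, 0)   # ~53.8%
print(f"point awarded:  {overturned - after_loss:.1%}")   # ~10.8 points
print(f"point replayed: {before - after_loss:.1%}")       # ~7.0 points
print(f"averaged impact: {(overturned + before - 2 * after_loss) / 2:.1%}")  # ~8.9
```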

Back to the big picture. For each set length, I ran 1,000 simulations like what I’ve described above and averaged the results. In short sets under 40 points, the win probability impact of the missed challenge is less than five percentage points. The longer the set, the bigger the effect: Long sets are typically closer and the points tend to be higher-leverage. In 85-point sets, for instance, the average effect of the missed challenge is a whopping 20 percentage points–meaning that if a player more skillfully conserved his challenges in five such sets, he’d be able to reverse the outcome of one of them.

On average, the win probability effect of the missed challenge is 12.4 percentage points. In other words, better challenge management would win a player one more set for every eight times he didn’t lose such an opportunity by squandering his challenges.

The (small) big picture

Let’s put together the two findings. Based on our assumptions, players run out of challenges and forgo a chance to overturn a later call about once every 320 sets. We now know that the cost of such a mistake is, on average, a 12.4 percentage point win probability hit.

Thus, challenge management costs an average player one set out of every 2600. Given that many matches are played on clay or on courts without Hawkeye, that’s maybe once in a career. As long as the assumptions I’ve used are in the right ballpark, the effect isn’t even worth talking about. The mental cost of a player thinking more carefully before challenging might be greater than this exceedingly unlikely benefit.

What if some of the assumptions are wrong? Anecdotally, it seems like challenges cluster in certain matches, because of poor officiating, bad lighting, extreme spin, precise hitting, or some combination of these. It seems possible that certain scenarios would arise in which a player would want to challenge much more frequently, and even though he might gain some accuracy, he would still increase the risk.

I ran the same algorithms for what seems to me to be an extreme case, almost doubling the frequency with which each player challenges, to 3.0%, and somewhat increasing the accuracy rate, to 40%.

With these parameters, a player would run out of challenges and miss an opportunity to overturn a call about six times more often–once every 54 sets, or 1.8% of the time. The impact of each of these missed opportunities doesn’t change, so the overall result also increases by a factor of six. In this extreme case, poor challenge management would cost a player the set 0.28% of the time, or once every 356 sets. That’s a less outrageous number, representing perhaps one set every second year, but it also applies to an unusual set of circumstances which is very unlikely to follow a player to every match.

It seems clear that three challenges is enough. Even in long sets, players usually don’t run out, and when they do, it’s rare that they miss an opportunity that a fourth challenge would have afforded them. The effect of a missed chance can be enormous, but they are so infrequent that players would see little or no benefit from tactically conserving challenges.