
Davis Balestracci

Management

Golf, Statistically

Commentator hyperbole and the Masters golf tournament

Published: Monday, June 15, 2015 - 14:51

To celebrate Father’s Day in the United States (June 21 this year), I’m going to use this and my next column to honor my late dad by using a game he loved—golf—to teach some very basic statistics lessons. Some of these may have been lost on you previously, not through some fault of your own, but rather from trainers’ tendency to concentrate on a technique’s mechanics. Analysis of means (ANOM) might be new to many of you, but even if it’s a review, I hope you have as much fun reading this as I did writing it.

The recent Masters tournament, in which 97 golfers participated, will provide the data. The Masters is the crème-de-la-crème of golf tournaments. One qualifies by winning a major tournament or by formal invitation. Past champions qualify automatically.

The first two rounds of any tournament are used to establish the “cut” to narrow the field for the last two rounds. Cut rule: Following the second round, the 50 golfers with the lowest scores, plus ties, plus any golfer within 10 strokes of the lead, advance to play the final two rounds. In this case, players with scores above 146 were cut, narrowing the field from 97 to 55.

Here’s the analysis of variance (ANOVA) for the first two rounds:

Pre-cut: first two rounds

Source   DF        SS       MS     F      P
Round     1     19.814   19.814  2.35  0.129
Golfer   96   1755.938   18.291  2.17  0.000
Error    96    810.186    8.439
Total   193   2585.938

S = 2.90

The standard deviation yielded by the ANOVA confirmed my previous experiences in looking at tournament scores. It consistently comes in around 2.5 to 3.
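As a quick sanity check, the F ratios and the pooled standard deviation follow directly from the mean squares in the ANOVA table above (a minimal sketch; all numbers are taken from the table):

```python
import math

# Mean squares from the ANOVA table above
ms_round, ms_golfer, ms_error = 19.814, 18.291, 8.439

# F = factor MS / Error MS
f_round = ms_round / ms_error    # ≈ 2.35
f_golfer = ms_golfer / ms_error  # ≈ 2.17

# Pooled (common-cause) standard deviation: S = sqrt(MS Error)
s = math.sqrt(ms_error)          # ≈ 2.90

print(round(f_round, 2), round(f_golfer, 2), round(s, 2))
```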

The seemingly arbitrary choice of 10 strokes from the lead is interesting. There’s something one can calculate called the least significant difference to declare two numbers in an analysis different. In this case, we’d use the t-value for 95-percent confidence and 96 df applied to the difference of two, two-round scores:

1.984 × sqrt(2 × (2 × 8.439)) ≈ 11.5.

To be “statistical,” maybe the difference of 10 strokes from the leaders should be 12 instead? (Makes no difference in this case.)
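The least-significant-difference arithmetic is worth seeing spelled out: each round carries common-cause variance MS Error = 8.439, a two-round total therefore has variance 2 × 8.439, and the difference of two such totals doubles it again (a sketch using the t-value of 1.984 quoted above):

```python
import math

ms_error = 8.439  # common-cause variance of a single round (from the ANOVA)
t_95 = 1.984      # t-value for 95-percent confidence, ~96 df

# Variance of (golfer A's two-round total - golfer B's two-round total)
lsd = t_95 * math.sqrt(2 * (2 * ms_error))
print(round(lsd, 1))  # ≈ 11.5 strokes
```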

I then did some basic calculations that might explain why even the best golfers can have what looks like an “off” day, which of course is usually treated as a special cause.

Using the standard deviation of 2.9, any one round can be anywhere from six to nine strokes—2 to 3 standard deviations—above or below par just “because.”

Lesson 1: A basic control chart calculation to get an interesting (and surprising) number

Because there was no difference by round (p-value of 0.129), I made the control chart calculation using the range of every individual golfer’s two rounds (i.e., the absolute value of their difference) to calculate an upper limit for how much two consecutive individual rounds could differ just due to common cause:

Average moving range = 3.237; 3.237 × 3.268 ≈ 11 (a difference of 11 occurred for two golfers; no one exceeded it)
Median moving range = 3; 3 × 3.865 ≈ 12
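The two upper limits above come straight from the standard control-chart constants for a moving range of n = 2 subgroups (3.268 for the average moving range, 3.865 for the median moving range):

```python
# Upper limit on how much two consecutive rounds can differ
# due to common cause alone, two ways:
avg_mr = 3.237     # average moving range across all golfers
median_mr = 3.0    # median moving range

upper_from_avg = 3.268 * avg_mr        # ≈ 10.6, i.e., about 11 strokes
upper_from_median = 3.865 * median_mr  # ≈ 11.6, i.e., about 12 strokes
print(round(upper_from_avg), round(upper_from_median))
```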

ANOVA answers the question, “Is there any evidence of ‘criminality’—i.e., differences?” ANOM answers the question, “Who, specifically, is ‘guilty’?”

ANOM is a brilliant and woefully underutilized technique invented by the late Ellis Ott. In this context, it treats the golfers as a “system.” It’s not as if each golfer had a specific experimental “treatment” applied to him to test. A pro is a pro, and that in and of itself makes for a fair comparison, so it’s assumed that the golfers are equivalent unless proven otherwise.

Isn’t this similar to what we should do to expose special causes of variation in a seemingly stable process?

Using the standard deviation of 2.9 to do an ANOM on the two qualifying rounds, we get:

If you’re curious about who’s who, here is the link to the leader board. This ranking of the final scores forms the horizontal axis order.
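The ANOM chart's limits of 134 to 159 (quoted again below) can be reproduced, to a close approximation, as the grand mean of the two-round totals plus or minus three standard deviations of a total: with a per-round standard deviation of 2.9, a two-round total has standard deviation 2.9 × sqrt(2). This is only a sketch; exact ANOM limits use a tabled constant that depends on the number of golfers and the chosen risk.

```python
import math

grand_mean = 146.21  # mean two-round total for the 97 entrants
s_round = 2.9        # common-cause standard deviation of one round

s_total = s_round * math.sqrt(2)  # sd of a two-round total
lower = grand_mean - 3 * s_total  # ≈ 134
upper = grand_mean + 3 * s_total  # ≈ 159
print(round(lower), round(upper))
```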

The highest score was made by the venerable past champion, Ben Crenshaw, long retired and obviously past his prime, who announced that this was his final Masters. The next highest score was the 2003 champion, whose Masters scores for the past nine consecutive years were above par. (A special cause indicating a deteriorating game? If you click on the link, click on the “player record” tab to get to his scores.) The next three scores were amateurs’, probably in a bit over their heads.

Surprised by the extent of the variation? There’s quite a lottery aspect to it. Maybe all one can say to pro golfers Nos. 56 to 92 is, “Better luck next time!”

Lesson 2: Normal distribution isn’t sacrosanct

I was curious what the nonparametric boxplot would yield for outlier criteria vis-à-vis the ANOM graph, which had limits of 134 to 159. For the 97 entrants’ first two rounds totals, I got:

N    Mean     StDev   Minimum   Q1    Median   Q3    Maximum
97   146.21   6.05    130       142   145      149   176

[Interquartile range: Q3 – Q1 = 149 – 142 = 7]

Lower outlier criterion: anything below Q1 – 1.5 × (Q3 – Q1) = 142 – 1.5 × 7 = 131.5

Upper outlier criterion: anything above Q3 + 1.5 × (Q3 – Q1) = 149 + 1.5 × 7 = 159.5
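These Tukey boxplot fences follow mechanically from the five-number summary above:

```python
# Tukey boxplot outlier fences from the quartiles of the 97 two-round totals
q1, q3 = 142, 149
iqr = q3 - q1                 # interquartile range = 7

lower_fence = q1 - 1.5 * iqr  # 131.5: totals below this are outliers
upper_fence = q3 + 1.5 * iqr  # 159.5: totals above this are outliers
print(lower_fence, upper_fence)
```

Note how close these nonparametric fences (131.5 to 159.5) land to the ANOM limits of 134 to 159.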

Is sports commentary mostly about explaining common cause as special cause? The CBS announcers were especially surprised that these four golfers didn’t make the cut: “Jim Furyk... could not put it together in either round... and therefore will not play the weekend in Augusta for the first time since 2010 and just the [third] time in his career.”

+2, +1, difference = 1. Do the math. Er... uh... since he missed the cut, isn’t this the first time he missed the cut since he last missed the cut? So the announcer’s point is—? In terms of Masters history, Furyk had made the cut 16 out of 19 times. The probability of making the cut six consecutive years is (16/19)^6 ≈ 0.36. Missing at least once in that stretch was therefore a very real possibility (≈ 0.64), but there was a reasonable possibility of getting by… this year.

“Brandt Snedeker... missed the cut at the Masters for the first time since 2009.”

Snedeker had the same scores as Furyk, only in his case it was his first missed cut since 2009. So the announcer’s point is—? Masters history: Snedeker had made the cut six out of eight times. The probability of making the cut seven years in a row is (6/8)^7 ≈ 0.13. It was getting to be about time for Snedeker to miss one, but he could have sneaked by.
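The streak arithmetic for both golfers can be checked in a couple of lines, treating each year's cut as an independent event occurring at the golfer's historical make rate (a simplifying assumption, of course):

```python
def streak_prob(made: int, played: int, years: int) -> float:
    """P(making the cut `years` in a row) at the historical make rate."""
    return (made / played) ** years

furyk = streak_prob(16, 19, 6)   # ≈ 0.36
snedeker = streak_prob(6, 8, 7)  # ≈ 0.13
print(round(furyk, 2), round(snedeker, 2))
```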

“J. B. Holmes... [scored] –1 for the day on Friday, but it was not enough to save him from [his] four over 76 on Thursday.”
+4, –1, difference = 5. Do the math. Masters history: Holmes made the cut one out of two times.

“Billy Horschel...[looked] to be right in the thick of things on Thursday with a two under, [but] fell apart Friday, going six over without a single birdie to his name.”
–2, +6, difference = 8. Do the math! Masters history: Horschel made the cut one out of two times.

Here’s an ANOM of the fraction of the time the 55 Masters qualifiers, plus these four, made the cut during all of the 2014–2015 tournaments they played:

So even an elite golfer makes the cut only about 76 percent of the time. Could the reason the four golfers singled out as surprises missed the cut simply be “just because”?
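To give a feel for how ANOM-style limits on a fraction behave, here is a rough binomial sketch. The per-golfer tournament counts aren't given in the article, so the sample size below is a made-up illustration, not the chart's actual data:

```python
import math

p_bar = 0.76  # overall fraction of cuts made (from the chart discussion)
n = 20        # HYPOTHETICAL tournaments played by one golfer in 2014-2015

# Binomial three-sigma limits around the overall proportion
sigma = math.sqrt(p_bar * (1 - p_bar) / n)
lower = max(0.0, p_bar - 3 * sigma)  # ≈ 0.47
upper = min(1.0, p_bar + 3 * sigma)  # capped at 1.0
print(round(lower, 2), round(upper, 2))
```

With a season of only about 20 events, a golfer could make the cut as rarely as half the time without signaling anything beyond common cause.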

I was curious, so I looked up Jim Furyk’s cut history since 1994:

Talk about consistent! Is this compatible with his 16 out of 19 Masters record? Yes.

And then there was the statement that: “Horschel fell apart Friday, going six over without a single birdie to his name.” I did an ANOM on the fraction of birdies for the 59 golfers above, but I broke Horschel into his two separate 18-hole rounds (the round being referred to is the very last data point in the chart below):

A round with no birdies? It could happen just due to common cause!

In my next column I’ll continue this theme and analyze all four rounds—then give it a twist based on the ongoing enumerative vs. analytic conundrum.

Discuss

About The Author

Davis Balestracci’s picture

Davis Balestracci

Davis Balestracci is a past chair of ASQ’s statistics division. He has synthesized W. Edwards Deming’s philosophy as Deming intended—as an approach to leadership—in the second edition of Data Sanity (Medical Group Management Association, 2015), with a foreword by Donald Berwick, M.D. Shipped free or as an ebook, Data Sanity offers a new way of thinking using a common organizational language based in process and understanding variation (data sanity), applied to everyday data and management. It also integrates Balestracci’s 20 years of studying organizational psychology into an “improvement as built in” approach as opposed to most current “quality as bolt-on” programs. Balestracci would love to wake up your conferences with his dynamic style and entertaining insights into the places where process, statistics, organizational culture, and quality meet.

Comments

It was perfectly clear to ME what I meant

Sorry for the confusion.  That is a classic Analysis of Variance table.  Your main concern should be the "p" at the very right.  That is the probability that, if you declare an effect significant, you could be wrong (based on the current data).  Classic statistics courses teach that one is willing to take a 5% risk of that, so you are looking to see whether p < 0.05.

Other terms:  SS (sum of squares), df (degrees of freedom), MS (a factor's SS/df), and F (a statistical test of a factor's significance against the common cause, i.e., the Error MS: MS factor / MS Error).  The "p" results from looking up the value of "F" in a statistical table, which good statistical packages now do automatically -- I remember the days when they didn't!

I hope this helped.  Thanks for reading.

Davis

graph headers

Not being a golfer, I don't know what your first table headers refer to.  I haven't used statistics for a long time either.

DF
SS
MS
F
P

 What do these represent? 

Use of a run chart?

Thinking back to Tiger's golden years (e.g. 1995 to 2005 perhaps?), would a run chart of his combined 1st two round scores show he was consistently "different", i.e. better than the rest of the field? Perhaps his more recent scores would give a clear statistical signal of his powers waning?

I remember, come the final day, Nick Faldo was always in the reckoning for many years; perhaps his low scores in the 1st 2 rounds would consistently show him as an outstanding performer (in the 90s?).

Perhaps even Greg Norman, going back 15 to 20 years perhaps, would give a signal with his final round scores when he often went into the final round top of the leaderboard but failed to come away with the trophy due to a last round of perhaps 75 or 76? Was Norman's performance in these final rounds due to common cause or evidence of some potentially identifiable assignable cause affecting his game?

Here I'd be curious to see which technique/s you'd recommend if we included this year-by-year time element in the process as well.

thanks, scott.

I think you're onto something, Scott

Good comments and suggested analyses!  The data are easily available should one wish to do it.  I think I gave a good example with Jim Furyk's cut history and citing the recent performance of the 2003 champion.  Isn't it amazing what simply plotting data over time can do?

Davis

Six Sigma Golf

I agree with Davis; much of any golfer's variability is common, not special cause.

Most golfers don't think about tracking their shots, but 80% of golf shots occur within 100 yards of the hole, yet where do most golfers spend their time? On the range, hitting their driver. This is why the short game and putting are so essential to scoring well. A good drive is important, but pros only hit driver on a maximum of 15 holes (the others are par 3s). Keeping the number of putts per round under 30 is key for scoring. Want to score better? Double your putting practice. Simply tracking whether you tend to putt long or short, left or right will dramatically reduce your score.

Pro golfers think about hitting a sprinkler head on the fairway; amateurs think about hitting the fairway. Pro golfers think about hitting the hole; amateurs think about hitting the green. Amateurs have a much higher variation because they're thinking about a bigger target. Want a better score? Shrink your target.

Years ago I wrote a 24-page booklet on Six Sigma Golf (http://www.qimacros.com/pdf/golf.pdf). One reader didn't think you could apply Six Sigma to golf, so he tested it out and cut 10 strokes off his game. See how Six Sigma can improve your golf game.

Good insights, Jay

Insightful comments, as always, and, as usual, the elegant simplicity of your suggestions hit the nail right on the head.

Davis