New Book Review: "What is a P-Value Anyway?"

New book review for What is a P-Value Anyway? 34 Stories to Help You Actually Understand Statistics, by Andrew J. Vickers, Addison-Wesley, 2009, reposted here:

Rating: 5 out of 5 stars

Given the increased focus on analytics in technology and business, this gem of a book offers grounding to many who have lost sight of what statistics is all about. While I can easily see this author writing a text that focuses solely on the p-value, the potential reader should instead judge the scope of what Vickers provides in this 200-page effort by the subtitle, which points to understanding statistics in general. Other reviews are accurate: this book is small, but it offers substance in an entertaining manner unlike any statistics text I have ever come across, and certainly nothing like what was offered in my undergraduate statistics coursework.

In the introduction, the author summarizes the content well by noting that "the first 12 chapters deal with some basics, such as averages, variation, distributions and confidence intervals. I then have a few chapters on hypothesis testing and p-values, before discussing regression – the statistical method I use most in my work – and decision making – which generally should be, but often isn't, what statistics is about. The last third of the book, starting from the chapter 'One better than Tommy John', is devoted to discussing a wide variety of statistical errors."

"If it seems odd to devote so much of a book to slip-ups, it is because I have a little theory that 'science' is just a special name for 'learning from our mistakes'. When I teach, I give bonus points for any student giving a particularly dumb answer because those are the ones we really learn from. In fact, I don't think you can really understand, say, a p-value, without seeing some of the ways it has been misused and thinking through why these constitute mistakes. So please don't blow these chapters off thinking you've read the stuff you'll be examined on: the final chapters will really fill in your statistical knowledge."

Chapters that I especially appreciated include Chapter 9 on degree of normal distribution fit, Chapter 11 on variation and confidence intervals, Chapters 13, 14, 15, 23, and 29 on p-values, Chapter 17 on sample size, precision, and statistical power, Chapter 19 on regression and confounding, Chapter 20 on specificity and sensitivity, Chapter 21 on decision analysis, Chapter 22 on statistical errors, and Chapter 32 on science, statistics, and reproducibility. The discussion section that makes up the last quarter of the book works through the questions posed at the end of each of the 34 chapters, and much of the value I personally took from this text can be found there.

In my opinion, it is the rare reader who will not find something in these author discussions worth remembering, because Vickers simply tells it like it is. In Chapter 5, for example, the author describes a continuous variable as one that can take "a lot of different values", and in the discussion section for this chapter he points out that "statisticians disagree on this point (statisticians disagree on a lot of points, which just goes to show how much of statistics is a judgement call)." In Chapter 13, the author indicates that he "provided strong evidence" for something, and in the discussion section comments that "'proof' is not a word often used by scientists."

"Statisticians are particularly careful with the word 'proof', because they are keenly aware of the limitations of data, and the important role that chance plays in any set of results. Statisticians normally use the word 'proof' only to refer to mathematical relationships between formulas. The point here is that you don't use data to do math theory, so you aren't subject to the limitations of data, and so you can go about really claiming to have 'proved' something. It is certainly unwise to think that you can prove anything by applying a statistical test to a data set."

In Chapter 9, the author notes that "statisticians don't typically seem to worry too much about whether or not the data are a close fit to the normal distribution because they realize that statistics isn't football, and no one is going to throw a flag and send you back 10 yards if you are caught breaking the rules. In fact, there aren't really many 'rules' at all." After quoting one sentence from a scientific paper describing a clinical trial in Chapter 22, the author works through each of the four statistical errors the researchers made in that single sentence and explains why he cares about such errors.
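
The "throw a flag" comment lends itself to a quick check. The following is my own rough sketch (not from the book), again assuming numpy and scipy: two groups are repeatedly drawn from the same heavily skewed exponential distribution, so the null hypothesis is true by construction, and the two-sample t-test's false-positive rate is tallied.

```python
# My own rough sketch (not from the book): with moderate sample sizes, the
# two-sample t-test's false-positive rate stays close to the nominal 5% even
# when the data are clearly non-normal. Assumes numpy and scipy are installed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 2000
false_positives = 0
for _ in range(n_sims):
    # Both groups come from the same skewed distribution, so the null is true
    a = rng.exponential(scale=1.0, size=50)
    b = rng.exponential(scale=1.0, size=50)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"Observed false-positive rate: {false_positives / n_sims:.3f} (nominal: 0.050)")
```

In this sketch the observed rate lands close to 5%, which is the kind of robustness that lets statisticians relax about a perfect fit to the normal distribution.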

"Many people seem to think that we statisticians spend most of our time doing calculations, but that is perhaps the least interesting thing we do. Far more important is that we spend time looking at numbers and thinking through what they mean. If I see any number in a scientific report that is meaningless – a p-value for baseline differences in a randomized trial, say, or a sixth significant figure – I know that the authors are not being careful about what they are doing, they are just pulling numbers from a computer printout. Statistics is more than just cutting and pasting from one software package to another. We have to think about what the numbers mean and the implications for our scientific question." Well said.
