A paper without p-values

All I seem to be writing about recently is rejection. So, today I thought I would write about a paper I have recently had accepted (they DO happen!). The paper is in press at the Quarterly Journal of Experimental Psychology, and I am very proud of it. (You can read it here; yay for Open Science.)

My pride is a consequence of several factors: it is the product of two years’ (hard) work and thought; it contains my first published attempts at computational modelling (see Appendices); it touches upon a very topical question (do memories for tasks/goals decay, or are they lost due to interference?); Ellen Cross was an incredibly intelligent Year 2 student at Keele Psychology at the time, so it was a pleasure to help her career along (she is now doing a PhD in Neuroscience); and it doesn’t contain a single p-value.

Yes, that’s right. There isn’t a single p-value in the paper. No null-hypothesis significance test. I must admit, I submitted the paper with some trepidation, and was expecting the reviewers and editor to ask for p-values to be included (or some other form of NHST), but to my delight there was no mention of it!

I am quite convinced by Geoff Cumming’s assertion that researchers should focus on estimation of effect sizes and meta-analytic thinking rather than NHST. This conviction began with reading his book (see here), and was cemented by his paper in Psychological Science.

So, instead of using NHST and p-values, I used confidence intervals. The paper is littered with plots like the one below; the panel on the left (A) shows the main data (mean response times [RTs] to task-switches and task-repetitions as a function of the interval between tasks, known as the response–cue interval [RCI]) together with their 95% confidence intervals. In this literature, one key effect is whether the difference between switch RT and repetition RT (the “switch cost”) decreases as RCI increases.

[Figure: Experiment 2 (E2rci). Panel A: mean switch and repetition RTs at each RCI; Panel B: switch costs at each RCI, with 95% confidence intervals.]

In keeping with the “estimation” philosophy of Cumming, I was more interested in the magnitude of the switch cost as RCI increased. So, in the right panel (B), I calculated the switch cost (switch RT minus repetition RT) for each RCI, and plotted them with their 95% confidence intervals. There are two types of confidence intervals: the thin one is the standard 95% CI, and the dark one is the Loftus & Masson CI for within-subject designs (see paper here). (If you want to know how to plot two error bars in Excel, I produced a short video here.)
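For anyone wanting to try this themselves, here is a minimal sketch (not from the paper) of how the two interval types can be computed. It assumes you have a subjects × conditions array of per-participant switch costs; the numbers below are made up purely for illustration.

```python
import numpy as np
from scipy import stats

def standard_ci(data, conf=0.95):
    """Ordinary between-subject CI half-widths, one per condition.

    data : 2-D array, shape (n_subjects, n_conditions).
    """
    n = data.shape[0]
    sem = data.std(axis=0, ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, n - 1)
    return t_crit * sem

def loftus_masson_ci(data, conf=0.95):
    """Within-subject CI half-width (Loftus & Masson, 1994).

    Built from the pooled error term of a one-way repeated-measures
    ANOVA, so the same half-width applies to every condition.
    """
    n, k = data.shape
    grand = data.mean()
    # Partition the sums of squares: total = subjects + conditions + error.
    ss_total = ((data - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_subj - ss_cond
    df_error = (n - 1) * (k - 1)
    ms_error = ss_error / df_error
    t_crit = stats.t.ppf(1 - (1 - conf) / 2, df_error)
    return t_crit * np.sqrt(ms_error / n)

# Hypothetical switch costs (ms) for 4 participants at 3 RCIs.
costs = np.array([[120, 95, 60],
                  [140, 110, 75],
                  [100, 80, 55],
                  [130, 105, 70]], dtype=float)

print("mean switch costs:", costs.mean(axis=0))
print("standard 95% CI  ±", standard_ci(costs))
print("Loftus-Masson CI ±", loftus_masson_ci(costs))
```

Note that the Loftus & Masson interval has the same half-width at every RCI, because it is based on the pooled subject × condition error term rather than each condition’s own between-subject variance; that is why the two error bars on a plot like panel B can differ so much in width.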

I believe NHST is not essential for my purpose here (or throughout the paper). The panel on the left shows the general pattern of data, together with an estimate of the precision of those estimates. The panel on the right shows the magnitude (and the precision of the estimate) of the switch cost. It is clear that the switch cost decreases as RCI increases. The advantage of the CI approach, in my opinion, is that it forces us to focus on the magnitude and precision of effects, rather than their mere presence/absence.

That is not to say I will never use NHST again. This paper was unique in that I was practically the sole author, and thus can take full responsibility for it. E.J. Wagenmakers and colleagues also make a compelling argument for why hypothesis testing is essential for psychological science (see paper here).

Either way, this paper was fun to write. It was incredibly refreshing to not discuss p-values and “significance”. It is odd, but it felt more scientific. Why not give it a try in your next paper?
