Low Power & Effect Sizes

Yesterday I posted a tweet that has since turned out to be my most popular tweet EVER, with hundreds of retweets and “likes” in 24 hours.

My motivation for the tweet was quite straightforward. I have recently been emailing academics in my department each week on a different topic, in an attempt to raise awareness of issues that bear on the information value of the research we are conducting as a department. This week’s topic was “Power”. In my email, in which I included a copy of Button et al.’s (2013) excellent paper on low power in the neurosciences, I mentioned in passing that power is not just an issue for statistical significance. I have heard people claim before that low power is only an issue when interpreting null results, and that if a study produces a significant outcome, then power is not a concern.

To pre-empt this response to my encouragement to increase the power of our studies, I said in my email: “Studies with low power and significant effects have been shown to over-estimate effect sizes, meaning your low-powered study—although significant—is not giving you precision.”

As soon as I sent the email, I realised that I couldn’t recall ever reading a study that had demonstrated this. I knew that such a study (or studies) would have been conducted, but I had never actually read one myself. It turns out that such studies have indeed been conducted before, as people helpfully pointed out to me on Twitter in response to my tweet.

As I was unaware of these studies (plus it was a Sunday, and I was a little bored), I thought that instead of doing a literature search I would code a simulation demonstrating the inflation of effect sizes in low-powered, significant studies. I emailed the results to my department to demonstrate that what I had said was indeed the case. Then I thought, “Well, I haven’t tweeted much this year, so why not put it on Twitter, too.”

The incredible engagement I have had with this tweet is, I propose, due to this being a rather under-appreciated fact. Indeed, I “knew” that low-powered studies over-estimate effect sizes, but I didn’t KNOW it in the sense of having seen hard evidence for it.

 

Details of the Simulation

Because my tweet was made in passing, I didn’t explain the simulation’s implementation in much detail. I describe it here in case others want to extend the simulation in some way.

The effect size of interest is the correlation between two measures. I arbitrarily chose IQ (mean = 100, SD = 20) and response time (mean = 600 ms, SD = 80 ms). I fixed the “true” effect size at r = 0.3. It turns out that obtaining 80% power for r = 0.3 requires 85 subjects. In my simulation, I wanted to explore a wide range of sample sizes, so I chose the set 10, 20, 30, 50, 85, 170, 500, and 1000.
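As a quick sanity check on that sample-size figure (the post doesn’t say how it was obtained; a tool such as G*Power or R’s pwr package would typically be used), here is a minimal Python sketch using the standard Fisher-z approximation for the sample size of a two-tailed correlation test at α = .05:

```python
import numpy as np
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate N needed to detect a population correlation r with a
    two-tailed test, via the Fisher z approximation (my assumption of the
    method; the post does not specify how the 85 was computed)."""
    z_r = np.arctanh(r)                # Fisher z of the effect size
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-tailed alpha
    z_power = norm.ppf(power)          # quantile for the desired power
    return int(np.ceil(((z_alpha + z_power) / z_r) ** 2 + 3))

print(n_for_correlation(0.3))  # prints 85
```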

For each sample size, N, I simulated 1,000 “studies”. For each simulated study, the following procedure occurred (a code sketch reproducing it appears after the list):

  • Sample N draws from a multivariate normal distribution with the means and SDs for IQ and RT given above and a population correlation coefficient of 0.3
  • Conduct a Pearson’s correlation test between the two measures
  • If the correlation was significant, store the observed correlation coefficient in a new data frame
  • If the correlation was not significant, move on without storing anything
  • After 1,000 studies are completed, plot a boxplot of the stored (significant) effect sizes for that N
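The original code isn’t included in the post, so the following is a minimal Python sketch of my reading of the steps above; the language, the seed, and the plotting details are my own assumptions:

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)  # seed is my choice, not the post's

# Population parameters from the post: IQ (mean 100, SD 20),
# RT (mean 600 ms, SD 80 ms), true correlation r = 0.3
means = [100, 600]
sd_iq, sd_rt, rho = 20, 80, 0.3
cov = [[sd_iq ** 2,          rho * sd_iq * sd_rt],
       [rho * sd_iq * sd_rt, sd_rt ** 2]]

sample_sizes = [10, 20, 30, 50, 85, 170, 500, 1000]
n_studies = 1000
alpha = 0.05

# For each N, keep only the correlations from "significant" studies
sig_effects = {n: [] for n in sample_sizes}
for n in sample_sizes:
    for _ in range(n_studies):
        iq, rt = rng.multivariate_normal(means, cov, size=n).T
        r, p = stats.pearsonr(iq, rt)
        if p < alpha:
            sig_effects[n].append(r)

plt.boxplot([sig_effects[n] for n in sample_sizes])
plt.xticks(range(1, len(sample_sizes) + 1), sample_sizes)
plt.axhline(0.3, linestyle="--")  # the true population effect size
plt.xlabel("Sample size (N)")
plt.ylabel("Observed r among significant studies")
plt.show()
```

The significance filter is what drives the inflation: at N = 10, for example, a correlation only reaches two-tailed significance at α = .05 when |r| is roughly 0.63 or more, so any small study that survives the filter necessarily reports an estimate far above the true 0.3.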

The result was the image in the tweet.

Limitations

Many limitations exist to this simulation, and I point interested readers to the material others linked to in their tweets for a more formal treatment. I didn’t intend this to be a rigorous test, so it shouldn’t be taken too seriously; it was more for my own curiosity, and to provide a graphical image I could send to my colleagues at Keele showing the imprecision of effect sizes under low power. The particular outcomes are likely sensitive to my choice of means, SDs, r, etc. So, don’t generalise the specifics of this simulation; instead, perhaps code your own tailored to your study of interest.

For me, this was a bit of fun. Ten minutes of coding was time well spent on a Sunday!
