The Polls Weren’t Wrong

TL;DR: Trump had a 28% chance to win. We shouldn’t be surprised he won.

I’m not going to comment on the political outcome of last week’s US Presidential elections; enough ink—both pen-ink and eye-ink—has been spilled about that. What I am going to comment on though is the growing feeling the polls were wrong, and why an understanding of probability and statistical evidence might lead us to a more positive conclusion (or at least, less negative).

After last week’s “shock” result—read on for why shock is in scare-quotes—many news articles began to ask “Why were the polls wrong?”. (For a list, see the Google search here.) This question is largely driven by the fact that influential poll-based forecasts heavily favoured a Clinton victory. For example, FiveThirtyEight’s forecast gave Clinton a 71.4% chance of victory, and The New York Times gave her an 85% chance. Pretty convincing, huh? Something must have gone wrong with these polls, right?


All polls are known to be sub-optimal, but even if we found a way to conduct a perfect poll, and this perfect poll predicted a 71.4% chance of a Clinton victory, could we state after observing a Trump victory that the polls were wrong? No, and the reason most of us find this difficult to grasp is that most of us don’t truly appreciate probability.

No poll that I am aware of predicted a 100% chance of a Clinton victory. All polls that I saw had a non-zero chance of a Trump victory. So, even if our “perfect” poll showed that Trump had a 28.6% chance of winning the election, we should not be surprised by a Trump victory. You can be disgusted, saddened, and/or scared, but you should not be surprised. After all, something with a 28.6% chance of occurring has—you guessed it!—a 28.6% chance of occurring.

28.6% translates to roughly a 1 in 3.5 chance. Think of a six-sided die: each number has a 1 in 6 chance (~16.7%) of coming up on a single roll. Before you roll, you expect to see something other than a 6. Are you surprised if you then roll a 6? Probably not; it’s not that remarkable, yet it is less likely than Trump’s 28.6%. Likewise, if the weather-person on TV tells you there is a 28.6% chance of rain today, are you surprised if you get caught in a shower on your lunch break? Again, probably not.

So, the polls weren’t wrong at all. All predicted a non-zero chance of a Trump victory. What was wrong was the conclusion made from the polls.

Richard Royall & “Statistical Evidence”

The above raced through my mind without a second thought when I read numerous articles claiming the polls were wrong, but it was brought into sharper focus today when I was reading Richard Royall’s (excellent) chapter “The Likelihood Paradigm for Statistical Evidence”. In this chapter, he poses the following problem. A patient is given an imperfect diagnostic test for a disease; the test has a 0.94 probability of detecting the disease if it is present in the patient (and therefore a 0.06 probability of missing it when it is present). However, it also has a 0.02 probability of producing a “positive” result even though the disease is not present (i.e., a false positive).

The table below outlines these probabilities of the test result for a patient who does have the disease (X = 1) and a patient who does not have the disease (X = 0).

  Test result | Disease present (X = 1) | Disease absent (X = 0)
  Positive    | 0.94                    | 0.02
  Negative    | 0.06                    | 0.98

Now a patient comes to the clinic and the test is administered. The doctor observes a positive result. What is the correct conclusion the doctor can make based on this positive test result?

  1. The person probably has the disease.
  2. The person should be treated for the disease.
  3. This test result is evidence that this person has the disease.

The person probably has the disease

Intuitively, I think most people would answer that this is correct. After all, the test has a 0.94 probability of detecting the disease if present, and we have a positive test result. It’s unlikely that this is a false positive, because a false positive occurs with a probability of only 0.02.

However, this does not take into account the prior probability of the disease being present. (Yes, I have just gone Bayesian on you.) If the disease is incredibly rare, then it turns out that there is a very small probability the patient has the disease even after observing a positive test outcome. For a nice example of how the prior probability of the disease influences the outcome, see here.
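
To make this concrete, here is a minimal sketch in R (this is not from Royall’s chapter, and the 1-in-1,000 prevalence is purely an illustrative assumption) showing how small the posterior probability of disease can be after a positive result when the disease is rare:

# probabilities from the table above
sensitivity <- 0.94   # P(positive | disease present)
false_pos   <- 0.02   # P(positive | disease absent)
prevalence  <- 0.001  # assumed prior probability of the disease (rare)

# Bayes' theorem: P(disease present | positive result)
posterior <- (sensitivity * prevalence) /
  (sensitivity * prevalence + false_pos * (1 - prevalence))
posterior  # roughly 0.04, so the patient probably does NOT have the disease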

The person should be treated for the disease

It should be clear from the above that this conclusion also depends on the prior probability of the disease. If the disease is incredibly rare, the patient doesn’t likely have it (even after a positive test result), so don’t waste resources (and risk potential harm to the patient). Again, the evidence doesn’t allow us to draw this conclusion.

This test result is evidence that this person has the disease

Royall argues that this is the only conclusion one can draw from the evidence. It is subtly different from Conclusion 1, but follows naturally from the “Law of Likelihood”:

If hypothesis A implies that the probability that a random variable X takes the value x is pA(x), while hypothesis B implies that the probability is pB(x), then the observation X = x is evidence supporting A over B if and only if pA(x) is greater than pB(x)…

In our “disease” example, the observation of a positive result is evidence that this person has the disease because this outcome (a positive result) is better predicted under the hypothesis of “disease present” than the hypothesis “disease absent”. But it doesn’t mean that the person probably has the disease, or that we should do anything about it.
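
Under this law, the strength of the evidence is measured by the likelihood ratio. A quick sketch using the probabilities from the table above:

# probability of a positive result under each hypothesis
p_pos_present <- 0.94  # disease present
p_pos_absent  <- 0.02  # disease absent

# the likelihood ratio: how much better "disease present" predicts the
# observed positive result than "disease absent" does
p_pos_present / p_pos_absent  # = 47, evidence favouring "disease present"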

Back to Trump

Observing a Trump victory when the polls gave him a 28.6% chance isn’t that surprising. The polls weren’t wrong; 28.6% is a non-zero chance. We should interpret this evidence in a similar way to the disease example: these poll results are evidence that Clinton will win. It is a mistake to interpret them as “Clinton will probably win”.

Replication Crisis: What Changes Have your Department Made?

It’s been a year since the Open Science Collaboration’s paper “Estimating the Reproducibility of Psychological Science” was published in Science. It has been cited 515 times since publication, and has been met with much discussion on social networks.

I am interested in what changes your psychology department has made since. Are staff actively encouraged to pre-register all studies? Do you ask faculty members for open data? Are faculty members asked to provide open materials? Do ethics panels check the power of planned studies? Have you embedded Open Science practices into your research methods teaching?

I am preparing a report for my department on how we can address the issues surrounding the replication crisis, and I would be very interested to hear what other departments have done to address these important issues. Please comment on this post with what your department has done!

Do Olympic Hosts Have a “Home-Field” Advantage?

My wife questioned in passing yesterday whether summer Olympic hosts have a home-field advantage; that is, do the hosts generally win more medals in their hosting year than in their non-hosting years?

That a home-field advantage exists in many team sports is generally not disputed—see for example this excellent blog post by the Freakonomics team. But is this true for (generally) individual sports like the Olympics? Most of us Brits recall our amazing—and quite unusual—3rd place finish when we hosted the event in 2012, so anecdotally I can understand why suspicion of a home-field advantage exists.

But is it real? I am quite sure there is an answer to this question on the web somewhere, but I wanted to take this opportunity to try and find an answer myself. Basically, I saw this as an excuse to learn some web-scraping techniques using R statistics.

The Data

Wikipedia holds unique pages for each Summer Olympic games. On these pages are medal tables tallying the number of Gold, Silver, and Bronze medals each competing nation won that year, as well as the total medal count. So, I wrote some code in R that visits each of these pages in turn, finds the relevant HTML table containing the medal counts, and extracts it into my workspace. I only looked at post-Second World War games.

My idea was to plot all the medals won for each host nation for all years they have appeared at the games. I was interested in whether the total number of medals that the host won in their host-year was more than their average (mean) across all the games the host had appeared. If there is some sort of home-field advantage, generally we would expect their host-year to be one of their better years, certainly above their average Olympic performance.

The Results

Below is a plot of the results. The header of each plot shows who the host was that year, and the data in each plot shows the total number of medals won by the host in all of the games they have appeared in. To help interpretation of the results, for each plot, the vertical blue line shows the year that nation hosted the games, and the horizontal red line shows that nation’s mean performance across all their games.

[Figure: total medals won by each post-war host nation across all games, with the hosting year marked by the blue line and the nation’s mean by the red line]

Conclusion

I would take this data as providing some evidence that nations generally perform better when they are hosting the games. 11 out of 16 nations had their best year the year they hosted the games. All nations performed above average the year they hosted the games (although maybe Canada, 1976, just missed out).

The Real Conclusion (And the Code)

Coding in R is fun, and I look for any excuse to work on new projects. This is my first attempt at doing web scraping, and it wasn’t as painful as I thought it would be. Below is the code, relying a lot on the rvest R package which I highly recommend; check out this nice introduction to using it.

The code I wrote is below. It’s certainly not optimal, and likely full of errors, but I hope someone finds it of use. Although I tried to automate every aspect of the analysis, some aspects had to be manually altered (for example to match “Soviet Union” data with “Russia” data).

 

#------------------------------------------------------------------------------
# clear workspace
rm(list = ls())

# set working directory
setwd("D:/Work/Blog_YouTube code/Blog/Olympic Medals")

# load relevant packages
library(rvest)
library(stringr)
library(dplyr)
library(ggplot2)

# suppress warnings
options(warn = -1)
#------------------------------------------------------------------------------


#------------------------------------------------------------------------------
### get a list of all of the host nations

# set the url and extract html elements
host_url <- "http://www.topendsports.com/events/summer/hosts/list.htm"
temp <- host_url %>%
  read_html() %>%
  html_nodes("table")

# extract the relevant table
hosts <- data.frame(html_table(temp[1]))

# remove the years that the Olympics were not held
hosts <- hosts[!grepl("not held", hosts$Host.City..Country), ]

# remove the cities from the host column
countries <- hosts$Host.City..Country
countries <- gsub(".*,", "", countries)
hosts$Host.City..Country <- countries

# remove the Olympics that are ongoing (or are yet to occur) and generally
# tidy the table up. Also, only select post-1948 games.
hosts <- hosts %>%
  select(-Olympiad) %>%
  select(year = Year, host = Host.City..Country) %>%
  filter(year < 2016 & year > 1948)

# remove white space from the names
hosts$host <- gsub(" ", "", hosts$host, fixed = TRUE)

# change host England to Great Britain. 
# change SouthKorea to South Korea
# change USSR to Russia
hosts$host <- gsub("England", "Great Britain", hosts$host, fixed = TRUE)
hosts$host <- gsub("SouthKorea", "South Korea", hosts$host, fixed = TRUE)
hosts$host <- gsub("USSR", "Russia", hosts$host, fixed = TRUE)
#------------------------------------------------------------------------------ 


#------------------------------------------------------------------------------ 
### get the medal tables for each year and store them in one list

# get a vector of all years
years <- hosts$year

# create a list to store the medal tables
medal_tables <- list()

# loop over each year and retrieve the data from Wikipedia
for(i in 1:length(years)){
  
  # what is the current year?
  curr_year <- years[i]
  
  # construct the relevant URL to the Wikipedia page
  url <- paste("https://en.wikipedia.org/wiki/", curr_year, 
               "_Summer_Olympics_medal_table", sep = "")
  
  # retrieve the data from this page
  temp <- url %>%
    read_html() %>%
    html_nodes("table")
  
  # find the html table's position. The medal table is in a "sortable" Wiki 
  # table, so we search for this term and return its position in the list
  position <- grep("sortable", temp)
  
  # get the medal table. Add a new column storing the year
  medals <- data.frame(html_table(temp[position], fill = TRUE))
  medals <- medals %>%
    mutate(Year = curr_year)
  
  # change the names of the "Nation" column, as this is not consistent between
  # games tables
  colnames(medals)[2] <- "Nation"
  
  # remove the weird symbols from the html file (Â)
  nations <- medals$Nation
  nations <- gsub("[^\\x{00}-\\x{7f}]", "", nations, perl = TRUE)
  
  # we need to change "Soviet Union" to USSR for consistency
  nations <- gsub("Soviet Union(URS)", "Russia(RUS)", nations, fixed = TRUE)
  
  # also change West & East Germany to "Germany"
  nations <- gsub("East Germany(GDR)", "Germany(GER)", nations, fixed = TRUE)
  nations <- gsub("West Germany(FRG)", "Germany(GER)", nations, fixed = TRUE)
  medals$Nation <- nations

  # save the medal table and move to the next games
  medal_tables[[i]] <- medals

}
#------------------------------------------------------------------------------ 


#------------------------------------------------------------------------------
### loop over each host, then find how many medals they won in each games and
### store it in data frame

# initialise the data frame
final_data <- data.frame(hosts)
final_data[, as.character(years)] <- 0

for(i in 1:length(hosts$host)){
  
  # get the current host
  curr_host <- hosts$host[i]

  # loop over all years, find the number of medals won by the current host, 
  # and store it in final_data frame
  for(j in 1:length(years)){
    
    # what is the current year?
    curr_year <- years[j]
    
    # get the medal table for the current year
    curr_medals <- medal_tables[[j]]
    
    # get the row for the current host if it is present
    curr_medals <- curr_medals %>%
      filter(str_detect(Nation, curr_host))
    
    # collate the number of medals won if there is data
    if(nrow(curr_medals) > 0){
      final_data[i, j + 2] <- sum(curr_medals$Total)
    } else {
      final_data[i, j + 2] <- 0
    }
  
  } # end of each year loop
  
} # end of each host loop
#------------------------------------------------------------------------------


#------------------------------------------------------------------------------
### now do some plotting
pdf("medals.pdf", width = 12, height = 12)

# change the layout of the plotting window
par(mfrow = c(4, 4))

# loop over each hosting nation
for(i in 1:nrow(final_data)){
  
  # get the current host's data for all years
  host_data <- as.numeric(final_data[i, 3:ncol(final_data)])
  
  # what is their mean number of medals won?
  host_mean <- mean(host_data)
  
  # plot the data!
  plot(years, host_data, xlab = "Year", ylab = "Number of Medals", pch = 19, 
       type = "b", lwd = 2, 
       main = paste(hosts$host[i], "–", years[i], sep = ""))
  abline(v = final_data$year[i], lty = "dashed", col = "blue", lwd = 1.5)
  abline(h = host_mean, lty = "dashed", col = "red", lwd = 1.5)

}

# close the pdf device so the plots are written to file
dev.off()
#------------------------------------------------------------------------------

Solution to #BarBarPlots in R

I came across an interesting project the other day which is calling for a reconsideration of the use of bar plots (#barbarplots), with the lovely tag-line “Friends don’t let friends make bar plots!”. The project elegantly outlines convincing reasons why bar plots can be misleading, and has successfully funded a campaign to “…increase awareness of the limitations that bar plots have and the need for clear and complete data visualization”.

In this post, I want to show the limitations of bar plots that these scientists have highlighted. Then, I provide a solution to these limitations for researchers who want to continue using bar plots that can easily be cobbled together using R-statistics (with the ggplot2 package).

The Data

Say you are a researcher who collects some data (it doesn’t matter on what) from two independent groups and you are interested in whether there is a difference between them. Most researchers would maybe calculate the mean and standard error of each group to describe the data. Then the researcher might plot the data using a bar plot, together with error bars representing the standard error. To provide an inferential test on whether a difference exists, the researcher would usually conduct an independent samples t-test.

Let’s provide some example data for two conditions:

  • condition A (n = 100): mean of 200.17, a median of 196.43, and a standard error of 6.12
  • condition B (n = 100): mean of 200.11, a median of 197.87, and a standard error of 7.19

Here is the bar plot:

[Figure: bar plot of the two condition means with standard error bars]

Pretty similar, right? The researcher sees that there is little evidence for a difference; to test this inferentially they conduct an independent samples t-test, with the outcome t(198) = 0.007, p = .995, Cohen’s d < 0.001. The researcher concludes there is no difference between the two groups.
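
For anyone wanting to check these numbers, here is a minimal way to run the test, assuming the raw_data frame generated by the code at the end of this post:

# independent samples t-test on the simulated data
t.test(dv ~ condition, data = raw_data, var.equal = TRUE)

# Cohen's d, computed by hand from the pooled standard deviation (equal n)
means <- tapply(raw_data$dv, raw_data$condition, mean)
sds   <- tapply(raw_data$dv, raw_data$condition, sd)
abs(diff(means)) / sqrt(mean(sds^2))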

The Problem

The problem raised by the #barbarplot campaign is that bar plots are a poor summary of the distribution of data. The bar plot above suggests there is no difference between the two groups, but the two groups are different! How do I know they are different? I simulated the data. What the bar plot hides is the shape of the underlying distribution of each data set. Below I present a density plot (basically a smoothed histogram) of the same data as above:

[Figure: density plot of the two conditions]

Now we can see that the two groups are clearly different! Condition A is a normal distribution, but condition B is bi-modal. The bar plot doesn’t capture this difference.
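
The density plot itself isn’t included in the code at the end of this post, but something very similar can be produced with a couple of lines of ggplot2 (again assuming the raw_data frame generated below):

# density plot of both conditions
library(ggplot2)
ggplot(raw_data, aes(x = dv, fill = condition)) +
  geom_density(alpha = 0.5) +
  labs(x = "DV", y = "Density")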

The Solution

Density plots are a nice solution to presenting the distribution of data, but can get really messy when there are multiple conditions (imagine the above density plot but with 4 or more overlapping conditions). Plus, researchers are used to looking at bar plots, so there is something to be said about continuing their use (especially for factorial designs). But how do we get around the problem highlighted by the #barbarplot campaign?

One solution is to plot the bar plots as usual, but to overlay the bar plot with individual data points. Doing this allows the reader to see the estimates of central tendency (i.e., to interpret the bar plot as usual), whilst at the same time allowing the reader to see the spread of data in each condition. This sounds tricky to do (and it probably is if you are still using Excel; yes, I’m talking to you!), but it’s simple if you’re using R.

Below is the above data plotted as a combined bar and point plot. As you can see, the difference in distribution is now immediately apparent, whilst retaining the advantages of a familiar bar plot. Everyone wins!

[Figure: combined bar and point plot of the two conditions]

 

R Code

Below is the R code for the combined plot. This includes some code that generates the artificial data used in this example.

#------------------------------------------------------------------------------
# load required packages
library(ggplot2)
library(dplyr)

#--- Generate artificial data

# set random seed so example is reproducible
set.seed(100)

# generate condition A
condition <- rep("condition_A", 100)
dv_A <- rnorm(100, 200, 60)
condition_A <- data.frame(condition, dv = dv_A)

# generate condition B
condition <- rep("condition_B", 100)
dv_B <- c(rnorm(50, 130, 10), rnorm(50, 270, 10))
condition_B <- data.frame(condition, dv = dv_B)

# put all in one data frame
raw_data <- rbind(condition_A, condition_B)

# calculate summary statistics
data_summary <- raw_data %>%
  group_by(condition) %>%
  summarise(mean = mean(dv), 
            median = median(dv),
            se = (sd(dv)) / sqrt(length(dv)))
#------------------------------------------------------------------------------


#------------------------------------------------------------------------------
#--- Do the "combined" bar plot
p2 <- ggplot()

# first draw the bar plot
p2 <- p2 + geom_bar(data = data_summary,
                    aes(y = mean, x = condition),
                    fill = "darkgrey",
                    stat = "identity", width = 0.4)

# draw the error bars on the plot
p2 <- p2 + geom_errorbar(data = data_summary,
                         aes(y = mean, x = condition,
                             ymin = mean - se,
                             ymax = mean + se), stat = "identity",
                         width = 0.1, size = 1)

# now draw the points on the plot
p2 <- p2 + geom_point(data = raw_data, aes(y = dv, x = condition),
                      size = 3, alpha = 0.3,
                      position = position_jitter(width = 0.3, height = 0.1))

# scale and rename the axes, and make font size a bit bigger
p2 <- p2 + coord_cartesian(ylim = c(50, 400))
p2 <- p2 + scale_x_discrete(name = "Condition") +
  scale_y_continuous(name = "DV")

p2 <- p2 + theme(axis.text = element_text(size = 12),
                 axis.title = element_text(size = 14,face = "bold"))

# view the plot
p2
#------------------------------------------------------------------------------

 

10 Recommendations from the Reproducibility Crisis in Psychological Science

This week I gave an internal seminar at my institution (Keele University, UK) entitled “Ten Recommendations from the Reproducibility Crisis in Psychological Science”. The audience was to be faculty members and psychology graduate students. My aim was to collate some of the “best-practices” that have emerged over the past few years and provide direct advice for how researchers and institutions can adapt their research practice. It was hard to come up with just 10 recommendations, but I finally decided on the following:

  1. Replicate, replicate, replicate
  2. Statistics (i): Beware p-hacking
  3. Statistics (ii): Know your p-values
  4. Statistics (iii): Boost your power
  5. Open data, open materials, open analysis
  6. Conduct pre-registered confirmatory studies
  7. Incorporate open science practices in teaching
  8. Insist on open science practices as reviewers
  9. Reward open science practices (Institutions)
  10. Incorporate open science into hiring decisions (Institutions)

The link to the slides is below. I might expand upon this in a fuller blog post in time, if there is interest.

 

 

“Bayesian in 8 Easy Steps” Journal Club

I’ve been trying to organise an online journal club to discuss the papers suggested in Alexander Etz and colleagues’ paper “How to become a Bayesian in 8 easy steps”. Several people have filled out the Doodle poll expressing an interest, but unfortunately not everyone can make the same time. As such, I am afraid I will have to go with the time which the majority of people can make. I am sorry that this will leave some people out.

The most popular day & time was Thursdays at 1pm UTC. Therefore, I propose the first meeting be on Thursday 10th March at 1pm. It will be on Google Hangouts, but I need to spend some time working out how to use this before I pass on details of the meet.

Please complete the following Doodle poll to indicate whether you can make this meeting or not. Please also provide your email address together with your name in the poll. Then, I can move the discussion to email rather than having to post on my blog for updates.

http://doodle.com/poll/7im5vnk9cddc3vyb

See you there!

(Pesky?) Priors

When I tell people I am learning Bayesian statistics, I tend to get one of two responses: either people look at me blankly—“What’s Bayesian statistics?”—or I get scorned for using such “loose” methods—“Bayesian analysis is too subjective!”1. This latter “concern” arises due to (what I believe to be) a misunderstanding of the prior: Bayesian analysis requires one to state one’s prior belief about a certain effect, and then combine this with the data observed (i.e., the likelihood) to update one’s belief (the posterior).

On the face of it, it might seem odd for a scientific method to include “subjectivity” in its analysis. I certainly had this doubt when I first started learning it. (And, in order to be honest with myself, I still struggle with it sometimes.) But, the more I read, the more I think this concern is not warranted, as the prior is not really “subjectivity” in the strictest sense of the word at all: it is based on our current understanding of the effect we are interested in, which in turn is (often) based on data we have seen before. Yes, sometimes the prior can be a guess if we2 have no other information to go on, but we would express the uncertainty of a belief in the prior itself.

The more I understand Bayesian statistics, the more I appreciate that the prior is essential. One understated side-effect of having priors is that they can protect you from dubious findings. For example, I have a very strong prior against UFO predictions; therefore, you are going to have to present me with a lot more evidence than some shaky video footage to convince me otherwise. You would not have to provide me with much evidence, however, if you claimed to have had roast beef last night. Extraordinary claims require extraordinary evidence.

But, during my more sceptical hours, I often succumbed to the the-prior-is-nothing-but-subjectivity-poisoning-your-analysis story. However, I now believe that even if one is sceptical of the use of a prior, there are a few things to note:

  • If you are concerned your prior is wrong and is influencing your inferences, just collect more data: A poorly-specified prior will be washed away with sufficient data.

  • The prior isn’t (really) subjective because it would have to be justified to a sceptical audience. This requires (I suggest) plotting what the prior looks like so readers can familiarise themselves with your prior. Is it really subjective if I show you what my prior looks like and I can justify it?

  • Related to the above, the effect of the prior can be investigated using robustness checks, where one plots the posterior distribution based on a range of (plausible) prior values. If your conclusions don’t depend upon the exact prior used, what’s the problem?

  • Priors are not fixed. Once you have collected some data and have a posterior belief, if you wish to examine the effect further you can (and should) use the posterior from the previous study as your prior for the next study.

These are the points I mention to anti-Bayesians I encounter. In this blog I just wanted to skip over some of these with examples. This is selfish; it’s not really for your education (there really are better educators out there: My recommendation is Alex Etz’s excellent “Understanding Bayes” series, from where this blog post takes much inspiration!). I just want somewhere with all of this written down so next time someone criticises my interest in Bayesian analysis I can just reply: “Read my blog!”. (Please do inform me of any errors/misconceptions by leaving a comment!)

As some readers might not be massively familiar with these issues, I try to highlight some of the characteristics of the prior below. In all of these examples, I will use the standard Bayesian “introductory tool” of assessing the degree of bias in a coin by observing a series of flips.

A Fair Coin

If a coin is unbiased, it should produce roughly equal heads and tails. However, often we don’t know whether a coin is biased or not. We wish to estimate the bias in the coin (denoted theta) by collecting some data (i.e., by flipping the coin); a fair coin has a theta = 0.5. Based on this data, we can calculate the likelihood of various theta values. Below is the likelihood function for a fair coin.

[Figure: likelihood function for theta after observing 50 heads in 100 flips]

In this example, we flipped the coin 100 times, and observed 50 heads and 50 tails. Note how the peak of the likelihood is centered on theta = 0.5. A biased coin would have a true theta not equal to 0.5; theta closer to zero would reflect a bias towards tails, and a theta closer to 1 would reflect a bias towards heads. The animation below demonstrates how the likelihood changes as the number of observed heads (out of 100 flips) increases:

[Animation: the likelihood function as the number of observed heads (out of 100 flips) increases]

So, the likelihood contains the information provided by our sample about the true value for theta.
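
As a rough sketch of how such a likelihood curve can be computed (this is not the code used to produce the figures above), the binomial likelihood for 50 heads in 100 flips can be evaluated over a grid of theta values:

# likelihood of each candidate value of theta after 50 heads in 100 flips
theta      <- seq(0, 1, length.out = 501)
likelihood <- dbinom(50, size = 100, prob = theta)

plot(theta, likelihood, type = "l", lwd = 2,
     xlab = "theta", ylab = "Likelihood")
abline(v = 0.5, lty = "dashed")  # the peak sits at the observed proportion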

The Prior

Before collecting data, Bayesian analysts would specify what their prior belief was about theta. Below I present various priors a Bayesian may have, using the beta distribution (which has two parameters: a and b):

[Figure: four example beta priors]

The upper left plot reflects a prior belief that the coin is fair (i.e., the peak of the distribution is centered over theta = 0.5); however, there is some uncertainty in this prior as the distribution has some spread. The upper right plot reflects total uncertainty in a prior belief: that is, the prior holds that any value of theta is likely. The lower two plots reflect prior beliefs that the coin is biased. Maybe the researcher had obtained the coin from a known con-artist. The lower left plot reflects a prior for a biased coin, but uncertainty about which side the coin is biased towards (that is, it could be biased heads or tails); the lower right plot reflects a prior that the coin is biased towards heads.
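
If you want to play with priors like these, they are easy to draw with dbeta(); the a and b values below are illustrative guesses, not necessarily those behind the figure above:

# four illustrative beta priors (parameter values are guesses)
theta <- seq(0.001, 0.999, length.out = 501)

par(mfrow = c(2, 2))
plot(theta, dbeta(theta, 20, 20),   type = "l", main = "Fair, some uncertainty")
plot(theta, dbeta(theta, 1, 1),     type = "l", main = "Total uncertainty")
plot(theta, dbeta(theta, 0.5, 0.5), type = "l", main = "Biased, direction unknown")
plot(theta, dbeta(theta, 8, 2),     type = "l", main = "Biased towards heads")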

The effect of the prior

I stated above that one of the benefits of the prior is that it allows protection (somewhat) from spurious findings. If I have a really strong prior belief that the coin is fair, 9/10 heads isn’t going to be all that convincing evidence that it is not fair. However, if I have a weak prior that the coin is fair, then I will be quite convinced by the data.

This is illustrated below. Both priors below reflect the belief that the coin is fair; what differs between the two is the strength in this belief. The prior on the left is quite a weak belief, as the distribution (although peaked at 0.5) is quite spread out. The prior on the right is a stronger belief that the coin is fair.

In both cases, the likelihood is the result of observing 9/10 heads.

[Figure: a weak and a strong prior centred on theta = 0.5, each combined with the likelihood for 9/10 heads to give the posterior]

You can see that when the prior is a weak belief, the posterior is very similar to the likelihood; that is, the posterior belief is almost entirely dictated by the data. However, when we have a strong prior belief, our beliefs are not altered much by observing just 9/10 heads.

Now, I imagine that this is the anti-Bayesian’s point: “Even with clear data you haven’t changed your mind.” True. Is this a negative? Well, imagine instead this study was assessing the existence of UFOs rather than simple coin flips. If I showed you 9 YouTube videos of UFO “evidence”, and 1 video showing little (if any) evidence, would you be convinced of UFOs? I doubt it. You were the right-hand plot in this case. (I know, I know, the theta distribution doesn’t make sense in this case, but ignore that!)

What if the prior is wrong?

Worried that your prior is wrong3, or that you cannot justify it completely? Throw more data at it. (When is this ever a bad idea?) Below are the same priors, but now we flip the coin 1,000 times and observe 900 heads. (Note that the proportion of heads is the same as in the previous example.) Now, even our strong prior belief has to be updated considerably based on this data. With enough data, even mis-specified priors have little effect on inference.

[Figure: the same weak and strong priors updated with 900 heads in 1,000 flips]
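
With a beta prior and binomial data the posterior has a closed form, Beta(a + heads, b + tails), so the effect of sample size can be sketched directly. The prior parameters below are illustrative rather than the ones used for the figures:

theta <- seq(0, 1, length.out = 501)

weak_prior   <- c(a = 4,  b = 4)    # loosely centred on theta = 0.5
strong_prior <- c(a = 50, b = 50)   # strongly centred on theta = 0.5

# conjugate updating: the posterior is Beta(a + heads, b + tails)
posterior <- function(prior, heads, flips) {
  dbeta(theta, prior["a"] + heads, prior["b"] + flips - heads)
}

par(mfrow = c(1, 2))

# 9 heads in 10 flips: the strong prior barely moves
plot(theta, posterior(weak_prior, 9, 10), type = "l", col = "blue",
     xlab = "theta", ylab = "Density", main = "9/10 heads")
lines(theta, posterior(strong_prior, 9, 10), col = "red")

# 900 heads in 1,000 flips: both posteriors are pulled close to the data
plot(theta, posterior(weak_prior, 900, 1000), type = "l", col = "blue",
     xlab = "theta", ylab = "Density", main = "900/1000 heads")
lines(theta, posterior(strong_prior, 900, 1000), col = "red")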

To get an idea of how sample size influences the effect of the prior on the posterior, I created the gif animation below. In it, we have a relatively strong (although not insanely so) prior belief that the coin is biased towards “heads”. Then, we start flipping the coin, and update the posterior after each flip. In fact, this coin is fair, so our prior is not in accord with (unobservable) “reality”. As the number of flips increases, though, our posterior starts to match the likelihood in the data. So, “wrong” priors aren’t really a problem. Just throw more data at it.

[Animation: the posterior updated after each flip of a fair coin, starting from a heads-biased prior]

“Today’s posterior is tomorrow’s prior” — Lindley (1970)

After collecting some data and updating your prior, you now have a posterior belief of something. If you wish to collect more data, you do not use your original prior (because it no longer reflects your belief), but you instead use the posterior from your previous study as the prior for your current one. Then, you collect some data, update your priors into your posteriors…and so on.

In this sense, Bayesian analysis is ultimately “self-correcting”: as you collect more and more data, even horrendously-specified priors won’t matter.

In the example below, we have a fairly loose idea that the coin is fair—i.e., theta = 0.5. We flip a coin 20 times, and observe 18 heads. We then update to our posterior, which suggests the true value for theta is about 0.7. But then we wish to run a second “study”; we use the posterior from study 1 as our prior for study 2. We again observe 18 heads out of 20 flips, and update accordingly.

[Figure: the posterior from study 1 used as the prior for study 2, updated with another 18/20 heads]
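
Under the same beta-binomial assumptions, using yesterday’s posterior as today’s prior simply amounts to adding the new counts to the old parameters. A minimal sketch, with an illustrative starting prior:

theta <- seq(0, 1, length.out = 501)

# study 1: a fairly loose prior that the coin is fair, then 18/20 heads
prior_1     <- c(a = 4, b = 4)
posterior_1 <- prior_1 + c(18, 20 - 18)      # Beta(22, 6)

# study 2: the posterior from study 1 becomes the prior; another 18/20 heads
posterior_2 <- posterior_1 + c(18, 20 - 18)  # Beta(40, 8)

plot(theta, dbeta(theta, posterior_2["a"], posterior_2["b"]), type = "l",
     xlab = "theta", ylab = "Density")
lines(theta, dbeta(theta, posterior_1["a"], posterior_1["b"]), lty = "dashed")
lines(theta, dbeta(theta, prior_1["a"], prior_1["b"]), lty = "dotted")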

Conclusion

One of the nicest things about Bayesian analysis is that the way our beliefs should be updated in the face of incoming data is clearly (and logically) specified. Many people’s concerns surround the prior. I hope I have shed some light on why I do not consider this to be a problem. Even if the prior isn’t something that needs to be “overcome” with lots of data, it is reassuring for the anti-Bayesian to know that, with sufficient data, it doesn’t really matter much.

So, stop whining about Bayesian analysis, and go collect more data. Always, more data.

Click here for the R code for this post


  1. Occasionally (although this is happening more and more) I get a slow, wise, agreeing nod. I like those.
  2. I really wanted to avoid the term “we”, as it implies I am part of the “in-group”: those experts of Bayesian analysis who truly appreciate all of its beauty, and are able to apply it to all of their experimental data. I most certainly do not fall into this camp; but I am trying.
  3. Technically, it cannot be “wrong” because it is your belief. If that belief is justifiable, then it’s fair game. You may have to update your prior, though, if considerable data contradict it. But, bear with me.

Surviving a conference!

I am currently in London at my first conference of the year, speaking at the Experimental Psychology Society (EPS) conference at University College London. The EPS meets three times per year, but the London meeting is often the best-attended. (This might be in no small part due to people in early January being fed up of being at home stuffing their faces with turkey & trimmings.) Conferences are an essential part of an academic’s life, but I have not always found conferences so easy. In fact, I still find them difficult on many levels.

This post is not really for anyone other than myself; it’s a post to my old self, as if I have travelled back in time to pass on wisdom gleaned from my five years of conferencing. You see, I am sure I could have got much more out of these five years of being surrounded by fellow researchers, and I feel I have missed out. Were I to go back in time and try all over again, I would probably try to follow this advice.

For those new to conferencing, I hope you might find something of interest here, too.

Speak to People

I start off being guilty of hypocrisy here. “Do as I say, not as I do!” You see, I find it very difficult to talk to people at conferences. I am told I am not as socially-awkward as I fear I am in my own head, but I just am never comfortable talking to new people. But, this is (or so I am told) one of the main benefits of going to conferences. You are surrounded by fellow researchers, some of whom may be working on similar questions, but all of whom are interested in research. Your next successful research collaboration could be born over coffee.

The difficulty (for me) is that speaking to people is genuinely hard. Indeed, this was one of the main attractions of academia: being locked away in your office in isolation, thinking about what interests you, and only communicating via papers.

How wrong I was.

So, when speaking to my old-self, I would say “Speak to people!”. This doesn’t make it easier, so maybe we need a trick. If you attended a talk you found interesting, try to formulate an interesting question. Then, instead of approaching the speaker at coffee time with mundane small-talk, tell them you found their talk interesting and ask them the question you had.

Attend the Talks

I think a lot of people do this anyway, but often when I speak to people about their last conference visit overseas I tend to hear more about the local sight-seeing opportunities than about the science presented. I am in danger of sounding like an incredible bore now, but attend the talks! By all means see the sights, but you are here for the science, so do science.

Ask a Question

Linked to how to make sensible conversations with conference delegates, I always like to ask a question during talks I attend. Now, there is a fine line here, because there is nothing worse than the person who always asks irritating questions just to make themselves look clever. I have been guilty of this in the past, so I am not one to cast stones, but it is not the reason to ask a question.

Listening to a talk with the intention of asking a question forces you to pay more attention to the talk. It forces you to think critically about the science being presented. How easy is it to switch off during the talks and think about your talk (or where you will go for dinner)? If you know you are going to ask a question, you will get more out of the talk. (As a side note, pay attention to the session chair during the talk; they will likely be making notes of questions because it is their job to ask a question if no one in the audience does, to avoid embarrassing silences.)

I tend to start off my question by introducing myself. This helps the speaker track you down later if they wanted to follow up your question, but I tend to do it just out of politeness: I like to know who I am speaking to, so I make sure people know who they are speaking to. I tend to say “Hi, I am Jim Grange from Keele University. Thank you for an interesting talk. I was wondering…[insert question]”.

Sometimes your formulated question should only be an exercise, and shouldn’t be asked. (This reminds me of a delightful quip by Christopher Hitchens: “It’s true that everyone has a book in them. In most cases, that is where it should stay.”) There are times when asking your question is not recommended. Want a nice guide as to whether you should ask the question you have formulated? See this (note that this is not my graph; I don’t know the original source, so if you do know it please let me know so I can reference accordingly!):

[Figure: flowchart for deciding whether to ask your question]

Think Actively During Talks

I guess this is related to the advice above, but there is one thing I always do during talks. For every experimental talk I attend, I like to think of one experiment I would like to do to extend the work presented. I think of how I would design it, what I would expect to find, etc. This is just good practice for thinking about how to design experiments.

I find this very useful because I am so used to thinking about experimental design in task switching contexts, as this is the research I do. There is the danger, then, that when I have a real research question outside of task switching, my design ends up looking rather like a task switching experiment. (“If all you have is a hammer, everything you look at will look like a nail.”)

Want some bonus points? Collect these hypothetical experiments and actually run one! Side-projects are fun.

Look for New Research Programmes

For the past two years I have been attending talks hoping to hear about a new area of research that excites me, one I can then go back to my lab and start working in. You see, I am getting a bit bored with my research area. I have done task switching research almost continuously since my undergraduate thesis (about 8 years now!). I am looking for something new to excite me for the next 10 years as much as task switching has. I always live in hope that my eyes will be opened during a talk about a new research programme that I can get my teeth stuck into. This is one major motivator for attending conferences at the moment.

Attend the Poster Sessions

The poster sessions at most conferences I attend are populated by PhD students presenting their work, so this is an excellent opportunity to speak to “up-and-coming” scientists. Be kind to them; for many, this will be their first step into academic presentations, and they will probably be nervous. Compliment them on their work (but don’t bullshit).

Give a Talk!

This may seem obvious, but don’t be a fly on the wall at conferences. Get stuck in. Giving a talk is the best way. People will be exposed to your research, you will (likely) get critical feedback on your ideas (which is great), and people get to know you (linked with some of the topics above). Talks also allow you to present “work-in-progress”, which will allow you to test your ideas before your project has fully developed. This is important.

Publish your Slides

Many people are now publishing their research papers online so anyone can access them. Why aren’t people (generally) doing the same with their presentation slides? Likely the answer is related to the fact that conferences tend to present unpublished research, so people don’t wish to be scooped. I can sympathise with this, but I think it is short-sighted. You have already “released” your ideas when you gave the talk. So, publish your slides, too.

From 2016, I will be publishing all of my slides online. For those interested, here are my slides for the talk I am giving this Friday:

Conclusion

In sum, enjoy the conference. It’s a time to listen to great ideas and share yours. After all, isn’t this what science is all about? Just don’t be as shy as me. Try and speak to people; they (probably) won’t bite!

Statistics Tables: Where do the Numbers Come From?

This is a blog for undergraduates grappling with stats!

Last week I was having a chat with an undergraduate student who was due to analyse some data. She was double-checking how to determine the statistical significance of her analysis. I mentioned that she could either use SPSS (which would provide the value directly), or obtain the t-value via hand-calculation and look up the critical value in the back of her textbooks. Below is the type of table I was referring to; something all undergraduate students are familiar with. This one is for the t-test:

[Figure: table of critical t-values by degrees of freedom and significance level]

She then asked a very interesting question: “Where do all of these numbers come from, anyway?”. I tried my best to explain, but without any writing utensils to hand, I felt I couldn’t really do the answer justice. So, I thought I’d write a short blog-post for other curious students asking the same question: Where do these numbers come from?

A Simple Experiment

Let’s say a researcher is interested in whether alcohol affects response time (RT). She recruits 30 people into the lab, and tests their RT whilst sober. Then, she plies them with 4 pints of beer and tests their RT again. (Please note this is a poor design. Bear with me.) She finds that mean RT whilst sober was 460ms (SD = 63.63ms), and was 507ms (SD = 73.93ms) whilst drunk. The researcher performs a paired-samples t-test on the data, and finds t(29) = 2.646. Using the table above, she notes that in order for the effect to be significant at the 5% level (typically used in psychology), the t-value needs to exceed 2.043. As the observed value does exceed this “critical value”, she declares the effect significant (p<.05).

Students are familiar with this procedure, and perform it plenty of times during their study. But how many have stopped to really ask “Why does the t-value need to exceed 2.043? Why not 2.243? Or 1.043?”.

A satisfactory answer requires an appreciation of what the p-value is trying to tell you. The correct definition of the p-value is the probability of observing a test statistic as extreme as—or more extreme than—the one you have observed, if the null hypothesis is true. That is, if there is truly no effect of alcohol on RT performance, what is the probability of observing a t-value equal to or higher than 2.646 (the one obtained in the analysis)? The statistics table tells us that—with 30 subjects (and therefore 29 degrees of freedom)—there is only a 5% chance of observing a t-value above 2.043 (or below -2.043). But where did this number come from?

The Power of Simulations

We can work out the answer to this question mathematically (and in fact this is often covered on statistics courses), but I think it is more powerful for students to see the answer via simulations. What we can do is simulate many experiments where we KNOW that the null hypothesis is true (because we can force the computer to make this so), and perform a t-test for each experiment. If we do this many times, we get a distribution of observed t-values when the null hypothesis is true.
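
One minimal way to run such a simulation (a sketch, not the exact code behind the animation below; the paired design with 30 subjects follows the example above) is:

# simulate many experiments in which the null hypothesis is true
set.seed(42)

n_experiments <- 100000   # this may take a little while
n_subjects    <- 30

t_values <- replicate(n_experiments, {
  a <- rnorm(n_subjects, mean = 0, sd = 1)   # condition 1
  b <- rnorm(n_subjects, mean = 0, sd = 1)   # condition 2
  t.test(a, b, paired = TRUE)$statistic      # store the observed t-value
})

# two-tailed critical value at the 5% level with 29 degrees of freedom
crit <- qt(0.975, df = n_subjects - 1)

# proportion of null experiments more extreme than the critical value
mean(abs(t_values) > crit)   # approximately 0.05 in the long run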

Animation of the Simulation

Below is a gif animation of this simulation collecting t-values. This simulation samples 30 subjects in two conditions, where the mean and standard deviation of each condition is fixed at 0 and 1, respectively. This gif only demonstrates the collation of t-values up to 300 experiments. The histogram shows the frequency of certain t-values as the number of experiments increases. The red vertical lines show the critical values for the t-value for 29 degrees of freedom.

[Animation: histogram of t-values accumulating over 300 simulated null experiments, with the critical values for 29 degrees of freedom shown as red vertical lines]

Note that as the simulation develops, the bulk of the distribution of observed t-values falls within the critical values (i.e., they are contained within the limits defined by the red lines). In fact, in the long run, 95% of the distribution of t-values will fall within this window. This is where the critical value comes from! It is the value below which (in absolute terms), in the long run, 95% of t-values from null experiments will fall.

A Larger Simulation

To show this, I repeated the simulation but now increased the number of experiments to 100,000. The histogram is below.

[Figure: histogram of t-values from 100,000 simulated null experiments with 30 subjects]

As before, the bulk of the distribution is contained within the critical t-value range. If we count exactly what percentage of these simulated experiments produced a t-value of 2.043 or greater (or less than -2.043), we see this value is 4.99%—just off the 5% promised by the textbooks! Therefore, 95.01% of the simulated t-distribution falls within the red lines.

Summing Up

In this simulation, we repeated an experiment many times where the effect was known to be null. We found that 95% of the observed t-distribution fell within the range of -2.043 to 2.043. This is what the critical values are telling us: they are the bounds within which, in the long run, 95% of t-values will fall when there is no real effect. Therefore, so the argument goes, if you observe a more extreme value, this is reason to reject the null hypothesis.

The critical value changes depending on the degrees of freedom because the shape of the t-distribution under the null changes with the number of subjects in the experiment. For example, below is a histogram of null t-values in simulated experiments with 120 subjects. The textbooks tell us the critical value is 1.980. Therefore, we can predict that 95% of the distribution should fall within the window -1.980 to 1.980 (shown as the red lines below).

[Figure: histogram of t-values from simulated null experiments with 120 subjects, with the critical values of ±1.980 shown as red vertical lines]

That’s where the numbers come from!