I’m very proud of my most recent paper (Schuch & Grange, in press), soon to be published in the Journal of Experimental Psychology: Learning, Memory, & Cognition. The pride comes not just from being accepted in a prestigious cognitive journal (though I’m certainly thrilled by that), but from aspects of the research that have no metrics. Forget your impact factors; they don’t give me the same buzz as what I experienced during this project:
- It reports an effect I discovered rather serendipitously one summer whilst going through some old (published) data sets. The layperson’s perception is that science moves forward via “Eureka!” moments, whereas in truth it more often follows the “Hmm, that’s odd…” path. It can actually be quite a difficult realisation, because you don’t feel in control (and I’m quite the control freak!), but it’s very exciting indeed; you never know what finding is just around the corner!
- It involved my first collaboration with a member of a German lab I have had great respect for since starting my work in this area (the respect is for the individual AND the lab). I remember reading a paper by Stefanie Schuch (my co-author) when I was doing my undergraduate thesis (Schuch & Koch, 2003; JEP:HPP), and the fact that I now have a paper with her blows my mind every time I think about it. If I could go back in time and tell my younger self that I would one day have a paper with these authors, I really wouldn’t have believed myself. Yet here it is! It’s funny how things turn out. It’s like the academic equivalent of a guitar enthusiast getting the chance to jam with Slash.
- It involved both exploratory and confirmatory analysis, and as such is probably my most “robust” paper to date. E-J Wagenmakers has a nice paper extolling the virtues of confirmatory studies, and I am pleased to have followed much of its advice (although the confirmatory studies were not pre-registered). As I found the effect during exploratory analysis, it would have been hasty to publish the data without seeing whether the effect replicated in a confirmatory study—yet that is something I might well have done even as recently as last year. Instead, I contacted the German lab to see whether they found the effect in their data, too (they did). Then we decided to run a confirmatory study. It was very nice (and reassuring) to see the effect replicate in a different lab. Plus, I felt a little more “grown up” in scientific terms for doing things the right (and rigorous) way.
- I got to use “mini meta-analyses” for the first time. I love data analysis. Sometimes I joke with my colleagues that I would be just as happy in my job if there were no psychology involved, so long as I was doing some form of data analysis. Seriously, I could be measuring and analysing rainfall and I would be just as happy; I just love the process of testing ideas and getting data. So, being able to try out a new analysis method for the first time was great fun. Meta-analyses are typically used in large-scale reviews, where the meta-analysis might contain tens (or more) of data points. However, in a recent book, Geoff Cumming is very persuasive in asking the reader to think of every data point they publish as a point in a future meta-analysis. Such “meta-analytic” thinking is important, as one experimental result in isolation means next to nothing; focus on the bigger picture. In the book, he recommends doing “mini meta-analyses”, where you perform a meta-analysis across only a few studies or experiments. Most papers reporting a new effect in cognitive psychology tend to have 3+ experiments demonstrating the effect and testing for boundary conditions, etc. Cumming suggests you should do a mini meta-analysis across these experiments to get an estimate of the true effect size of the new effect. This is what I did in my study. My initial exploratory data analysis used data from a 2010 paper of mine that had 3 experiments, with a total of 5 conditions showing the “standard” effect. So, in the current paper, I conducted a mini meta-analysis across these 5 conditions, showing the magnitude (and consistency) of the “new” effect. The Figure reporting this meta-analysis is below. It was really neat trying out a new technique, and I think it added real weight to the paper. I shall certainly be using this again in the future!
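For readers curious what a mini meta-analysis looks like in practice, here is a minimal sketch in Python of one common approach: a fixed-effect (inverse-variance-weighted) pooled estimate across a handful of conditions. To be clear, this is not the analysis from the paper itself; the effect sizes and variances below are made-up numbers purely for illustration.

```python
import math

def mini_meta(effects, variances):
    """Fixed-effect mini meta-analysis: inverse-variance-weighted pooling.

    Each condition contributes an effect size and its sampling variance;
    more precise conditions (smaller variance) get larger weights.
    Returns the pooled effect, its standard error, and a 95% CI
    (normal approximation).
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci

# Five hypothetical conditions (e.g. a Cohen's d per condition and
# its variance) -- illustrative values only, not data from the paper.
effects = [0.42, 0.35, 0.51, 0.28, 0.44]
variances = [0.02, 0.03, 0.025, 0.04, 0.02]

pooled, se, ci = mini_meta(effects, variances)
print(f"Pooled effect = {pooled:.3f}, SE = {se:.3f}, "
      f"95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```

The key idea is that the pooled estimate, with its narrower confidence interval, tells you more about the true effect than any one condition in isolation.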
To sum up this short post: this project was very exciting and satisfying from a scientific perspective. I would have retained almost the same joy from it even had it never seen the light of day.
You can’t put an impact factor on that!