Month: August 2014

Saving Multiple Plots to PDF in R

Sometimes when doing data analysis I need to create multiple plots. In R, you can save each plot to a separate file, but this makes it awkward to view the plots rapidly (for example, if you wish to compare several plots in quick succession).

In the video below, I show you how to get around this problem by saving each plot to a separate page within ONE PDF file. This makes it easy to view each plot in quick succession. Indeed, in the video, I show you how this can lead to a form of animation, showing how data changes with each plot.
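For those who prefer text to video, below is a minimal sketch of the approach (the data and plots here are just placeholders; the key point is that pdf() and dev.off() wrap the whole loop):

# Open a single PDF device; by default (onefile = TRUE) every new plot
# is written to a new page of the same file
pdf("all_plots.pdf", width = 7, height = 5)

set.seed(42)
for (i in 1:5) {
  x <- rnorm(100)
  y <- i * x + rnorm(100)
  plot(x, y, main = paste("Plot", i), xlab = "x", ylab = "y")
}

# Close the device; all_plots.pdf now contains one page per plot, and
# paging through it quickly gives the animation-like effect described above
dev.off()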


(Don’t) Lose Your Inhibitions

[Cognitive inhibition can be defined as]…the stopping or overriding of a mental process, in whole or in part, with or without intention —MacLeod (2007)

The concept of cognitive inhibition is rather controversial in cognitive science. When performing a task or an action in the face of competing tasks/actions, is it sufficient to activate a representation of the relevant task/action? Or, does the activation of the competing task/action’s representation need to be inhibited (suppressed)? 

For example, suppose I present you with the stimulus below and ask you to name the red object. You will be hindered by the presence of an overlapping distractor (the green item): you are trying to activate the response “Chair!”, but the competing response “Duck!” makes selection less efficient than if the chair were presented alone.

[Figure: chair_duck (a red chair overlapping a green duck)]

So how do you perform this task successfully? Is it enough to activate the target representation so that it is above the activation level of the distractor’s representation? Or, do you need to activate the target representation whilst at the same time inhibiting the distracting representation? There are people on both sides of this argument. Gordon Logan once stated there are two sets of researchers in cognitive psychology: inhibitophobes and inhibitophiles. Inhibitophiles are pro-inhibition, and use inhibitory constructs in every theory/explanation of cognitive operation. In contrast, inhibitophobes eschew inhibition in any explanation of empirical phenomena.

I argue there is a third set in this debate, one in which I firmly place myself: inhibitosceptics. Whilst I accept a potential role for inhibition, I always like to see whether an empirical phenomenon ascribed to inhibition can be explained with non-inhibitory processes. This is just an application of Occam’s razor: why include activation + inhibition if activation alone is sufficient?

However, one thing can’t be denied: inhibition is an incredibly powerful—and efficient—selection mechanism. 

The Power of Inhibition

If one assumes—quite safely, given our knowledge of neural dynamics—that there is an upper limit to the activation of a particular representation, then in the face of competition it will not be sufficient to merely “boost” the activation of task-relevant representations: there is a ceiling on how much anything can be activated, so the activation of the competing representation will still interfere with selection of the target representation.

By analogy, imagine a stereo playing your favourite CD in your room at home. Let’s say this stereo is on volume 7 (out of 10). Now, imagine you wish to switch to a new CD, and you do this by turning on a separate stereo (the analogy doesn’t work if you simply swap the CD in the old stereo, so bear with me!). In order to hear this new CD over the noise of the old CD, you need to turn the volume of the new stereo above 7. However, there is a limit to the volume of the stereo (10), so you can only turn the volume up so much. Even though the new stereo is playing at 10, you can still hear the old stereo playing at 7, which interferes with your enjoyment of the new CD. You need to turn the volume of the old stereo down as well as turning the volume of the new stereo up. (Or, as Spinal Tap did, you could just go to 11.)

This “turning the volume down” is inhibition. Coming back to selection of tasks/actions, you cannot merely activate the target representation above that of the distracting representation, because there is a ceiling to activation. Inhibitophiles will argue that you need to activate the target representation, and inhibit the competing representation. 

To demonstrate how elegant this selection mechanism is, I ran some simulations in R of the activation traces of two competing representations (a target representation and a distractor representation) during a task similar to that above: name the red object. (The details of the simulation are given in a later section.) The two representations initially receive no input, and so their activations are near zero. However, at time 1,000 the stimulus onsets, and it stays on until the end. From this time, each representation receives some degree of activation (because both a red object and a green object are on the screen). The red representation receives a little more activation, to represent the bias to name the red object (this could be attributed to an attentional weight, for example). Below is a plot of the activation traces with no inhibition in the system (i.e., a pure activation model).

[Figure: noInhibition (activation traces of target and distractor with no inhibition in the system)]

The time of stimulus onset is signified by the vertical red dashed line. As you can see, before this time both actions are equally active, but when the stimulus onsets their activations begin to grow. As the excitatory input to both units is very similar (0.65 for the red target, 0.60 for the green distractor), the growth of both activations is similar, although the red representation becomes more active a short while later. If we assume selection efficiency is inversely proportional to the similarity of the two activation traces, we can conclude that selection of the target action will not be very efficient, because the activation of the distracting representation is also very high.

So, an activation-only model (at least, this variant) leads to strong interference from distracting actions. Let’s therefore add some inhibition to the system. In the next plot, the excitatory input to each representation is the same as before, but now an inhibitory link exists between the two representations. That is, each representation spreads inhibition to its competitor, and the amount of inhibition it sends is proportional to how active it is and to how strong the inhibitory control of the system is: the more active a unit, and the stronger the system’s inhibitory control, the more inhibition that unit spreads. This “winner takes all” process leads to rapid and efficient selection of the target item. The plot below shows activation traces with strong inhibition in the system.

[Figure: strongInhibition (activation traces of target and distractor with strong inhibition in the system)]

As you can see, the target and distracting representations each begin to become active, but quickly the target representation sends strong inhibition to the distracting representation. The large vertical difference between these two traces suggests selection of the target action will be fast and accurate.

It turns out the system doesn’t need much inhibition at all to achieve this selection. Below is a plot where I incremented the inhibitory strength of the model in steps of 0.15 (the value is shown in the header of each panel).

[Figure: variousInhibition (activation traces for a range of inhibitory strengths)]

Inhibition thus appears to be an efficient selection mechanism which overcomes the ceiling effect inherent in activation-only models. So—contrary to the popular “care-free” command—it appears essential that, if you want efficient and error-free selection of relevant actions, you don’t lose your inhibitions!

Simulation Details

The simulation uses a variant of Usher & McClelland’s leaky, competing accumulator model. The change in activation of the target at each time-step is given by

dx_{Target} = \left( I_{Target} - \lambda x_{Target} - \beta x_{Distractor} \right) \frac{dt}{\tau} + \xi \sqrt{\frac{dt}{\tau}}

and the change in activation for the distractor is given by

dx_{Distractor} = \left( I_{Distractor} - \lambda x_{Distractor} - \beta x_{Target} \right) \frac{dt}{\tau} + \xi \sqrt{\frac{dt}{\tau}}

In these equations, I_{i} is the excitatory input into each unit, x_{i} is the current activation of unit i, \lambda is a leakage parameter (that is, the current activation of a unit is subject to decay), \beta is the inhibitory-strength parameter, \frac{dt}{\tau} is a time-step parameter (set to 0.05 in all simulations), and \xi is a Gaussian noise component (mean of zero, SD of 0.05).

The input to each unit was zero until a time of 1,000, at which point I_{Target} was set to 0.65 and I_{Distractor} was set to 0.60. The leakage parameter \lambda was set to 0.6 in all simulations, and the inhibitory strength parameter \beta was varied across the panels of the six-panel plot above, with the value of \beta shown in the header of each sub-plot.
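For the curious, here is a rough R sketch of a simulation along these lines. It is not the exact code used to produce the figures above, so treat the function name, the range of \beta values, and the plotting details as illustrative; I have also clipped activations at zero, as in the original leaky, competing accumulator model.

# Leaky, competing accumulator with two units (target and distractor)
simulate_lca <- function(beta, n_steps = 3000, onset = 1000,
                         I_target = 0.65, I_distractor = 0.60,
                         lambda = 0.6, dt_tau = 0.05, noise_sd = 0.05) {
  x <- matrix(0, nrow = n_steps, ncol = 2)   # activation traces over time
  for (t in 2:n_steps) {
    # excitatory input is zero until stimulus onset
    I <- if (t >= onset) c(I_target, I_distractor) else c(0, 0)
    noise <- rnorm(2, mean = 0, sd = noise_sd)
    # change = input - leakage - inhibition from the competing unit
    dx <- (I - lambda * x[t - 1, ] - beta * rev(x[t - 1, ])) * dt_tau +
      noise * sqrt(dt_tau)
    x[t, ] <- pmax(x[t - 1, ] + dx, 0)       # activations bounded at zero
  }
  x
}

# Activation traces for a range of inhibitory strengths (0 to 0.75 in 0.15 steps)
betas <- seq(0, 0.75, by = 0.15)
par(mfrow = c(2, 3))
for (b in betas) {
  act <- simulate_lca(beta = b)
  matplot(act, type = "l", lty = 1, col = c("red", "darkgreen"),
          main = paste("beta =", b), xlab = "Time", ylab = "Activation")
  abline(v = 1000, col = "red", lty = 2)     # stimulus onset
}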

A paper without p-values

All I seem to be writing about recently is rejection. So, today I thought I would write about a paper I have recently had accepted (they DO happen!). The paper is in press at the Quarterly Journal of Experimental Psychology, and I am very proud of it. (You can read it here; yay for Open Science.)

My pride is a consequence of several factors: it is a product of two years’ (hard) work and thought; it contains my first published attempts at computational modelling (see Appendices); it touches upon a very topical question (do memories for tasks/goals decay, or are they lost due to interference?); Ellen Cross was an incredibly intelligent Year 2 student (at the time) at Keele Psychology, so it was a pleasure to help her career along (she is now doing a PhD in Neuroscience); and it doesn’t contain a single p-value.

Yes, that’s right. There isn’t a single p-value in the paper. No null-hypothesis significance test. I must admit, I submitted the paper with some trepidation, and was expecting the reviewers and editor to ask for p-values to be included (or some other form of NHST), but to my delight there was no mention of it!

I am quite convinced by Geoff Cumming’s assertion that researchers should focus on estimation of effect sizes and meta-analytic thinking rather than NHST. This conviction began by reading his book (see here), and was cemented by his paper in Psychological Science.

So, instead of using NHST and p-values, I used confidence intervals. The paper is littered with plots like the one below; the panel on the left (A) shows the main data (mean response times [RTs] to task-switches and task-repetitions as a function of the interval between tasks, known as the response–cue interval [RCI]) together with their 95% confidence intervals. In this literature, one key effect is whether the difference between switch RT and repetition RT (the “switch cost”) decreases as RCI increases.

 

[Figure: E2rci (panel A: switch and repetition RTs at each RCI with 95% CIs; panel B: switch cost at each RCI with 95% CIs)]

 

To keep with the “estimation” philosophy of Cumming, I was more interested in the magnitude of the switch cost as RCI increased. So, in the right panel (B), I calculated the switch cost (switch RT minus repetition RT) for each RCI, and plotted them with their 95% confidence intervals. There are two types of confidence intervals; the thin one is the standard 95% CI, and the dark one is the Loftus & Masson CI for within-subject designs (see paper here).  (If you want to know how to do two error bars in Excel, I produced a short video here.)
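If you would rather compute those error bars in R than in Excel, here is a rough sketch of both interval types for a simple within-subject design. The toy data and variable names are mine, not from the paper:

# Toy within-subject data: one RT per subject per condition
set.seed(1)
n_subj <- 20
conditions <- c("repeat", "switch")
dat <- data.frame(
  subject   = factor(rep(1:n_subj, times = length(conditions))),
  condition = factor(rep(conditions, each = n_subj)),
  rt        = c(rnorm(n_subj, 650, 80), rnorm(n_subj, 720, 80))
)

# Standard 95% CI half-width for each condition mean
std_ci <- aggregate(rt ~ condition, dat, function(x) {
  qt(0.975, df = length(x) - 1) * sd(x) / sqrt(length(x))
})

# Loftus & Masson (1994) within-subject CI: based on the mean square of the
# subject-by-condition interaction (the repeated-measures error term)
grand  <- mean(dat$rt)
s_mean <- ave(dat$rt, dat$subject)     # each subject's mean RT
c_mean <- ave(dat$rt, dat$condition)   # each condition's mean RT
ss_int <- sum((dat$rt - s_mean - c_mean + grand)^2)
df_err <- (n_subj - 1) * (length(conditions) - 1)
lm_ci  <- qt(0.975, df = df_err) * sqrt((ss_int / df_err) / n_subj)

std_ci   # per-condition standard CI half-widths
lm_ci    # single within-subject CI half-width, common to all conditions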

I believe NHST is not essential for my purpose here (or throughout the paper). The panel on the left shows the general pattern of data, together with the precision of those estimates. The panel on the right shows the magnitude (and the precision of the estimate) of the switch cost. It is clear that the switch cost decreases as RCI increases. The advantage of the CI approach—in my opinion—is that it forces us to focus on the magnitude & precision of effects, rather than their mere presence/absence.

That is not to say I will never use NHST again. This paper was unique in that I was practically the sole author, and thus can take full responsibility for it. E.-J. Wagenmakers and colleagues also make a compelling argument for why hypothesis testing is essential for psychological science (see paper here).

Either way, this paper was fun to write. It was incredibly refreshing to not discuss p-values and “significance”. It is odd, but it felt more scientific. Why not give it a try in your next paper?