Category Archives: social priming

Posts on social priming, whatever your definition of that may be

Easing the pain of preregistration: Data Unboxing Parties!

Hi all. It’s been a while, for sure – lots of stuff has been going on over the past months, including giving a TEDx talk, getting a new puppy, getting an annoying diagnosis, and presenting at the Parapsychological Association meeting in Boulder. Yes, I’ve turned to the dark side. Sort of. In Boulder, I met several wonderful people, and one of them gave me a brilliant idea. As you know, in parapsychology people happily borrow concepts from physics, often with disastrous/hilarious results (see here). However, I think this idea I got in Boulder from a physicist will appeal to many.

It is about preregistration. Yet another epic blow to my Introduction to Psychology lecture slides: smiling does not make you feel better! Well, thanks, Eric-Jan, now I have to disappoint another 450 students. What’s next, terror management theory not being true, so I can throw out my joke about reminding students of their mortality right before the weekend (oh… cr@p)? Anyway, all this has led to another revival of the preregistration debate. Should we preregister our studies?

I am not going to reiterate what has already been said about the topic. The answer is unequivocally YES. Really, there is absolutely no sound argument against preregistration. It does not take away creativity, it does not take away ‘academic freedom’, all it does is MAKE YOUR SCIENCE BETTER. Still, many people fear that preregistration is at best unnecessary and at worst a severe limitation of academic freedom.

In all seriousness – I think we need to be a bit less stressed out about preregistration. Basically, it’s a very simple procedure in which you state your a priori information and beliefs about the outcomes of your manipulation. Together with the actual data and results, this gives a far more complete record of what an empirical observation (i.e., the outcomes of a study) actually tells us. That’s it. Nothing more. The preregistration is simply an extension of the data, telling us the beliefs and expectations of the researcher, allowing for better interpretation of the data. And yes, this is what the introduction section of a paper is for, but simply think of your preregistration as a verifiable record of that a priori information, just as your uploaded/shared data are a verifiable record of your observations.

This also means that if you have *not* preregistered your study or analysis, it’s still a valuable observation. But less so than a preregistered one, for the simple reason that we lack a verifiable account of the a priori information and need to take the researcher’s word for it – similar to researchers who refuse to share empirical data for no good reason.

All this does not preclude exploratory analyses – you can still do them. However, it’s up to the reader to decide upon the interpretation of such outcomes. A preregistration (or lack thereof) will make this process easier and more transparent.

Now, how to implement all this in good lab practice and make it less of a pain?

A physicist I met in Boulder told me a very interesting thing about his work (among others at LIGO): for any experiment, they first develop the data analysis protocols. In this stage, they allow themselves all degrees of freedom in torturing pilot datasets. Once the team has settled on the analysis, the protocols are registered, and data collection begins. All data are stored in a ‘big black box’. No one is allowed to look at the data, touch the data, manipulate the data, or think about the data (I think I made that last one up). Then, once the data are in, the team gathers, with several crates of beer/bottles of wine/spirits/etc., and unboxes the data by running the preregistered script. The alcohol has two main uses: to celebrate if the data confirm the hypothesis, or to drown the misery if they do not.

I found this such a great idea that I’m implementing it in my lab as well. We’re going to have data unboxing parties!

Ideally, we’ll do stuff like this from now on:

[1] go crazy on experimental design
[2] run pilot *), get some data in (for most of my stuff, typically N=5)
[3] write analysis script to get desired DVs. If necessary, go back to [1]
[4] preregister study at aspredicted.org, upload analysis software and stimulus software to OSF
[5] collect data, store data immediately on secure server
[6] get lab together with beer, run analysis script (see the sketch below)
[7] sleep off hangover, write paper regardless of outcome
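
For what it’s worth, here is a rough sketch of what the unboxing script in step [6] could look like in Python. Everything specific – the file layout, column names, and the paired t-test – is a made-up placeholder; the only point is that the preregistered analysis runs untouched on the locked data:

```python
# A rough sketch of a 'data unboxing' script: the preregistered analysis runs,
# untouched, on the locked data. All paths, column names, and the test itself
# are hypothetical placeholders.
from pathlib import Path

import pandas as pd
from scipy import stats

DATA_DIR = Path("secure_server/locked_data")   # nobody touched this during collection
REPORT = Path("results/unboxing_report.txt")


def compute_dvs(trials: pd.DataFrame) -> pd.DataFrame:
    """Preregistered DV: mean RT per participant per condition."""
    return (trials.groupby(["participant", "condition"])["rt"]
                  .mean()
                  .unstack("condition"))


def main() -> None:
    # Load every participant file exactly as collected.
    frames = [pd.read_csv(f) for f in sorted(DATA_DIR.glob("sub-*.csv"))]
    dvs = compute_dvs(pd.concat(frames, ignore_index=True))

    # Preregistered test: paired t-test between the two conditions.
    t, p = stats.ttest_rel(dvs["primed"], dvs["neutral"])
    REPORT.parent.mkdir(parents=True, exist_ok=True)
    REPORT.write_text(f"t({len(dvs) - 1}) = {t:.2f}, p = {p:.4f}\n")
    print("Unboxed! Open the beer and read:", REPORT)


if __name__ == "__main__":
    main()
```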

So – who’s in on this?!

*) the pilot as mentioned here is a full run of the procedure. It is not meant to get an estimate of an effect size, or to see ‘if the manipulation works’, but rather to check whether the experimental software runs properly, whether participants understand what they need to do, whether they come up with alternative strategies to do a task, etc. The data from these sessions are used to fine-tune my analyses – often, I check for EEG components that need to be present in the data. My ‘signature paradigm’, for example, evokes a strong 10 Hz oscillation. If I cannot extract that from a single dataset, I know something is wrong. So that’s what the pilot is for.
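
By way of illustration, a pilot sanity check of that kind could be as simple as the sketch below (the file name, sampling rate, and the power-ratio threshold are all assumptions, not my actual pipeline):

```python
# Pilot sanity check: is there a clear ~10 Hz peak in this single pilot recording?
# The file name, sampling rate, and threshold are hypothetical.
import numpy as np
from scipy.signal import welch

fs = 500                                   # sampling rate in Hz (assumed)
epochs = np.load("pilot_sub01_epochs.npy") # shape: (n_epochs, n_samples), one channel

freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
mean_psd = psd.mean(axis=0)                # average spectrum over epochs

alpha = (freqs >= 8) & (freqs <= 12)
reference = (freqs >= 15) & (freqs <= 25)
ratio = mean_psd[alpha].mean() / mean_psd[reference].mean()

print(f"Alpha/reference power ratio: {ratio:.2f}")
if ratio < 2:   # arbitrary threshold for this (hypothetical) paradigm
    print("No clear 10 Hz peak - check electrodes, stimulus timing, or the script.")
```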

Call for suggestions!

Hi all,

Shortly we will be running a pretty cool EEG experiment on perceptual decision making in romantically involved couples. Basically, a couple (let’s call them Alice and Bob) will come into the lab, each be assigned their own computer, and then take turns in a perceptual decision making task (see Jolij and Meurs, 2011, for more details on the task itself). So, first Alice will get to see a trial and give a response; then Bob will see Alice’s response (as a cue) and do the trial, and to conclude, both will see each other’s answers. During the experiment, we’ll be measuring EEG (NB: only 8 channels). Before the experiment, both partners fill out a series of questionnaires on relationship duration, quality, etc.

In the spirit of open science, I thought it might be useful to ask you all what would make this dataset useful for you. I mean, we are going to test these participants anyway, in a rather non-typical setup (two EEG measurements simultaneously, meaning you can look at all kinds of interpersonal processes, EEG synchronization, etc.), so if there is anything I could add that does not take too much time and would make this an interesting dataset for you, let me know. Think of maybe a block of eyes-closed EEG data during a breathing exercise to study interpersonal synchrony, a particular questionnaire, additional markers, whatever.

As long as it does not add too much time to the experimental protocol, or take up too much programming time, I am happy to include stuff. Please do get in touch if you want to know more: j.jolij@rug.nl.
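
To give an idea of the kind of interpersonal analyses such a dataset would afford, here is a minimal sketch of computing alpha-band phase-locking between one of Alice’s channels and one of Bob’s. The file names, data shapes, sampling rate, and frequency band are assumptions for illustration only:

```python
# Sketch: inter-brain phase-locking value (PLV) in the alpha band between one
# channel of Alice's EEG and one of Bob's. File names, shapes, sampling rate,
# and frequency band are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                     # sampling rate (assumed)
alice = np.load("alice_ch1_trials.npy")      # shape: (n_trials, n_samples)
bob = np.load("bob_ch1_trials.npy")          # same shape, recorded simultaneously

# Band-pass both signals in the alpha band (8-12 Hz).
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alice_alpha = filtfilt(b, a, alice, axis=-1)
bob_alpha = filtfilt(b, a, bob, axis=-1)

# Instantaneous phase difference via the analytic signal.
phase_diff = np.angle(hilbert(alice_alpha, axis=-1)) - np.angle(hilbert(bob_alpha, axis=-1))

# PLV per trial: length of the mean phase-difference vector over time.
plv = np.abs(np.exp(1j * phase_diff).mean(axis=-1))
print(f"Mean inter-brain PLV: {plv.mean():.3f} (compare against surrogate/shuffled pairs)")
```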

Within-subject designs in social priming – my attempts

TL;DR summary: it’s perfectly possible to do a within-subject design for ‘social’ priming.

This is going to be an attempt at a more serious post, about some actual research I have done. Moreover, I really need to get back into writing mode after summer leave. Just starting cold turkey on the >7 manuscripts still waiting for me did not work out that well, but maybe a nice little blog will do the trick!

This weekend, I was engaged in a Twitter exchange with Rolf Zwaan, Sam Schwarzkopf, and Brett Buttliere about social priming (what else? Ah, psi maybe!). A quick recap: social (or better: behavioural) priming refers to the modification of behaviour by environmental stimuli. For example, washing your hands (and thus ‘cleaning your conscience’) reduces the severity of moral judgments. Reading words that have to do with elderly people (‘bingo’, ‘Florida’) makes you walk slower. Or, feeling happy makes you more likely to see happy faces.

The general idea behind such effects is that external stimuli trigger a cascade of semantic associations, resulting in a change in behaviour. ‘Florida’ triggers the concept ‘old’, the concept ‘old’ triggers the concept ‘slow’, and thinking about ‘slow’ automatically makes you walk slower. Indeed, semantics are closely tied to information processing in the brain – a beautiful study from the lab of Jack Gallant shows that attention during viewing of natural scenes guides activation of semantically related concepts. However, it is questionable whether the influence of external stimuli and semantic concepts is really as strong as some researchers would have us believe. Sam Schwarzkopf argued in a recent blog post that if we were so strongly guided by external stimuli, our behaviour would be extremely unstable. Given the recent string of failures to replicate high-profile social priming studies, many researchers have become very suspicious of the entire concept of ‘social priming’.

What does not exactly help is that the average social priming study is severely underpowered. People like Daniel Lakens and Uli Schimmack have done a far better job of explaining what that means than I can, but basically it boils down to this: if you’re interested in running a social priming study (example courtesy of Rolf Zwaan), you pick a nice proverb (e.g., ‘sweet makes you sweeter’), and come up with an independent variable (your priming manipulation, e.g. half of your participants drink sugary water; the other half lemon juice) and a dependent variable (e.g., the amount of money a participant would give to charity after drinking the priming beverage). I’ve got no idea whether someone did something like this… oh wait… of course someone did.

Anyway, this is called a ‘between-subjects’ design. You test two groups of people on the same measure (amount of money donated to charity), but the groups are exposed to different primes. To detect a difference between your two groups, you need to test an adequate number of participants (or, your sample needs to have sufficient power). How many is adequate? Well, that depends on how large the effect size is. The effect size is the mean difference divided by the pooled standard deviation of your groups, and the smaller your effect size, the more participants you need to test in order to draw reliable conclusions. The problem with many social priming-like studies is that participants are only asked to produce the target behaviour once (they come into the lab, drink their beverage, fill out a few questionnaires, and that’s it). This means that the measurements are inherently noisy. Maybe one of the participants in the sweet group was in a foul mood, or happened to be Oscar the Grouch. Maybe one of the participants in the sour group was Mother Theresa. Probably three participants fell asleep, and at least one will not have read the instructions at all.

To cut a long story short, if you don’t test enough participants, you run a large risk of missing a true effect (a false negative), but you also risk finding a significant difference between your groups whilst there is no true effect present (a false positive). Unfortunately, many social priming studies have used far too few participants to draw valid conclusions. This latter point is significant (no pun intended). Given that journal editors until recently were primarily interested in ‘significant results’ (i.e., studies that report a significant difference between two groups), getting a significant result meant ‘bingo’. A non-significant result… well, too bad. Maybe the sweet drink wasn’t sweet enough to win over Oscar the Grouch! Add a sugar cube to the mix, and test your next batch of subjects. If you test batches of around 30 participants (i.e., 15 per group, which was not unusual in the literature), you can run such an experiment in half a day. Sooner or later (at least within two weeks if you test full time), there will be one study that gives you your sought-after p < .05. Boom, paper in!
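
To put some numbers on this, here is a quick power calculation using statsmodels. The effect sizes are just the conventional small/medium/large benchmarks, not estimates of any real priming effect:

```python
# How many participants per group does a between-subjects priming study need?
# Effect sizes are the conventional small/medium/large benchmarks.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

for d in (0.2, 0.5, 0.8):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80)
    print(f"d = {d}: ~{n:.0f} participants per group needed for 80% power")

# And the reverse question: what does n = 15 per group actually buy you?
for d in (0.2, 0.5, 0.8):
    power = analysis.power(effect_size=d, nobs1=15, alpha=0.05)
    print(f"d = {d}, n = 15 per group: power = {power:.2f}")
```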

In cognitive psychology and neuroscience we tend to be a bit jealous of such ‘easy’ work. Our experiments are harder to pull off. Right before summer, one of my grad students finished her TMS experiment, for which she tested 12 participants. For 18 hours. Per participant. In the basement of a horrible 1960s building with poor air conditioning, whilst the weather outside was beautiful. Yes, a position in my lab comes with free vitamin D capsules, for occupational health & safety reasons.

Moreover, the designs that we typically employ are within-subject designs. We subject our participants to different conditions and compare performance between conditions. Each participant is his/her own control. In particular for physiological measurements such as EEG this makes sense: the morphology, latency, and topography of evoked potentials vary wildly from person to person, but are remarkably stable within a person. This means that I can eliminate a lot of noise in my sample by using a within-subjects design. As a matter of fact, the within-subjects design is pretty much the default in most EEG (and fMRI, and TMS, and NIRS, etc.) work. Of course we have to deal with order effects, learning effects, etc., but careful counterbalancing can counteract such effects to some extent.
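
A rough way to see the gain: for the same underlying effect, the paired effect size becomes d_z = d / √(2(1 − r)), where r is the correlation between the two conditions across participants. The sketch below uses purely illustrative numbers:

```python
# Illustrative comparison: power of between- vs within-subjects tests of the same
# underlying effect, given a within-person correlation r between conditions.
# All numbers are made up for illustration.
import numpy as np
from statsmodels.stats.power import TTestIndPower, TTestPower

d = 0.4      # hypothetical between-subjects effect size
r = 0.8      # hypothetical correlation between conditions (often high for EEG measures)
n = 30       # participants per group (between) or in total (within)

d_z = d / np.sqrt(2 * (1 - r))   # equivalent paired (within-subjects) effect size

power_between = TTestIndPower().power(effect_size=d, nobs1=n, alpha=0.05)
power_within = TTestPower().power(effect_size=d_z, nobs=n, alpha=0.05)

print(f"d = {d}, r = {r} -> d_z = {d_z:.2f}")
print(f"Between-subjects, {n} per group: power = {power_between:.2f}")
print(f"Within-subjects, {n} participants: power = {power_within:.2f}")
```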

Coming from this tradition, when I started running my own ‘social priming’ experiments, I naturally opted for a within-subjects design. My interest in social priming comes from my work on unconscious visual processing – very briefly, my idea about unconscious vision is that we only use it for fight-or-flight responses, and that we otherwise rely on conscious vision. The reason for this is that conscious vision is more accurate, because of the underlying cortical circuitry. Given that (according to the broad social priming hypothesis) our behaviour is largely guided by the environment, it is important to base our behaviour on what we consciously perceive (otherwise we’d be acting very oddly all the time). This led me to hypothesize that social priming only works if the primes are perceived consciously.


I tested this idea using a typical masked priming experiment: I presented a prime (in this case, eyes versus flowers, after this paper), and measured the participant’s response in a social interaction task after exposure to the prime – 120 trials in total (2 primes (eyes/flowers) x 2 conditions (masked/not masked) x 30 trials per prime per condition). The ‘social interaction’ was quite simple: the participant got to briefly see a target stimulus (a happy versus a sad face), had to guess the identity of the face, and bet money on whether the answer was correct. Critically, we told the participant (s)he was not betting her/his own money, but that of the participant in the lab next door. Based on the literature, we expected participants to be more conservative on ‘eye’-primed trials, because the eyes would remind them to behave more prosocially and not waste someone else’s money.

Needless to say, this horrible design led to nothing. Major problem: it is very doubtful whether my DV truly captured prosocial behaviour. After this attempt, we tried again in a closer replication of earlier eye-priming studies using a between-subjects design and a dictator task, but after wasting >300 participants we came to the conclusion many others had drawn before: eye priming does not work.

But this doesn’t mean within-subjects designs cannot work for priming studies. There’s no reason why you could not use a within-subjects design to test, for example, whether having a full bladder makes you act less impulsively. As a matter of fact, I’ve proposed such a study in a blog post from last year.

Another example: I am not sure if we could call it ‘social priming’, but a study we did a while ago used a within-subject design to test whether happy music makes you better at detecting happy faces, and vice versa. Actually, this study fits the bill of a typical ‘social priming’ study – activation of a high-level concept (happy music) has an effect on behaviour (detecting real and imaginary faces) via a very speculative route. It’s a ‘sexy’ topic and a finding anyone can relate to. It may not surprise you we got a lot of media attention for this one…

Because of the within-subjects design we got very robust effects. More importantly, though, we have replicated this experiment twice now, and I am aware of others replicating this result. As a matter of fact, we were hardly the first to show these effects… music-induced mood effects on face perception had been reported as early as the 1990s (and we nicely cite those papers). The reason I am quite confident in the effect of mood on perception is that in our latest replication, we also measured EEG, and indeed find an effect of mood congruence on visual evoked potentials. Now, I am not saying that if you cannot find a neural correlate of an effect, it does not exist, but if you do find a reliable one, that is pretty convincing evidence that the effect *does* exist.

What would be very interesting for the social priming field is to come up with designs that show robust effects in a within-subjects setting, and ideally, effects that show up on physiological measures. And to be frank, it’s not that difficult. Let’s suppose that elderly priming is true. If concepts related to old people indeed make you behave like grandpa, we should not just see this in walking speed, but also in cognitive speed. Enter the EEG amplifier! Evoked potentials can be used to nicely assess the speed of cognitive processing – in a stimulus recognition task, for example, the latency of the P3 correlates with reaction time. If ‘old’ makes you slower, we’d expect longer P3 latencies for trials preceded by ‘old’ or a related word than for trials preceded by ‘young’. A fairly easy experiment to set up, which can be run in a week.
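
A minimal sketch of what that analysis could look like (the file name, epoch layout, channel, and search window are assumptions on my part): extract the P3 peak latency per participant and prime condition at a parietal channel, and compare the latencies with a paired test.

```python
# Sketch: compare P3 peak latency after 'old' vs 'young' primes at Pz.
# File name, epoch layout, sampling rate, and search window are hypothetical.
import numpy as np
from scipy import stats

fs = 250                                    # sampling rate (assumed)
times = np.arange(-0.2, 0.8, 1 / fs)        # epoch from -200 to 800 ms

# Averaged ERPs at Pz: shape (n_subjects, 2, n_samples); condition 0 = old, 1 = young.
erps = np.load("erps_pz.npy")

window = (times >= 0.30) & (times <= 0.60)  # typical P3 search window
win_times = times[window]

# Peak latency = time of the maximum amplitude within the search window.
lat_old = win_times[erps[:, 0, window].argmax(axis=-1)]
lat_young = win_times[erps[:, 1, window].argmax(axis=-1)]

t, p = stats.ttest_rel(lat_old, lat_young)
print(f"P3 latency old: {lat_old.mean() * 1000:.0f} ms, young: {lat_young.mean() * 1000:.0f} ms")
print(f"Paired t-test: t({len(lat_old) - 1}) = {t:.2f}, p = {p:.3f}")
```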

Or even better – if, as the broad social priming hypothesis postulates, social priming works by means of semantic association, we should be able to find evidence of semantic relations between concepts. Again something that is testable, for example in a simple associative priming task in which you measure N400 amplitudes (an index for semantic relatedness). As a matter of fact, we have already run such experiments, in the context of Erik Schoppen‘s PhD project, with some success – we were able to discriminate Apple from Android enthusiasts using a very simple associative priming test, for example.

All in all, my position in the entire social priming debate has not changed that much. I do believe that environmental stimuli can influence behaviour to quite some extent, but I am very skeptical of many of the effects reported in the literature, not least because of the very speculative high-level semantic association mechanisms that are supposed to be involved. In order to lend more credibility to the claims of ‘social priming’, the (often implicit) hypotheses about the involved mechanisms have to be tested. I think we (cognitive/social neuroscientists) are in an excellent position to help flesh out paradigms and designs that are more informative than the typical between-subjects designs in this field. At least I think working together with our colleagues in social psychology in this way is more fruitful than trying to ‘educate’ social priming researchers about how ‘wrong’ they have been, doing direct replications (however useful) of seminal studies, and basking in Schadenfreude when yet another replication attempt fails or a meta-analysis shows how flimsy an effect is. We know that stuff already. No need to piss each other off IMO (I am referring to a rather escalated ISCON discussion from last week).

Let’s do some cool stuff and learn something new about how the mind works. Together. Offer made last year still stands.

On (social) priming

Hmm, never thought (micro)blogging would be such an interesting experience… turns out it’s an excellent way to be exposed to different views and opinions. Last week, I posted an unpublished manuscript and dataset in which we attempted to make people behave more prosocially after priming them with eyes, an effect originally published by Haley and Fessler in 2005, and conceptually replicated by Bateson, Nettle, and Roberts in 2006.

I did not set out to directly replicate this effect, or to test its existence. Rather, I was interested in putting my theories on conscious versus unconscious perception to the test. In several papers, but most importantly Jolij and Lamme (2005), we found that people can respond to unconsciously processed visual information (‘blindsight’), but only do so when they are in ‘guessing mode’. I proposed that this may be the result of ‘repression’ of unconscious information.

What does this have to do with priming? Well, here I explain why I think we sometimes do, and sometimes do not, find blindsight. The idea is that unconscious information processing is great, but may be inaccurate. Since our behavior is so easily influenced by all kinds of external stimuli (yes, I was a firm believer in ‘social’ priming!), you don’t want any inaccurate information influencing you, and therefore the cognitive system represses the inaccurate information it gets from the unconscious visual pathway (most of the time).

To test that idea, we came up with the experiment I posted last week. We took a ‘social’ priming effect, and manipulated prime visibility by masking the primes. Our prediction was that priming would not work for masked primes. And that is what we found! Sadly, the priming effect was also absent for the visible primes. After running several studies, totaling almost 400 participants, I gave up. I simply could not find any reliable evidence for the effect I was looking for, not with tokens, not with study credits, not with money, not with questionnaires. Now, all this was in 2008-2010.

After an inspiring talk by Zoltan Dienes about half a year ago (see here) I went through my archives to see if there might be anything there I could analyse with his methods, and found this dataset back. I ran a Bayes factor analysis, and found that the Bayes factors were informative and yielded substantial evidence for the null hypothesis of no effect. In other words: in the presented dataset it’s not just a null effect – the data are actually 5.26 times more likely under the hypothesis that there is no effect of eye primes in this context than under the hypothesis that there is. Given that there apparently was information in the data, despite a fairly low N (although nicely in line with your average priming experiment), we decided to give it a try and publish it.
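
For the curious, a re-analysis like this is nowadays only a few lines of code – below is a sketch using pingouin’s default JZS Bayes factor for a t-test, run on simulated placeholder data rather than the actual eye-priming data:

```python
# Sketch: default JZS Bayes factor for a two-sample t-test (as in JASP / Rouder et al.).
# The data below are simulated placeholders, not the actual eye-priming data.
import numpy as np
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(1)
eyes = rng.normal(5.0, 2.0, size=30)       # DV for the eye-primed group (fake)
flowers = rng.normal(5.0, 2.0, size=30)    # DV for the control group (fake)

t, p = stats.ttest_ind(eyes, flowers)
bf10 = float(pg.bayesfactor_ttest(t, nx=len(eyes), ny=len(flowers), paired=False))

print(f"t = {t:.2f}, p = {p:.3f}")
print(f"BF10 = {bf10:.3f}  ->  BF01 = {1 / bf10:.2f} in favour of the null")
```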

Admittedly, with today’s knowledge, I tried to capitalize somewhat on the debate on social priming, and it turns out some found one sentence in particular somewhat offensive:

The lack of a firm theoretical background, problems with statistical power, potentially flawed methodology [6, 7], the exposure of several high-profile studies as fraudulent [8, 9], but most importantly, repeated failures to directly replicate several effects [10-13] has led to strong skepticism towards the notion of social priming.

In all fairness, I do think this statement is accurate. There is skepticism towards ‘social’ priming, and the reason for that is that quite a few direct replications have failed, and that QRPs and fraud have been uncovered in others. Most importantly, though, we have yet to see a solid explanation as to why we sometimes do, and sometimes do not, get these effects.

But the problem is: what is ‘social priming’ in the first place?

On the ISCON Facebook page, Jeff Sherman once mentioned that all priming (both ‘cognitive’ and ‘social’) is priming, because it’s all about priming behavior. Norbert Schwarz jokingly defined ‘social priming’ as ‘priming cognitive psychologists cannot replicate’.

Now, both of these statements are obvious oversimplifications. One of the problems plaguing the ongoing debate on priming is the lack of a clear taxonomy of what comprises different kinds of priming, and indeed, I myself have been guilty of not properly defining what I mean by ‘social priming’ in the manuscript I posted last week.

So, let me give you my 2c on the matter. I agree with Jeff Sherman for the most part. Priming is the modification of the processing of subsequent stimuli, and of behavior, by a given prime stimulus. Period. What distinguishes ‘cognitive’ from ‘social’ priming in my understanding (which, arguably, may be totally wrong) is mainly the length and complexity of the processing chain between prime and behavior. In what we call ‘cognitive’ priming, the chain is short. The archetypical ‘social priming’ study typically relies on a long chain of events between prime and behavior.

Most priming effects I employ in the lab are about visuomotor transformations. In a task in which participants have to respond to the direction of a target arrow, presenting a prime arrow pointing in the same direction speeds up responses to the target, even if the prime is masked. This can be quite easily explained in terms of decision thresholds, or biasing of the processing of visual input. Several studies have shown direct communication between visual and motor areas in such tasks, and modulation of baseline motor activity by primes – in other words, as long as there is some direct visuomotor transformation, we can actually map the effect of our primes on brain activity in real time. I can pinpoint and measure the different subprocesses (perceptual encoding in the visual cortex between 80 ms and 100 ms, plus a second stage around 200-300 ms; decision making in the parietal cortex, starting around 100 ms; and motor preparation in the motor areas, from 200 ms after stimulus onset, etc.) and study these independently. In other words: I know what’s going on.
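
To make the baseline-shift idea concrete, here is a toy simulation (all parameter values are made up) of an evidence accumulator whose starting point is nudged by the prime; congruent primes then yield faster responses than incongruent ones, which is the basic priming signature:

```python
# Toy accumulator model of masked visuomotor priming: the prime shifts the starting
# point of evidence accumulation toward one response threshold. All parameter
# values are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)


def simulate_rt(start, drift=0.10, noise=1.0, threshold=30.0, max_ms=2000):
    """Accumulate noisy evidence until the threshold is crossed; return RT in ms."""
    evidence = start
    for t in range(1, max_ms + 1):
        evidence += drift + noise * rng.standard_normal()
        if evidence >= threshold:
            return t
    return max_ms   # threshold never reached (rare); score as a very slow trial


n_trials = 2000
prime_shift = 5.0   # head start (congruent prime) or handicap (incongruent prime)

rt_congruent = [simulate_rt(start=+prime_shift) for _ in range(n_trials)]
rt_incongruent = [simulate_rt(start=-prime_shift) for _ in range(n_trials)]

print(f"Mean RT congruent:   {np.mean(rt_congruent):.0f} ms")
print(f"Mean RT incongruent: {np.mean(rt_incongruent):.0f} ms")
print(f"Priming effect:      {np.mean(rt_incongruent) - np.mean(rt_congruent):.0f} ms")
```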

However, when I prime people with eyes to make them behave more prosocially, I do not know what’s going on. I can come up with a decent chain of events, though. From fMRI studies, we know that eyes are processed in dedicated areas of the visual system, and from several fMRI and TMS studies we know that prosocial behavior (in particular in ultimatum and dictator games) is mediated by the right dorsolateral prefrontal cortex. I can well imagine a modulation of the DLPFC by eye cues, only this has not been shown (yet). And maybe it never will be, because there is no such thing – I don’t know. At least it can be tested.

For embodiment-type priming effects, the picture differs. There are pretty well-understood effects: take, for example, the SNARC effect: if you have to respond to a number, you’re faster responding with your left hand than your right hand when the number is small, but vice versa when the number is large. This is a quite robust effect, attributed to the automatic activation of a ‘mental number line’. We typically order numbers from left to right: 1, 2, 3, 4, etc. In other words, the magnitude of a number has a direct relation with spatial cognition. Indeed, there is pretty good converging evidence from fMRI and TMS studies that a fronto-parietal spatial cognition network plays a critical role in number magnitude processing.
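
For illustration, the standard way to quantify the SNARC effect is to compute, per participant and digit, the right-hand minus left-hand RT difference and regress it on magnitude; a reliably negative slope across participants is the effect. A sketch (file and column names are hypothetical):

```python
# Sketch of the standard SNARC analysis: per participant, regress the right-minus-left
# RT difference on number magnitude; a reliably negative slope across participants
# is the SNARC effect. File and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

trials = pd.read_csv("snarc_trials.csv")    # columns: participant, digit, hand, rt

slopes = []
for pid, sub in trials.groupby("participant"):
    med = sub.groupby(["digit", "hand"])["rt"].median().unstack("hand")
    drt = med["right"] - med["left"]                        # dRT per digit
    slope, *_ = stats.linregress(drt.index.values, drt.values)
    slopes.append(slope)

t, p = stats.ttest_1samp(slopes, popmean=0)
print(f"Mean SNARC slope: {np.mean(slopes):.2f} ms per unit of magnitude")
print(f"One-sample t-test vs 0: t({len(slopes) - 1}) = {t:.2f}, p = {p:.3f}")
```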

Interestingly, there is also evidence that these spatial cognition networks underlie the perception of social distances. This allows me to make a very specific prediction: if you prime someone with the concept ‘close’, and subsequently ask for a judgment about his social distance to someone, this should result in ‘closer’ judgments than when you prime someone with the concept ‘far away’. I am aware that Williams and Bargh did this in 2008, and indeed found this effect, but unfortunately they used a pretty poor priming strategy, and not surprisingly Pashler et al. (2012) failed to replicate the effect. What’s needed as a prime is ideally a distance judgment task in 3D (so, how far an object is from you) that really draws on spatial processing, rather than drawing two dots that are either close to each other or separated.

Now, the longer the association chain becomes, the more ‘line noise’ there may be, and the less credible effects become at first sight. Take, for example, the pee study by Tuk et al. (this one), which claims that a full bladder leads to increased impulse control, allegedly because having a full bladder requires the ‘inhibitory’ parts of the brain to prevent you from peeing, which at the same time inhibits making impulsive decisions. A bit far-fetched, but ok. I can see this work. Your brain gets somatosensory feedback from the sphincter muscle of your bladder, but you need to put in effort to hold in your wee, which allegedly activates the cognitive control circuits of your brain. This does not automatically imply that all behavior is subsequently inhibited, of course. Sadly, the authors have missed out on a great opportunity to test their explanation (their study 3 is hardly convincing for their argument): why not do a task that measures response inhibition, in a ‘full bladder’ and ‘empty bladder’ condition – within-subject, of course?

It’s quite easy, really. You can do an anti-saccade task (in which participants have to suppress the urge to make an eye movement to a suddenly appearing target). If the authors are correct, one should expect better performance in the bladder-full than in the bladder-empty condition. N=30 to 40, with at least 200 – 300 trials per participant should do the trick. An experiment like this can be run in about a week.
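
As a quick sanity check on those numbers (alpha and the 80% power target are just the usual conventions), one can ask what paired effect size d_z such a sample can reliably detect:

```python
# What standardized paired effect size (d_z) can a within-subjects anti-saccade
# comparison detect at 80% power and alpha = .05?
from statsmodels.stats.power import TTestPower

paired = TTestPower()
for n in (30, 35, 40):
    d_z = paired.solve_power(effect_size=None, nobs=n, alpha=0.05, power=0.80)
    print(f"N = {n}: minimum detectable d_z ~ {d_z:.2f}")
```

With 200-300 trials per participant, the per-subject performance estimates are stable enough that an effect of roughly that size is a realistic target for a within-subjects design.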

Now, where things get quite dubious for me as a cognition researcher is the authors’ claim that priming participants with words that have to do with urination produces similar effects on behavioral inhibition. The chain of necessary events here is very long. First, it assumes that reading a word related to peeing (such as toilet, watering, etc.) activates a semantic network related to urination. Ok. There’s evidence for that. Point granted. Second, that semantic activation somehow results in a greater awareness of an urge to pee. Maybe. If you draw attention to a bodily function, participants will be more aware of it. This increased awareness then leads to an actual increased urge to pee. Good, they actually tested that, and found an effect. Subsequently, this leads to increased inhibitory control (not tested), which leads to less impulsive behavior.

Notwithstanding my doubts, combining this type of priming with an anti-saccade task may be used to prove or disprove their hypothesis. Again, if seeing the word ‘urine’ activates inhibitory systems, we would expect an improvement on anti-saccade performance after priming participants with pee.

To cut a long story short, what I miss in a lot of priming research is a justification of all the individual assumptions that are made in order to explain the priming effect observed. If we can actually be more specific about these assumptions, and test them, we might actually get somewhere.

So, why not work together and figure out what’s really going on? If anyone is interested in the distance or pee studies, let me know, or if you know of a priming manipulation which I could use instead of eye priming for the study that kicked this off, please get in touch!


Making people nicer with eye primes does not work – always.

Quick update here – I got some questions about this study, and to clarify some things:

  • this study is not a direct replication, nor was it intended as one. Rather, I was interested in putting my ideas on unconscious perception to the test: based on earlier work, I predicted that priming prosocial behaviour would only work for unmasked stimuli, and not for masked stimuli. It turned out the priming effect did not work in either condition.
  • Because it was not a replication attempt, I did not stick to Simonsohn’s N*2.5 rule. Nevertheless, the Bayes factors turned out to be informative, and yielded substantial evidence for the null.
  • Did I expect the effect to replicate? Sure as hell I did! The data published here are only from our final attempt. In total, we tested almost 400 participants between 2008 and 2010, but since this was all ‘before Stapel’, and there was no effect in the data, I was sloppy with the earlier data. This dataset, collected by Tineke de Haan for her MSc thesis, is complete and well-documented; the other data I got in are not. Otherwise I could have presented a huge dataset without any effect. A second experiment, in which we looked at effects of eye primes on responses to a questionnaire that measures moral behaviour (N=90), also showed no effect at all. If you’re interested in that data, let me know. The only reason I kept on trying is that I truly believed the effect would be there, because it’s so plausible!
  • Is ‘social priming’ something I expect never to replicate? Nonsense. See the new blog post in the making for that one.

Darn, some bad news for the weekend. A paper I submitted to PLOS One got rejected. But in all fairness, it was an unsuccessful conceptual replication with so many potential moderators that it’s hard to draw any conclusions. We (that is, my master student and I) decided to try to publish it anyway, in line with this paper by Jelte Wicherts et al., but the editor disagreed.

So, too bad, but I can live with this rejection.

However, since I do not have time to shop around and try to get it published somewhere else, but at the same time would like to save everyone else interested in making people nicer with eye primes (masked and unmasked) some time, I have uploaded the data to PsychFileDrawer; as a bonus, the full manuscript and data can also be downloaded from this very website.

The manuscript can be found here, the data here. Comments of course welcome via e-mail!

Social priming and psi

Update: before I am accused of being a bully because I compare social priming researchers to parapsychologists: a) I consider myself to be a psi researcher, so if anyone takes offense at being compared to myself, I am very sorry, and b) the bottom line of this post is: if you take social priming seriously based on the empirical record, you should take psi seriously, too.

Now, there you have two controversial terms in one blog post title! No, I am not going to claim that psi may be involved in social priming or vice versa. No, I won’t make any claims here about paranormal phenomena playing a role in social priming (although…), but something struck me when going over my Twitter feed this weekend.

So, what happened? Well – sh*t really hit the fan after the publication of Social Psychology’s replication issue, edited by Brian Nosek and Daniel Lakens, to be found here. A lot has already been written and blogged on the entire issue of replications and replicating research, and the debate has turned quite ugly from time to time. But I’ll not be addressing that here – many others did a better job at that than I could.

No, what struck me is that there seem to be some interesting parallels between parapsychology and social priming research I’d like to share with you. Disclaimer beforehand: I am an active researcher in both social priming and psi. I may be prejudiced with regard to both topics – please keep that in mind when reading 😉

First of all: both fields make bold claims about the nature of the human mind and – if correct – have far-reaching consequences for our understanding of who we are. If psi exists, it would mean the mind does not answer to the laws of physics, and may therefore not be reducible to brain activity. Or the laws of physics are wrong, that’s also an option. At the very least, the confirmed existence of psi would change our view of the world. Your mind is not what your brain does – but something more! That’s an idea many people would find attractive.

Social priming research, if true, shows that our environment has an extremely large impact on our behaviour, both overt and covert – a simple prime may make us walk slower, make us buy stuff we normally would not, or even make us more or less prone to show criminal behaviour. Taken to the extreme (a point once defended by Diederik Stapel in an interview with the Dutch ‘Academische Boekengids’; if you read Dutch, you can find it here), it means that you, or your ‘self’ – the agent that decides what your body will do next – is nothing more than a series of tendencies primed by your environment. Consciousness and free will have little to do with behaviour and are just illusions. Maybe not a pleasant idea, but very tangible – it means that human behaviour is rational, and can be completely explained and understood. Again, an idea many people would find attractive.

So, it’s clear both fields have a large appeal: they challenge our native and naive ideas about who we are. It’s therefore not surprising that both parapsychology and social priming are ‘hot topics’ in the mainstream popular media.

Both social priming and parapsychology have a serious problem, though: after a series of spectacular claims and promising results (for parapsychology in the 1930s, for social priming starting in the 1990s), problems arose. Key findings turned out to be difficult to replicate. In parapsychology, there is even a term for this – the decline effect – and it has even become a topic of study in its own right. After some initial successes in demonstrating telepathy, precognition or clairvoyance, effect sizes decreased, to disappear completely after repeated replication attempts. In social priming, we see that the large effects reported in original studies quite often turn out to be far smaller or even non-existent in subsequent replications run with larger samples. As a result, both fields are struggling to show that the effects they study even exist. Overall, meta-analyses do show there is ‘something going on’, both for psi and for social priming, but the actual effects are elusive.

The emphasis on showing effects has drawn attention away from what a mature field should do: come up with theories and test those. Both parapsychology and social priming are traditionally characterized by a lack of theories that explain the phenomena being studied. And with ‘theories’ I mean a general and plausible framework that can produce falsifiable claims – not post-hoc explanations for effects. In social priming, for example, I once read a nice metaphor about how behaviour is akin to a piano on a sheet of ice, subject to all kinds of external forces (see here). Although this sounds very reasonable, such a theory cannot be falsified – if a finding does not replicate, you can always conjure up a ‘moderating’ variable that has extinguished the effect. Another reason that cognitive (neuro)scientists in particular are very critical of social priming research is that the explanations for the effects are very implausible with regard to their (neuro)cognitive implementation.

My greatest concern, though, is the elusiveness of the effects. I do accept that the effects may exist. I doubt, though, how relevant the effects are in everyday life. In a blog post, Simone Schnall mentioned an online replication attempt of her (in)famous finding that washing your hands makes you behave more morally. The replication failed. Schnall was not surprised – she explicitly stated that the priming procedure would only work in the lab, where subjects can be closely monitored. This is a pretty strong blow to ecological validity – if an effect does not replicate outside the lab, then what does it really tell you about human behaviour?

Parapsychology, though, seems to have matured a bit more than social priming over the last years. There are several falsifiable theories out there that do predict when, and under what circumstances, psi phenomena will occur – for example Von Lucadou’s Model of Pragmatic Information (MPI) and Bierman’s CIRTS model. Both these models are inspired by physics, and do make sense. Most importantly, they are falsifiable: both MPI and CIRTS make very explicit predictions about psi effects. According to the MPI, for example, psi phenomena can be explained as non-local correlations, analogous to quantum entanglement. As in quantum theory, MPI postulates that such non-local correlations can never be used to transmit information – if that were possible, they would allow for faster-than-light communication, and thus for nasty paradoxes. This yields some weird predictions: most importantly, as soon as an effect becomes ‘informative’ it has to disappear. For example, you may be able to find presentiment in one study. However, in the next study you now know that you may expect presentiment, and could thus build a presentiment meter (see my previous post). According to MPI, you’re not allowed to – and, poof, your effect is gone.

So, how to demonstrate psi if it disappears when you look for it? Von Lucadou proposed an elegant solution: don’t look for it specifically. Von Lucadou and co-workers have published several experiments in which they show that in interactions between an observer and a quantum random number generator (qRNG), the output of the qRNG will correlate with aspects of the observer. However, which aspects cannot be known beforehand. So, one time there is a correlation between the qRNG and the observer’s intentions (which would be the classical psychokinesis case – it looks like you’re influencing a physical system with your mind), the next time it’s a correlation between the qRNG and the observer’s shoe size. The only solid prediction is that if you measure, let’s say, 100 correlations, you will always find more significant ones than you’d expect on the basis of chance alone.
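
For the record, ‘more than you’d expect on the basis of chance’ is easy to quantify: with 100 correlations each tested at α = .05, about five significant ones are expected, and the binomial distribution tells you how large the excess has to be before it is itself surprising. A quick sketch:

```python
# How many of 100 correlations (each tested at alpha = .05) need to be significant
# before that count itself becomes unlikely under chance?
from scipy import stats

n_tests, alpha = 100, 0.05
print(f"Expected significant correlations under chance: {n_tests * alpha:.0f}")

for k in range(6, 15):
    p_excess = stats.binom.sf(k - 1, n_tests, alpha)   # P(at least k hits by chance)
    print(f"{k:2d} or more hits: p = {p_excess:.3f}")
```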

So, to summarize – psi and social priming are both controversial fields, where there is good reason to assume something’s going on – but we don’t know what. Both fields have come up with theories, and parapsychology seems to be doing an even better job than social priming. However, in the end, it’s very well possible both fields are chasing ghosts. Well, if that’s the case, at least the parapsychologists can say it’s their job.