Category Archives: metascience

Posts on how we do science and how to make science better

New Year’s Resolutions

A new year — time to have a quick look back, and more importantly, a look forward! Here are some of my new year’s resolutions, posted here not for your convenience, but rather to make sure I stick to them…

But first, a look back… I know 2016 is generally regarded to have been a shitstorm for Planet Earth, but in retrospect, it was a pretty good year for me. One first-authored paper (ok, still in press, and it’s not in psychology, but in physics, but still), two co-authored papers, and two grant proposals accepted. Could have been worse, as they say here in Groningen. Blog-wise, a bit disappointing. Despite a good start (apparently, personal interest stories do very well), I did not manage to crank out the one post per month I had set myself. On the personal front: very excited to play in a new band, family doing very well (re: that post on working hours, my wife hit the five-year post-diagnosis mark in April 2016, and thanks all of you for your support), so very little reason to complain.

Looking forward: there are some exciting projects starting in 2017 because of the grants we’ve got in, and several interesting papers coming up. And even though 2016 was a pretty good year for my science efforts, I will try to make 2017 even better. I’ve got some ideas to make my science more transparent, and be more productive. So, without further ado, here are Dr Jacob Jolij’s New Year’s Resolutions:

1. Stop b*tching, just do good science – as you may or may not know, I am (or better: used to be) an enthusiastic follower of several methods groups on Facebook, and of many methods and open science people on Twitter. Interesting stuff going on in 2016, including several high-profile non-replications, and 2017 will be even better as the PRO Initiative kicks in. However, I am kind of done with following the debates between Bayesians and Frequentists, and the sermons of open science evangelists, often preaching to the choir. I found that following and engaging in these debates took up a lot of my time and energy, and distracted me from what I really want to do: figure out the mysteries of consciousness and the universe. I know that many people find this an important topic, and therefore I have written a full post on it here.

2. Open up my science – huh, wait, wasn’t I just saying I was going to stop engaging in open science stuff? Well, no – but I think it’s better to put my money where my mouth is, rather than engaging in endless debates. I am going to make my science more transparent to the people who pay for it, i.e. the general public. I am already doing so by engaging in many outreach activities, but why not write a short, monthly update of what I’ve done that month for my assistant professor’s salary? Following discussions on social media after Ben Feringa won the Nobel Prize, it struck me that a surprising number of people haven’t got a clue about what scientists do, and think we’re getting enormous salaries for doing nothing. At least one of those latter two things is not true, so I think it’s a good idea to let some more people than just my colleagues and my family in on what it is I am doing. Maybe not the kind of transparency the OSF or the PRO have in mind, but perhaps it is the transparency Henk and Ingrid might find interesting (if you’re not Dutch, never mind the Henk and Ingrid reference).

3. Open writing – OK, this is going to be a very wild one. I am a terribly slow writer. I publish with the speed of a three-toed sloth, and the datasets (and manuscripts) keep piling up. This is not least because I have some social anxiety issues (I really hate dealing with reviews so badly, it’s almost funny). So, what better way to deal with that than a kind of cold turkey approach: throw all that stuff out here, on the nasty internet! I need some external pressure to keep on going, and your help is much appreciated. I am going to make my writing list public, including updates every month so you can track my progress. Sounds like fun, right (*shivers at the prospect of getting feedback from real people…*)? With a bit of luck I’ll get enough stuff published to get my promotion to associate professor this year 🙂

So, those are my New Year’s resolutions. I’ll report back in one year to let you know how things worked out 😉

 

Open Science – Epilogue

Over the past years it has become apparent we have a bit of a problem in psychology: because of wonky stats, methods, and publishing ethics, a lot of the progress in our field over the past decades turns out to be built on quicksand. 2016 has seen several high-profile replication failures which in part may be attributed to these problems. But there is hope! The so-called Open Science Movement is pushing for reforms in scientific methodology and practices that will undoubtedly leave its mark for years to come (in a good way!)

The Open Science Movement, a loose group of scientists passionate about open science, is well-represented on social media, for example on Twitter (check the hashtag #OpenScience), or on Facebook in the Psychological Methods and Practices (PsychMAP) page (please note that open science is not just about psychology, but being a psychologist I primarily follow the psychologists!) I have been following the developments in Open Science and improving statistics and methodology over the past years with great interest, because I genuinely believe open and better science to be a good thing.

Nevertheless, one of the first things I have done in the new year is to disengage from the ongoing discussions on Twitter and Facebook. I found that they take up a lot of time, and a lot of energy, without actually helping improve my science. The main reason for this is that all the arguments have been made, the open science community is increasingly preaching to the choir, and all too often, the sermons are little more than Schadenfreude at yet another failed replication (or improbable research result).

A similar observation can be made about the Bayesian versus Frequentist statistics debate. Following this debate on Twitter (or Facebook) is very interesting and entertaining. Snarky comments, interesting stats theory, and beautiful examples of confirmation bias, as evidenced by the papers linked to by Bayesian converts or frequentist fundamentalists – it’s brilliant. However, what to make of it?

Finally, it seems that more and more of the discussion is about the tone of the debate rather than the debate itself (see here for a rather [fill in your own adjective] example)… It leads to the puzzling situation where a lot of people are talking metascience about metascience. A bit too meta-squared for my taste.

It used to be fun for a while, following all this stuff, but after the latest posts on PsychMAP I realized that the methods debate teaches me few new things anymore, and that Open Science Advocates and Methods Reformers annoy me more often than they inspire me. That is not a good thing, because the cause of open science is a good one. So, the best way to deal with that negative emotion? Well, for me: disengage. However, not before leaving a few final comments on my position in these debates, of course 😉

1. Bayesians versus Frequentists

It seems to me that there is a deep misunderstanding of ‘Bayesianism’ at the core of this entire debate. Bayesian statistics is not just another way of doing statistics; rather, it is a different epistemological view on inference. You can read tonnes on Bayesian versus frequentist statistics elsewhere, so I will not reiterate this here, but it seems to me that the problems many people observe with p-values and/or Bayes factors simply boil down to a problem with inference and interpreting statistics.

Basically, a Bayesian does not believe we can express our knowledge of the world in absolutes (as in, “the null hypothesis can be rejected”), whereas a frequentist with a naive interpretation of null-hypothesis significance testing does. The Bayesian expresses her/his knowledge about the world in likelihood ratios, or how much more likely hypothesis A is than hypothesis B, which is exactly what a Bayes factor allows you to do. Unfortunately, this very nice and sensible philosophy is undermined by people who think a Bayes factor can be interpreted in much the same way as a p-value, and who crave a cutoff at which they can say “my alternative hypothesis is true”! No, that’s not how it works, sorry. Whether you need to revise your beliefs in a hypothesis is up to you and not specified by a cutoff table. Given that a Bayes factor means something completely different than a p-value, I see very little use in reporting both p-values and Bayes factors, as some people propose.
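To make that difference concrete, here is a toy example (my own illustration, nothing from the debate itself): 60 heads in 100 coin flips, analysed once with a two-sided p-value and once with a Bayes factor that compares a fair coin against a uniform prior on the bias. The two numbers answer different questions, and in this case they even point in different directions.

```python
# Toy example: p-value vs. Bayes factor for 60 heads in 100 flips.
from math import comb
from scipy import stats

n, k = 100, 60

# Frequentist: probability of data at least this extreme, assuming H0 (theta = 0.5)
p_value = stats.binomtest(k, n, p=0.5, alternative="two-sided").pvalue

# Bayesian: marginal likelihood of the observed data under each hypothesis.
# Under H1 with a uniform prior on theta, the integral works out to 1 / (n + 1).
m_h0 = comb(n, k) * 0.5 ** n
m_h1 = 1.0 / (n + 1)
bf10 = m_h1 / m_h0          # how much better H1 predicts the data than H0

print(f"p = {p_value:.3f}, BF10 = {bf10:.2f}")
# Roughly p = 0.057 and BF10 = 0.91: the p-value flirts with 'significance'
# while the Bayes factor says the data barely discriminate between the two
# hypotheses -- no cutoff table required, just updated beliefs.
```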

However, of course one does not need a Bayes factor to make nuanced inferences. By actually reading papers and looking at statistical evidence (such as a p-value), we can do the very same thing, albeit not in a quantified manner. A ‘significant’ result does not mean anything in and of itself. A replicated significant result, however… which brings me to the following:

2. Replications

…are key to scientific progress. However, when replicating a study, there is such a thing as ‘flair’, a concept introduced by Roy Baumeister, and subsequently widely ridiculed by methodological progressives. I don’t think that is entirely justified – there are effects that require quite a bit of expertise to obtain. In my own field, I am thinking of TMS-induced masking, for example. There’s a lot of tacit knowledge required with regard to subject handling, coil positioning, and stimulus design to get good masking effects. However, I think the same goes for ‘social’ manipulations. Sometimes you need to make a participant believe something that isn’t true (such as that they are participating in a group effort). Not every experimenter is equally good at this. Therefore I tend to be a bit careful when seeing a non-replication, rather than basking in Schadenfreude, as seems to be a bit more customary than it should be – especially when a non-replication is reported by a sceptical researcher. Experimenter effects are a thing, after all… Personally, I take the extreme sceptic view of wanting to replicate something for myself (which does not always work out…) before I believe it.

3. Effect sizes and power

Yeah, about that – what Andrew and Sabrina said. Small effects may matter, but at the group level they are most often the result of a small number of participants showing a stronger response to a manipulation for some unknown reason. A small effect typically means (at least, in my book) that you do not know how the mechanism works, and perhaps need a better theory.
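To illustrate what I mean, here is a minimal simulation (entirely made up, not anyone’s actual data): a ‘small’ group-level effect that is carried by a handful of strong responders while most participants show nothing at all.

```python
# Simulate a small group-level effect driven by a small subset of responders.
import numpy as np

rng = np.random.default_rng(42)
n = 100

effect = rng.normal(0.0, 1.0, size=n)                 # most people: noise only
responders = rng.choice(n, size=10, replace=False)
effect[responders] += rng.normal(1.5, 0.3, size=10)   # 10% respond strongly

d = effect.mean() / effect.std(ddof=1)                # group-level Cohen's d
print(f"group-level d = {d:.2f}")                     # on the order of 0.1-0.2
# The group mean suggests a small but 'real' effect, yet the mechanism only
# operates in a subset of participants -- a hint the theory is under-specified.
```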

4. Preregistration

No-brainer. Data unboxing party protocols are in full effect in my lab as of this year.

5. Open access and open data

Open access: that’s a no-brainer, too. Everyone should have access to scientific papers, all the more so if those papers are funded by the taxpayer.

Open materials: sure. All in. Feel free to download my bread-and-butter paradigm!

Open data: that depends. In psychology, we have a problem with data sharing. Several studies have shown that ‘data is available on request’ doesn’t mean sh*t when it comes to data sharing. The PRO Initiative, which has come into effect per January 1, suggests that to counter this, scientists should make their raw data publicly available. I have some issues with this, and the concerns I express in that and subsequent posts have not been taken away. I am preparing a more detailed response in a full paper, including actual data and legal stuff, but I don’t think it is up to scientists themselves to decide what data can be publicly shared for the benefit of science; we should err on the side of caution, and not have public sharing of human participant data as the default. In sum, I am still not joining PRO. However, with regard to my own data sharing practices: my university already requires me to store all my data on the university servers, and to share with other researchers (which is not the same as public sharing!), so I think I am as open with my data as I can (and want to) be at this stage.

Concluding

The Open Science Movement has definitely changed my scientific practices for the better, and I have learnt a great deal following the debate. However, apart from the open data issue, I think I am kind of done with it. Time to move on, and use all the great things I have learnt to do some real science!


Easing the pain of preregistration: Data Unboxing Parties!

Hi all. It’s been a while, for sure – lots of stuff has been going on over the past months, including giving a TEDx talk, getting a new puppy, getting an annoying diagnosis, and presenting at the Parapsychological Association meeting in Boulder. Yes, I’ve turned to the dark side. Sort of. In Boulder, I met several wonderful people, and one of them gave me a brilliant idea. As you know, in parapsychology people happily borrow concepts from physics, often with disastrous/hilarious results (see here). However, I think this idea I got in Boulder from a physicist will appeal to many.

It is about preregistration. Yet another epic blow to my Introduction to Psychology lecture slides: smiling does not make you feel better! Well thanks, Eric-Jan, now I have to disappoint another 450 students. What’s next, terror management theory not being true, so I can throw out the joke in which I remind students of their mortality right before the weekend (oh… cr@p)? Anyway, all this has led to another revival of the preregistration debate. Should we preregister our studies?

I am not going to reiterate what has already been said about the topic. The answer is unequivocally YES. Really, there is absolutely no sound argument against preregistration. It does not take away creativity, it does not take away ‘academic freedom’, all it does is MAKE YOUR SCIENCE BETTER. However, many people do fear preregistration is at best unnecessary, and, at worst, a severe limitation of academic freedom.

In all seriousness – I think we need to be a bit less stressed out about preregistration. Basically, it’s a very simple procedure in which you state your a priori information and beliefs about the outcomes of your manipulation. Together with the actual data and results, this gives a far more complete record of what an empirical observation (i.e., the outcomes of a study) actually tells us. That’s it. Nothing more. The preregistration is simply an extension of the data, telling us the beliefs and expectations of the researcher, allowing for better interpretation of the data. And yes, this is what an introduction section of a paper is for, but simply think of your preregistration as a verifiable account of that piece of data, just as your uploaded/shared data are a verifiable account of your observations.

This also means that if you have *not* preregistered your study or analysis, it’s still a valuable observation. But less so than a preregistered one, for the simple reason that we lack a verifiable account of the a priori information, and need to take the researcher’s word for it (trust them on their blue eyes, as we say in Dutch) – similar to researchers who refuse to share empirical data for no good reason.

All this does not preclude exploratory analyses – you can still do them. However, it’s up to the reader to decide upon the interpretation of such outcomes. A preregistration (or lack thereof) will make this process easier and more transparent.

Now, how to implement all this in good lab practice and make it less of a pain?

A physicist I met in Boulder told me a very interesting thing about his work (amongst others at LIGO): for any experiment, first, they develop the data analysis protocols. In this stage, they allow themselves all degrees of freedom in torturing pilot datasets. Once the team has settled on the analysis, the protocols are registered, and data collection begins. All data is stored in a ‘big black box’. No one is allowed to look at the data, touch the data, manipulate the data, or think about the data (I think I made that last one up). Then, once the data is in, the team gathers, with several crates of beer/bottles of wine/spirits/etc., and unboxes the data by running the preregistered script. The alcohol has two main uses: either to celebrate if the data confirm the hypothesis, or to drown the misery if they do not.

I found this such a great idea I’m implementing this in my lab as well. We’re going to have data unboxing parties!

Ideally, we’ll do stuff like this from now on:

[1] go crazy on experimental design
[2] run pilot *), get some data in (for most of my stuff, typically N=5)
[3] write analysis script to get desired DVs. If necessary, go back to [1]
[4] preregister study at aspredicted.org, upload analysis software and stimulus software to OSF
[5] collect data, store data immediately on secure server
[6] get lab together with beer, run the preregistered analysis script (a minimal sketch of such a script follows after this list)
[7] sleep off hangover, write paper regardless of outcome
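
For what it’s worth, here is a minimal sketch of what such an unboxing script could look like. File names, column names and the hash are placeholders, not our actual pipeline; the point is simply that the confirmatory analysis is written and frozen before anyone peeks, and that the data file is checksummed when collection closes.

```python
# Data unboxing party, minimal sketch: verify the locked data file, then run
# the preregistered confirmatory analysis -- and only that analysis.
import hashlib

import pandas as pd
from scipy import stats

DATA_FILE = "study_data.csv"   # placeholder: written straight from the lab PC
REGISTERED_SHA256 = None       # placeholder: hash recorded when collection closed


def sha256(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


if REGISTERED_SHA256 is not None and sha256(DATA_FILE) != REGISTERED_SHA256:
    raise SystemExit("Data file changed since the preregistered lock!")

df = pd.read_csv(DATA_FILE)

# Preregistered confirmatory test: condition A vs. condition B on the DV
a = df.loc[df["condition"] == "A", "dv"]
b = df.loc[df["condition"] == "B", "dv"]
t, p = stats.ttest_ind(a, b)
print(f"t = {t:.2f}, p = {p:.4f}")   # now open the beer, whatever the outcome
```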

So – who’s in on this?!

*) the pilot as mentioned here is a full run of the procedure. This is not to get an estimate of an effect size, or to see ‘if the manipulation works’, but rather a check to see if the experimental software is running properly, if the participants understand what they need to do, if they do not come up with alternative strategies to do the task, etc. The data from these sessions is used to fine-tune my analyses – often, I look for e.g. EEG components that need to be present in the data. My ‘signature paradigm’, for example, evokes a strong 10 Hz oscillation. If I cannot extract that from a single dataset, I know something is wrong. So that’s what the pilot is for.
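For the curious, this is roughly what such a pilot sanity check looks like – a minimal sketch with an assumed sampling rate and file name, not my actual pipeline: is the expected ~10 Hz component clearly present in a single participant’s recording?

```python
# Pilot-data sanity check: is there a clear ~10 Hz peak in the EEG spectrum?
import numpy as np
from scipy import signal

fs = 500                              # sampling rate in Hz (assumed)
eeg = np.load("pilot_s01_oz.npy")     # assumed: 1-D trace from an occipital channel

freqs, psd = signal.welch(eeg, fs=fs, nperseg=2 * fs)   # 0.5 Hz resolution
alpha = psd[(freqs >= 8) & (freqs <= 12)].mean()
reference = psd[(freqs >= 20) & (freqs <= 30)].mean()

print(f"alpha power relative to 20-30 Hz: {alpha / reference:.1f}")
# In a paradigm that should drive a strong 10 Hz response, this ratio should be
# well above 1; if it is not, fix the setup before collecting real data.
```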

Working hours

Today’s short rant is not on science itself, but rather on the process of doing science. This tweet by Erika Salomon really resonated with me. It is about working hours in academia. Erika calls on academics to call it out when someone asks students (or colleagues, for that matter) to work unreasonable hours.

Consider this my pledge of support. Upfront, I apologize for some strong language in this post.

Academia is facing a serious problem with regard to work ethos. Sure, it is great if you love your job so much that you are willing to put in more than 60, or even 80 hours per week. It is great if you love your job so much that you are willing to move abroad, dragging along your significant other, and tolerating a string of temporary contracts before you finally can settle somewhere.

It is not so great that this work ethos has become the standard. I have put in 60 to 80 hours per week for my PhD. It resulted in a pretty good thesis, with some nice, well-cited (and replicated ;-)) papers. I did move abroad, and dragged along my significant other. It did land me a nice, permanent position, even with the prospect of promotion. I am a really happy little camper.

But I cannot (and do not) put in 60 to 80 hours per week any more. Over the past five years, we (as a happy family of four) have struggled with cancer, anxiety, and depression, and the one thing this has taught me is that no job is worth sacrificing your personal well-being. Especially not an academic job. You, your significant other (if you have one in your life), and if you have them, your children are more important than anything else. Those hours you spend writing your grant proposal in your attic office are hours you cannot play with your kids, and cannot enjoy taking a long walk with your wife (or husband). And although you take them for granted, they are not a given. Inge (my wife) was 30 when she felt a lump in her breast. Six weeks later she was in surgery, eight weeks later getting her first chemo to treat an aggressive, triple-negative breast cancer. Now, almost exactly five years after her first visit to the GP, her cancer is in full remission. Our kids were 3 and 1, respectively, when all this happened. Only now are we slowly getting back on track. This kind of sh*t really has a tendency to mess up your life and shift your priorities, I can tell you.

So, even though I am only 36 (37 next week), and I am considered to be a ‘young’/‘early career’ scientist, who should be putting in a lot of time and work to secure these ‘prestigious’ personal grants, like the Dutch ‘Vidi’ or even a ‘Vici’, and publish work in the ‘top journals’ to progress my career – really, f*ck that sh*t.

I am not wasting my time anymore on stuff that’s only for ‘helping my career’. I am doing this job because I am fascinated by what I study (yes, psi, amongst other things. Got a problem with that?), and I love teaching. Not for the sake of becoming a hotshot professor anymore. Would be nice along the way, but really, it’s not worth sacrificing my sanity and precious time with my family for. It’s the very least I can do for them, and especially for my wife, who moved all across Europe with me, leaving behind family and friends, and a job she loved, all for the sake of my career. It is just too sad that it took a family crisis for me to realize that.

Sadly, though, when it comes to putting in working hours in academia, it’s kind of a nuclear arms race. If I don’t put in the time, someone else will, get out more papers, and thus secure the grants, and earn tenure/promotions/etc.

That has to change. The intense competition in this field is not normal. We have become addicted to the prospect of publishing high-impact papers, for crying out loud! This is not very healthy, as indicated by the incidence of stress-related mental health problems many academics suffer from. I have seen too many people going down over the past years, burning themselves out, just to play the game. We have patted ourselves on the back for a while, believing that competition and a focus on output made our field better, but it has become very painfully evident that this is not the case. Science is broken, but so are scientists.

The intense pressure that many young scientists feel permeates all aspects of life. Job security depends on the grants you get. The grants you’ll get depend on your papers. As a PhD student, you do not know whether you’ll get that postdoc. As a postdoc, your next international move is just another two years away. When you finally land that tenure-track position, it’s up or out… You’re financially dependent on your job performance, you have to compete with the smartest people in the world, so as long as you do not have the security of a permanent position, you’re going to work your ass off.

However, I think that most scientists might find themselves in a similar position as myself: we were the clever kids in school. The clever students during our undergraduate studies. Our friends, families, and teachers saw great promise in us from an early age on. I think that few people in my direct surroundings have imagined me growing up to be anything other than a university professor. For me, unconsciously, this has become part of my identity. Professor was not my job description, but who I was. You can imagine that this results in some messed-up perceptions of a healthy work-life balance. Moreover, academic rejections, disputes, and failures feel like, no, are personal failures – if you do not get that grant, it means someone else is better than you. Did someone fail to replicate your paper – that means a personal attack! I am sure I am not the only one for whom this is true. Only over the past two to three years have I learnt to let go, and accept that I am more than an academic, that ‘academic’ is just my job description. It’s not who I am. It was (and is, and will be) quite a struggle, but for the better.

The combination of economic and personal insecurity makes academics so vulnerable to work-related stress. We work harder than we should, and we will not easily speak up, because we have created an awful system for ourselves that feeds upon our own vulnerabilities. It has to stop.

I cannot fix this, nor do I want to give my students the impression that things will change in the very short term. If we want to change the system, and stop it from exploiting our vulnerabilities, change has to occur on several levels. Policy makers need to stop thinking science can be evaluated on purely quantitative measures. Granting agencies need to stop funding people, and start funding ideas. Universities need to provide job security earlier on, and evaluate how an academic functions within the context of a department or school, rather than looking at how well someone has succeeded in establishing her or his own little fiefdom without burning out, which is the current practice.

However, these changes are slow. They will occur, though. We see them happening already. Many great people are already speaking out against the ridiculous competition in science, and against the ridiculous ideas about quantifying research quality. But it will take time. Besides, we need to change as well. As long as we are addicted to scoring high-impact papers, things will not change.

So, if you’re working on your PhD, and feeling worn out, here are some words from a young guy who sometimes feels quite a bit older than he is: research is awesome. But put it into perspective. In the grand scheme of things, that paper you are working on right now, late in the evening, is not so important. Maybe it gets read by a few hundred people. Maybe it gets some media attention, but people will forget that in a few weeks anyway. And if you’re really unlucky, in a couple of years, some smug replicators will try to replicate your study, fail, and your result ends up on the enormous heap of false positives in the literature. So, really, is the time you are spending on this paper right now worth not being with the people you love? Your paper will be there for you tomorrow morning, 9:00, when you get back to your desk. Trust me.

And is that course you need to teach bothering you because it gets in the way of your precious research? Think about it this way – your research has limited impact. Again – a couple of hundred people may read it, and knowing scientists, most of them will find it crap anyway. But your teaching — I did a quick calculation: since I started my academic career 15 years ago, I think I have taught over 3000 students in classes, supervised at least 100 bachelor theses, and over 50 master theses. Personally, I have enjoyed the many conversations I had with my students, helping them realize their potential, and seeing them grow and find their own way, far, far more than any glowing comment I got on any of my papers. Talking about impact…

And that academic job? Well… several of my PhD friends have not found an academic job, either. One of them has become a house painter. Last I heard from him, he is now happier than he ever was in academia. Let that sink in for a while.

Anyway, enough rambling. Time to act my own age again, and not like grandpa. Folks, it was my pleasure; I’m going to check in on the kids, give them a nightly kiss, and then tuck in for the night. That paper I was going to write can wait until tomorrow 😉

On difficult surnames, reputation traps and a loose cable

Leonid Schneider asked me for my thoughts on his post on Frontiers in Paranormal Activities, in response to my sharing of Sam Schwarzkopf’s annoyance with people getting his last name wrong. I’ve got a difficult surname as well – it’s pronounced ‘yolay’, should that be of interest; ‘ij’ is a diphthong in Dutch, and Jolij is the Dutchified version of my French ancestors’ name Joly – hence the sympathy. I had read Leonid’s post before, actually, when I saw it in relation to the Bial drug trial tragedy. At that time I did not respond, although I certainly did have a thought or two on the matter, but now that Leonid is asking, here we go.

What is the deal? In early 2014, a special issue in Frontiers in Psychology (or better, a Research Topic) was hosted by Etzel Cardeña and Enrico Facco on ‘Non-ordinary Mental Expressions’. Some of the papers included in this topic are actually fairly ‘mainstream’ (effects of psychedelics on neural activity, for example), but other papers were slightly more radical, including a paper on retro-priming and Cardeña’s editorial calling for an open view on the study of consciousness. These topics are, to say the least, controversial, and I do not think I have to elaborate on why that is so. This entire issue resurfaced this week when Etzel Cardeña published an ‘uncensored’ version of his editorial, and pointed out that research into the paranormal is typically ridiculed, researchers in the field are not taken seriously, and the ideas are basically dismissed without any consideration of data and/or theory. There is a reputation trap: once you get associated with ‘weird stuff’, people will not talk with you anymore. Huw Price wrote a very worthwhile piece on this.

As said, Leonid Schneider wrote a long post on the special issue on ‘NOMEs’ in Frontiers, basically asking himself whether this is not one big practical joke on Editor-in-Chief Hauke van Heekeren. Because, you know, paranormal stuff?

The snark is strong in Leonid’s post. It’s quite clear that he does not regard the study of psi as serious business. As I have indicated earlier, it does annoy me that skeptics all too easily ridicule researchers who are engaged in this type of research. This sentiment is very clear in Schneider’s piece, and it is also the reason I did not comment earlier. I simply do not like the tone. What settled it for me, though, is the final addendum in which psi research is linked to the Bial drug trial tragedy. But more on that later.

I have argued before in several posts that I do believe psi can be a valid and relevant topic of study. Given that I am getting more and more involved with this debate, this may be a good occasion to give full disclosure on how and why I arrived at this position and show my true colours to friend and foe. Decide for yourself whether you want to group me with psi opponents, proponents, skeptics, or wafflers (though I am curious to hear from you with whom you would group me!)

First, there is a clear sentiment in Leonid’s post that psi research is not real science. I disagree. The sentiment seems to be based on the idea that psi cannot exist, and therefore researchers studying this topic cannot be taken seriously, and are probably running psychic hotlines next to their day-jobs, or are gullible fools who believe in fairies, Martians, and the Illuminati. More on that later.

With regard to what is science: I think science is not a belief system, but rather a structured method to increase knowledge about the world. As long as you stick to the rules of the game, there should be no taboo research areas. Of course, there may be research areas that make more or less sense than others, but as long as you stick to the scientific method, you’re doing science. In that respect, I do understand that Van Heekeren had no problems with a special issue on non-ordinary mental expressions in Frontiers. People do have weird experiences, after all. Regardless of what is actually going on, people do report out-of-body experiences, near-death experiences, and so on. These experiences are empirical fact (as in: people report having them). Therefore they are fair game for further study. I mean, if we could not study crazy experiences, psychiatrists and clinical psychologists would be out of a job, right? That said, let me be the first to admit that there is A LOT of god-awful (amongst others, self-published) psi research and theories. Sturgeon’s Law (90% of everything is crap) applies to psi research more than to any other field I know of.

But shouldn’t Sturgeon’s Law for psi research read ‘100% of everything is crap’, because a) psi cannot exist, b) psi researchers are idiots, and c) there is no theoretical framework for psi? No. First, it’s an easy straw man to craft a story about how psi researchers study clairvoyants (or may be clairvoyant themselves), run around with EM meters to study haunted houses, and commune with the spirits to channel their research results. Admittedly, there are people doing that kind of stuff. And, no, I do not think we should take them very seriously.

However, as a science, experimental parapsychology has had higher methodological standards than many other areas of psychology. Preregistration, Bayesian statistics, publication of negative results – parapsychologists did all that stuff in the 1980s already, way before some mainstream psychologists realized such methodological rigor is a must for any serious science. In that respect, if you think parapsychology is not a science, you should be fair and extend that opinion to all areas of psychology, and quite a few other fields.

Anyway, there are quite a few people who have found odd effects in carefully set up experiments that call for further investigation. Contrary to popular belief, there are some models/hypotheses out there for these lab-induced phenomena that are not completely at odds with our present understanding of physics. I say that as someone who studied physics for a couple of years (although I am the first to admit that the fact I got a degree in experimental psychology in the end is telling about my qualities as a physicist). Although these models rely on a rather specific interpretation of, in particular, the metaphysical status of consciousness, this is not a reason to dismiss them out of hand. I would like to remind the audience that the mainstream physicalist position on consciousness (i.e., consciousness is a brain process) is itself a metaphysical assumption about the nature of consciousness, and a position that is even slowly eroding.

This is where things got interesting for me. My research focuses on consciousness, and in particular on the mind-body problem. Psi phenomena, should they exist, would shine an entirely new light on the metaphysical assumptions we make about consciousness. Just for that reason I think it’s worthwhile to have at least a look into the matter. My research interest in this area goes back to my early years as a psychology undergraduate, when professor Dick Bierman was my academic mentor. We talked a lot about this line of work, but I lost touch with the area when I started doing my PhD with Victor Lamme, with a very strict materialist agenda. Dick and I got back in touch a couple of years ago when I returned to the Netherlands.

As I briefly mentioned in an earlier post, things got really interesting when anomalous effects started popping up in my own data. For example, using a single-trial EEG classifier I was able to decode the identity of an upcoming stimulus in a visual detection task, on the basis of the baseline alone… Upon closer inspection of the data, it turned out there was a randomization problem. Ergo, I thought I had cracked the problem of all these alleged precognitive effects (improper randomization), fixed it by using a combination of hardware RNGs (if you see these odd photos of green glassware from my lab – that’s my hardware RNG ;-), and planned to present that at the conference. Except the precognitive effect was still there. I triple-checked everything – stimulus script, analysis protocol, filter settings, hardware filters in the EEG amplifier: nothing. Yet, the effect is huge (d = 1.44).
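For those who like to see what such an analysis looks like: below is a bare-bones sketch (with assumed file names and feature layout, not our actual pipeline) of baseline-window decoding. With proper randomization, cross-validated accuracy should hover around chance; anything reliably above chance is either a genuine anomaly or, far more likely, a confound such as a predictable stimulus sequence.

```python
# Baseline decoding sketch: predict the *upcoming* stimulus from pre-stimulus EEG.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# X: trials x features (e.g., baseline-window samples of all channels, flattened)
# y: identity of the stimulus presented *after* each baseline window
X = np.load("baseline_features.npy")   # assumed shape (n_trials, n_features)
y = np.load("upcoming_stimulus.npy")   # assumed shape (n_trials,)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=10)

chance = 1.0 / len(np.unique(y))
print(f"decoding accuracy: {scores.mean():.3f} (chance = {chance:.3f})")
# Above-chance decoding here should make you audit the randomization first,
# and only then start thinking about anomalies.
```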

So, what do you do? I decided to be honest and report what I found: I could decode the future, which I submitted as an abstract for the bi-annual winter meeting of the Dutch Psychonomic Society. Sure, I could filedrawer this weird effect, but why would or should I do that? I had a hypothesis, tested it, and failed. I was wrong – ‘precognitive’ effects are not caused by improper randomization. For those of you interested: we are going to replicate this study in a multi-lab setup. Drop me a line if you want to have a look at the data and code.

Anyway, to cut a long story short – over the past five years I have done several quite large replication experiments in controversial areas (social priming, psi). The bottom line is that all my attempts at replicating social priming effects failed, but the psi ones did not… So, hell yeah, I’m fascinated, and all of you would be too if you got three whopping psi replications in a row. As a matter of fact, Dick and I are now getting some people together to work on a large-scale, multi-site adversarial collaboration project to run a number of high-powered replication studies to figure out if there really is such a thing as a replicable psi effect. The only way to do this is by maintaining the highest methodological standards. Adversarial collaboration, preregistration, high power, open data, open materials, and proper experimental design are essential, otherwise you might as well not do it.

Now, to get back to Leonid Schneider’s post – I already mentioned I did not like the tone. I’ve read some of his other work, and fortunately, this piece is not representative of his qualities as a science journalist. Because, as a piece of science journalism, it fails, in my opinion. Audi alteram partem is what is missing here. That is disappointing. Moreover, you don’t have to agree with someone to still show at least some respect (or at least pretend you do). I’m totally fine with people thinking psi research is nonsense. I am also fine with people thinking my research is nonsense (I call such people “reviewers”). I am not fine with making fun of people and bullying them, which is what happens in this post.

Anyway, making fun of psi researchers is one thing, and I guess most are used to it. However, I think Schneider crosses a line when he suggests a link between psi research and the Bial trial incident. Bial is a Portuguese pharmaceutical company that got into the news recently because of a clinical trial gone horribly wrong: healthy participants showed a severe adverse reaction to a new drug, resulting in the death of one volunteer, and serious brain damage in several others. Schneider flat out suggests a relation between this tragedy and the fact that the Bial Foundation, a foundation sponsored by the founder of the company, funds psi research.

This suggestion is nothing short of slander. First of all, there is no relation between the activities of the company and the foundation, other than that the foundation annually gets a big bag of money from the guy who owns the company. Second, even if there had been a direct link between the activities of the foundation and the company: as I mentioned earlier, the research standards in experimental parapsychology are at least comparable to those in ‘normal’ psychology. Third, clinical trials are legally regulated and closely monitored by medical ethics committees, which assess the protocol and guard participant safety. Even if Bial had asked a psychic to develop the protocol for this trial using a crystal ball, or had a necromancer come up with the drug formula, the French authorities would/should have stopped this. All in all, the fact that Schneider uses this tragedy to make a point about parapsychological research is a really, really low blow.

In June, I attended the TSC 2015 conference, which also had quite a large number of talks on anomalous phenomena, and I had the pleasure to meet the kind of people who are at the receiving end of Schneider’s snarky comments. They turned out to be fairly normal scientists, working at universities, about as knowledgeable or even more knowledgeable about research methods than the average psychologist. Most did not believe in fairies, they did not hold seances during their talks, not a single one brought a crystal ball, and there were no nightly shamanic sessions involving druidic dancing around monoliths (or at least, I was not invited to such happenings). The main difference is that these people work on effects most scientists find very, very implausible.

I think that we should measure psi researchers (or any researcher, for that matter) not by their topic of study, but by the way they study their topic. Any researcher who holds themselves to high methodological standards, and is open to constructive criticism, deserves to be taken seriously, regardless of what kind of effect she or he is working on. Period.

However. The fact that some skeptics cannot resist the urge to ridicule is no reason for the self-styled martyrdom some psi researchers engage in. Yes, psi researchers are being bullied, ridiculed, and even silenced. Schneider’s post is an excellent example. There is a reputation trap. That reputation trap, though, is often of one’s own making. Too often, psi researchers engage in wild, unfalsifiable speculation. Quantum teleportation, entanglement telepathy, that kind of stuff. Modesty should prevail: there is no convincing evidence that psi effects exist, otherwise we would not be having this discussion. Therefore, it is best to stay away from wild theoretical speculations that often involve misrepresented physics, at least until there is some consensus between skeptics and proponents on whether anomalous effects are anything more than statistical noise. We’re not there. Yet.

Similar to most psi researchers not being fairy-worshipping druids, most skeptics are not narrow-minded, sour critics. Most are actually very willing to discuss anomalous phenomena. But as data. Based on my personal interactions with them, I’d say both EJ Wagenmakers and Sam Schwarzkopf are perfectly willing to discuss experiments and datasets, but not if you come rushing in LOOK OMG HERE I FOUND PSI IN MY DATA LOOKATIT YOU WERE WRONG QUANTUM FTW! No, you found an interesting anomaly that begs for further exploration/explanation, but first we need to make sure your pattern of results is not the result of something trivial or just a random accident. Neither EJ nor Sam laughed in my face when I told them about my data containing anomalies. Rather, the reply was “Interesting, what could be going on here?”

The thing is – it’s all about framing. In my mind, the present situation is very much like the faster-than-light neutrino anomaly of 2011. Researchers found evidence of particles moving faster than light, which according to special relativity is impossible. Rather than the entire field going bonkers, skeptics at CERN calling their colleagues at OPERA spirit-channeling fairy lovers, and OPERA researchers starting an anti-oppression movement because they were not allowed to share their result, the general response was “Hey, that’s interesting, let’s figure out what caused this result!” And that is the only reasonable response – if indeed particles can travel faster than light, it means we need to completely re-examine our ideas about physics. Awesome! Work for generations of physicists to come!

Why can’t we do the same in psychology? There are people who seem to consistently find weird results. What’s going on? Clearly, we have not settled this matter – there is no conclusive evidence in favour of psi, but conversely, the psi proponents are clearly not convinced by the skeptics’ arguments and replication attempts either. Skeptics should accept that there are consistent anomalies being found by intelligent, reasonable people all over the world that call for a deeper explanation than “it’s just statistical noise” or “it’s just publication bias” – I mean, weird results are popping up in my lab, FFS! Psi researchers should accept that their case for the existence of psi is not strong enough, and that only with adversarial collaboration can we figure out what’s going on.

Oh, and those neutrinos? Turned out to be a loose cable in the Italian setup…

(note to self: check lab cables after the weekend)

Call for suggestions!

Hi all,

Shortly we will be running a pretty cool EEG experiment on perceptual decision making in romantically involved couples. Basically, a couple (let’s call them Alice and Bob) will come into the lab, each be assigned their own computer, and then take turns in a perceptual decision making task (see Jolij and Meurs, 2011, for more details on the task itself). So, first Alice will get to see a trial and give a response; then Bob will see Alice’s response (as a cue) and will do the trial, and to conclude, both will see each other’s answers. During the experiment, we’ll be measuring EEG (NB: only 8 channels). Before the experiment, both partners fill out a series of questionnaires on relationship duration, quality, etc.

In the spirit of open science, I thought it might be useful to ask you all what would make this dataset useful for you. I mean, we are going to test these participants anyway, in a rather non-typical setup (two EEG measurements simultaneously, meaning you can look at all kinds of interpersonal processes, EEG synchronization, etc.), so if there is anything I could add that does not take too much time so this could be an interesting dataset for you, let me know. Think of maybe a block of eyes-closed EEG data during a breathing exercise to study interpersonal synchrony, a particular questionnaire, additional markers, whatever.
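To give an idea of what such a dual-EEG recording affords, here is a rough sketch (file names, channel choice and sampling rate are all assumptions on my part, not the actual protocol) of one obvious analysis: alpha-band phase synchronization between the two partners, quantified as a phase-locking value.

```python
# Inter-brain synchrony sketch: alpha-band phase-locking value (PLV) between partners.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                              # sampling rate in Hz (assumed)
alice = np.load("alice_cz.npy")       # assumed: one channel per partner, same length
bob = np.load("bob_cz.npy")

b, a = butter(4, [8, 12], btype="bandpass", fs=fs)     # alpha band
phase_a = np.angle(hilbert(filtfilt(b, a, alice)))
phase_b = np.angle(hilbert(filtfilt(b, a, bob)))

plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"alpha-band PLV between partners: {plv:.2f}")
# Compare against surrogate pairings (e.g., Alice matched with a stranger's
# recording) to check whether couples synchronize more than chance.
```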

As long as it does not add too much time to the experimental protocol, or takes up too much time programming, I am happy to include stuff. Please do get in touch if you want to know more: j.jolij@rug.nl.

Open data, better data?

The New England Journal of Medicine has done it – post an editorial, invent a new term (data parasites), and nicely evoke the wrath of Twitter (or in particular, the Open Science Movement). Nice move, NEJM!

As a quick recap, the editorial lists some pros and cons of data sharing, a practice we see shockingly little of in psychological science. The most offending bit of the editorial was that part about ‘data parasites’, a new class of researcher living exclusively from the shared data other researchers so painstakingly collected. And, God forbid, these data parasites might even use all this shared data to disprove the hypotheses of the original authors!

No, really, we don’t want that, do we now?

Anyway, it’s quite obvious that Longo and Drazen did not quite think things through. It is rather ridiculous to claim full authorship for papers in which your data is (re)used. That is what we have citations for. Similarly, if I use a Stroop task, I cite Dr Stroop rather than asking him to be a co-author. Which would kind of stretch my paranormal research quite a bit, given that the good Emeritus Professor of Bible Studies has been dead for over forty years.

Summarizing: nice move, NEJM, nice move.

Nevertheless, in the context of Open Science and Open Data, there are some interesting observations to be made. I will not regurgitate all the arguments in favour of open science here. It’s pretty much a no-brainer that data sharing is – in principle – a good thing. However, as an experimentalist generating a lot of human subject data (by measuring human subjects, that is, not making sh!t up), some of the arguments by Longo and Drazen do resonate with me.

Lest I evoke the wrath of Twitter, let me state that I am 100% in favour of data sharing. I am less convinced about making data publicly available, though, for ethics and privacy reasons, but I’ve already voiced my concerns about that earlier (and had a great talk with Rink Hoekstra, one of the co-authors of the PRO Initiative, afterwards); basically it’s a technical matter we disagree on.

In this post, I would like to put the entire Open Data ideal somewhat into perspective. If you go through Twitter feeds, it seems that science without open data is bad science, and I know that there are some people who think about it in such a way. That’s fine. However, my take on this is slightly different. I am an experimental scientist, which means I try to understand phenomena in the world by making people behave in particular ways in a laboratory setting. These laboratory tasks are necessarily abstractions from reality. Often these abstractions work, and tell us something about real-world settings (e.g., learning and memory experiments). Sometimes, it does not matter, because we’re testing the limits of cognition (e.g., visual psychophysics). And sometimes, we fail miserably at capturing anything meaningful (any named example here would make someone mad). This latter case is what we call a bad experiment.

Last year we saw a brilliant example in the literature: Sadness impairs color perception. In this paper, it was claimed that ‘feeling blue’ leads to a specific impairment in colour perception on the yellow-blue axis. It is an interesting example, because the data underlying this paper were open. Given the claim, it did not take long before the first skeptics re-analyzed the data and found some rather serious errors in the data analysis. However, when I looked at the paper I did not have to look at the data in order to conclude something was seriously wrong: the methods did not make sense at all! The emotion manipulation was cr@p, and even worse, the measurement of color perception was just, well, wrong! To measure colour perception, we typically rely on carefully calibrated psychophysical methods. The authors of the above-mentioned paper used a, well, suboptimal method. Suffice it to say that the paper was quickly withdrawn, to the credit of the authors, but it should not have passed peer review.

Mind you, this is a paper with open data. But open data is not necessarily good data. In this case, the data is rather awful and actually completely meaningless. Sadly, bad experiments are surprisingly common. One of the problems I see with the present focus on data, and in tandem with statistical power, is that the quality of the experiment generating the data gets overlooked. I have argued before that we cannot and should not reduce experiments to data. Even if you have N = 10,000 and a beautifully coded data sheet which is open to all, your data is still worthless if your experiment sucks. In experimental science, data is only as good as the experiment underlying it.

So how do you know whether an experiment is any good? Well, sadly, this is something that requires expertise. In particular where rather tricky concepts (things like ‘consciousness’, but also ‘colour perception’, see above), or methods (fMRI, EEG) are used, you need to know what you (or the authors of a dataset) are doing if you want to use a dataset properly and evaluate its merits. Actually, this is the reason I am not too fond of using other people’s data. I’d rather replicate the studies I like and gather my own data. Sure, that is more work, but it gives me a better understanding of what is going on in a particular manipulation, and thus a better position to do science. Now, I do realize that this is not feasible for a lot of fields, but I think that for a lot of experimental work in my field, it is.

So, what I am looking for in a paper is not data per se, but a great new manipulation, or an analysis method that gives new theoretical insight. The data is great, but the idea behind empirical science is that when lab A makes an observation, a decently competent experimenter in lab B should be able to make the same observation when using lab A’s methods. Rather than a paper with open data, give me a paper which shares its stimulus material and analysis scripts. Open data is to me somewhat like someone else’s toothbrush. Surely it can get the job done, and it’s very welcome if you don’t have one of your own, but I prefer my own.

The bottom line of this story is this: sure, data sharing is great. But let’s not pretend it is our Holy Grail. At least in my field it is not, and there are more important things to focus on. The focus on Open Science is great, as long as it does not steal any glory from a great experiment. It’s my impression that we’re becoming a bit too obsessed with data at the moment, at the expense of experimental methods.

Why I am not signing the PRO Initiative (yet)

This week, Richard Morey et al.’s PRO (Peer Reviewer Openness) Initiative launched, the revamped version of their Agenda for Open Research. The PRO Initiative is a laudable step by a group of devoted Open Science proponents to make our science more transparent. And for good reason – science should be open, and accessible to everyone. The PRO Initiative aims to do this by asking reviewers to withhold in-depth review of academic papers if data and materials are not made open. The arguments for improving the way we handle data are compelling. The present practice, in which data is ‘available on request’, simply does not work, as has been shown several times. Moreover, data sharing encourages collaboration and emphasizes that science is a collaborative enterprise. We’re in this together, figuring out how the world works, and hopefully making it a better place. Sharing data is helping towards that ideal.

If you’re following Richard (if you do not, you should – even if you’re not into Open Science, his posts on Bayesian statistics cannot be missed!) or other members of the PRO Initiative (whom you should follow, too, again, even if you’re not into Open Science, because they’re all pretty good bloggers with sensible things to say), you will have seen many calls to sign the Initiative. As a signatory, you pledge that, starting January 2017, you will request that authors make their data publicly available when you review a paper, and withhold further review if they refuse to do so without good reason. Of course I have been thinking about this, starting a couple of months ago when Richard published the first version of the ‘Agenda for Open Research’ – what is not to like about Open Science?

But something did not sit well with me. I decided to wait for the updated version of the Agenda, which now is the PRO Initiative, and there was still this something that made me feel uneasy, a bit worried even. As a matter of fact, over the past weeks I have been writing a manuscript detailing these concerns, but maybe it’s better to throw some of these ideas out here and see what you think. I am still undecided.

I have a big problem with the PRO Initiative’s definition of ‘open data’. The PRO Initiative asks researchers to make data publicly available. Sharing, or depositing data on a server to which only researchers have access, is not enough – data, preferably raw data, has to be publicly accessible in order to count as ‘open’. In principle, there is nothing wrong with wide open data – on the contrary. CERN streams its data live to a publicly accessible server, some major archaeological discoveries have been made in the openly accessible data of Google Maps, and undoubtedly, if you want to discover extraterrestrial life, you’re free to roam NASA’s open database of images from other worlds. So, why do I feel uneasy about open data, then? The main reason: because we (cognitive neuroscientists/psychologists) observe people. Our raw data is a detailed description of human behaviour and neurophysiology. I have a problem throwing such data out in the open.

What I dearly miss in most discussions on open data is the perspective of the research participant. All arguments are centered around scientists and the process of science. We seem to forget that our (psychologists’) data is about actual people. In the discussion on open data, participants are stakeholders, too. It’s their data (not ours) we are planning to throw on the internet. As a scientist studying human behaviour, I feel my very first responsibility is to the participants in my experiments. I am obliged to guard them as well as I can from any harm coming from their participation. Moreover, I think that they should have strong voice in stating how their data can and should be used – stronger than that of the scientist. If a participant requests to be taken out of a dataset, so be it.

So, what may be harmful about publishing properly anonymized raw data? Well, I am trained in thinking in doomsday scenarios, so let’s come up with a potential disaster:

I participate in an fMRI/EEG experiment of a colleague in which my brain responses to a pornographic clip with very inappropriate material (insert your favourite fetish here) are measured, together with the physiological response of my Private Willy Johnson. The participant after me happens to be one of my first-year students. This student unfortunately has an unhealthy obsession with his lecturer, and makes a note that I participated in this weird experiment, on Dec 3, around noon. One year later, the research paper with raw data is published. Being a good experimenter, my colleague notifies all research participants of this joyful occasion. Our student now downloads the data, and although I am known as Participant-007, our student checks the time stamps, and presto, he can now work on his blog post “How My Professor Got A Stiffy From Copulating Hippopotamuses And He Really Enjoyed It! (with data)”. Moreover, the student now also has my fMRI and EEG data. A recent study has shown that individuals can be reliably identified on the basis of their neural connectivity data, so this means my stalker can now also identify my data in the study on The Effects of Mindfulness on Believing in Bullshit – an EEG Connectivity Study, and see that I score massive bonus points on the bullshit scale for knowing who Deepak Chopra is (and actually having talked to him).

Ok, sure, this is a strictly hypothetical scenario – but it does show how vulnerable wide open data is to breaches of privacy. Open data basically means giving up privacy to anyone who knows you participated in a particular experiment at a given time, and such knowledge can fairly easily be obtained by someone who wants it. So, delete the time stamps! Well, my colleague from the example would love to, but she pre-registered her study and needs the time stamps to show she collected the data after she submitted her preregistration…
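To make the timestamp worry concrete, here is a minimal sketch (in Python, using pandas) of the kind of matching my hypothetical stalker could do. Every identifier, timestamp, and score below is invented purely for illustration; the only point is that a single observed participation time is enough to single out one row in an otherwise ‘anonymized’ public dataset.

```python
# Minimal sketch of the timestamp re-identification scenario above.
# All identifiers, timestamps, and scores are hypothetical.
import pandas as pd

# "Anonymized" public dataset: participant codes plus session time stamps.
public_data = pd.DataFrame({
    "participant": ["PP-005", "PP-006", "PP-007", "PP-008"],
    "session_start": pd.to_datetime([
        "2016-12-03 10:00", "2016-12-03 11:00",
        "2016-12-03 12:00", "2016-12-03 13:00",
    ]),
    "arousal_score": [0.12, 0.34, 0.98, 0.21],
})

# Outside knowledge: the stalker saw the lecturer enter the lab around noon on Dec 3.
observed_time = pd.Timestamp("2016-12-03 12:05")

# Match the observation to the nearest session start, and anonymity is gone.
gap = (public_data["session_start"] - observed_time).abs()
identified = public_data.loc[gap.idxmin()]
print(identified["participant"], identified["arousal_score"])  # PP-007 0.98
```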

Fine, you say. Let’s not post such sensitive data then. The PRO Initiative leaves enough space for this – if a researcher has a good reason not to make data publicly available, she/he can say so. However, I still have some more issues.

If the PRO Initiative gains momentum, petabytes of behavioural and neurophysiological data will become publicly accessible. Given that the vast majority of our studies are carried out in undergraduate psychology students, it is relatively easy to identify particular strata (e.g. students from the class of 2015-2016 – I can just look at the timestamped data). For example, most of our freshmen are on Facebook, where they started a group page, and out of the kindness of their hearts, they allowed me to be a member as well. This means I have access to all their profiles, and as such, I can compile a pretty interesting profile of the average psychology student. But with research data out in the open, I can also mine actual measurements of validated psychological constructs. A lot of it will not be particularly interesting, but data about cognitive abilities, implicit prejudice, or attitudes towards political ideas may all be quite worthwhile from the perspective of, let’s say, a marketing company, or another party with an interest in nudging behaviour.

This is not a direct threat to any individual research participant (contrary to a breach of anonymity), but if I, as a research participant, come into the lab for the benefit of science, I would be somewhat displeased to find out that a shady marketing company uses my data for profiling. To make matters worse, it is very well conceivable that matching up open research data with data from social networks or other sources can lead to identification – for example, individual ‘likes’ on Facebook predict personality, a measure I may be able to cross-reference with all the open data I just downloaded from the Groningen psychology servers. The more data, the happier my data-crunching algorithms will be.
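As a purely hypothetical sketch of such a linkage attack, assume an ‘anonymized’ open dataset and a set of scraped profiles that share only age, gender, and a personality trait predicted from Facebook likes; all names, scores, and column labels below are invented for illustration.

```python
# Hypothetical linkage of open research data with scraped social-media profiles.
import pandas as pd

# "Anonymized" open research data: demographics plus a validated trait score.
open_data = pd.DataFrame({
    "participant": ["PP-001", "PP-002", "PP-003"],
    "age": [19, 19, 22],
    "gender": ["F", "M", "F"],
    "extraversion": [4.8, 2.1, 3.9],
})

# Profiles from a public Facebook group, with extraversion predicted from likes.
profiles = pd.DataFrame({
    "name": ["Anna", "Bram", "Carla"],
    "age": [19, 19, 22],
    "gender": ["F", "M", "F"],
    "predicted_extraversion": [4.6, 2.3, 4.0],
})

# Join on the quasi-identifiers, then keep the closest trait match per profile.
linked = profiles.merge(open_data, on=["age", "gender"])
linked["trait_gap"] = (linked["predicted_extraversion"] - linked["extraversion"]).abs()
best_match = linked.sort_values("trait_gap").groupby("name").first()
print(best_match[["participant", "trait_gap"]])
```

With three rows this is trivial, of course, but the same join scales to thousands of records, and every extra shared variable shrinks the set of plausible matches.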

You may think this is all very hypothetical, and very unlikely. Maybe you say I am scaremongering. Then again, that is exactly what you need to do when thinking about research ethics, I’d say – I have served on a couple of Ethics Committees over the past years. What is the absolute worst that can happen, and how likely is that scenario? My personal evaluation is that a breach of anonymity is conceivable, and in some cases even likely (e.g., when someone knows you participated in a particular experiment, which at the very least is not uncommon among first year participants here in Groningen).

So, yes, I am worried about the PRO Initiative. I am not at all convinced that making data wide open is such a good idea. “But the data belongs to the public, the taxpayer paid your salary!”, I hear you say. Well, sure, the taxpayer also pays for the construction of the road they’re building next to our office building, but that doesn’t mean I (yes, I also pay taxes) can go to the construction site and help myself to a nice supply of concrete. I am, however, entitled to use the road once it’s finished. Metaphorically, the concrete is the data underlying the research paper. I think access to the research paper is what taxpayers pay for – so making research papers open access should be a no-brainer to anyone.

I believe that posting raw data on the internet according to the guidelines of the PRO Initiative results in an increased risk to the well-being of my participants that I am not willing to expose them to, no matter how small it is. I am happy the PRO Initiative leaves enough room to voice such concerns on an individual basis, but if the Initiative gains momentum, many research participants will be exposed to the risk of their data being used in ways they did not anticipate or consent to.

This is all the more pressing since there are excellent alternatives to ‘wide open data’ – hosting data in an institutional or national repository, for example, where access is regulated by an Ethics Committee or a dedicated data officer, who can grant access on a case-by-case basis, or to registered users, independently of the researcher. As a matter of fact, this is already a requirement of many funding agencies and institutions, including mine (mind you, depositing data is required; making data publicly available typically is not!). Publicly posting raw data is – in my opinion – exposing participants to unnecessary risk of adverse effects of their research participation. Asking other scientists to do the same, and putting pressure on them to do so, makes me feel uneasy. Maybe this feeling is unjustified, I don’t know. But in a rather long nutshell, this is why I have not signed the PRO Initiative.

So, my Open Science pledge is that I will not make raw data public in any way unless my research participants request me to do so, nor will I ask others to expose their participants to unnecessary risk. I will pre-register my studies and upload my stimulus materials and analysis scripts to a publicly accessible place, but I will post my raw data to a reliable repository hosted by either my institution or a third party, where anyone who needs my data for research purposes can have access to it without intervention on my part. Moreover, I promise my participants that I will share their data with anyone who needs it for her or his research to advance science, but also not to share their data if they so request. And finally, I will make sure that all my research output is openly accessible to everyone.

That’s what I have to offer, Team PRO. Hope we can still be friends?

The Open Data Pitfall II – Now With Data

Yesterday I wrote something on why I think providing unrestricted access to data from psychological experiments, as advocated by some, is not a good idea. Today I had the opportunity to actually collect some data on this issue, from the people who are neglected in this discussion: the participants.

I used Mentimeter to ask the 60 first year students who showed up for my Biopsychology lecture whether they would participate in an experiment of which the data would be made publicly available.

At the beginning of the lecture, I gave a short introduction on open data. I referred to the LaCour case and to Wicherts et al.’s work on the lack of willingness to share data, and emphasized the necessity of sharing data. I also mentioned that there is a debate going on about how data should be shared: some researchers are in favour of storing data in institutional repositories, whereas others are in favour of posting data on publicly accessible repositories. I then explicitly told the students I would give my own thoughts on the matter only after asking them two short questions via Mentimeter.

I read out two vignettes to the students:

1. “Imagine you signed up via Sona for one of *name of researcher*’s studies on sexual arousal. Data of the study will be shared with other researchers. The dataset will be anonymized – it may contain some information such as your gender and age, but no personally identifiable information. Would you consent to participate in this study?”

2. “Imagine you signed up for the same study. However, now *name of researcher* will make the data publicly available on the internet. This means other researchers will have easier access to it, but also that anyone, such as your fellow students, companies, or the government, can see the data. Of course, the dataset will be anonymized – it may contain your gender, or age, but no personally identifiable information. Would you consent to participate in this study?”

After each vignette, they submitted their response via Mentimeter.com.

As I said, respondents were 60 first year psychology students from the international bachelor’s programme in psychology at the University of Groningen, most of them German. In my experience, this population generally guards its privacy a lot more than their Dutch counterparts – please keep this in mind.

The results? For scenario 1, 13.3% indicated they would *not* participate. This percentage suggests the sample may be a bit skewed – for most studies I run (EEG work on visual perception and social interaction) I have a non-consent rate of about 5 to at most 10%. For my TMS work this can go up to 33%. However, given the nature of the research I used as an example (I named a researcher they know, and her research involves the role of disgust in sexual arousal – stuff like touching the inside of a toilet bowl after watching a porn clip), 13.3% might not be totally unreasonable.

For scenario 2, the percentage of non-consenters was obviously higher. But not just a little bit – it went up to a whopping 52.4%. More than half of the students present indicated they would not want to participate in this study if the data were to be made publicly available, even though I clearly indicated all data would be anonymized.

The Mentimeter result can be found here. Please note that there are 61 votes for vignette 2; one student was late and voted only for vignette 2. Feel free to remove one ‘no’ vote from the poll – it’s now 51.6% non-consenters.
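For the record, here is the back-of-the-envelope arithmetic behind those figures, with vote counts inferred from the reported percentages (8 of 60 ‘no’ votes for vignette 1, 32 of 61 for vignette 2); the exact decimals may differ from the Mentimeter display by a tenth of a percentage point because of rounding.

```python
# Rough check of the poll figures; counts are inferred from the reported
# percentages and may be off by one vote from the actual Mentimeter tallies.
n1_total, n1_no = 60, 8    # vignette 1: restricted sharing
n2_total, n2_no = 61, 32   # vignette 2: public data, including the late voter

print(f"Vignette 1: {100 * n1_no / n1_total:.1f}% non-consent")
print(f"Vignette 2: {100 * n2_no / n2_total:.1f}% non-consent")

# Dropping the late student's vote, assuming it was a 'no':
print(f"Corrected:  {100 * (n2_no - 1) / (n2_total - 1):.1f}% non-consent")
```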

What does this tell us? Well, there are some obvious caveats. First of all, this was a very ad-hoc experiment in a rather select and possibly biased group of students (i.e., students who took the trouble of going to a lecture from 17:00 to 19:00 in a lecture hall 15 minutes from the city centre, knowing I would lecture about consciousness, my favourite topic). Second, the experimenter (me) was biased, and even though I explicitly mentioned I would only give my view after the experiment, we all know how experimenter bias affects the outcome of experiments. Maybe I did not defend the ‘open’ option furiously enough. Maybe I made a weird face during vignette 2. Finally, the vignettes I used were about experiments in which potentially sensitive data (sexual arousal) is collected.

Nevertheless, I was surprised by the result. I expected an increase in non-consent, but not to such an extent that more than half would decline. Either I am very good at unconsciously influencing people, or this sample actually has a problem with having their data made publicly accessible. Anyway, it confirmed my hunch that in the debate on open data we should involve the people it is really about: our participants.

I do not wish to use this data as a plea against open data. But I do think researchers should talk to participants. Have a student on your IRB if you use first year participant pools, or otherwise someone from your paid participant pool. Set up a questionnaire to find out what participants find acceptable with regard to data sharing. In the end, if you post a dataset online without restrictions, it’s *their* data and *their* privacy that are at stake.

As a side note, while going through some paperwork about consent forms, it turned out that data storage and sharing in my default consent form is phrased as follows:

“My data will be stored anonymously, and will only be used for scientific purposes, including publication in scientific journals.”

This formulation, which is prescribed by my IRB, allows for data sharing between researchers, but forbids unrestricted (open) publication. I was actually quite happy to rediscover this – it means I can adhere to the Agenda for Open Research (or rather, not adhere to it with good reason)… publication of the data would be a breach of consent in this case. If I were to put my data publicly online, I could not keep my promise that the data would only be used for scientific purposes.

But why not add something to the informed consent?

“The researcher will take care that my data is stored in an institutional repository and guarantees that she or he will share my data upon request with other researchers.”

Everybody happy.