
Working hours

Today’s short rant is not about science itself, but rather about the process of science. This tweet by Erika Salomon really resonated with me. It is about working hours in academia. Erika calls on academics to call it out when someone asks students (or colleagues, for that matter) to work unreasonable hours.

Consider this my pledge of support. Upfront, I apologize for some strong language in this post.

Academia is facing a serious problem with regard to its work ethos. Sure, it is great if you love your job so much that you are willing to put in more than 60, or even 80, hours per week. It is great if you love your job so much that you are willing to move abroad, dragging along your significant other and tolerating a string of temporary contracts before you can finally settle somewhere.

It is not so great that this work ethos has become the standard. I put in 60 to 80 hours per week for my PhD. It resulted in a pretty good thesis, with some nice, well-cited (and replicated ;-)) papers. I did move abroad, and I dragged along my significant other. It did land me a nice, permanent position, even with the prospect of promotion. I am a really happy little camper.

But I cannot (and do not) put in 60 to 80 hours per week any more. Over the past five years, we (as a happy family of four) have struggled with cancer, anxiety, and depression, and the one thing this has taught me is that no job is worth sacrificing your personal well-being for. Especially not an academic job. You, your significant other (if you have one in your life), and, if you have them, your children are more important than anything else. Those hours you spend writing your grant proposal in your attic office are hours you cannot spend playing with your kids, or enjoying a long walk with your wife (or husband). And although you may take them for granted, you shouldn’t. Inge (my wife) was 30 when she felt a lump in her breast. Six weeks later she was in surgery; eight weeks later she was getting her first chemo to treat an aggressive, triple-negative breast cancer. Now, almost exactly five years after her first visit to the GP, her cancer is in full remission. Our kids were 3 and 1, respectively, when all this happened. Only now are we slowly getting back on track. This kind of sh*t really has a tendency to mess up your life and shift your priorities, I can tell you.

So, even though I am only 36 (37 next week), and am considered a ‘young’/‘early career’ scientist who should be putting in a lot of time and work to secure those ‘prestigious’ personal grants, like the Dutch ‘Vidi’ or even a ‘Vici’, and to publish in the ‘top journals’ to advance my career – really, f*ck that sh*t.

I am not wasting my time anymore on stuff that is only for ‘helping my career’. I am doing this job because I am fascinated by what I study (yes, psi, amongst other things. Got a problem with that?), and because I love teaching. Not for the sake of becoming a hotshot professor anymore. That would be nice along the way, but really, it is not worth sacrificing my sanity and my precious time with my family. It’s the least I can do for them, and especially for my wife, who moved all across Europe with me, leaving behind family and friends, and a job she loved, all for the sake of my career. It is just too sad that it took a family crisis for me to realize that.

Sadly, though, when it comes to putting in working hours, academia is kind of a nuclear arms race. If I don’t put in the time, someone else will, get more papers out, and thus secure the grants and earn the tenure/promotions/etc.

That has to change. The intense competition in this field is not normal. We have become addicted to the prospect of publishing high-impact papers, for crying out loud! This is not very healthy, as indicated by the incidence of stress-related mental health problems many academics suffer from. I have seen too many people go down over the past years, burning themselves out, just to play the game. We have patted ourselves on the back for a while, believing that competition and a focus on output made our field better, but it has become painfully evident that this is not the case. Science is broken, but so are scientists.

The intense pressure that many young scientists feel permeates all aspects of life. Job security depends on the grants you get. The grants you’ll get depend on your papers. As a PhD student, you do not know whether you’ll get that postdoc. As a postdoc, your next international move is always just another two years away. And when you finally land that tenure-track position, it’s up or out… You’re financially dependent on your job performance, and you have to compete with the smartest people in the world, so as long as you do not have the security of a permanent position, you’re going to work your ass off.

However, I think that most scientists might find themselves in a similar position to mine: we were the clever kids in school. The clever students during our undergraduate studies. Our friends, families, and teachers saw great promise in us from an early age on. I think that few people in my direct surroundings could have imagined me growing up to be anything other than a university professor. For me, this unconsciously became part of my identity. Professor was not my job description, but who I was. You can imagine that this results in some messed-up perceptions of a healthy work-life balance. Moreover, academic rejections, disputes, and failures feel like – no, are – personal failures: if you do not get that grant, it means someone else is better than you. Did someone fail to replicate your paper? That feels like a personal attack! I am sure I am not the only one for whom this is true. Only over the past two to three years have I learnt to let go, and to accept that I am more than an academic – that ‘academic’ is just my job description. It is not who I am. It was (and is, and will be) quite a struggle, but for the better.

The combination of economic and personal insecurity makes academics very vulnerable to work-related stress. We work harder than we should, and we will not easily speak up, because we have created an awful system for ourselves that feeds on our own vulnerabilities. It has to stop.

I cannot fix this, nor do I want to give my students the impression that things will change in the very short term. If we want to change the system, and stop it from exploiting our vulnerabilities, change has to occur on several levels. Policy makers need to stop thinking science can be evaluated by purely quantitative measures. Granting agencies need to stop funding people and start funding ideas. Universities need to provide job security earlier on, and to evaluate how an academic functions within the context of a department or school, rather than looking at how well someone has succeeded in establishing his or her own little fiefdom without burning out, which is the current practice.

However, these changes are slow. They will occur, though. We see them happening already. Many great people are already speaking out against the ridiculous competition in science, and against the ridiculous ideas about quantifying research quality. But it will take time. Besides, we need to change as well. As long as we are addicted to scoring high-impact papers, things will not change.

So, if you’re working on your PhD and feeling worn out, here are some words from a young guy who sometimes feels quite a bit older than he is: research is awesome. But put it into perspective. In the grand scheme of things, that paper you are working on right now, late in the evening, is not so important. Maybe it gets read by a few hundred people. Maybe it gets some media attention, but people will forget about that in a few weeks anyway. And if you’re really unlucky, in a couple of years some smug replicators will try to replicate your study, fail, and your result will end up on the enormous heap of false positives in the literature. So, really, is the time you are spending on this paper right now worth not being with the people you love? Your paper will be there for you tomorrow morning at 9:00, when you get back to your desk. Trust me.

And is that course you need to teach bothering you because it gets in the way of your precious research? Think about it this way: your research has limited impact. Again, a couple of hundred people may read it, and, knowing scientists, most of them will find it crap anyway. But your teaching – I did a quick calculation, and since I started my academic career 15 years ago, I think I have taught over 3000 students in classes, supervised at least 100 bachelor’s theses, and over 50 master’s theses. Personally, I have enjoyed the many conversations I have had with my students, helping them realize their potential and seeing them grow and find their own way, far, far more than any glowing comment I got on any of my papers. Talk about impact…

And that academic job? Well… several of my PhD friends have not found an academic job either. One of them has become a house painter. The last I heard from him, he is happier now than he ever was in academia. Let that sink in for a while.

Anyway, enough rambling. Time to act my own age again, and not like a grandpa. Folks, it was my pleasure. I’m going to check in on the kids, give them a goodnight kiss, and then tuck in for the night. That paper I was going to write can wait until tomorrow 😉

On difficult surnames, reputation traps and a loose cable

Leonid Schneider asked me for my thoughts on his post on Frontiers in Paranormal Activities, in response to my sharing of Sam Schwarzkopf’s annoyance with people getting his last name wrong. I’ve got a difficult surname as well – it’s pronounced ‘yolay’, should that be of interest; ‘ij’ is a diphthong in Dutch, and Jolij is the Dutchified version of my French ancestors’ name Joly – hence the sympathy. I had actually read Leonid’s post before, when I saw it in relation to the Bial drug trial tragedy. At the time I did not respond, although I certainly did have a thought or two on the matter, but now that Leonid is asking, here we go.

What is the deal? In early 2014, a special issue of Frontiers in Psychology (or rather, a Research Topic) on ‘Non-ordinary Mental Expressions‘ was hosted by Etzel Cardeña and Enrico Facco. Some of the papers included in this topic are actually fairly ‘mainstream’ (effects of psychedelics on neural activity, for example), but other papers were somewhat more radical, including a paper on retro-priming and Cardeña’s editorial calling for an open view on the study of consciousness. These topics are, to say the least, controversial, and I do not think I have to elaborate on why that is so. The entire issue resurfaced this week when Etzel Cardeña published an ‘uncensored’ version of his editorial, and pointed out that research into the paranormal is typically ridiculed, researchers in the field are not taken seriously, and the ideas are basically dismissed without any consideration of data and/or theory. There is a reputation trap: once you get associated with ‘weird stuff’, people will not talk with you anymore. Huw Price wrote a very worthwhile piece on this.

As I said, Leonid Schneider wrote a long post about the special issue on ‘NOMEs’ in Frontiers, basically asking himself whether it is not one big practical joke on Editor-in-Chief Hauke van Heekeren. Because, you know, paranormal stuff?

The snark is strong in Leonid’s post. It’s quite clear that he does not take the study of psi seriously. As I have indicated earlier, it does annoy me that skeptics all too easily ridicule researchers who are engaged in this type of research. This sentiment is very clear in Schneider’s piece, and it is also the reason I did not comment earlier: I simply do not like the tone. What settled it for me, though, is the final addendum in which psi research is linked to the Bial drug trial tragedy. But more on that later.

I have argued before, in several posts, that I do believe psi can be a valid and relevant topic of study. Given that I am getting more and more involved in this debate, this may be a good occasion to give full disclosure on how and why I arrived at this position, and to show my true colours to friend and foe. Decide for yourself whether you want to group me with psi opponents, proponents, skeptics, or wafflers (though I am curious to hear from you with whom you would group me!).

First, there is a clear sentiment in Leonid’s post that psi research is not real science. I disagree. The sentiment seems to be based on the idea that psi cannot exist, and that therefore researchers studying this topic cannot be taken seriously, and are probably running psychic hotlines next to their day jobs, or are gullible fools who believe in fairies, Martians, and the Illuminati. More on that later.

With regard to what counts as science: I think science is not a belief system, but rather a structured method for increasing our knowledge about the world. As long as you stick to the rules of the game, there should be no taboo research areas. Of course, some research areas may make more or less sense than others, but as long as you stick to the scientific method, you’re doing science. In that respect, I do understand that Van Heekeren had no problem with a special issue on non-ordinary mental expressions in Frontiers. People do have weird experiences, after all. Regardless of what is actually going on, people do report out-of-body experiences, near-death experiences, and so on. These experiences are empirical fact (as in: people report having them). Therefore they are fair game for further study. I mean, if we could not study crazy experiences, psychiatrists and clinical psychologists would be out of a job, right? That said, let me be the first to admit that there is A LOT of god-awful (and often self-published) psi research and theory out there. Sturgeon’s Law (90% of everything is crap) applies to psi research more than to any other field I know of.

But shouldn’t Sturgeon’s Law for psi research read ‘100% of everything is crap’, because a) psi cannot exist, b) psi researchers are idiots, and c) there is no theoretical framework for psi? No. First, it’s an easy straw man to craft a story about how psi researchers study clairvoyants (or may be clairvoyant themselves), run around with EM meters to study haunted houses, and commune with the spirits to channel their research results. Admittedly, there are people doing that kind of stuff. And, no, I do not think we should take them very seriously.

However, as a science, experimental parapsychology has maintained higher methodological standards than many other areas of psychology. Preregistration, Bayesian statistics, publication of negative results – parapsychologists were doing all that in the 1980s already, way before some mainstream psychologists realized such methodological rigor is a must for any serious science. In that respect, if you think parapsychology is not a science, you should be fair and extend that opinion to all areas of psychology, and to quite a few other fields.

Anyway, there are quite a few people who have found odd effects in carefully set-up experiments, effects that call for further investigation. Contrary to popular belief, there are some models/hypotheses out there for these lab-induced phenomena that are not completely at odds with our present understanding of physics. I say that as someone who studied physics for a couple of years (although I am the first to admit that the fact that I ended up with a degree in experimental psychology is telling about my qualities as a physicist). Although these models rely on a rather specific interpretation of, in particular, the metaphysical status of consciousness, that is not a reason to dismiss them out of hand. I would like to remind the audience that the mainstream physicalist position on consciousness (i.e., that consciousness is a brain process) is itself a metaphysical assumption about the nature of consciousness, and a position that is even slowly eroding.

This is where things got interesting for me. My research focuses on consciousness, and in particular on the mind-body problem. Psi phenomena, should they exist, would shine an entirely new light on the metaphysical assumptions we make about consciousness. For that reason alone I think it’s worthwhile to at least have a look into the matter. My research interest in this area goes back to my early years as a psychology undergraduate, when professor Dick Bierman was my academic mentor. We talked a lot about this line of work, but I lost touch with the area when I started my PhD with Victor Lamme, with a very strict materialist agenda. Dick and I got back in touch a couple of years ago, when I moved back to the Netherlands.

As I briefly mentioned in an earlier post, things got really interesting when anomalous effects started popping up in my own data. For example, using a single-trial EEG classifier I was able to decode the identity of an upcoming stimulus in a visual detection task on the basis of the pre-stimulus baseline alone… Upon closer inspection of the data, it turned out there was a randomization problem. Ergo, I thought I had cracked the problem behind all these alleged precognitive effects (improper randomization), fixed it by using a combination of hardware RNGs (if you have seen those odd photos of green glassware from my lab – that’s my hardware RNG ;-)), and planned to present that at the conference. Except the precognitive effect was still there. I triple-checked everything – stimulus script, analysis protocol, filter settings, hardware filters in the EEG amplifier: nothing. Yet the effect is huge (d = 1.44).
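
For the curious: here is a minimal sketch of what ‘decoding from the baseline alone’ looks like in practice. This is not my actual analysis code – the array shapes, the baseline window, and the classifier are all assumptions for illustration (scikit-learn’s LDA with stratified cross-validation on random stand-in data, which should hover around chance).

```python
# Minimal sketch of single-trial 'baseline decoding'. All data and shapes
# below are hypothetical stand-ins, not the real EEG dataset.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Stand-in for preprocessed epochs: trials x channels x samples, plus
# binary labels coding which of two stimuli followed each baseline.
n_trials, n_channels, n_samples = 200, 32, 256
epochs = rng.standard_normal((n_trials, n_channels, n_samples))
labels = rng.integers(0, 2, size=n_trials)

# Use only the pre-stimulus baseline window (here: the first 100 samples)
# as features -- above-chance accuracy here is what makes the result odd.
baseline = epochs[:, :, :100].reshape(n_trials, -1)

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, baseline, labels, cv=cv)

print(f"Mean decoding accuracy: {scores.mean():.3f} (chance = 0.5)")
```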

So, what do you do? I decided to be honest and report what I found – I could decode the future – and submitted it as an abstract for the biennial winter meeting of the Dutch Psychonomic Society. Sure, I could have file-drawered this weird effect, but why would or should I do that? I had a hypothesis, tested it, and it failed. I was wrong: ‘precognitive’ effects are not caused by improper randomization. For those of you who are interested: we are going to replicate this study in a multi-lab setup. Drop me a line if you want to have a look at the data and code.

Anyway, to cut a long story short: over the past five years I have done several quite large replication experiments in controversial areas (social priming, psi). The bottom line is that all my attempts at replicating social priming effects failed, but the psi ones did not… So, hell yeah, I’m fascinated, and so would you be if you got three whopping psi replications in a row. As a matter of fact, Dick and I are now getting some people together to work on a large-scale, multi-site adversarial collaboration project to run a number of high-powered replication studies and figure out whether there really is such a thing as a replicable psi effect. The only way to do this is by maintaining the highest methodological standards. Adversarial collaboration, preregistration, high power, open data, open materials, and proper experimental design are essential; otherwise you might as well not do it.
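
As an aside on what ‘high-powered’ means in terms of participants, here is a sketch of a sample-size calculation using statsmodels. The target effect size, alpha, and power below are illustrative assumptions (deliberately far more conservative than the d = 1.44 above), not the preregistered parameters of our project; a multi-site design would of course spread this N over labs.

```python
# Sketch of a sample-size calculation for a high-powered replication study.
# The effect size, alpha, and power are illustrative assumptions only.
from statsmodels.stats.power import TTestPower

power_analysis = TTestPower()  # one-sample / paired-samples t-test
n_total = power_analysis.solve_power(
    effect_size=0.2,        # a conservative small effect, not d = 1.44
    alpha=0.05,
    power=0.95,
    alternative="two-sided",
)
print(f"Participants needed to detect d = 0.2 with 95% power: {n_total:.0f}")
```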

Now, to get back to Leonid Schneider’s post: I already mentioned I did not like the tone. I’ve read some of his other work, and fortunately this piece is not representative of his qualities as a science journalist. Because, as a piece of science journalism, it fails, in my opinion. Audi alteram partem is what is missing here. That is disappointing. Moreover, you don’t have to agree with someone to still show at least some respect (or at least pretend you do). I’m totally fine with people thinking psi research is nonsense. I am also fine with people thinking my research is nonsense (I call such people “reviewers”). I am not fine with making fun of people and bullying them, which is what happens in this post.

Anyway, making fun of psi researchers is one thing, and I guess most of them are used to it. However, I think Schneider crosses a line when he suggests a link between psi research and the Bial trial incident. Bial is a Portuguese pharmaceutical company that got into the news recently because of a clinical trial gone horribly wrong: healthy participants showed severe adverse reactions to a new drug, resulting in the death of one volunteer and serious brain damage in several others. Schneider flat-out suggests a relation between this tragedy and the fact that the Bial Foundation, a foundation sponsored by the founder of the company, funds psi research.

This suggestion is nothing short of slander. First of all, there is no relation between the activities of the company and those of the foundation, other than that the foundation annually gets a big bag of money from the guy who owns the company. Second, even if there had been a direct link between the activities of the foundation and the company: as I mentioned earlier, the research standards in experimental parapsychology are at least comparable to those in ‘normal’ psychology. Third, clinical trials are legally regulated and closely monitored by medical ethics committees, which assess the protocol and guard participant safety. Even if Bial had asked a psychic to develop the protocol for this trial using a crystal ball, or had a necromancer come up with the drug formula, the French authorities would/should have stopped it. All in all, the fact that Schneider uses this tragedy to make a point about parapsychological research is a really, really low blow.

In June, I attended the TSC 2015 conference, which also had quite a large number of talks on anomalous phenomena, and I had the pleasure of meeting the kind of people who are at the receiving end of Schneider’s snarky comments. They turned out to be fairly normal scientists, working at universities, about as knowledgeable about research methods as, or even more knowledgeable than, the average psychologist. Most did not believe in fairies, they did not hold seances during their talks, not a single one brought a crystal ball, and there were no nightly shamanic sessions involving druidic dancing around monoliths (or at least, I was not invited to such happenings). The main difference is that these people work on effects most scientists find very, very implausible.

I think that we should measure psi researchers (or any researchers, for that matter) not by their topic of study, but by the way they study their topic. Any researcher who holds to high methodological standards and is open to constructive criticism deserves to be taken seriously, regardless of what kind of effect she or he is working on. Period.

However. The fact that some skeptics cannot resist the urge to ridicule is no reason for the self-styled martyrdom some psi researchers engage in. Yes, psi researchers are being bullied, ridiculed, and even silenced. Schneider’s post is an excellent example. There is a reputation trap. That reputation trap, though, is often of one’s own making. Too often, psi researchers engage in wild, unfalsifiable speculation. Quantum teleportation, entanglement telepathy, that kind of stuff. Modesty should prevail: there is no convincing evidence that psi effects exist – otherwise we would not be having this discussion. Therefore, it is best to stay away from wild theoretical speculation, which often involves misrepresented physics, at least until there is some consensus between skeptics and proponents on whether anomalous effects are anything more than statistical noise. We’re not there. Yet.

Just as most psi researchers are not fairy-worshipping druids, most skeptics are not narrow-minded, sour critics. Most are actually very willing to discuss anomalous phenomena. But as data. Based on my personal interactions with them, I’d say both EJ Wagenmakers and Sam Schwarzkopf are perfectly willing to discuss experiments and datasets, but not if you come rushing in shouting LOOK OMG HERE I FOUND PSI IN MY DATA LOOKATIT YOU WERE WRONG QUANTUM FTW! No, you found an interesting anomaly that begs for further exploration/explanation, but first we need to make sure your pattern of results is not the product of something trivial, or just a random accident. Neither EJ nor Sam laughed in my face when I told them my data contained anomalies. Rather, the reply was “Interesting, what could be going on here?”
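
To make the ‘not just a random accident’ point concrete, here is a sketch of the kind of basic check one can run before getting excited: compare the observed decoding accuracy against a null distribution obtained by shuffling the labels. The feature matrix and labels below are hypothetical placeholders, not my dataset, and this is a sketch rather than my actual analysis pipeline.

```python
# Sketch of a label-permutation check: is the observed decoding accuracy
# higher than what shuffled labels produce by chance? Data are stand-ins.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, permutation_test_score

rng = np.random.default_rng(42)
features = rng.standard_normal((200, 50))  # placeholder for baseline features
labels = rng.integers(0, 2, size=200)      # placeholder for stimulus labels

score, perm_scores, p_value = permutation_test_score(
    LinearDiscriminantAnalysis(),
    features,
    labels,
    cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
    n_permutations=1000,
)
print(f"Observed accuracy: {score:.3f}, permutation p = {p_value:.3f}")
```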

The thing is – it’s all about framing. In my mind, the present situation is very much like the faster-than-light neutrino anomaly of 2011. Researchers found evidence of particles moving faster than light, which according to special relativity is impossible. Rather than the entire field going bonkers, skeptics at CERN calling their colleagues at OPERA spirit-channeling fairy lovers, and OPERA researchers starting an anti-oppression movement because they were not allowed to share their results, the general response was “Hey, that’s interesting, let’s figure out what caused this result!” And that is the only reasonable response – if particles can indeed travel faster than light, it means we need to completely re-examine our ideas about physics. Awesome! Work for generations of physicists to come!

Why can’t we do the same in psychology? There are people who seem to consistently find weird results. What’s going on? Clearly, we have not settled this matter: there is no conclusive evidence in favour of psi, but conversely, the psi proponents are clearly not convinced by the skeptics’ arguments and replication attempts either. Skeptics should accept that there are consistent anomalies being found by intelligent, reasonable people all over the world, anomalies that call for a deeper explanation than “it’s just statistical noise” or “it’s just publication bias” – I mean, weird results are popping up in my lab, FFS! Psi researchers should accept that their case for the existence of psi is not strong enough, and that only through adversarial collaboration can we figure out what’s going on.

Oh, and those neutrinos? Turned out to be a loose cable in the Italian setup…

(note to self: check lab cables after the weekend)