Category Archives: cognitive neuroscience

Posts on cognitive neuroscience

Call for suggestions!

Hi all,

Shortly we will be running a pretty cool EEG experiment on perceptual decision making in romantically involved couples. Basically, a couple (let's call them Alice and Bob) will come into the lab, each be assigned their own computer, and then take turns in a perceptual decision making task (see Jolij and Meurs, 2011, for more details on the task itself). So, first Alice will get to see a trial and give a response; then Bob will see Alice's response (as a cue) and will do the trial, and to conclude, both will see each other's answers. During the experiment, we'll be measuring EEG (NB: only 8 channels). Before the experiment, both partners fill out a series of questionnaires on relationship duration, quality, etc.

In the spirit of open science, I thought it might be useful to ask you all what would make this dataset useful for you. I mean, we are going to test these participants anyway, in a rather non-typical setup (two EEG measurements simultaneously, meaning you can look at all kinds of interpersonal processes, EEG synchronization, etc.), so if there is anything I could add that does not take too much time and that would make this an interesting dataset for you, let me know. Think of maybe a block of eyes-closed EEG data during a breathing exercise to study interpersonal synchrony, a particular questionnaire, additional markers, whatever.

As long as it does not add too much time to the experimental protocol, or take up too much programming time, I am happy to include stuff. Please do get in touch if you want to know more: j.jolij@rug.nl.

The Feel Good Song Formula

Update 22/9/2016

I see the Feel Good Formula has been getting some attention again! Since last year, we have repeated this study in a Dutch sample, but now with a continuous rating (i.e. "How 'feel good' is this song on a scale from 1-100?"). That allows for a far better statistical model. Fortunately, the results do confirm the earlier work (i.e. Don't Stop Me Now is still firmly in the Top 3). The full Dutch list can be found here (of course, it is edited for radio-friendliness). For those of you interested, based on the Dutch data, the full regression formula is:

Rating = 60 + 0.00165 * (BPM – 120)^2 + (4.376 * Major) + 0.78 * nChords – (Major * nChords)

Where BPM is beats per minute (tempo), Major is 1 if the song is in a major key and 0 if the song is in a minor key, and nChords is the number of chords in the song (including modulations etc.). The formula basically says we generally like songs with a tempo that deviates from the average pop song tempo, that are in a major key, and that are a bit more complex than three-chord songs, UNLESS the song is in a major key (the Major × nChords interaction offsets the chord bonus for major-key songs).
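
For the spreadsheet-inclined, here is a minimal Python sketch of the formula above. The function name is mine, and I am assuming the tempo term is a squared deviation from a 120 BPM reference, as described in the text; this is an illustration, not the original analysis script.

```python
def feel_good_rating(bpm: float, major: bool, n_chords: int) -> float:
    """Predicted 'feel good' rating for a song, per the regression formula above.

    Assumes the tempo term is a squared deviation from the average pop tempo
    of ~120 BPM; coefficients are taken from the post.
    """
    major_flag = 1 if major else 0
    return (60
            + 0.00165 * (bpm - 120) ** 2    # bonus for deviating from ~120 BPM
            + 4.376 * major_flag            # bonus for a major key
            + 0.78 * n_chords               # bonus for harmonic complexity...
            - major_flag * n_chords)        # ...offset for major-key songs

# Example: a fast, major-key song with six chords
print(feel_good_rating(bpm=156, major=True, n_chords=6))
```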

If you're in the UK, maybe you have seen or heard something about the ultimate feel good song formula uncovered by a real scientist with a somewhat unpronounceable name from a university with an equally unpronounceable name. Well, that scientist was yours truly! I got quite a few questions about this feel good formula and how I 'uncovered' it, so here's a short blog post on it!

The research was commissioned by a British electronics brand called Alba, who did a large customer survey in the UK and the Republic of Ireland, asking respondents for their musical preference, where they got their musical taste from, and, most importantly, their favourite songs to improve their moods. Probably because of my background in using music as a mood manipulation (among others in Jolij & Meurs, 2011), my name popped up when they were looking for an academic to help them analyze this enormous dataset. Basically, they asked me whether I could find a general pattern in the songs that respondents reported as 'feel good songs', and whether they could use this pattern to come up with a 'formula'. I found this an interesting challenge, so I said yes. One week later I received the data, and I could get to work.

A 'feel good song' is rather tricky to define. Music appreciation is highly personal and strongly depends on social context and personal associations. In that respect, the idea of a 'feel good formula' is a bit odd – factoring in all these personal aspects is next to impossible, in particular if you want to come up with a quantitative feel good formula. Basically, what you need are song features that you can express in numbers.

Fortunately, music does have specific features that are known to play an important role in emotional reception of songs. In particular these are mode (major or minor) and tempo. So, the first thing I did was to identify all unique songs that respondents listed as ‘feel good’, and find the scores of these songs to determine key and tempo. Next, I looked at some additional variables, such as season in which the song was released, genre, lyrical theme, and overall emotionality of the lyrics.

So, now I had a big matrix of numbers. Now what? Originally, I planned to fit a linear mixed model to predict whether a song is a feel good song or not. A mixed model would be ideal – it would allow me to include a random factor for song, or even for respondent, and thus correct (somewhat) for individual differences such as social context, associations, and so on. Unfortunately, the list I got only listed feel-good songs. That's a problem for an LMM, because you cannot fit a model if your outcome variable (feel good-song or not a feel good-song) has zero variability. Same thing for a machine learning algorithm – you need exemplars of both categories you want to classify. And I had just one…

The perfect solution is of course to come up with a baseline of songs that were not classified as 'feel good songs'. Given I had only a very limited amount of time for this analysis, that was not feasible. I therefore decided to have a look at the means and in particular distributions of the key variables tempo and key to see if they would differ from the average pop song. The pattern was very clear – the average tempo of a 'feel good'-song was substantially higher than that of the average pop song. Where the average tempo of pop songs is around 118 BPM, the list of feel good songs had an average tempo of around 140 to 150 BPM. Next I had a look at key (major or minor). Again a very clear pattern: only two or three songs were in a minor key, the rest were all in a major key. Of course, the proof of the pudding is in the eating. I've created four short clips, two in a major key (C G Am F, the famous I-V-vi-IV progression), and two in a minor key (Am Am Em Bm), each at 118 BPM and 148 BPM, with a 4-to-the-floor beat under them. Listen to the differences, and decide which one would make the best feel good song.

Of course, a song is more than its score. I have also looked at lyrical themes. Predominantly, the feel good songs were about positive events (going to a beach, going to a party, doing something with your love, etc.) or did not make sense at all.

At the end of the story, I had to cook up a formula. My client had asked me to come up with a formula for PR-purposes: a formula can nicely explain the ‘main’ ingredients of a feel good song at a glance. The formula I came up with takes the number of positive lyrical elements in a song, and divides that by how much a song deviates from 150 BPM and from the major key. It’s not perfect at all – it’s mostly an illustration (all four clips I posted here would score 0 on my formula, simply because they have no lyrics, for example).

So, how to get from the 'formula' to the list of ultimate feel good songs? I had little to do with that, actually – we simply took the most often mentioned song per decade. Given that these modal feel good songs contribute to the averages, of course they fit the 'formula' reasonably well.

All in all, this was a fun assignment to do. Of course the main purpose for Alba was marketing, but that's ok. They are to be commended for doing this in such a data-driven fashion, instead of making something up. Is this hardcore science? No, it's data crunching – for me as a scientist, it's useful because I now have a list of songs I can use for mood manipulations. However, the truly interesting questions are still open. Is this model predictive, that is, can it be used by composers to write specific feel good songs? What is so special about the major key that it makes us feel good? Why do fast songs work so well? Stuff to work on in the future – and maybe the most exciting thing about this commission is the sheer number of responses I got from people interested in this work, and interested in finding an answer to the questions I mentioned earlier. I'm sure you'll be hearing more about this topic from us in the near future!

PS: as this research was a private commission, I am afraid there is not going to be a peer-reviewed publication in the short term, nor am I at liberty to release the data. However, the reception of this work has inspired me to put my music-related work on top of my to-do list. Watch this space for more music research soon!

Within-subject designs in social priming – my attempts

TL;DR summary: it’s perfectly possible to do a within-subject design for ‘social’ priming.

This is going to be an attempt at a more serious post, about some actual research I have done. Moreover, I really need to get back into writing mode after summer leave. Just starting cold turkey on the >7 manuscripts still waiting for me did not work out that well, but maybe a nice little blog will do the trick!

This weekend, I was engaged in a Twitter exchange with Rolf Zwaan, Sam Schwarzkopf, and Brett Buttliere about social priming (what else? Ah, psi maybe!)  A quick recap: social (or better: behavioural) priming refers to the modification of behaviour by environmental stimuli. For example, washing your hands (and thus ‘cleaning your conscience’) reduces severity of moral judgments. Reading words that have to do with elderly people (‘bingo’, ‘Florida’) makes you walk slower. Or, feeling happy makes you more likely to see happy faces.

The general idea behind such effects is that external stimuli trigger a cascade of semantic associations, resulting in a change in behaviour. 'Florida' triggers the concept 'old', the concept 'old' triggers the concept 'slow', and if you think about 'slow' this automatically makes you walk slower. Indeed, semantics are closely tied to information processing in the brain – a beautiful study from the lab of Jack Gallant shows that attention during viewing of natural scenes guides activation of semantically related concepts. However, whether the influence of external stimuli and semantic concepts is indeed as strong as some researchers would have us believe is questionable. Sam Schwarzkopf argued in a recent blog post that if we were so strongly guided by external stimuli, our behaviour would be extremely unstable. Given the recent string of failures to replicate high-profile social priming studies, many researchers have become very suspicious of the entire concept of 'social priming'.

What does not exactly help is that the average social priming study is severely underpowered. People like Daniel Lakens and Uli Schimmack have done a far better job of explaining what that means than I can, but basically it boils down to this: if you're interested in running a social priming study (example courtesy of Rolf Zwaan), you pick a nice proverb (e.g., 'sweet makes you sweeter'), and come up with an independent variable (your priming manipulation, e.g. half of your participants drink sugary water; the other half lemon juice) and a dependent variable (e.g., the amount of money a participant would give to charity after drinking the priming beverage). I've got no idea whether someone did something like this… oh wait… of course someone did.

Anyway, this is called a 'between subject' design. You test two groups of people on the same measure (amount of money donated to charity), but the groups are exposed to different primes. To detect a difference between your two groups, you need to test an adequate number of participants (or, your sample needs to have sufficient power). How many is adequate? Well, that depends on how large the effect size is. The effect size is the mean difference divided by the pooled standard deviation of your groups, and the smaller your effect size, the more participants you need to test in order to draw reliable conclusions. The problem with many social priming-like studies is that participants are only asked to produce the target behaviour once (they come into the lab, drink their beverage, fill out a few questionnaires, and that's it). This means that the measurements are inherently noisy. Maybe one of the participants in the sweet group was in a foul mood, or happened to be Oscar the Grouch. Maybe one of the participants in the sour group was Mother Teresa. Probably three participants fell asleep, and at least one will not have read the instructions at all.

To cut a long story short, if you don't test enough participants, you run a large risk of missing a true effect (a false negative), but you also risk finding a significant difference between your groups whilst there is no true effect present (a false positive). Unfortunately, many social priming studies have used far too few participants to draw valid conclusions. This latter point is significant (no pun intended). Given that journal editors until recently were primarily interested in 'significant results' (i.e., studies that report a significant difference between two groups), getting a significant result meant 'bingo'. A non-significant result… well, too bad. Maybe the sweet drink wasn't sweet enough to win over Oscar the Grouch! Add a sugar cube to the mix, and test your next batch of subjects. If you test batches of around 30 participants (i.e., 15 per group, which was not unusual in the literature), you can run such an experiment in half a day. Sooner or later (at least within two weeks if you test full time), there will be one study that gives you your sought-after p < .05. Boom, paper in!
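
To put some numbers on this, here is a quick power calculation sketched with statsmodels; the 'medium' effect size of d = 0.5 is purely illustrative, not an estimate from any particular priming study.

```python
from statsmodels.stats.power import TTestIndPower

# Illustrative numbers: a 'medium' effect of d = 0.5, alpha = .05, 80% power.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(f"Required participants per group: {n_per_group:.0f}")  # ~64 per group

# Conversely: what power does the typical 15-per-group priming study have?
power = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=15,
                             alternative='two-sided')
print(f"Power with n = 15 per group: {power:.2f}")  # roughly .26
```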

In cognitive psychology and neuroscience we tend to be a bit jealous of such 'easy' work. Our experiments are harder to pull off. Right before summer, one of my grad students finished her TMS experiment, for which she tested 12 participants. For 18 hours. Per participant. In the basement of a horrible 1960s building with poor air conditioning whilst the weather outside was beautiful. Yes, a position in my lab comes with free vitamin D capsules, for occupational health & safety reasons.

Moreover, the designs that we typically employ are within-subject designs. We subject our participants to different conditions and compare performance between conditions. Each participant is his/her own control. In particular for physiological measurements such as EEG this makes sense: the morphology, latency and topography of brain evoked potentials vary wildly from person to person, but are remarkably stable within a person. This means that I can eliminate a lot of noise in my sample by using a within-subjects design. As a matter of fact, the within-subjects design is pretty much the default in most EEG (and fMRI, and TMS, and NIRS, etc.) work. Of course we have to deal with order effects, learning effects, etc., but careful counterbalancing can counteract such effects to some extent.

Coming from this tradition, when I started running my own 'social priming' experiments, I naturally opted for a within-subjects design. My interest in social priming comes from my work on unconscious visual processing – very briefly, my idea about unconscious vision is that we only use it for fight-or-flight responses, but that we otherwise rely on conscious vision. The reason for this is that conscious vision is more accurate, because of the underlying cortical circuitry. Given that (according to the broad social priming hypothesis) our behaviour is largely guided by the environment, it is important to base our behaviour on what we consciously perceive (otherwise we'd be acting very oddly all the time). This led me to hypothesize that social priming only works if the primes are perceived consciously.

 

I tested this idea using a typical masked priming experiment: I presented a prime (in this case, eyes versus flowers, after this paper), and measured the participant’s response in a social interaction task after being exposed to the prime, in total 120 trials (2 primes (eyes/flowers) x 2 conditions (masked/not masked) x 30 trials per prime per condition). The ‘social interaction’ was quite simple: the participant got to briefly see a target stimulus (happy versus sad face), and had to guess the identity of the face, and bet money on whether the answer was correct. Critically, we told the participant (s)he was not betting her/his own money, but that of the participant in the lab next door. Based on the literature, we expected participants to be more conservative on ‘eye’-primed trials, because the eyes would remind them to behave more prosocially and not waste someone else’s money.

Needless to say, this horrible design led to nothing. Major problem: it is very doubtful whether my DV truly captured prosocial behaviour. After this attempt, we tried again in a closer replication of earlier eye-priming studies using a between-subjects design and a dictator task, but after wasting >300 participants we came to the conclusion many others had drawn before: eye priming does not work.

But this doesn’t mean within-subjects designs cannot work for priming studies. There’s no reason why you could not use a within-subjects design to test, for example, whether having a full bladder makes you act less impulsively. As a matter of fact, I’ve proposed such a study in a blog post from last year.

Another example: I am not sure if we could call it 'social priming', but a study we did a while ago used a within-subject design to test the hypothesis that happy music makes you better at detecting happy faces and vice versa. Actually, this study fits the bill of a typical 'social priming' study – activation of a high-level concept (happy music) has an effect on behaviour (detecting real and imaginary faces) via a very speculative route. It's a 'sexy' topic and a finding anyone can relate to. It may not surprise you we got a lot of media attention for this one…

Because of the within-subjects design we got very robust effects. More importantly, though, we have replicated this experiment two times now, and I am aware of others replicating this result. As a matter of fact, we were hardly the first to show these effects… music-induced mood effects on face perception had been reported as early as the 1990s (and we nicely cite those papers). The reason I am quite confident in the effect of mood on perception is that in our latest replication, we also measured EEG, and indeed find an effect of mood congruence on visual evoked potentials. Now, I am not saying that if you cannot find a neural correlate of an effect, it does not exist, but if you do find a reliable one, it’s pretty convincing that the effect *does* exist.

What would be very interesting for the social priming field is to come up with designs that show robust effects in a within-subjects setting, and ideally, effects that show up on physiological measures. And to be frank, it's not that difficult. Let's suppose that elderly priming is true. If concepts related to old people indeed make you behave like grandpa, we should not just see this in walking speed, but also in cognitive speed. Enter the EEG amplifier! Evoked potentials can be used to nicely assess the speed of cognitive processing – in a stimulus recognition task, for example, the latency of the P3 correlates with reaction time. If 'old' makes you slower, we'd expect longer P3 latencies for trials preceded by 'old' or a related word than for trials preceded by 'young'. It is a fairly easy experiment to set up, and it can be run in a week.
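
For the EEG-minded, a rough sketch of what such an analysis could look like in MNE-Python. The file name, event labels ('old'/'young') and the Pz channel are made up for illustration; this is not an actual analysis script from our lab.

```python
import numpy as np
import mne

# Hypothetical epoched data; event names and channel are assumptions.
epochs = mne.read_epochs("elderly_priming-epo.fif")

def p3_peak_latency(evoked, ch="Pz", tmin=0.25, tmax=0.60):
    """Latency (s) of the largest positivity in the P3 window at one channel."""
    cropped = evoked.copy().crop(tmin, tmax)
    ch_idx = cropped.ch_names.index(ch)
    return cropped.times[np.argmax(cropped.data[ch_idx])]

lat_old = p3_peak_latency(epochs["old"].average())
lat_young = p3_peak_latency(epochs["young"].average())
print(f"P3 latency after 'old' primes: {lat_old * 1000:.0f} ms, "
      f"after 'young' primes: {lat_young * 1000:.0f} ms")
```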

Or even better – if, as the broad social priming hypothesis postulates, social priming works by means of semantic association, we should be able to find evidence of semantic relations between concepts. Again something that is testable, for example in a simple associative priming task in which you measure N400 amplitudes (an index for semantic relatedness). As a matter of fact, we have already run such experiments, in the context of Erik Schoppen‘s PhD project, with some success – we were able to discriminate Apple from Android enthusiasts using a very simple associative priming test, for example.
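
And a toy sketch of the group-classification idea. The N400 'data' below are simulated, so this only illustrates the analysis logic (one amplitude-difference feature per participant, cross-validated classification), not our actual Apple/Android results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated per-participant feature: difference in mean amplitude (µV) in the
# 300-500 ms N400 window between brand-congruent and brand-incongruent primes.
# In a real analysis these values would come from the epoched EEG.
rng = np.random.default_rng(0)
apple = rng.normal(loc=1.0, scale=1.5, size=(20, 1))     # smaller N400 difference
android = rng.normal(loc=-1.0, scale=1.5, size=(20, 1))  # larger N400 difference
X = np.vstack([apple, android])
y = np.repeat([0, 1], 20)                                # 0 = Apple, 1 = Android

scores = cross_val_score(LogisticRegression(), X, y, cv=5)
print(f"Cross-validated group classification accuracy: {scores.mean():.2f}")
```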

All in all, my position in the entire social priming debate has not changed that much. I do believe that environmental stimuli can influence behaviour to quite some extent, but I am very skeptical of many of the effects reported in the literature, not least because of the very speculative high-level semantic association mechanisms that are supposed to be involved. In order to lend more credibility to the claims of 'social priming', the (often implicit) hypotheses about the involved mechanisms have to be tested. I think we (cognitive/social neuroscientists) are in an excellent position to help flesh out paradigms and designs that are more informative than the typical between-subject designs in this field. At least I think that working together with our colleagues in social psychology in this way is more fruitful than trying to 'educate' social priming researchers about how 'wrong' they have been, doing direct replications (however useful) of seminal studies, and basking in Schadenfreude when yet another replication attempt fails, or a meta-analysis shows how flimsy an effect is. We know that stuff already. No need to piss each other off IMO (I am referring to a rather escalated ISCON discussion of last week here).

Let’s do some cool stuff and learn something new about how the mind works. Together. Offer made last year still stands.

Why a meta-analysis of 90 studies does not tell that much about psi, or why academic papers should not be reduced to their data

Social psychologist-turned-statistics-and-publication-ethics crusader Daniel Lakens has recently published his review of a meta-analysis of 90 studies by Bem and colleagues that allegedly shows that there is strong evidence for precognition. Lakens rips apart the meta-analysis in his review, in particular because of the poor control for publication bias. According to Lakens, who recently converted to PET-PEESE as the best way to control for publication bias, there is a huge publication bias in the literature on psi, and if one, contrary to the meta-analysis’ authors, properly controls for that, the actual effect size is not different from zero. Moreover, Lakens suggests in his post that doing experiments without a theoretical framework is like running around like a headless chicken – every now and then you bump into something, but it’s not as if you were actually aiming.

I cannot comment on Daniel's statistical points. I have not spent the last two years brushing up my stats, as Daniel so thoroughly has done, so I have to assume that he knows to some extent what he's talking about. However, it may be worthwhile noting that the notion of what an effect is, and how to determine its existence, has become somewhat fluid over the past five years. An important part of the debate we're presently having in psychology is no longer about interpretations of what we have observed, but increasingly about the question whether we have observed anything at all. Daniel's critical review of Bem et al.'s meta-analysis is an excellent example.

However, I do think Daniel's post shows something interesting about the role of theory and methods in meta-analyses as well, something that in my opinion stretches beyond the present topic. After reading Daniel's post, and going through some of the original studies included in the meta-analysis, it struck me that something might be going wrong here. And with 'here' I mean reducing experiments to datasets and effect sizes. We all know that in order to truly appreciate an experiment and its outcomes, it does not suffice to look at the results section, or to have access to the data. You also need to carefully study the methods section to verify that the author has actually carried out the experiment in such a way that what the author claims to have measured was actually measured. And this is where many studies go wrong. I will illustrate this with an (in)famous example: Maier et al.'s 2014 paper 'Feeling the Future Again'.

To give you some more background: Daniel claims that psi lacks a theoretical framework. This statement is incorrect. In fact, there are quite a few theoretical models that potentially explain psi effects. Most of these make use (or abuse) of concepts from (quantum) physics, and as a result many psychologists either do not understand the models, or do not bother to try to understand them, and simply draw the 'quantum waffling' card. Often this is the appropriate response, but it's nothing more than a heuristic.

Maier et al. (2014) did not start running experiments like headless chickens hoping to find a weird effect. In fact, they quite carefully crafted a hypothesis about what can be expected from precognitive effects. Precognition is problematic from a physics point of view, not because it’s impossible (it isn’t), but because it creates the possibility for grandfather paradoxes. In typical precognition/presentiment experiments, an observer shows an anomalous response to an event that will take place in the near future, let’s say a chandelier falling down from the ceiling. However, if the observer is aware of his precognitive response, (s)he can act in order to prevent the future event (fixing new screws to the chandelier). However, now said event will not occur anymore, so how can it affect the past? Similarly, you cannot win roulette using your precognitive powers – any attempt to use a signal from the future to intentionally alter your behaviour leads to time paradoxes.

In order to avoid this loophole, Maier et al. suggest that precognition may only work unconsciously; that is, if there are precognitive effects, they may only work in a probabilistic way, and only affect unconsciously initiated behaviour. Very superficially, this line of reasoning resembles Deutsch's closed timelike curves proposal for time-travel of quantum information, but that's beside the point here. The critical issue is that Maier et al. set up a series of experiments in which they manipulated consciousness of the stimuli and actions that were believed to induce or be influenced by precognitive signals.

And that is where things go wrong in their paper.

Maier et al. used stimuli from the IAPS to evoke emotional responses. Basically, the methodology is this: participants had to press two buttons, left and right. Immediately after, two images would appear on the screen, one of which would have negative emotional content. The images were masked in order to prevent them from entering conscious awareness. The idea is that participants would respond more slowly when pressing the button on the same side as where the negative image would appear (i.e., they would precognitively avoid the negative image). However, since this would be a strictly unconscious effect, it would avoid time paradoxes (although one could argue about that one).

What Maier et al. failed to do, though, is to properly check whether their masking manipulation worked. Properly masking stimuli is deceptively difficult, and reading through their method sections, I am actually very skeptical whether they could have been successful at all. The presentation time of the masked stimuli was 1 video frame, which would be necessary to properly mask the stimuli, but the presentation software used (E-Prime) is quite notorious for its timing errors, especially under Windows 7 or higher, with video cards low on VRAM. The authors, however, do not provide any details on what operating system or graphics board they used. To add insult to injury, they did not ask participants on a trial-by-trial basis whether the masked image was seen or not (and even that may not be the best way to check for awareness). Therefore, I have little faith the authors actually succeeded in successfully masking their emotional images in the lab. Their important, super-high-powered N=1221 study, which is often cited, was carried out online. It is very dubious whether masking was successful in this case at all.
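
For what it's worth, a minimal sketch of how a trial-by-trial visibility check could be analysed with signal detection theory. The trial counts below are invented, and this is one possible check, not what Maier et al. did.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity (d') for a prime-visibility check, with a standard
    log-linear correction to avoid infinite values at 0% or 100%."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts: 'seen' responses on prime-present vs prime-absent trials.
print(d_prime(hits=35, misses=25, false_alarms=30, correct_rejections=30))
# A d' close to zero would suggest the masking actually worked.
```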

If we follow the reasoning of Maier et al., conscious awareness of stimuli is important in getting precognitive effects (or not). Suppose that E-Prime’s timing messed up in 1 out of 4 trials, and the stimuli were visible – what does that mean for the results? Should these trials have been excluded? Can’t it be the case that such trials diluted the effect, so we end up with an underestimation? And, can’t it be that the inclusion of non-masked trials in the online experiment has affected the outcomes? Measuring ‘unconscious’ behaviour, as in blindsight-like behaviour, in normal individuals is extremely difficult and sensitive to general context – could this have played a role?

In sum, if you do not carefully check your critical manipulations you’re left with a high-powered study that may or may not tell us something about precognition. However, it matters when you include it in your meta-analysis – a study with such a high N will appear very informative because of its (potential) power, but if the methodology is not sound, it is not informative at all.

On a separate note, Maier et al.’s study is not the only one where consciousness is sloppily manipulated – the average ‘social priming’ or ‘unconscious thinking’ study is far worse – make sure you read Tom Stafford’s excellent commentary on this matter!

So, how is this relevant to Bem's meta-analysis? Quite simply put: which studies you put in matters. You cannot reduce an experiment to its data if you are not absolutely sure the experiment has been carried out properly. And in particular with sensitive techniques like visual masking, or manipulations of consciousness, having some expertise matters. To some extent, Neuroskeptic's Harry Potter Theory makes perfect sense – there are effects and manipulations which require specific expertise and technical knowledge to replicate (ironically, Neuroskeptic came up with HPT to state the opposite). In order to evaluate an experiment you not only need to have access to the data, but also to the methods used. Given that this information seems to be lacking, it is unclear what this meta-analysis actually tells us.

Now, one problem is that you will run into a whole series of 'No True Scotsman' arguments ("we should leave Maier's paper out of our discussions of psi, because they did not really measure psi"), but to some extent that is inevitable. The data of an experiment with clear methodological problems are worthless, even if it is preregistered, open, and high-powered. Open data is not necessarily good data, more data does not necessarily mean better data, and a replication of a bad experiment will not result in better data. The present focus in the 'replication debate' draws attention away from this – Tom Postmes referred to this as 'data fetishism' in a recent post, and he is right.

So how do we solve this problem? The answer is not just "more power". The answer is "better methods". And a better a priori theoretical justification of our experiments and analyses. What do we measure, how do we measure it, and why? Preferably, such a justification has to be peer-reviewed, and ideally a paper should be accepted on the basis of such a proposal rather than on the basis of the results. Hmm, this sounds somewhat familiar…

Hypnosis workshop materials

Here are the materials I used in the VIP Hypnosis Workshop of 4 December. Feel free to use the induction MP3s, but please do not listen to these tracks while driving, operating a nuclear power plant, performing brain surgery, or doing anything else that requires you to be awake. The Power of Boring is strong in these tracks. The background sound has been generated using http://www.naturesoundfor.me.

Slides: Hypnosis workshop

Induction with backing track:

Reversal with backing track:

Recording of my Quantum Mind lecture (12 Nov)

Last week, I gave a lecture for the VIP, our student association, on my not-at-all controversial work and ideas on consciousness, quantum physics, and psi. Thanks to Jeffrey Harris, who made an audio recording, you can now listen to my waffling whilst looking at the pdf with slides!

Slides: Quantum Mind

Audio:

Part 1: Philosophy of Consciousness and the Hard Problem

Part 2: Introduction to Quantum Mechanics, in particular the Measurement Problem

Part 3: The Quantum Mind?

Scientists build Rube Goldberg machine, sell it to press as Brain-to-Brain Interface

Have you seen this study? It made headlines all over the world – first direct brain-to-brain interface! Brain activity of one person is recorded, decoded and sent over the internet to a computer which controls a brain stimulator, which in turn stimulates another person’s brain! Sounds quite sci-fi, right?!

Well, I hate to spoil the fun, but apart from being a pretty cool demonstration of two well-established techniques in cognitive neuroscience, this study literally tells us nothing about brain function we did not already know, nor does it lead to any practical application in the (near) future. Besides – it's not even new. Andrea Stocco and Rajesh Rao did almost the exact same thing over a year ago. Both these 'brain-to-brain' interfaces are hardly spectacular, however, not even as a proof-of-concept. At best, they are the neuroscientist's equivalent of a Rube Goldberg machine. Fun art projects, not science.

So, what's going on? Brain-computer interfacing, or BCI, refers to decoding brain activity in real time and transforming this into a control signal. BCI has been around for a while, and there are several reliable ways to use brain activity to generate control signals. The P300 is an example – 'P300' refers to a positive peak in the electroencephalogram that occurs roughly 300 ms after an observer spots something that captures her/his attention. The P300 is widely used as a dependent measure in cognitive neuroscience and psychophysiology, and can be detected in single events with quite good accuracy. So, if you are monitoring a participant's brain responses to particular events, finding a P300 tells you that this event captured the participant's attention. You can use this, for example, to guess someone's PIN code: let a participant hold his PIN code in mind, and flash the numbers 0 – 9. The numbers that evoke the largest P300s are likely to be in his/her PIN code. How to get to the full PIN code, I leave up to your own imagination and evil genius…
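
A toy sketch of that logic: given single-channel epochs per digit (simulated below), pick the digit whose average response has the largest mean amplitude in the P300 window. Everything here – the data, the window, the function name – is invented for illustration.

```python
import numpy as np

def guess_attended_digit(epochs_per_digit, times, tmin=0.25, tmax=0.45):
    """Pick the digit whose averaged response has the largest mean amplitude
    in the P300 window.

    epochs_per_digit: dict mapping digit -> array (n_trials, n_times) from one
    parietal channel (e.g. Pz); purely illustrative.
    """
    window = (times >= tmin) & (times <= tmax)
    scores = {d: eps.mean(axis=0)[window].mean()
              for d, eps in epochs_per_digit.items()}
    return max(scores, key=scores.get)

# Fake data: digit 7 gets a larger positivity in the P300 window.
times = np.linspace(-0.1, 0.8, 226)
epochs = {d: np.random.randn(30, times.size) for d in range(10)}
epochs[7][:, (times > 0.25) & (times < 0.45)] += 3.0
print(guess_attended_digit(epochs, times))  # most likely prints 7
```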

However, there are more benevolent applications. P300s are widely used to drive spelling programmes for patients with different kinds of disabilities. However, the P300 is a passive measure: it's something your brain does in response to an external stimulus. Ideally, for BCI applications, you want a measure you as a user can have some control over. Motor imagery is increasingly used to generate control signals. If you think of moving your arm, this generates activity in a specific part of the motor cortex. Think of moving your foot, and you activate another part of your motor cortex. With EEG you can pick this up and discriminate between different imagined movements with high accuracy. My colleague Ritske de Jong has developed algorithms that pretty much work out of the box and are up to 99% accurate. Once you know what movement a participant is imagining, converting that to a control signal is trivial. Both Rao and Stocco, and Grau et al. use this type of BCI for their Rube Goldberg machines.
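
A rough sketch of such a motor imagery classifier, using MNE-Python's CSP implementation and a linear discriminant. Note that this is a generic textbook pipeline, not Ritske's algorithm, and the file name and event coding are assumptions.

```python
import mne
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical epochs with e.g. left-hand vs right-hand imagery events.
epochs = mne.read_epochs("motor_imagery-epo.fif")
X = epochs.get_data()          # (n_trials, n_channels, n_times)
y = epochs.events[:, -1]       # event codes as class labels

# Common spatial patterns to extract discriminative components,
# then a linear classifier on the resulting log-variance features.
clf = make_pipeline(CSP(n_components=4, log=True),
                    LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f}")
```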

But what about the brain stimulation part? In the brain-to-brain interfacing studies, researchers use transcranial magnetic stimulation (TMS) to induce brain activity. TMS is a very useful tool to study brain functioning because it allows us to interfere with brain processes. You cannot just disrupt brain processing, though – stimulating the motor cortex can lead to (rather jerky) movements, and stimulating the occipital cortex results in seeing a flash of light, a phosphene. In order to stimulate a given area, you simply position your coil over that area. However, that is as sophisticated as it gets. You can induce a jerky movement, or you can induce a very brief phosphene with TMS. You cannot influence someone's thoughts, or fine-tune someone's actions with TMS, apart from annoying your participants.

Both this 'first' demonstration of a brain-to-brain interface and the actual first demonstration of a brain-to-brain interface use TMS to stimulate the brain of the receiving participant. And it is exactly this that makes me characterize them as Rube Goldberg machines. The sophistication of the computer-to-brain interface is about that of hitting someone with a hammer. There is simply nothing practical that you can do with TMS, as opposed to implanted electrodes. Even worse, there is reason to believe that TMS has a theoretical limitation in terms of what kind of brain activity and brain areas can be stimulated. It is very unlikely that TMS or any other kind of non-invasive brain stimulation will ever have the level of sophistication needed to induce brain activity at the fine-grained level required to control thoughts and actions.

So, what do both brain-to-brain studies, that are supposedly ‘proofs-of-concept’, show? Well, they show that:

  • you can use BCI to generate a control signal. Great, we have known that for quite a while.
  • you can transmit this control signal via the internet. Well, that's obvious too – you wouldn't be reading this blog if we could not send signals via the internet.
  • you can use a control signal to trigger a TMS device. TMS devices can be triggered by a TTL pulse, and have been able to for as long as they have been around… so this is not really new either…

All in all, there is really nothing new or surprising in these studies. There are no fundamental issues that are resolved, or new technological insights that truly show a new way of getting information from one brain into another. The setups presented in these studies are really nothing more than Rube Goldberg machines – hilariously complicated setups to perform a task that could be solved much, much more simply.

But they do make cool demonstrations. It’s just not science.

Oh, and as for brain-to-brain interfacing… what about this?

 

On (social) priming

Hmm, never thought (micro)blogging would be such an interesting experience… it turns out it's an excellent way to be exposed to different views and opinions. Last week, I posted an unpublished manuscript and dataset in which we attempted to make people behave more prosocially after priming them with eyes, an effect which was originally published by Haley and Fessler in 2005, and conceptually replicated by Bateson, Nettle and Roberts in 2006.

I did not set out to directly replicate this effect, or to test its existence. Rather, I was interested in putting my theories on conscious versus unconscious perception to the test. In several papers, but most importantly Jolij and Lamme, 2005, we found that people can respond to unconsciously processed visual information ('blindsight'), but only do so when they are in 'guessing mode'. I proposed that this may be the result of 'repression' of unconscious information.

What does this have to do with priming? Well, here I explain why I think sometimes we do, and sometimes we do not find blindsight. The idea is that unconscious information processing is great, but may be inaccurate. Since our behavior is so easily influenced by all kinds of external stimuli (yes, I was a firm believer in ‘social’ priming!), you don’t want any inaccurate information influencing you, and therefore the cognitive system represses the inaccurate information it gets from the unconscious visual pathway (most of the time).

To test that idea, we came up with the experiment I posted last week. We took a 'social' priming effect, and manipulated prime visibility by masking the primes. Our prediction was that priming would not work for masked primes. And that is exactly what we found! Sadly, the priming effect was also absent for the visible primes. After running several studies, totalling almost 400 participants, I gave up. I simply could not find any reliable evidence for the effect I was looking for, not with tokens, not with study credits, not with money, not with questionnaires. Now, all this was in 2008-2010.

After an inspiring talk by Zoltan Dienes about half a year ago (see here) I went through my archives to see if there might be anything there I could analyse with his methods, and came across this dataset again. I ran a Bayes factor analysis, and found that the Bayes factors were informative, and yielded substantial evidence for the null hypothesis of no effect. In other words: in the presented dataset it's not just a null effect; it is actually 5.26 times more likely that there is no effect of eye primes in this context than that there is. Given that there apparently was information in the data, despite a fairly low N (although nicely in line with your average priming experiment), we decided to give it a try and publish it.
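
For those who want to try this on their own archived datasets: a minimal sketch of such a Bayes factor analysis using pingouin's default JZS Bayes factor. The data below are simulated, so the resulting Bayes factor will obviously not reproduce the 5.26 reported above, and this is not the exact analysis from the manuscript.

```python
import numpy as np
import pingouin as pg

# Simulated outcome scores for eye-primed vs flower-primed trials
# (per-participant condition means in a within-subject design).
rng = np.random.default_rng(1)
eyes = rng.normal(loc=5.0, scale=2.0, size=40)
flowers = rng.normal(loc=5.0, scale=2.0, size=40)

res = pg.ttest(eyes, flowers, paired=True)
bf10 = float(res["BF10"].iloc[0])                  # evidence for the alternative
print(f"BF10 = {bf10:.3f}, BF01 = {1 / bf10:.2f}") # BF01 > 3: substantial evidence for the null
```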

Admittedly, with today’s knowledge, I tried to capitalize somewhat on the debate on social priming, and it turns out one sentence in particular was found somewhat offensive by some:

The lack of a firm theoretical background, problems with statistical power, potentially flawed methodology [6, 7], the exposure of several high-profile studies as fraudulent [8, 9], but most importantly, repeated failures to directly replicate several effects [10-13] has led to strong skepticism towards the notion of social priming.

In all fairness, I do think this statement is accurate. There is skepticism towards ‘social’ priming, and the reason for that is that quite some direct replications have failed, and that QRPs and fraud have been uncovered in others. Most importantly, though, we have yet to see a solid explanation as to why we sometimes do, and sometimes do not get these effects.

But the problem is: what is ‘social priming’ in the first place?

On the ISCON Facebook page, Jeff Sherman once mentioned that all priming (both ‘cognitive’ and ‘social’) is priming, because it’s all about priming behavior. Norbert Schwarz jokingly defined ‘social priming’ as ‘priming cognitive psychologists cannot replicate’.

Now, both of these statements are obvious oversimplifications. One of the problems plaguing the ongoing debate on priming is the lack of a clear taxonomy of what comprises the different kinds of priming, and indeed, I myself have been guilty of not properly defining what I mean by 'social priming' in the manuscript I posted last week.

So, let me give you my 2c on the matter. I agree with Jeff Sherman for the most part. Priming is the modification of the processing of subsequent stimuli and of behavior by a given prime stimulus. Period. What distinguishes 'cognitive' from 'social' priming in my understanding (which, arguably, may be totally wrong) is mainly the length and complexity of the processing chain between prime and behavior. In what we call 'cognitive' priming, the chain is short. The archetypical 'social priming' study typically relies on a long chain of events between prime and behavior.

In most priming effects I typically employ in the lab, it's about visuomotor transformations. In a task in which participants have to respond to the direction of a target arrow, presenting another (prime) arrow pointing in the same direction beforehand makes responses to the target faster, even if the prime is masked. This can be quite easily explained in terms of decision thresholds, or biasing processing of visual input. Several studies have shown direct communication between visual and motor areas in such tasks, and modulation of baseline motor activity by primes – in other words, as long as there is some direct visuomotor transformation, we can actually map the effect of our primes on brain activity in real time. I can pinpoint and measure the different subprocesses (perceptual encoding in the visual cortex between 80 ms and 100 ms, plus a second stage around 200-300 ms; decision making in the parietal cortex, starting around 100 ms; and motor preparation in the motor areas, from 200 ms after stimulus onset, etc.) and study these independently. In other words: I know what's going on.
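
A toy simulation of the decision-threshold idea: a congruent prime pre-activates the correct response, so the accumulator starts closer to threshold and reaches it sooner. All parameters and time units below are arbitrary; this is an illustration of the mechanism, not a fitted model.

```python
import numpy as np

def simulate_rt(prime_congruent, n_trials=1000, threshold=1.0, drift=0.005,
                prime_bias=0.15, noise=0.02):
    """Toy accumulator: a congruent prime raises the starting level of the
    correct-response accumulator, so the threshold is reached sooner."""
    rng = np.random.default_rng(0)
    rts = []
    for _ in range(n_trials):
        x = prime_bias if prime_congruent else -prime_bias  # prime sets the start point
        t = 0
        while x < threshold:
            x += drift + rng.normal(0, noise)               # noisy evidence accumulation
            t += 1
        rts.append(t)
    return np.mean(rts)

print("congruent RT (steps):", simulate_rt(True))     # faster
print("incongruent RT (steps):", simulate_rt(False))  # slower
```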

However, when I am priming people with eyes to make them behave more prosocially, I do not. I can come up with a decent chain of events, though. From fMRI studies, we know that eyes are processed in dedicated areas of the visual system, and we know that prosocial behavior (in particular in ultimatum and dictator games) is mediated by the right dorsolateral prefrontal cortex from several fMRI and TMS studies. I can well imagine a modulation of the DLPFC by eye cues, only this has not been shown (yet). And maybe it will not be, because there is no such thing – I don’t know. At least, it can be tested.

For embodiment-type priming effects, things differ. There are some pretty well-understood effects: take, for example, the SNARC effect: if you have to respond to a number, you're faster responding with your left hand than your right hand when the number is small, but vice versa when the number is large. This is a quite robust effect, attributed to the automatic activation of a 'mental number line'. We typically order numbers from left to right: 1, 2, 3, 4, etc. In other words, the magnitude of a number has a direct relation to spatial cognition. Indeed, there is pretty good converging evidence from fMRI and TMS studies that a fronto-parietal spatial cognition network plays a critical role in number magnitude processing.

Interestingly, there is also evidence that these spatial cognition networks underlie the perception of social distances. This allows me to make a very specific prediction: if you prime someone with the concept 'close', and subsequently ask for a judgment of his or her social distance to someone, this should result in 'closer' judgments than when you prime someone with the concept 'far away'. I am aware that Williams and Bargh did this in 2008, and indeed found this effect, but unfortunately they used a pretty poor priming strategy, and not surprisingly Pashler et al. (2012) failed to replicate the effect. What's needed as a prime is ideally a distance judgment task in 3D (so, how far an object is from you) that really draws on spatial processing, rather than drawing two dots that are either close to each other or separated.

Now, the longer the association chain becomes, the more 'line noise' there may be, and the less credible effects become at first sight. Take for example the pee study by Tuk et al. (this one), which claims that a full bladder leads to increased impulse control, allegedly because having a full bladder requires the 'inhibitory' parts of the brain to prevent you from peeing, which at the same time inhibits making impulsive decisions. A bit far-fetched, but ok. I can see this working. Your brain gets somatosensory feedback from the sphincter muscle of your bladder, but you need to put in effort to hold in your wee, which allegedly activates the cognitive control circuits of your brain. This does not automatically imply that all behavior is subsequently inhibited, of course. Sadly, the authors have missed out on a great opportunity to test their explanation (their study 3 is hardly convincing on this point): why not do a task that measures response inhibition, in a 'full bladder' and 'empty bladder' condition, within-subject, of course?

It’s quite easy, really. You can do an anti-saccade task (in which participants have to suppress the urge to make an eye movement to a suddenly appearing target). If the authors are correct, one should expect better performance in the bladder-full than in the bladder-empty condition. N=30 to 40, with at least 200 – 300 trials per participant should do the trick. An experiment like this can be run in about a week.
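
As a quick sanity check of that estimate, here is a paired-design power calculation with statsmodels; the within-subject effect size of dz = 0.5 is purely illustrative.

```python
from statsmodels.stats.power import TTestPower

# Paired (within-subject) comparison of anti-saccade performance,
# full-bladder vs empty-bladder; dz = 0.5 is purely illustrative.
paired = TTestPower()
n = paired.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                       alternative='two-sided')
print(f"Participants needed for the within-subject design: {n:.0f}")
# ~33, nicely within the 30-40 range mentioned above.
```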

Now, where things get quite dubious for me as a cognition researcher is the authors' claim that priming participants with words that have to do with urination produces similar effects on behavioral inhibition. The chain of necessary events here is very long. First, it assumes that reading a word related to peeing (such as toilet, watering, etc.) activates a semantic network related to urination. Ok. There's evidence for that. Point granted. Second, that semantic activation somehow results in a greater awareness of an urge to pee. Maybe. If you draw attention to a bodily function, participants will be more aware of it. Third, this increased awareness leads to an actual increased urge to pee. Good, they actually tested that, and found an effect. Subsequently, this leads to increased inhibitory control (not tested), which leads to less impulsive behavior.

Notwithstanding my doubts, combining this type of priming with an anti-saccade task may be used to prove or disprove their hypothesis. Again, if seeing the word ‘urine’ activates inhibitory systems, we would expect an improvement on anti-saccade performance after priming participants with pee.

To cut a long story short, what I miss in a lot of priming research is a justification of all the individual assumptions that are made in order to explain the priming effect observed. If we can actually be more specific about these assumptions, and test them, we might actually get somewhere.

So, why not work together and figure out what's really going on? If anyone is interested in the distance or pee studies, let me know, or if you know of a priming manipulation which I could use instead of eye priming for the study that kicked this off, please get in touch!

 

Why no one will win the Randi or Chopra Challenges

Whoa! Deepak Chopra is offering 1 million dollars to anyone able to present a falsifiable theory of consciousness, in response to James Randi’s $1 Million Dollar challenge to show paranormal (psi) effects exist! Of course, Twitter and Facebook are going bonkers over this. And I have been going a little bit bonkers over all the responses, to be frank. Just to blow off some steam, here are my thoughts on Chopra’s challenge, and the responses to it.

First of all, many people responded to Chopra’s call with sarcasm and cynicism, and made fun of Chopra’s lack of understanding of science.

It struck me how many of these people lack any understanding of science themselves, but I guess that’s Twitter for you. I’d like to say to these people who ‘fucking love science’: proclaiming yourself an atheist or tweeting ‘WOO WOO’ to @jref does not make you a scientist any more than making a coherent sentence with the words “quantum”, “universal”, and “spirit”.

So, what's this all about? Years ago, James Randi, a professional stage magician and renowned skeptic, put out a 1 million dollar prize for any individual able to show true 'paranormal' ability. Anyone who could read the future, do telekinesis, or make money as a Ghostbuster would be paid one million dollars by Randi. To date the prize remains unclaimed.

Deepak Chopra, on the other hand, is an Indian MD who writes books on consciousness and quantum mysticism using the Deepak Chopra Quote Generator, and apparently makes enough money to throw a million dollars at anyone coming up with a falsifiable theory of consciousness.

Neither of these challenges makes sense.

Randi’s challenge does not make sense because it operates on a straw man argument: it makes a caricature of psi and then shoots at it. No, there are no such things as seeing in the future, telekinesis, or mind reading. No matter how sad it makes me to admit this, Professor X and Jean Grey DO NOT EXIST (come on, you all at least fantasized about being able to read minds and get the remote and/or your beer and pizza without having to leave your couch!) Period. Does not work, cannot work – not according to the laws of physics, not according to present theories on psi. What might exist, though, are weak, anomalous effects that if they exist, may only be detected in high-powered studies involving a large number of subjects set up in a very specific manner, that need to be pre-registered, and replicated, and replicated again before we can even start drawing conclusions about the existence of psi. So, no individual will ever be able to show paranormal ability, and thus claim the Randi prize. Safe bet, Mr. Randi.

Chopra’s challenge makes no sense because it is horribly ill-defined. Coming up with a falsifiable scientific theory of consciousness is not possible without properly defining ‘falsifiable’ and ‘consciousness’. What Chopra means to say is he will give a million dollars to anyone who can come up with a falsifiable materialist theory of conscious experience, that is, a theory of the subject of consciousness – the experience itself; not (necessarily) its contents. And that is an impossible challenge, because it is a contradiction in terms. You cannot come up with a falsifiable materialist theory of consciousness, and claim the Chopra prize. Safe bet, Mr. Chopra.

But how does mind relate to matter, then? Why can’t we have a falsifiable theory of consciousness?

I am not going to repeat Introduction to Philosophy of Mind here, but roughly we have four classes of mind/matter-theories:

  1. Only matter exists, mind is an illusion
  2. Mind exists, independent of matter
  3. Mind is dependent on matter (or vice versa)
  4. Only mind exists, matter is an illusion

Now, let’s be good scientists, and shoot at these propositions to falsify them, shall we? Classes 1 and 2 are fairly easy to shoot at, so I’ll use proper bullets 😉 Classes 3 and 4 are somewhat more challenging, though.

Let’s start with 1, which you may call orthodox materialism. It’s easy to debunk (with one caveat, though).

  • Cogito ergo sum. I have conscious experiences. Even if these experiences (including the feeling of being the subject of conscious experiences) are illusions, I am still experiencing these illusions. Therefore consciousness exists – even if all other apparently conscious beings in the universe were philosophical zombies (that is, beings that act rationally, but lack conscious experience).
  • If consciousness exists, there is ‘mind’. This rules out orthodox materialist monism (the notion that there is only matter, and that mind is an illusion).
  • Caveat: I can only falsify this for myself, because I cannot with certainty claim anyone else has conscious experiences. Vice versa, you cannot verify my conscious experiences, so you should not believe my claim, but base your evaluation on your own conscious experience (or lack thereof).

Number 2 is good old Cartesian substance dualism. Let’s shoot!

  • In order to move a body, the mind needs a way to operate it
  • Operating a body requires brain cells to fire
  • In order to make a brain cell fire, energy is required – the mind needs to add energy to the brain in order to make this work
  • Physics (ie our understanding of matter) does not allow the creation of energy within a closed system. How can mind get energy into the brain?
  • The probabilistic nature of quantum mechanics will not save you here, Church of the Quantum Spirit. Quantum mechanics describes physical reality at the finest-grained level, and contrary to classical mechanics, which is deterministic in nature, quantum mechanics is probabilistic. In other words: the classical equation x(t) = v * t gives the position of a moving object at time t with absolute certainty; the Schroedinger equation (or better, a transformation thereof) only gives the probability that a particle will be at a given position at a given time. A typical 'quantum woo' argument is that the probabilistic nature of QM potentially allows for a mechanism via which mind can influence matter. However, QM is probabilistic – the outcome of a quantum measurement is inherently unpredictable. That may sound very convenient if you want to believe in free will, but in fact it is a terrible property for a cognitive system, or for social beings like us. Our entire social network, and our own mental sanity, thrive on the mere fact that we are (in general) quite predictable in our actions and thoughts. Let's please not introduce fundamental randomness in there, I'd say…

Classes three and four are more difficult to shoot down. Since WordPress does not allow me to use mortar-grenade points, but only bullet points, I'll switch back to full text.

Number 3 is the class of what I call 'weak monism'. We accept that mind and matter exist. However, the one substance is dependent on the other (or: one substance can be reduced to the other). This is the category in which we will find mainstream theories of consciousness. Weak monist theories come in two flavours. Materialist (or physicalist) theories propose that mind is the result of physical processes, and can be described as such. The Orthodox Skeptics are adherents of these theories, as are most mainstream scientists. Idealist theories state that mind is supreme, and that matter is created by the mind. Chopra's Church of the Quantum Spirit is of this denomination.

Weak monism, though, suffers from the dreaded Hard Problem: how does a change in one substance result in changes in the other? This goes for both materialism and idealism. Materialists need to explain how a change in matter (brain cells) translates into consciousness, and why some physical processes (action potentials) result in consciousness in some circumstances, whereas the same physical processes do not in others. But if you’re from the Church of the Quantum Spirit, you have a hard problem, too: if matter is a result of mind, how come not all mental activity results in changes in matter?

According to many materialists, including Dan Dennett, the Hard Problem is not really a problem at all. Consciousness simply is the sum of all brain activity, period. In slightly more subtle wording: consciousness is believed to be an emergent phenomenon, resulting from the complexity of the neural networks of our brain. This is called supervenience – reality can be described at different levels, and higher levels of description (consciousness, mind) depend on features of lower levels of description (brain, neurons). Or, as Kalat has put it in Biological Psychology for generations of psychology students: you can look at the Mona Lisa as a painting of La Gioconda and talk about it in terms of her mysterious smile, or you can give a detailed description of the canvas and pigments used. Same thing, different levels of description. Similarly, mind is the same thing as brain activity, simply described in different terms. Obviously, we can easily swap around the words ‘mind’ and ‘matter’ to fix the Hard Problem for idealism.

Now, I hate to bring this news to the Orthodox Skeptics, but this is Woo in its purest form. You cannot call a theory that only says ‘if you make something complex enough, it becomes conscious’ a serious theory! How complex does a system have to be in order to become conscious? At what level of description does consciousness emerge? Does the physical system need to be a brain, or would any physical system do? In other words – calling consciousness an emergent property of brain activity and leaving it at that is hardly any more scientific than declaring universal quantum love and spirit (or insert your favourite Deepak Chopra quantumism here).

There are several problems with the emergence/supervenience theories of consciousness, but I personally think John Searle brought up the best argument against them. Let me paraphrase it in terms of the Chinese Room thought experiment: we lock up a man who only speaks English in a room. Via a slot in the door he is given sheets of paper with Chinese characters. Using a manual in the room (written in English), he is able to look up an appropriate response in Chinese characters. He writes the reply on another sheet of paper, which he returns via the slot. From the outside, it looks like the man knows Chinese! In reality, of course, he does not. Searle used this to argue that true artificial intelligence does not exist – for example, if you train a system to respond to a user in natural language, what you are doing is giving an artificial system a manual. The system does not understand language in the sense we understand language.
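For what it’s worth, here is a hypothetical, minimal sketch of the room as a lookup table (the phrases and the fallback reply are my own invention, not Searle’s): every incoming string of Chinese characters is matched against a ‘manual’ and a canned reply is returned, with no understanding anywhere in the loop.

```python
# A hypothetical, minimal sketch of the Chinese Room as a lookup table.
# The entries below are invented for illustration only.
MANUAL = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def chinese_room(message: str) -> str:
    # The man inside only pattern-matches against the manual;
    # he understands none of the symbols he is shuffling around.
    return MANUAL.get(message, "请再说一遍。")  # fallback: "Please say that again."

print(chinese_room("你好吗？"))  # from the outside, this looks like fluent Chinese
```

From the outside the replies look competent; inside there is only symbol shuffling, which is exactly Searle’s point.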

The Chinese Room can also serve as a thought experiment on consciousness. Take a system (a body), and pop a computational unit in there that can map inputs to outputs (let’s call this magic device a ‘brain’). The brain, or its parts, is not conscious in any sense – it simply maps inputs to outputs. However, the system as a whole, operating in the world, is conscious, or at least bears all the signs of it. This is pretty much in line with Alva Noë’s ideas of how consciousness depends on embodiment.

In his book “Intuition Pumps and Other Tools for Thinking”, Dan Dennett defuses Searle’s argument by stating that the thought experiment is flawed. It does not matter if the ‘guy inside’ understands Chinese or not – the system (that is, the room) does. Digging deeper for ‘understanding’ or ‘consciousness’ makes no sense. There is no ‘Hard Problem’ – conscious experience is just what a system is doing at a particular level of description.

Now, I would like to very explicitly state here that Dan Dennett is probably one of the greatest minds alive, and I am nowhere in his league. I am a great fan of his work, and I feel that it should be compulsory reading for any undergraduate in psychology. However, I think he is wrong here. The reason for that? He plays a trick on us in defusing the Chinese Room.

The trick is this: he smuggles in an external observer. The Chinese Room understands Chinese only if observed by and interacting with an external observer. The ‘understanding’ of Chinese by the room only exists in the mind of the observer! Otherwise, the actions of the Chinese Room are meaningless. Likewise, the brain-in-a-body-operating-in-the-world is only conscious if observed in an appropriate context. Following Searle, I do find this problematic. Consciousness is a first-person perspective. I know I am conscious, because I am both subject and object of my experiences. Who or what is then describing the activity of my brain-body in such a way that it enables my first-person consciousness? It cannot be me, because I am the result of this observation, and unless we allow paradoxical cause-effect relations (which I doubt any materialist would be very keen on), we are left with a very urgent question: in whose mind do I exist?

In sum, I see pretty big problems with materialist theories of consciousness. However, converting to idealism does not solve them. As argued earlier, idealism suffers from the Hard Problem as well, and the above analysis applies equally. The Hard Problem is deviously difficult to defuse if you accept that both mind and matter exist.

One possible solution is to give consciousness ‘fundamental’ status: consciousness is a fundamental property of the universe, like the universal forces. Hameroff and Penrose’s Orch OR model rests on this assumption, and Giulio Tononi’s highly fashionable and critically acclaimed IIT 3.0 model of consciousness takes as its ‘zeroth’ postulate that ‘consciousness exists’. In a recent online article, Christof Koch even explicitly explored panpsychism (the idea that everything is conscious) as a solution to the mind-body problem. However, this does not explain why consciousness exists. And given that physicists are not satisfied with merely stating that ‘gravity exists’, we as psychologists should not be satisfied with stating that ‘consciousness exists’.

Anyway, in a rather large nutshell, this is why the Chopra Challenge makes no sense. Apart from the fact that it is poorly defined, we are nowhere near an empirically verifiable (or falsifiable) theory of consciousness. All we have been doing since the dawn of brain scanning is looking for neural correlates of consciousness. That is a very useful enterprise, because it provides boundary conditions for consciousness, but it does not crack the Hard Problem at all. The Hard Problem is probably fundamentally unsolvable within a weak monist framework. In itself this does not prove Chopra right, of course.

On a separate note: what would advance our understanding is a potential falsification of the idea that mind can be reduced to matter. This is actually the reason I started doing psi research, apart from my lifelong wish of becoming a Ghostbuster when I finally grow up. If we can convincingly demonstrate that certain aspects of mental functioning cannot be reduced to physical processes, we would have a strong case to either revise our physical models or falsify materialism. Given the potentially huge impact of psi research, and the fact that the present corpus of data does not allow for a clear falsification of psi, I think it is a very worthwhile area of research. But that’s my 2c.

Oh, we have one class of theories left, don’t we? Absolute idealism or monist idealism states that only mind exists, and that matter is an illusion. Well, to quote Sherlock Holmes, “when you have eliminated the impossible, whatever remains, however improbable, must be the truth.” 😉

Woo woo…

 

PS: regarding my last point, I can recommend reading Schroedinger’s “What is Life?” It is a short book that you can read in a couple of hours, but it will stay with you for a lifetime. Yes, I know I stole that line from the reviews of the book, but it’s very true.