
Research is Hard: Procrastination is Easy

Before and after a quick trip to NYC, I’m teaching the research class in our Department of Counseling this year. This leads me to re-affirm a conclusion I reached long ago: Research is hard.

Research is hard for many reasons, not the least of which is that scientific language can look and feel opaque. If you don’t know the terminology, it’s easy to miss the point. Even worse, it’s easy to dismiss the point, just because the language feels different. I do that all the time. When I come upon terminology that I don’t recognize, one of my common responses is to be annoyed at the jargon and consequently dismiss the content. As my sister Peggy might have said, that’s like “throwing the baby out with the bathtub.”  

Teaching research to Master’s students who want to practice counseling and see research as a bothersome requirement is especially hard. It doesn’t help that my mastery of research design and statistics and qualitative methods is limited. Nevertheless, I’ve thrown myself into the teaching of research this semester; that’s a good thing, because it means I’m learning.

This week I shared a series of audio recordings of a woman bereaved by the suicide of her former husband. The content and affect in the recordings are incredible. Together, we all listened to the woman’s voice, intermittently cracking with pain and grief. We listened to each excerpt twice, pulling out meaning units and then building a theory around our observations and the content. More on the results from that in another blog.

During the class before, I got several volunteers, hypnotized them, and then used a single-case design to evaluate whether my hypnotic interventions improved or adversely affected their physical performance on a coin-tossing task. The results? Sort of and maybe. Before that, I gave them fake math quizzes (to evaluate math anxiety). I also used graphology and palmistry to conduct personality assessments and make behavioral and life predictions. Before class, I had written down the names of the four students (out of 24) whom I predicted would volunteer for the graphology and palmistry activities, sealed the list in an envelope, and got 3 of 4 correct. Am I psychic? Nope. But I do know the basic rule of behavioral prediction: The best predictor of future behavior is past behavior.

Today is Friday, which means I don’t have many appointments, which means I’m working on some long overdue research reports. Two different happiness projects are burning a hole in my metaphorical research pocket. The first is a write-up of the effects of a short, 2.5-hour happiness workshop on counseling students’ health and wellness. As it turns out, compared with the control group, students who completed the happiness workshop had immediately and significantly lower scores on the Center for Epidemiologic Studies Depression scale (p = .006). Even better, at the 6-month follow-up, up to 81% of the participants believed they were still experiencing benefits from the workshop on at least one outcome variable (i.e., mindfulness). The point of writing this up is to emphasize that even brief workshops on evidence-based happiness interventions can have lasting positive effects on graduate students in counseling.
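
For readers who like to peek under the statistical hood: a between-groups result like that p = .006 typically comes from something like an independent-samples t-test. Here’s a minimal sketch with fabricated CES-D scores; the numbers are illustrative only, not our data, and the actual analysis may have differed.

```python
# Illustrative sketch only: fabricated CES-D scores, not the actual study data.
from scipy import stats

# Hypothetical post-workshop CES-D scores (lower = fewer depressive symptoms)
workshop = [8, 11, 9, 14, 10, 7, 12, 9, 13, 8]
control = [16, 13, 18, 12, 17, 15, 19, 14, 16, 20]

result = stats.ttest_ind(workshop, control)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```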

Given that I’m on the cusp of writing up these workshop results, along with a second study of the outcomes of a semester-long happiness course, I’m stopping here so I can get back to work. Not surprisingly, as I mentioned in the beginning of this blog, research is hard; that means it’s much easier for me to write this blog than it is to force myself to do the work I need to do to get these studies published.

As my sister Peggy used to say, I need to stop procrastinating and “put my shoulder to the grindstone.”

The Efficacy of Antidepressant Medications with Youth: Part II

After posting (last Thursday) our 1996 article on the efficacy of antidepressant medications for treating depression in youth, several people have asked if I have updated information. Well, yes, but because I’m old, even my updated research review is old. However, IMHO, it’s still VERY informative.

In 2008, the editor of the Journal of Contemporary Psychotherapy invited Rita and me to publish an updated review on medication efficacy. Rita opted out, and so I recruited Duncan Campbell, a professor of psychology at the University of Montana, to join me.

Duncan and I discovered some parallels with, and some differences from, our 1996 article. The parallels included the tendency for researchers to do whatever they could to demonstrate medication efficacy. That’s not surprising, because much of the antidepressant medication research is funded by pharmaceutical companies. Another parallel was the tendency for researchers to overstate, misstate, or twist some of their conclusions in favor of antidepressants. Here’s the abstract:

Abstract

This article reviews existing research pertaining to antidepressant medications, psychotherapy, and their combined efficacy in the treatment of clinical depression in youth. Based on this review, we recommend that youth depression and its treatment can be readily understood from a social-psycho-bio model. We maintain that this model presents an alternative conceptualization to the dominant biopsychosocial model, which implies the primacy of biological contributors. Further, our review indicates that psychotherapy should be the frontline treatment for youth with depression and that little scientific evidence suggests that combined psychotherapy and medication treatment is more effective than psychotherapy alone. Due primarily to safety issues, selective serotonin reuptake inhibitors should be initiated only in conjunction with psychotherapy and/or supportive monitoring.

The main difference from our 1996 review was that in the late 1990s and early 2000s, several published studies reported SSRIs as more efficacious than placebo. Overall, we found 6 of 10 RCTs reporting efficacy. An excerpt follows:

Our PsychInfo and PubMed database searches and cross-referencing strategies identified 10 published RCTs of SSRI efficacy. In total, these studies compared 1,223 SSRI-treated patients to a similar number of placebo controls. Using the researchers’ own efficacy criteria, six studies returned significant results favoring SSRIs over placebo. These included 3 of 4 fluoxetine studies (Emslie et al. 1997, 2002; Simeon et al. 1990; The TADS Team 2004), 1 of 3 paroxetine studies (Berard et al. 2006; Emslie et al. 2006; Keller 2001), 1 of 1 sertraline study (Wagner et al. 2003), and 1 of 1 citalopram study (Wagner et al. 2004).

Despite these pharmaceutical-funded positive outcomes, medication-related side-effects were startling, and the methodological chicanery discouraging. Here’s an excerpt where we take a deep dive into the medication-related side effects and adverse events (N.B., the researchers should be lauded for their honest reporting of these numbers, but not for their “safe and effective” conclusions).

SSRI-related medication safety issues for young patients, in particular, deserve special scrutiny and articulation. For example, Emslie et al. (1997) published the first RCT to claim that fluoxetine is safe and efficacious for treating youth depression. Further inspection, however, uncovers not only methodological problems (such as the fact that psychiatrist ratings provided the sole outcome variable and the possibility that intent-to-treat analyses conferred an advantage for fluoxetine due to a 46% discontinuation rate in the placebo condition), but also, three (6.25%) fluoxetine patients developed manic symptoms, a finding that, when extrapolated, suggests the possibility of 6,250 mania conversions for every 100,000 treated youth.

Similarly, in the much-heralded Treatment of Adolescents with Depression Study (TADS), self-harming and suicidal adverse events occurred among 12% of fluoxetine treated youth and only 5% of Cognitive Behavioral Therapy (CBT) patients. Additionally, psychiatric adverse events were reported for 21% of fluoxetine patients and 1% of CBT patients (March et al. 2006; The TADS Team 2004, 2007). Keller et al. (2001), authors of the only positive paroxetine study, reported similar data regarding SSRI safety. In Keller et al.’s sample, 12% of paroxetine-treated adolescents experienced at least one adverse event, and 6% manifested increased suicidal ideation or behavior. Interestingly, in the TCA and placebo comparison groups, no participants evinced increased suicidality. Nonetheless, Keller et al. claimed paroxetine was safe and effective.
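
A quick aside on the extrapolation in the first paragraph of that excerpt: it’s simple proportion arithmetic. The sketch below reproduces it; note that the sample size of 48 is my inference from the 3-patients-equals-6.25% figure, not a number quoted in the excerpt.

```python
# Reproducing the mania-conversion extrapolation from the excerpt above.
mania_cases = 3
fluoxetine_n = 48  # inferred: 3 / 0.0625 = 48 (not quoted directly)

rate = mania_cases / fluoxetine_n
print(f"observed rate: {rate:.2%}")                        # 6.25%
print(f"per 100,000 treated youth: {rate * 100_000:,.0f}") # 6,250
```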

When it came to combination treatment, we found only two studies, one of which made a final recommendation that was nearly the opposite of its findings:

Other than TADS, only one other RCT has evaluated combination SSRI and psychotherapy treatment for youth with depression. Specifically, Melvin et al. (2006) directly compared sertraline, CBT, and their combination. They observed partial remission among 71% of CBT patients, 33% of sertraline patients, and 47% of patients receiving combined treatment. Consistent with previously reviewed research, sertraline patients evidenced significantly more adverse events and side effects. Surprisingly and in contradiction with their own data, Melvin et al. recommended CBT and sertraline with equal strength.

As I summarize the content from our article, I’m aware that you might conclude that I’m completely against antidepressant medication use. That’s not the case. For me, the take-home points include: (a) SSRI antidepressants appear to be effective for some young people with depression; and (b) at the same time, as a general treatment, the risks of side effects, adverse events, and minimal treatment effects make SSRIs a bad bet for uniformly positive outcomes, though that doesn’t mean there won’t be any positive outcomes. In the end, for my money—and for the safety of children and adolescents—I’d go with counseling/psychotherapy or exercise as primary treatments for depressive symptoms in youth, both of which have comparable outcomes to SSRIs, with much less risk.

And here’s a link to the whole article:

 

Antidepressant Medications for Treating Depression in Youth: A 25-Year Flashback

About 25 years ago, Rita and I published an article titled “Efficacy of antidepressant medication with depressed youth: what psychologists should know.” Although the article targeted psychologists and was published in the journal Professional Psychology, the content was relevant to all mental health professionals, as well as anyone who works closely with children.

Yesterday, when teaching my research class to a fantastic group of Master’s students in the Department of Counseling at UM, I had a moment of reminiscence. Not surprisingly, along with the reminiscence came a resurgence of emotion and passion. I was sharing about how it’s possible to find an area of interest that hooks so much passion that you might end up tracking down literally everything ever published on that topic (as long as the topic is small enough!).

The motivation behind my interest in the efficacy of antidepressants with youth came about because of a confluence of factors. First, I was working with youth every day, many of whom were prescribed antidepressant medications. Second, I was in a sort of professional limbo—working in full-time private practice—but wishing to be in academia. Third, seemingly out of nowhere, in 1994, Bob Deaton, a professor of social work at the University of Montana, asked Rita and me to do an all-day presentation for the Montana Chapter of the National Association of Social Workers. Bob’s offer was not to be refused, and I’ve been in Bob Deaton’s debt ever since. If you’re out there reading this, thanks again, Bob, for your confidence and the opportunity.

To prep, Rita and I split up the content. One of my tasks was to dive into all things related to antidepressant medications. Before embarking on the journey into the literature, I expected there would be modest evidence supporting the efficacy of antidepressants in treating depression in youth.

My expectations were completely wrong. Much to my shock, I discovered that not only was there not much “out there,” but the prevailing research was riddled with methodological problems and, bottom line, there had NEVER been a published study indicating that antidepressants were more effective than placebo in treating depression in youth. I was gobsmacked.

Just to give you a taste, here’s the abstract:

Pharmacologic treatments for mental or emotional disorders are becoming increasingly popular, especially in managed care environments. Consequently, psychologists must remain cognizant of medication efficacy concerning specific mental disorders. This article reviews all double-blind, placebo-controlled efficacy trials of tricyclic antidepressants (TCAs) with depressed youth that were published in 1985-1994. Also, all group-treatment studies of depressed youth using fluoxetine, a serotonin-specific reuptake inhibitor (SSRI), are summarized. Results indicate that neither TCAs nor SSRIs have demonstrated greater efficacy than placebo in alleviating depressive symptoms in children and adolescents, despite the use of research strategies designed to give antidepressants an advantage over placebo. The implications of these findings for research and practice are discussed.

Early in my research class this semester, an astute young woman asked about the “rule” she had heard that you shouldn’t cite research that’s more than 10 years old. It was a great question. I hope I responded rationally, but my apoplectic-ness may have shown in my complexion and words. In my view, we cannot and should not ignore past research. As Samuel Clemens once wrote, “History doesn’t repeat itself, it only rhymes.” If we don’t know the old stuff, we may miss out on the contemporary rhyming pattern. In our article, 25 years old now, we also discussed some medication research reporting shenanigans (although we used more professional language). Here’s an excerpt of our discussion about drop-out rates.

Dropout rates. Side effects and adverse events can significantly affect medication study outcomes by causing participants to discontinue medication treatment. For example, in the IMI [imipramine] study with children (Puig-Antich et al., 1987), 4 out of 20 (20%) of the medication group did not complete the study, whereas in the two DMI [desipramine] studies (Boulos et al., 1991; Kutcher et al., 1994), 6 out of 18 (33%) and 9 out of 30 (30%) medication participants dropped out because of side effects. For each of these studies, participants who dropped out of the treatment groups before completing the treatment protocol were eliminated from data analyses. The elimination of dropout participants from data analyses produced inappropriately inflated treatment-response rates. For example, although Puig-Antich et al. (1987) reported a treatment-response rate of 56% (9 of 16 participants), if all participants are included within the data analyses, the adjusted or intent-to-treat response rate is 45% (9/20). For the three studies that reported the number of medication protocol participants who dropped out of the study, the average reduction in response rate was 16.5%. Overall, intent-to-treat response rates ranged from less than 8% to 45% (see Table 2 for intent-to-treat response rates for all reviewed TCA studies).
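
To make the completer-versus-intent-to-treat distinction concrete, here’s the arithmetic behind the Puig-Antich et al. (1987) numbers quoted above, sketched in Python:

```python
# Completer-only vs. intent-to-treat (ITT) response rates,
# using the Puig-Antich et al. (1987) figures quoted above.
responders = 9
completers = 16   # participants who finished the protocol
randomized = 20   # everyone who started, including the 4 dropouts

completer_rate = responders / completers  # 0.5625 -> the reported 56%
itt_rate = responders / randomized        # 0.45   -> the adjusted 45%
print(f"completer-only: {completer_rate:.0%}, intent-to-treat: {itt_rate:.0%}")
```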

What’s the value, you might wonder, of looking back 25 years at the methodology and outcomes related to tricyclic antidepressant medication use? You may disagree, but I think the rhyming pattern within antidepressant medication research for youth (and adults) remains. If you’re interested in expanding your historical knowledge about this rhyming, I’ve linked the article here.

Research can be boring; it can be opaque; it can be riddled with stats and numbers. Nevertheless, for me, research remains exciting, both as a source of amazing knowledge and as something to read with a critical eye.

The Hottest New Placebos for PTSD

Let’s do a thought experiment.

What if I owned a company and paid all my employees to conduct an intervention study on a drug my company profits from? After completing the study, I pay a journal about ten thousand British pounds to publish the results. That’s not to say the study wouldn’t have been published anyway, but the payment allows for publication on “open access,” which is quicker and gets me immediate media buzz.

My drug intervention targets a longstanding human and societal problem—post-traumatic stress disorder (PTSD). Of course, everyone with a soul wants to help people who have been physically or sexually assaulted or exposed to horrendous natural or military-related trauma. In the study, I compare the efficacy of my drug (plus counseling) with an inactive placebo (plus counseling). The results show that my drug is significantly more effective than an inactive placebo. The study is published. I get great media attention, with two New York Times (NYT) articles, one of which dubs my drug one of the “hottest new therapeutics since Prozac.”

In real life, there’s hardly anything I love much more than a crackerjack scientific study. And, in real life, my thought experiment describes a process that’s typical for large pharmaceutical companies. My problem with these studies is that they use the cover of science to market a financial investment. Having financially motivated individuals conduct research, analyze the results, and report their implications spoils the science.

Over the past month or so, my thought experiment scenario has played out with psilocybin and MDMA (aka ecstasy) in the treatment of PTSD. The company—actually a non-profit—is the Multidisciplinary Association for Psychedelic Studies (MAPS). They funded an elaborate research project, titled “MDMA-assisted therapy for severe PTSD: A randomized, double-blind, placebo-controlled phase 3 study,” through private donations. That may sound innocent, but Andrew Jacobs of the NYT described MAPS as “a multimillion dollar research and advocacy empire that employs 130 neuroscientists, pharmacologists and regulatory specialists working to lay the groundwork for the coming psychedelics revolution.” Well, that’s not your average non-profit.

To be honest, I’m not terribly opposed to careful experimentation with psychedelics for treating PTSD. I suspect psychedelics will be no worse (and no better) than other pharmaceutical drugs used to treat PTSD. What I do oppose is dressing up marketing as science. Sadly, this pseudo-scientific approach has been used and perfected by pharmaceutical companies for decades. I’m familiar with promotional pieces impersonating science mostly from the literature on antidepressants for treating depression in youth. I can summarize the results of those studies simply: Mostly, antidepressants don’t work for treating depression in youth. Although some individual children and adolescents will experience benefits from antidepressants, separating the true, medication-based benefits from placebo responses is virtually impossible.

My best guess from reading medication studies for 30 years (and recent psychedelic research) is that the psychedelic drug results will end up about the same as antidepressants for youth. Why? Because placebo.

Placebos can, and usually do, produce powerful therapeutic responses. I’ll describe the details in a later blog post. For now, I just want to say that in the MDMA study, the researchers, despite reasonable efforts, were unable to keep study participants “blind” to whether they were taking MDMA vs. placebo. Unsurprisingly, 95.7% of patients in the MDMA group accurately guessed that they were in the MDMA group, and 84.1% of patients in the placebo group accurately guessed they were only receiving inactive placebos. Essentially, the patients knew what they were getting, and consequently, attributing a positive therapeutic response to MDMA (rather than an MDMA-induced placebo effect) is speculation . . . not science.
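
For a sense of how far those guess rates sit from what intact blinding would produce (roughly 50% correct guesses), here’s a quick binomial check. The per-arm group size of 46 is an assumption for illustration; the published study reports the actual ns.

```python
# How far is 95.7% correct guessing from the ~50% expected under intact blinding?
# Group size of 46 is assumed for illustration, not taken from the paper.
from scipy.stats import binomtest

n = 46
correct_guesses = round(0.957 * n)  # 44 of 46
result = binomtest(correct_guesses, n, p=0.5)
print(f"{correct_guesses}/{n} correct guesses, p = {result.pvalue:.1e}")
```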

In his NYT article (May 9, 2021), Jacobs wrote, “Psilocybin and MDMA are poised to be the hottest new therapeutics since Prozac.” Alternatively, he might have written, “Psilocybin and MDMA are damn good placebos.” Even further, he also could have written, “The best therapeutics for PTSD are and always will be exercise, culturally meaningful and socially-connected processes like sweat lodge therapy, being outdoors, group support, and counseling or psychotherapy with a trusted and competent practitioner.” Had he been interested in prevention, rather than treatment, he would have written, “The even better solution to PTSD involves investing in peace over war, preventing sexual assault, and addressing poverty.”

Unfortunately, my revision of what Jacobs wrote won’t make anyone much money . . . and so you won’t see it published anywhere now or ever—other than right here on this beautiful (and free) blog—which is why you should pass it on.

Boy Brains, Girl Brains, and Neurosexism


Sorry to say, I’ve been irritable the past couple days. If you don’t believe me, just ask my internet provider . . . or my editor . . . or ask me about my upcoming book deadline. There’s evidence everywhere for my irritability and impatience. You might even see evidence for it in this short excerpt from our forthcoming Theories of Counseling and Psychotherapy textbook. In fact, you should read this now, because I’m pretty sure it will get censored before appearing in our text.

Here you go.

You may be aware of popular books describing and delighting in the differences between female and male brains. Here’s a short list, along with my snarky comments:

  1. The essential difference: Male and female brains and the truth about autism (Baron-Cohen, 2003). Baron-Cohen is an autism researcher. His book allegedly “. . . proves that female-type brains are better at empathizing and communicating, while male brains are stronger at understanding and building systems—not just computers and machinery, but abstract systems such as politics and music.” Comment: It’s so good to finally understand why most of our politicians are smirky White males who look like Baron-Cohen (heads up, this statement is sarcasm).
  2. The female brain (Brizendine, 2006): Brizendine is a neuropsychiatrist. Her book is touted as bringing “. . . together the latest findings to show how the unique structure of the female brain determines how women think, what they value, how they communicate, and who they love.” Comment: In Delusions of gender (2011), Cordelia Fine reduces Brizendine’s arguments to rubble. Nuff said.
  3. Teaching the female brain: How girls learn math and science (James, 2009). Comment: It’s hard to know how this book could be more than two pages given that there’s extremely sparse scientific evidence to support what this book’s title implies.
  4. Female brain gone insane: An emergency guide for women who feel like they are falling apart (Lundin, 2009). No comment. I couldn’t bring myself to read beyond this book’s title.
  5. The male brain: A breakthrough understanding of how men and boys think (Brizendine, 2011). Comment: The main breakthrough finding is that when you sell a million+ copies of your first book, a sequel, with similar drama, but equally slim scientific support, is essential.
  6. Unleash the power of the female brain: Supercharging yours for better health, energy, mood, focus, and sex (Amen, 2014). Comment: Better health, energy, mood, focus, and sex? I want a female brain!

The dangers of overstating what’s known about the brain are significant, but nowhere are the dangers greater than when you’re talking about sex and gender. Over time, physical differences between females and males have nearly always been used to justify systemic mistreatment of females (and limitations for males, as well). Some examples:

Plato didn’t think women were created directly by God, and so they didn’t have souls.

Aristotle thought women were deficient in natural heat and therefore unable to cook their menstrual fluids into semen.

Gustave Le Bon (1879) concluded that women’s intellectual inferiority was so obvious that no one could contest it. He wrote: “All psychologists who have studied the intelligence of women, as well as poets and novelists, recognize today that they represent the most inferior forms of human evolution and that they are closer to children and savages than to an adult, civilized man” (see “Women’s Brains” by S. J. Gould). Le Bon purportedly based his ideas on Broca’s measurements of 6 female and 7 male skulls. Not surprisingly, Le Bon strongly opposed the whole idea of educating women.

More recently, over the past 30 years, I’ve seen and heard and read many different descriptions and explanations of female and male brain differences. Nearly always, it’s the same old story: Women are more “right brained” and intuitive, and less “left brained” and rational. Of course, the actual brain hemisphere research is sketchy, but the take-home messages are much like Baron-Cohen’s and Brizendine’s, which happen to be much like the philosophy of the Nazi Third Reich: girls and women are well-suited for working in the kitchen and the church, and especially good at caring for children, but women had best leave politics and the corporate world – where steady rationality is essential – to the men.

All this reminds me of the time my daughter, then a senior in high school, was shown a film in her science class depicting the female brain as structurally less capable of science and math. She came home in distress. We showed up at school the next day. What do you suppose happened next? We’ll leave that story to your imagination.

Genderizing the brain marginalizes and limits females, but it can also do the same for males. Take, for example, this quotation from “Dr.” Kevin Leman.

“Did you know that scientific studies prove why a woman tends to be more ‘relational’ than her male counterpart? A woman actually has more connecting fibers than a man does between the verbal and the emotional side of her brain. That means a woman’s feelings and thoughts zip along quickly, like they’re on an expressway, but a man’s tend to poke slowly as if he’s walking and dragging his feet on a dirt road.” (pp. 5-6).

Just FYI, even though my emotional quotient is just barely dragging along Leman’s dirt road, I can quickly intuit that what he wrote is sheer drivel. And it’s not just partial drivel, because . . . as Cordelia Fine might say, “He just made that shit up.”

Seriously? Am I making the claim that male and female brains are relatively equivalent in terms of empathic processing? Yes. I. Am.

Using the best and most rigorous laboratory empathy measure available, empathy researcher William Ickes found no differences between males and females in seven consecutive studies. However, based on a larger group of studies, he and his colleagues acknowledged that there may be small sex-based differences favoring women on empathy tasks. It should be noted that he and his research team (which includes females who may be more limited in their scientific skills than Baron-Cohen) offer at least two caveats. First, they believe that females being raised in social conditions that promote a communal orientation may account for some of the differences. Second, females are especially likely to be better at empathy when they’re primed, directly or indirectly, to recall that they (women) are better at emotional tasks than men. The converse is also true. When men are primed to think all men are empathic dullards, they tend to perform more like empathic dullards.

What all this boils down to is that females and males are generally quite similar in their empathic accuracy, not to mention their math and science and language abilities. It appears that the minor observable differences between females and males may be explained by various environmental factors. This means that if you want to stick with scientific evidence, you should be very cautious in drawing any conclusions about brain differences between females and males. To do otherwise is to create what has been eloquently termed a neuromyth.

In summary, the safest empirically-based conclusions on sex- and gender-based brain differences are:

  1. The differences appear to be minimal.
  2. When they exist, they may be largely caused by immediate environmental factors or longer-term educational opportunities.
  3. To avoid mistakes from the past, we should be cautious in attributing female and male behavioral or performance differences to their brains.
  4. If and when true neurological differences are discovered, it would be best if we viewed them using the Jungian concept of Gifts Differing (Myers, 1995).
  5. Consistent with Cordelia Fine’s excellent recommendation in Delusions of Gender, we shouldn’t make things up—even if it means we get to sell more books.

Feminist Theory and Interpersonal Neurobiology: A Natural Connection

 


This is a draft “Brain Box” for the feminist theory and therapy chapter from our forthcoming Theories of Counseling and Psychotherapy textbook.

**************

Feminist therapy is about connection.

So is neuroscience.

Neuroscience involves the study of synaptic interconnections, neural networks, brain structures and their electrochemical communications.

Feminist therapy involves egalitarian interconnection, empathy, mutual empathy, and empowerment of the oppressed, neglected, and marginalized.

As a highly sophisticated, interconnected entity, the human brain offers metaphorical support for feminist theory and therapy. In the brain, cells don’t operate in isolation. In feminist therapy in general, and relational-cultural therapy (RCT) in particular, isolation is unhealthy. Connection is healthy.

Healthy brains are connection-heavy. Whether humans are awake or asleep, brain cells are in constant communication; they problem-solve; they operate sensory and motor systems; they feed information back to and from the body, inhibiting, exciting, and forming a connected, communicating community.

Using modern brain research as a foundation, Jordan (the developer of RCT) described how empathic relationships can change clients:

“Empathy is not just a means to better understand the client; in mutually empathic exchanges, the isolation of the client is altered. The client feels less alone, more joined with the therapist. It is likely that in these moments of empathy and resonance, there is active brain resonance between therapist and client (Schore, 1994), which can alter the landscape and functioning of the brain. Thus, those areas of the brain that register isolation and exclusion fire less and those areas that indicate empathic responsiveness begin to activate.”

Jordan is talking about how therapist-client interactions change the brain. Many others have made the same point: “It is the power of being with others that shapes our brain” (Cozolino, 2006, p. 9). In her review of RCT theory and outcomes, Frey (2013) emphasized that “research on mirror neurons, the facial recognition system, lifelong neuroplasticity and neurogenesis, and the social functions of brain structures” (p. 181) supports feminist theory and feminist therapy process.

Neuroscience research supports feminist therapy in ways that are both real and metaphorical. There is unarguably great potential here. However, before we wax too positive, it’s important to heed a warning. Beginning with Plato (at least) and throughout history, the main use of physical (or brain) differences between the sexes has been to marginalize females and undercut their viability as equal partners in the human race (see Brain Box 10.2). With that caveat in mind, let’s respect feminism with some multitasking: Let’s celebrate the positive parallels between human neurology and feminist theory, while simultaneously keeping a watchful eye on how neuroscience is being used to limit or oppress girls and women.

That time when I conducted a scientific research study designed to test the effectiveness of using hypnosis to break down the space-time continuum and transport 18 people to the future so they could fill out perfect March Madness brackets.


You can probably tell by the title of this post that I’m pretty stoked about scientific research right now.

I typically don’t do much empirical research. That’s why it was a surprise to me and my colleagues that, about six weeks ago, I spontaneously developed a research idea, dropped nearly everything else I was doing, and had amazing fun conducting my first ever March Madness bracket research project.

My research experience included a roller coaster of surprises.

I somehow convinced a professor from the Health and Human Performance department at the University of Montana to collaborate with me on a ridiculous study on a ridiculously short timeline.

My university IRB approved our proposal. Seriously. I submitted a proposal that involved me hypnotizing volunteer participants to transport them into the future to make their March Madness bracket selections. Then they approved it in six days. How cool is that?

I managed to network my way onto ESPN radio (where we called the study ESP on ESPN; thanks Lauren and Arianna) and onto the Billings, MT CBS affiliate (thanks Dan).

And, this is the teaser: with only 36 participants, the results were significant at the p < .001 level.

Damn. Now you know. Scientific research is so cool.

Of course, there’s a back-story. While you’re waiting in anticipation to learn about those p < .001 results, you really need to hear this back-story.

Several years ago, while on a 90-minute car ride back from Trapper Creek Job Corps to Missoula, my counseling interns asked me if I could hypnotize someone and take them back in time so they could recall something that happened to them in a previous life. I thought the question was silly and the answer was simple.

“Absolutely yes,” I said. “Of course I could do that.”

Questions followed.

My answers included a ramble about not really believing in past lives and not really thinking that past-life hypnotic regression was ethical. But still, I said, “If someone is hypnotizable, then I’m sure I could get them into a trance and at least make them think they went back to a previous life and retrieved a few memories. No problem.”

Have you ever noticed that once you start to brag, it’s hard to stop? That’s what happened next, for several years.

Somewhat later, in another conversation, I started exaggerating bigly. I decided to extend my imaginary prowess into a fool-proof strategy for generating a perfect March Madness bracket. I said something about “brains being amazing” and how you can suddenly pay attention to the big toe on your right foot and, at nearly the same time, project yourself not only back into your 7-year-old self, but forward in time into the future. “That being the case,” I waxed, “it’s pretty obvious that I could hypnotize people, break down the space-time continuum, and take them to a future where all the March Madness basketball games had been played and therefore, they could just copy down the winners and create a perfect March Madness bracket.”

Through this process, I would turn a one-in-a-trillion possibility into absolute certainty.

I enjoyed bragging about my imaginary scenario for several years. That is, until this year, when I decided that if I was set on bragging bigly, I should also be willing to put my science where my mouth is (or something like that). It was time to test my hypnosis-space-time-continuum hypothesis using the scientific method.

We designed a pre-test, post-test experiment with random assignment to three conditions.

Condition 1: Education. Participants would receive about 20 minutes of education on statistics relevant to making March Madness bracket picks. My colleague, Dr. Charles Palmer, showed PowerPoint slides and provided insights about the statistical probabilities of 12s beating 5s and 9s beating 8s, and about “Blue Blood” conferences.

Condition 2: Progressive Muscle Relaxation. The plan was for Daniel Salois, one of my graduate students and an immensely good sport, to do 20 minutes of progressive muscle relaxation with this group.

Condition 3: Hypnosis. I would use a hypnotic induction, a deepening procedure, and then project participants into the future. Instead of having everyone fill out their brackets while in trance, I decided to use a post-hypnotic suggestion. As soon as they heard me clap twice, they would immediately recall the tournament game outcomes and then fill out their brackets perfectly.

Unfortunately, on short notice we only recruited 36 participants. To give ourselves a chance to obtain statistical significance, we dumped the progressive muscle relaxation condition, and just had the EDUCATION and HYPNOSIS conditions go head to head in a winner-take-all battle.

Both groups followed the same basic protocol. Upon arrival at the College of Education, participants were randomly assigned to one of two rooms (Charlie’s or mine). When they got to their room, they signed the informed consent and immediately filled out a bracket, along with a confidence rating. Then they received either the EDUCATION or HYPNOSIS training. After their respective trainings, they filled out a second bracket, along with another confidence rating.
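
In case you’re curious what that assignment step looks like in code, here’s a minimal sketch of simple randomization to the two conditions. The participant labels are hypothetical placeholders; in the actual study we randomized people as they arrived.

```python
# Minimal sketch of random assignment to two conditions.
# Participant labels are hypothetical placeholders, not our roster.
import random

participants = [f"P{i:02d}" for i in range(1, 37)]  # the 36 volunteers
random.shuffle(participants)

half = len(participants) // 2
groups = {"EDUCATION": participants[:half], "HYPNOSIS": participants[half:]}

for condition, members in groups.items():
    print(condition, "->", len(members), "participants")
```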

We hypothesized that both groups would report an increase in confidence, but that only the EDUCATION group (but not the HYPNOSIS group) would show a statistically significant improvement in bracket-picking accuracy. We based our hypotheses on the fact that although real education should help, there’s no evidence that anyone can use hypnosis to transport themselves to the future. We viewed the HYPNOSIS condition as essentially equivalent to raising false hopes without providing help that had any substance.

IMHO, the results were stunning.

We were dead-on about the EDUCATION group. Those participants significantly increased their confidence; they also improved their bracket scores (we used the online ESPN scoring system, in which participants can earn a maximum of 320 points per round; participants got 10 points for every correct pick in the first round, with per-pick points doubling in each subsequent round, concluding with 320 points if they correctly picked the University of North Carolina to win the tournament).
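
For concreteness, here’s a small sketch of that scoring rule: per-pick values double each round as the number of games halves, so every round is worth 320 points in total and a perfect bracket earns 1,920. The sample pick counts are made up.

```python
# Sketch of the ESPN-style scoring described above: per-pick values double
# each round (10, 20, 40, 80, 160, 320), so every round totals 320 points.
POINTS_PER_PICK = [10, 20, 40, 80, 160, 320]

def bracket_score(correct_picks_by_round):
    """correct_picks_by_round: correct picks in each of the six rounds."""
    return sum(n * pts for n, pts in zip(correct_picks_by_round, POINTS_PER_PICK))

# Hypothetical bracket: 22 first-round games right, down to a correct champion.
print(bracket_score([22, 9, 4, 2, 1, 1]))  # 1200 of a possible 1920
```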

Then there was the HYPNOSIS group.

HYPNOSIS participants experienced a small but nonsignificant increase in their confidence . . . but they totally tanked their predictions. We had a participant who picked Creighton to win it all. We had one bracket that had Virginia Tech vs. Oklahoma State in the final. We had another person who listed a final score in the championship game of 34-23. When I shared these results with our research class, I said, “The HYPNOSIS participants totally sucked. They did so bad that I think they couldn’t have done any worse if we had hit them all on the head with a 2 x 4, given them all concussions, and then had them fill out their brackets.”

So what happened? Why did the HYPNOSIS group perform so badly?

When told of the outcome, one student who had participated offered her explanation, “I believe it. I don’t know what happened, but after the hypnosis, I totally forgot about anything I knew, and just wrote down whatever team names popped into my head.”

My interpretation: Most of the people in the HYPNOSIS group completely abandoned rational and logical thought. They decided that whatever thoughts happened to come into their minds were true and right.

It’s probably too much of a stretch to link this to politics, but it’s hard not to speculate. It’s possible that candidates from both parties are able, from time to time, to use charisma and bold claims to get their supporters to let go of logic and rational thought, and instead, embrace a fantastical future.

Another faculty member in our department offered an alternative explanation. She recalled the old Yerkes-Dodson law. This “law” in psychology predicts that a moderate level of arousal (or stress) is linked to optimal performance; too much or too little arousal impairs performance. She theorized that perhaps the hypnosis participants had become too relaxed; they were so under-aroused that they couldn’t perform.

It seems clear that the hypnosis did something. But what? It wasn’t a helpful trip to the future. Some friends suggested that maybe they went to the wrong year. Others have mocked me for being a bragger who couldn’t really use hypnosis to break down the space-time continuum.

What do you think? Do you have any potential explanations you’d like to offer? I’d love to hear them. And, if you have any ideas of which scientific journal to submit our manuscript to, we’d love to hear that as well.

Do You Want to Participate in The March Madness Research Project?


If your answer is yes . . . here’s what you should do and what will happen:

  1. Email: ummadness2017@gmail.com and say “Yes, I’m in”
  2. You will be randomly assigned to one of three “March Madness Bracket Training” groups:
    • Relaxation and focusing
    • Hypnosis to view the games from the future
    • Educational information
  3. You will receive an email telling you where to meet. All groups will meet on campus at the University of Montana in a specific room in the Phyllis J. Washington College of Education building at 7pm on Tuesday evening, March 14.
  4. Show up at your designated room. When you arrive, you will fill out an informed consent form, a March Madness bracket, and complete a short questionnaire.
  5. Then you will participate in the training.
  6. After the training, you will complete another bracket.
  7. You will leave your completed packet and your brackets with the researcher, and they will be uploaded into the ESPN Tournament Challenge website [we will need your email address to upload your selections into the ESPN system].
  8. You will receive information at the “Training” on how to login and track your bracket. If you lose or misplace this information, you can request an additional copy via email: ummadness2017@gmail.com

To sign up for this research project, email:

UMmadness2017@gmail.com