I’ve spent the morning learning. At this point in my life, learning requires simultaneous regulation of my snarky irreverence. Although I intellectually know I don’t know everything, when I discover, as I do ALL. THE. TIME., that I don’t know something, I have to humble myself unto the world.
Okay. I know I’m being a little dramatic.
After pushing “submit” on our latest effort to publish Round 1 of our happiness class data, less than an hour later I received a message from the very efficient editor that our manuscript had been “Unsubmitted.” Argh! The good news is that the editor was just letting us know that we needed to follow the manuscript submission guidelines and include a “Structured Abstract.” Who knew?
The best news is I wrote a structured abstract and discovered that I like structured abstracts way more than I like traditional abstracts. So, that’s cool.
And, here it is!
Background: University counseling center services are inadequate to address current student mental health needs. Positive psychology courses may be scalable interventions that address student well-being and mental health.
Objective: The purpose of this study was to evaluate the effects of a multi-component positive psychology course on undergraduate student well-being, mental health, and physical health.
Method: We used a quantitative, quasi-experimental, pretest-posttest design. Participants in a multi-component positive psychology course (n = 38) were compared to a control condition (n = 41). All participants completed pre-post measures of well-being, physical health, and mental health.
Results: Positive psychology students reported significantly improved well-being and physical health on eight of 18 outcome measures. Although results on the depression scale were not statistically significant, a post-hoc analysis showed that positive psychology students who were severely depressed at pretest reported substantial depression symptom reduction at posttest, whereas severely depressed control group students showed no improvement.
Conclusion: Positive psychology courses may produce important salutary effects on student physical and mental health. Future research should include larger samples, random assignment, and greater diversity.
Teaching Implications: Psychology instructors should collaborate with student affairs to explore how positive psychology courses and interventions can facilitate student well-being, health, and mental health.
As many of you know, over the past year or so I’ve been frustrated in my efforts to publish a couple of journal articles. I know I’m not the only one who has experienced this, but this morning we got another rejection (the third for this manuscript) that triggered me in a way that, as the feminists might say, raised my consciousness.
Three colleagues and I are trying to publish the outcomes from a short online “happiness workshop” I did a couple years ago for counseling students. Mostly the results were nonsignificant, except for the depression scale we used, which showed our workshop participants were less depressed than a non-random control group. Also, based on open-ended responses, participants seemed to find the workshop experience helpful and relevant to them in their lives.
Problems with the methodology in this study are obvious. In this most recent rejection, one reviewer noted the lack of “generalizability” of our data. I totally agree. The study has a relatively small n, nonrandom group assignment, yada, yada, yada. We acknowledge all this in the manuscript. Having a reviewer point out to us what we have readily acknowledged is annoying, but accurate. In fact, this rejection was accompanied by the most informed and reasonable reviews we’ve gotten yet.
Nevertheless, I immediately sent out a response email to the editor . . . which, because I’m partially all about entertainment, I’m sharing below. As you’ll see, for this rejection, my concerns are less with the reviews, and more about WHAT IS BEING PUBLISHED IN SO-CALLED SCIENTIFIC JOURNALS. Although I don’t think it’s necessary, I’ve anonymized my email so as to not incriminate anyone.
Thanks for your timely processing of our manuscript.
Overall, I believe your reviewers did a nice job of reading the manuscript, noting problems, and providing feedback. Being very familiar with the journal submission and feedback process, I want to compliment you and your reviewers on your evaluation of our manuscript. Compared to the quality of feedback I’ve obtained from other journals, you and your team did well.
Now I’d like to apologize in advance for the rest of this email because it’s a critique not only of your journal, but of counseling research more generally.
Despite your professional review, I have concerns about the decision, and rather than sit on them, I’m going to share them.
Although the reviews were accurate, and, as Reviewer 1 noted, there are generalizability concerns (but aren’t there always), I looked at the most recent online articles published in [your journal], to get a feel for the journal’s standards for generalizability, among other issues. What I found was disturbing.
In the seven published 2023 articles from your most recent issue, none have data that are even close to generalizable, and yet all of the articles offer recommendations, as if there were generalizable data. In the [first] article there’s an n of 8; [the second article] has an n of 6 and uses a made-up questionnaire. I know these are qualitative studies, but, oh my, they don’t shy away from widely offering recommendations (is that not generalizing?), based on minimal data. Four of the articles in the most recent issue have no data; that’s okay, they’re interesting and may be useful. The only “empirical” study is a survey with n = 165, using a correlational analysis. But no information is provided on the survey response rate, and so any justification for generalization is absent. Overall, some of these articles are interesting, and written by people I know and like. But none of them have anything close to what might be considered “generalizability.”
What’s most concerning to me is that none of the published articles employ an experimental design. My impression is that “Counselor Education and Preparation” (not just the journal, but the whole profession) mostly avoids experimental or quasi-experimental designs, and privileges qualitative research, or correlational designs that, of course, are really just open inquiries about the relationships among 2 or more variables.
This is the third rejection of this manuscript from counseling journals that, to be frank, essentially have no scientific impact factor. Maybe the manuscript is unpublishable. I would be open to that possibility if I hadn’t read any of the published articles from [your journal and other journals]. My best guess (hypothesis) is that counseling journals have double standards; they allow generalizing statements from qualitative studies, but they hold experimental designs to inappropriately high standards. I say inappropriate here because all experimental designs are flawed in one way or another, and finding those flaws is easier than understanding them.
I know I’m biased, but my last problem with the rejection of this manuscript has to do with relevance. We tried to offer counseling students a short workshop intervention to help them cope with their COVID-related distress and distress in general–something that I think more counseling programs should do, and something that I think is innately relevant and potentially very meaningful to counseling students and practitioners.
Sorry again for this email and its length, but I hope some of what I’ve shared is food for thought for you in your role as journal editor.
Thanks again for the timely review and feedback. I do appreciate the professionalism.
If you’re still reading and following my incessant complaining, for your continued pleasure, now I’m pasting my email response to my coauthors, one of whom wrote us all this morning beginning with the word “Bummer.”
Yes! Another bummer.
For entertainment purposes, I kept you all on my email to the editor.
Although I’m clearly triggered, because I just read some articles in the [Journal], I now know more about self-care, because in their [most recent lead published article], the authors wrote:
“Most participants also offered some recommendations for self-care practices to process crisis counseling. One participant (R2) indicated, ‘I keep a journal with prayers, thoughts and feelings, complaints and poetry.’”
Now that I’ve done my complaining, I need to take time off to pray and write a poem or two, but then, yes . . . I will continue to send this out into the world in hopes of eventual validation.
Happy Friday to you all,
I hope you all caught my clever utilization of recommendations from the offending journal to cope with this latest rejection. The good news is, like most rejections, this one was clarifying and inspired me with even more snark energy than I usually have.
In 2017, I collaborated with Dr. Charles Palmer and Daniel Salois (now Dr. Daniel Salois) on a creative, one-of-a-kind research study evaluating and comparing the effectiveness of an educational intervention vs. a hypnotic induction transporting people to the future for improving the accuracy of March Madness NCAA Basketball Tournament bracket picks. The results were stunning (but I can’t share them right now because I want to recruit anyone and everyone in the Missoula area to participate in our planned replication of this amazing study). The study has been approved by the University of Montana IRB.

To participate, follow these instructions:

1. Email Marchmadnessresearch2023@gmail.com and say “Yes, I’m in!”
2. We will email you back a confirmation.
3. Upon arrival at the study location, you will be randomly assigned to one of two “March Madness Bracket Training” groups:
   a. Hypnosis to enhance your natural intuitive powers
   b. Educational information from a UM professor
4. All participants will meet in room 123 of the Phyllis J. Washington College of Education building at 7pm on Tuesday, 3/14/23. Enter on the East end of the building. From there, we’ll send you to a room for either the education or hypnosis intervention.
5. When you arrive in your room, you will fill out an informed consent form, fill out a March Madness bracket, and complete a short questionnaire.
6. Then you will participate in either the educational or hypnosis training.
7. After the training, you will complete another bracket and a short questionnaire.
8. You will leave your completed packet and your brackets with the researchers; they will be uploaded into the ESPN Tournament Challenge website using the “Team Name” you provide. If you bring a device, we will provide a password so you can upload your own selections into the ESPN system.
9. You will receive information at the “Training” on how to log in and track your bracket.
On or around April 15, we will post a summary of the research results at: https://johnsommersflanagan.com/

Once again, to sign up for this research project, email: Marchmadnessresearch2023@gmail.com

I’m posting this because we’re trying to recruit as many participants as possible. If you live near Missoula, please consider participating. If you know someone who might be interested, please share this with them. Thanks for reading and have a fabulous day.

John S-F
Consensus among my family and friends is that I’m weird. I’m good with that. Being weird may explain why, on the Saturday morning of Thanksgiving weekend, I was delighted to be searching PsycINFO for citations to fit into the revised Mental Status Examination chapter of our Clinical Interviewing textbook.
One thing: I found a fantastic article on Foreign Accent Syndrome (FAS). If you’ve never heard of FAS, you’re certainly not alone. Here’s the excerpt from our chapter:
Many other distinctive deviations from normal speech are possible, including a rare condition referred to as “foreign accent syndrome.” Individuals with this syndrome speak with a nonnative accent. Both neurological and psychogenic factors have been implicated in the development of foreign accent syndrome (Romö et al., 2021).
Romö’s article, cited above, described research indicating that some forms of FAS have clear neurological or brain-based etiologies, while others appear psychological in origin. Turns out they may be able to discriminate between the two based on “Schwa insertion and /r/ production.” How cool is that? To answer my own question: Very cool!
Not to be outdone, a research team from Oxford (Isham et al., 2021) reported on qualitative interviews with 15 patients who had grandiose delusions. They wrote: “All patients described the grandiose belief as highly meaningful: it provided a sense of purpose, belonging, or self-identity, or it made sense of unusual or difficult events.” Ever since I worked about 1.5 years in a psychiatric hospital back in 1980-81, I’ve had affection for people with psychotic disorders, and felt their grandiose delusions held meaning. Wow.
One last delight, and then I’ll get back to my obsessive PsycINFO search-aholism.
Having experienced sleep paralysis when I was a frosh/soph attending Mount Hood Community College in 1975-1976, I’ve always been super-delighted to discover old and new information about multi-sensory (and bizarre) experiences linked to sleep paralysis episodes. Today I found two articles stunningly relevant to my 1970s SP experiences. One looked at over 300 people and their sleep paralysis/out-of-body experiences. They found that having out-of-body experiences during sleep paralysis reduced the usual distress linked to sleep paralysis. The other study surveyed 185 people with sleep paralysis and found that most of them, as I did in the 1970s, experienced hallucinations of people in the room, and many believed the “others” in the room to be supernatural. I find these results oddly confirming of my long-past sleep paralysis experiences.
All this delight at scientific discovery leads me to conclude that (a) knowledge exists, (b) we should seek out that knowledge, and (c) gaining knowledge can help us better understand our own experiences, as well as the experiences of others.
And another conclusion: We should all offer a BIG THANKS to all the scientists out there grinding out research and contributing to society . . . one study at a time.
Isham, L., Griffith, L., Boylan, A., Hicks, A., Wilson, N., Byrne, R., . . . Freeman, D. (2021). Understanding, treating, and renaming grandiose delusions: A qualitative study. Psychology and Psychotherapy: Theory, Research and Practice, 94(1), 119-140. https://doi.org/10.1111/papt.12260
Herrero, N. L., Gallo, F. T., Gasca‐Rolín, M., Gleiser, P. M., & Forcato, C. (2022). Spontaneous and induced out‐of‐body experiences during sleep paralysis: Emotions, “aura” recognition, and clinical implications. Journal of Sleep Research, 9. https://doi.org/10.1111/jsr.13703
Before and after a quick trip to NYC (see the photo), I’m teaching the research class in our Department of Counseling this year. This leads me to re-affirm a conclusion I reached long ago: Research is hard.
Research is hard for many reasons, not the least of which is that scientific language can look and feel opaque. If you don’t know the terminology, it’s easy to miss the point. Even worse, it’s easy to dismiss the point, just because the language feels different. I do that all the time. When I come upon terminology that I don’t recognize, one of my common responses is to be annoyed at the jargon and consequently dismiss the content. As my sister Peggy might have said, that’s like “throwing the baby out with the bathtub.”
Teaching research to Master’s students who want to practice counseling and see research as a bothersome requirement is especially hard. It doesn’t help that my mastery of research design and statistics and qualitative methods is limited. Nevertheless, I’ve thrown myself into the teaching of research this semester; that’s a good thing, because it means I’m learning.
This week I shared a series of audio recordings of a woman bereaved by the suicide of her former husband. The content and affect in the recordings are incredible. Together, we all listened to the woman’s voice, intermittently cracking with pain and grief. We listened to each excerpt twice, pulling out meaning units and then building a theory around our observations and the content. More on the results from that in another blog.
During the class before, I got several volunteers, hypnotized them, and then used a single-case design to evaluate whether my hypnotic interventions improved or adversely affected their physical performance on a coin-tossing task. The results? Sort of and maybe. Before that, I gave them fake math quizzes (to evaluate math anxiety). I also used graphology and palmistry to conduct personality assessments and make behavioral and life predictions. I had written the names of four (out of 24 students) who would volunteer for the graphology and palmistry activities, placed them in an envelope, and got ¾ correct. Am I psychic? Nope. But I do know the basic rule of behavioral prediction: The best predictor of future behavior is past behavior.
Today is Friday, which means I don’t have many appointments, which means I’m working on some long overdue research reports. Two different happiness projects are burning a hole in my metaphorical research pocket. The first is a write-up of the effects of a short 2.5-hour happiness workshop on counseling students’ health and wellness. As it turns out, compared with the control group, students who completed the happiness workshop showed immediate and significantly lower scores on the Center for Epidemiologic Studies Depression scale (p = .006). Even better, after six months, up to 81% of the participants believed they were still experiencing benefits from the workshop on at least one outcome variable (i.e., mindfulness). The point of writing this up is to emphasize that even brief workshops on evidence-based happiness interventions can have lasting positive effects on graduate students in counseling.
Given that I’m on the cusp of writing up these workshop results, along with a second study of the outcomes of a semester-long happiness course, I’m stopping here so I can get back to work. Not surprisingly, as I mentioned in the beginning of this blog, research is hard; that means it’s much easier for me to write this blog than it is to force myself to do the work I need to do to get these studies published.
As my sister Peggy used to say, I need to stop procrastinating and “put my shoulder to the grindstone.”
After posting (last Thursday) our 1996 article on the efficacy of antidepressant medications for treating depression in youth, several people have asked if I have updated information. Well, yes, but because I’m old, even my updated research review is old. However, IMHO, it’s still VERY informative.
In 2008, the editor of the Journal of Contemporary Psychotherapy invited Rita and me to publish an updated review on medication efficacy. Rita opted out, and so I recruited Duncan Campbell, a professor of psychology at the University of Montana, to join me.
Duncan and I discovered some parallels and some differences from our 1996 article. The parallels included the tendency for researchers to do whatever they could to demonstrate medication efficacy. That’s not surprising, because much of the antidepressant medication research is funded by pharmaceutical companies. Another parallel was the tendency for researchers to overstate or misstate or twist some of their conclusions in favor of antidepressants. Here’s the abstract:
This article reviews existing research pertaining to antidepressant medications, psychotherapy, and their combined efficacy in the treatment of clinical depression in youth. Based on this review, we recommend that youth depression and its treatment can be readily understood from a social-psycho-bio model. We maintain that this model presents an alternative conceptualization to the dominant biopsychosocial model, which implies the primacy of biological contributors. Further, our review indicates that psychotherapy should be the frontline treatment for youth with depression and that little scientific evidence suggests that combined psychotherapy and medication treatment is more effective than psychotherapy alone. Due primarily to safety issues, selective serotonin reuptake inhibitors should be initiated only in conjunction with psychotherapy and/or supportive monitoring.
The main difference from our 1996 review was that in the late 1990s and early 2000s, there were several SSRI studies where SSRIs were reported as more efficacious than placebo. Overall, we found 6 of 10 reporting efficacy. An excerpt follows:
Our PsychInfo and PubMed database searches and cross- referencing strategies identified 10 published RCTs of SSRI efficacy. In total, these studies compared 1,223 SSRI treated patients to a similar number of placebo controls. Using the researchers’ own efficacy criteria, six studies returned significant results favoring SSRIs over placebo. These included 3 of 4 fluoxetine studies (Emslie et al. 1997, 2002; Simeon et al. 1990; The TADS Team 2004), 1 of 3 paroxetine studies (Berard et al. 2006; Emslie et al. 2006; Keller 2001), 1 of 1 sertraline study (Wagner et al. 2003), and 1 of 1 citalopram study (Wagner et al. 2004).
Despite these pharmaceutical-funded positive outcomes, medication-related side-effects were startling, and the methodological chicanery discouraging. Here’s an excerpt where we take a deep dive into the medication-related side effects and adverse events (N.B., the researchers should be lauded for their honest reporting of these numbers, but not for their “safe and effective” conclusions).
SSRI-related medication safety issues for young patients, in particular, deserve special scrutiny and articulation. For example, Emslie et al. (1997) published the first RCT to claim that fluoxetine is safe and efficacious for treating youth depression. Further inspection, however, uncovers not only methodological problems (such as the fact that psychiatrist ratings provided the sole outcome variable and the possibility that intent-to-treat analyses conferred an advantage for fluoxetine due to a 46% discontinuation rate in the placebo condition), but also, three (6.25%) fluoxetine patients developed manic symptoms, a finding that, when extrapolated, suggests the possibility of 6,250 mania conversions for every 100,000 treated youth.
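The extrapolation in that excerpt is easy to verify with back-of-the-envelope arithmetic. A minimal sketch (the n = 48 denominator is my inference from the reported 6.25% figure, not a number stated in the excerpt):

```python
# Sanity-checking the mania-conversion extrapolation from the excerpt.
# Assumed: 3 mania cases at a 6.25% rate implies roughly 48 fluoxetine patients.
mania_cases = 3
assumed_n = 48
rate = mania_cases / assumed_n          # 0.0625, i.e., 6.25%

treated_youth = 100_000
expected_conversions = rate * treated_youth
print(f"{rate:.2%} of {treated_youth:,} = {expected_conversions:,.0f} conversions")
# 6.25% of 100,000 = 6,250 conversions, matching the article's figure
```

The point of the extrapolation is simply that a rate that looks small in a trial arm of a few dozen patients implies thousands of adverse events at population scale.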
Similarly, in the much-heralded Treatment of Adolescents with Depression Study (TADS), self-harming and suicidal adverse events occurred among 12% of fluoxetine treated youth and only 5% of Cognitive Behavioral Therapy (CBT) patients. Additionally, psychiatric adverse events were reported for 21% of fluoxetine patients and 1% of CBT patients (March et al. 2006; The TADS Team 2004, 2007). Keller et al. (2001), authors of the only positive paroxetine study, reported similar data regarding SSRI safety. In Keller et al.’s sample, 12% of paroxetine-treated adolescents experienced at least one adverse event, and 6% manifested increased suicidal ideation or behavior. Interestingly, in the TCA and placebo comparison groups, no participants evinced increased suicidality. Nonetheless, Keller et al. claimed paroxetine was safe and effective.
When it came to combination treatment, we found only two studies, one of which made a final recommendation that was nearly the opposite of their findings:
Other than TADS, only one other RCT has evaluated combination SSRI and psychotherapy treatment for youth with depression. Specifically, Melvin et al. (2006) directly compared sertraline, CBT, and their combination. They observed partial remission among 71% of CBT patients, 33% of sertraline patients, and 47% of patients receiving combined treatment. Consistent with previously reviewed research, Sertraline patients evidenced significantly more adverse events and side effects. Surprisingly and in contradiction with their own data, Melvin et al. recommended CBT and sertraline with equal strength.
As I summarize the content from our article, I’m aware that you might conclude that I’m completely against antidepressant medication use. That’s not the case. For me, the take-home points include: (a) SSRI antidepressants appear to be effective for some young people with depression, and (b) at the same time, as a general treatment, the risk of side effects, adverse effects, and minimal treatment effects make SSRIs a bad bet for uniformly positive outcomes, but that doesn’t mean there won’t be any positive outcomes. In the end, for my money—and for the safety of children and adolescents—I’d go with counseling/psychotherapy or exercise as primary treatments for depressive symptoms in youth, both of which have comparable outcomes to SSRIs, with much less risk.
About 25 years ago, Rita and I published an article titled “Efficacy of antidepressant medication with depressed youth: What psychologists should know.” Although the article targeted psychologists and was published in the journal Professional Psychology, the content was relevant to all mental health professionals as well as anyone who works closely with children.
Yesterday, when teaching my research class to a fantastic group of Master’s students in the Department of Counseling at UM, I had a moment of reminiscence. Not surprisingly, along with the reminiscence came a resurgence of emotion and passion. I was sharing about how it’s possible to find an area of interest that hooks so much passion that you might end up tracking down literally everything ever published on that topic (as long as the topic is small enough!).
The motivation behind my interest in the efficacy of antidepressants with youth came about because of a confluence of factors. First, I was working with youth every day, many of whom were prescribed antidepressant medications. Second, I was in a sort of professional limbo—working in full-time private practice—but wishing to be in academia. Third, virtually out of nowhere, in 1994, Bob Deaton, a professor of social work at the University of Montana, asked Rita and me to do an all-day presentation for the Montana Chapter of the National Association of Social Workers. Bob’s offer was not to be refused, and I’ve been in Bob Deaton’s debt ever since. If you’re out there reading this, thanks again Bob, for your confidence and the opportunity.
To prep, Rita and I split up the content. One of my tasks was to dive into all things related to antidepressant medications. Before embarking on the journey into the literature, I expected there would be modest evidence supporting the efficacy of antidepressants in treating depression in youth.
My expectations were completely wrong. Much to my shock, I discovered that not only was there not much “out there,” but the prevailing research was riddled with methodological problems and, bottom line, there had NEVER been a published study indicating that antidepressants were more effective in treating depression in youth than placebo. I was gobsmacked.
Just to give you a taste, here’s the abstract:
Pharmacologic treatments for mental or emotional disorders are becoming increasingly popular, especially in managed care environments. Consequently, psychologists must remain cognizant of medication efficacy concerning specific mental disorders. This article reviews all double-blind, placebo- controlled efficacy trials of tricyclic antidepressants (TCAs) with depressed youth that were published in 1985-1994. Also, all group-treatment studies of depressed youth using fluoxetine, a serotonin-specific reuptake inhibitor (SSRI), are summarized. Results indicate that neither TCAs nor SSRIs have demonstrated greater efficacy than placebo in alleviating depressive symptoms in children and adolescents, despite the use of research strategies designed to give antidepressants an advantage over placebo. The implications of these findings for research and practice are discussed.
Early in my research class this semester, an astute young woman asked about the “rule” she had heard about that you shouldn’t cite research that’s more than 10 years old. It was a great question. I hope I responded rationally, but my apoplectic-ness may have showed in my complexion and words. In my view, we cannot and should not ignore past research. As Samuel Clemens once wrote, “History doesn’t repeat itself, it only rhymes.” If we don’t know the old stuff, we may miss out on the contemporary rhyming pattern. In our article, 25 years old now, we also discussed some medication research reporting shenanigans (although we used more professional language). Here’s an excerpt of our discussion about drop-out rates.
Dropout rates. Side effects and adverse events can significantly affect medication study outcomes by causing participants to discontinue medication treatment. For example, in the IMI [imipramine] study with children (Puig-Antich et al., 1987), 4 out of 20 (20%) of the medication group did not complete the study, whereas in the two DMI [desipramine] studies (Boulos et al., 1991; Kutcher et al., 1994), 6 out of 18 (33%) and 9 out of 30 (30%) medication participants dropped out because of side effects. For each of these studies, participants who dropped out of the treatment groups before completing the treatment protocol were eliminated from data analyses. The elimination of dropout participants from data analyses produced inappropriately inflated treatment-response rates. For example, although Puig-Antich et al. (1987) reported a treatment-response rate of 56% (9 of 16 participants), if all participants are included within the data analyses, the adjusted or intent-to-treat response rate is 45% (9/20). For the three studies that reported the number of medication protocol participants who dropped out of the study, the average reduction in response rate was 16.5%. Overall, intent-to-treat response rates ranged from less than 8% to 45% (see Table 2 for intent-to-treat response rates for all reviewed TCA studies).
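The intent-to-treat adjustment described above is just a change of denominator: count dropouts as non-responders instead of excluding them. A quick sketch using the Puig-Antich et al. (1987) numbers cited in the excerpt:

```python
# Completer-only vs. intent-to-treat response rates (Puig-Antich et al., 1987,
# as cited in the excerpt: 9 responders, 16 completers, 20 enrolled).
responders = 9
completers = 16
enrolled = 20                                # includes the 4 dropouts

completer_rate = responders / completers     # dropouts excluded from analysis
itt_rate = responders / enrolled             # dropouts counted as non-responders

print(f"completer-only: {completer_rate:.0%}")   # 56%
print(f"intent-to-treat: {itt_rate:.0%}")        # 45%
```

Excluding the four dropouts inflates the apparent response rate by about 11 percentage points in this one study, which is exactly the kind of reporting choice the excerpt flags.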
What’s the value, you might wonder, of looking back 25 years at the methodology and outcomes related to tricyclic antidepressant medication use? You may disagree, but I think the rhyming pattern within antidepressant medication research for youth (and adults) remains. If you’re interested in expanding your historical knowledge about this rhyming, I’ve linked the article here.
Research can be boring; it can be opaque; it can be riddled with stats and numbers. Nevertheless, for me, research remains exciting, both as a source of amazing knowledge, but also as something to read with a critical eye.
What if I owned a company and paid all my employees to conduct an intervention study on a drug my company profits from? After completing the study, I pay a journal about ten thousand British pounds to publish the results. That’s not to say the study wouldn’t have been published anyway, but the payment allows for publication on “open access,” which is quicker and gets me immediate media buzz.
My drug intervention targets a longstanding human and societal problem—post-traumatic stress disorder (PTSD). Of course, everyone with a soul wants to help people who have been physically or sexually assaulted or exposed to horrendous natural or military-related trauma. In the study, I compare the efficacy of my drug (plus counseling) with an inactive placebo (plus counseling). The results show that my drug is significantly more effective than an inactive placebo. The study is published. I get great media attention, with two New York Times (NYT) articles, one of which dubs my drug as one of the “hottest new therapeutics since Prozac.”
In real life, there’s hardly anything I love much more than a cracker-jack scientific study. And, in real life, my thought experiment is a process that’s typical for large pharmaceutical companies. My problem with these studies is that they use the cover of science to market a financial investment. Having financially motivated individuals conduct research, analyze the results, and report their implications spoils the science.
Over the past month or so, my thought experiment scenario has played out with psilocybin and MDMA (aka ecstasy) in the treatment of PTSD. The company—actually a non-profit—is the Multidisciplinary Association for Psychedelic Studies (MAPS). They funded an elaborate research project, titled “MDMA-assisted therapy for severe PTSD: A randomized, double-blind, placebo-controlled phase 3 study,” through private donations. That may sound innocent, but Andrew Jacobs of the NYT described MAPS as “a multimillion dollar research and advocacy empire that employs 130 neuroscientists, pharmacologists and regulatory specialists working to lay the groundwork for the coming psychedelics revolution.” Well, that’s not your average non-profit.
To be honest, I’m not terribly opposed to careful experimentation with psychedelics for treating PTSD. I suspect psychedelics will be no worse (and no better) than other pharmaceutical drugs used to treat PTSD. What I do oppose is dressing up marketing as science. Sadly, this pseudo-scientific approach has been used and perfected by pharmaceutical companies for decades. I’m familiar with promotional pieces impersonating science mostly from the literature on antidepressants for treating depression in youth. I can summarize the results of those studies simply: Mostly, antidepressants don’t work for treating depression in youth. Although some individual children and adolescents will experience benefits from antidepressants, separating the true, medication-based benefits from placebo responses is virtually impossible.
My best guess from reading medication studies for 30 years (and recent psychedelic research) is that the psychedelic drug results will end up about the same as antidepressants for youth. Why? Because placebo.
Placebos can, and usually do, produce powerful therapeutic responses. I’ll describe the details in a later blog post. For now, I just want to say that in the MDMA study, the researchers, despite reasonable efforts, were unable to keep study participants “blind” to whether they were taking MDMA vs. placebo. Unsurprisingly, 95.7% of patients in the MDMA group accurately guessed that they were in the MDMA group, and 84.1% of patients in the placebo group accurately guessed they were only receiving inactive placebos. Essentially, the patients knew what they were getting, and consequently, attributing a positive therapeutic response to MDMA (rather than an MDMA-induced placebo effect) is speculation . . . not science.
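For readers who like numbers, there’s a standard way to quantify how badly a blind failed: a blinding index. Here’s a minimal sketch using a simplified version of Bang’s blinding index (BI = 2 × proportion of correct guesses − 1, per study arm), applied to the two percentages reported above. The function name is mine, and this simplified formula ignores “don’t know” responses, which the full Bang index accounts for; it’s an illustration, not the study’s own analysis.

```python
# Simplified Bang blinding index for one study arm:
#   BI = 2 * p_correct - 1
# where p_correct is the share of participants in that arm who correctly
# guessed their assignment. BI = 0 means guessing at chance (blind intact);
# BI = 1 means everyone knew which arm they were in.

def bang_blinding_index(p_correct: float) -> float:
    """Return the simplified Bang blinding index for one arm."""
    return 2 * p_correct - 1

# Proportions reported in the MDMA phase 3 study:
mdma_arm = bang_blinding_index(0.957)     # MDMA group: 95.7% guessed right
placebo_arm = bang_blinding_index(0.841)  # placebo group: 84.1% guessed right

print(f"MDMA arm BI:    {mdma_arm:.3f}")     # ~0.914, far from 0
print(f"placebo arm BI: {placebo_arm:.3f}")  # ~0.682, also far from 0
```

Both values sit far from zero, which is a compact way of saying what the prose above says: the blind was broken, so the drug-vs-placebo comparison is contaminated by expectancy.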
In his NYT article (May 9, 2021), Jacobs wrote, “Psilocybin and MDMA are poised to be the hottest new therapeutics since Prozac.” Alternatively, he might have written, “Psilocybin and MDMA are damn good placebos.” He could even have written, “The best therapeutics for PTSD are and always will be exercise, culturally meaningful and socially connected processes like sweat lodge therapy, being outdoors, group support, and counseling or psychotherapy with a trusted and competent practitioner.” Had he been interested in prevention, rather than treatment, he would have written, “The even better solution to PTSD involves investing in peace over war, preventing sexual assault, and addressing poverty.”
Unfortunately, my revision of what Jacobs wrote won’t make anyone much money . . . and so you won’t see it published anywhere now or ever—other than right here on this beautiful (and free) blog—which is why you should pass it on.
Sorry to say, I’ve been irritable the past couple days. If you don’t believe me, just ask my internet provider . . . or my editor . . . or ask me about my upcoming book deadline. There’s evidence everywhere for my irritability and impatience. You might even see evidence for it in this short excerpt from our forthcoming Theories of Counseling and Psychotherapy textbook. In fact, you should read this now, because I’m pretty sure it will get censored before appearing in our text.
Here you go.
You may be aware of popular books describing and delighting in the differences between female and male brains. Here’s a short list, along with my snarky comments:
The essential difference: Male and female brains and the truth about autism (Baron-Cohen, 2003). Baron-Cohen is an autism researcher. His book allegedly, “. . . proves that female-type brains are better at empathizing and communicating, while male brains are stronger at understanding and building systems-not just computers and machinery, but abstract systems such as politics and music.” Comment: It’s so good to finally understand why most of our politicians are smirky White males who look like Baron-Cohen (heads up, this statement is sarcasm).
The female brain (Brizendine, 2006): Brizendine is a neuropsychiatrist. Her book is touted as bringing “. . . together the latest findings to show how the unique structure of the female brain determines how women think, what they value, how they communicate, and who they love.” Comment: In Delusions of gender (2011), Cordelia Fine reduces Brizendine’s arguments to rubble. Nuff said.
Teaching the female brain: How girls learn math and science (James, 2009). Comment: It’s hard to know how this book could be more than two pages given that there’s extremely sparse scientific evidence to support what this book’s title implies.
Female brain gone insane: An emergency guide for women who feel like they are falling apart (Lundin, 2009). No comment. I couldn’t bring myself to read beyond this book’s title.
The male brain: A breakthrough understanding of how men and boys think (Brizendine, 2011). Comment: The main breakthrough finding is that when you sell a million+ copies of your first book, a sequel, with similar drama, but equally slim scientific support, is essential.
Unleash the power of the female brain: Supercharging yours for better health, energy, mood, focus, and sex (Amen, 2014). Comment: Better health, energy, mood, focus, and sex? I want a female brain!
The dangers of overstating what’s known about the brain are significant, but nowhere are the dangers greater than when you’re talking about sex and gender. Over time, physical differences between females and males have nearly always been used to justify systemic mistreatment of females (and limitations for males, as well). Some examples:
Plato didn’t think women were created directly by God, and so they didn’t have souls.
Aristotle thought women were deficient in natural heat and therefore unable to cook their menstrual fluids into semen.
Gustav Le Bon (1879) concluded that women’s intellectual inferiority was so obvious that no one could contest it. He wrote: “All psychologists who have studied the intelligence of women, as well as poets and novelists, recognize today that they represent the most inferior forms of human evolution and that they are closer to children and savages than to an adult, civilized man” (see Women’s Brains by S. J. Gould). Le Bon purportedly based his ideas on Broca’s measurements of 6 female and 7 male skulls. Not surprisingly, Le Bon strongly opposed the whole idea of educating women.
More recently, over the past 30 years, I’ve seen and heard and read many different descriptions and explanations of female and male brain differences. Nearly always, it’s the same old story: Women are more “right-brained” and intuitive, and less “left-brained” and rational. Of course, the actual brain hemisphere research is sketchy, but the take-home messages are much like Baron-Cohen’s and Brizendine’s, which happen to be much like the philosophy of the Nazi Third Reich: girls and women are well-suited for working in the kitchen and the church, and especially good at caring for children, but women had best leave politics and the corporate world – where steady rationality is essential – to the men.
All this reminds me of the time my daughter, then a senior in high school, was shown a film in her science class depicting the female brain as structurally less capable of science and math. She came home in distress. We showed up at school the next day. What do you suppose happened next? We’ll leave that story to your imagination.
Genderizing the brain marginalizes and limits females, but it can also do the same for males. Take, for example, this quotation from “Dr.” Kevin Leman.
“Did you know that scientific studies prove why a woman tends to be more ‘relational’ than her male counterpart? A woman actually has more connecting fibers than a man does between the verbal and the emotional side of her brain. That means a woman’s feelings and thoughts zip along quickly, like they’re on an expressway, but a man’s tend to poke slowly as if he’s walking and dragging his feet on a dirt road.” (pp. 5-6).
Just FYI, even though my emotional quotient is just barely dragging along Leman’s dirt road, I can quickly intuit that what he wrote is sheer drivel. Not partial drivel, but sheer drivel, because . . . as Cordelia Fine might say, “He just made that shit up.”
Seriously? Am I making the claim that male and female brains are relatively equivalent in terms of empathic processing? Yes. I. am.
Using the best and most rigorous laboratory empathy measure available, empathy researcher William Ickes found no differences between males and females in seven consecutive studies. However, based on a larger group of studies, he and his colleagues acknowledged that there may be small sex-based differences favoring women on empathy tasks. It should be noted that he and his research team (which includes females who may be more limited in their scientific skills than Baron-Cohen) offer at least two caveats. First, they believe that females being raised in social conditions that promote a communal orientation may account for some of the differences. Second, females are especially likely to be better at empathy when they’re primed, directly or indirectly, to recall that they (women) are better at emotional tasks than men. The converse is also true. When men are primed to think all men are empathic dullards, they tend to perform more like empathic dullards.
What all this boils down to is that females and males are generally quite similar in their empathic accuracy, not to mention their math, science, and language abilities. It appears that the minor observable differences between females and males may be explained by various environmental factors. This means that if you want to stick with scientific evidence, you should be very cautious in drawing any conclusions about brain differences between females and males. To do otherwise is to create what has been eloquently termed a neuromyth.
In summary, the safest empirically-based conclusions on sex- and gender-based brain differences are:
The differences appear to be minimal
When they exist, they may be largely caused by immediate environmental factors or longer-term educational opportunities
To avoid mistakes from the past, we should be cautious in attributing female and male behavioral or performance differences to their brains
If and when true neurological differences are discovered, it would be best if we viewed them using the Jungian concept of Gifts differing (Myers, 1995).
Consistent with Cordelia Fine’s excellent recommendation in Delusions of Gender, we shouldn’t make things up—even if it means we get to sell more books.
This is a draft “Brain Box” for the feminist theory and therapy chapter from our forthcoming Theories of Counseling and Psychotherapy textbook.
Feminist therapy is about connection.
So is neuroscience.
Neuroscience involves the study of synaptic interconnections, neural networks, brain structures and their electrochemical communications.
Feminist therapy involves egalitarian interconnection, empathy, mutual empathy, and empowerment of the oppressed, neglected, and marginalized.
As a highly sophisticated, interconnected entity, the human brain is metaphorical support for feminist theory and therapy. In the brain, cells don’t operate in isolation. In feminist therapy in general, and relational cultural therapy (RCT) in particular, isolation is unhealthy. Connection is healthy.
Healthy brains are connection-heavy. Whether humans are awake or asleep, brain cells are in constant communication; they problem-solve; they operate sensory and motor systems; they relay information to and from the body, inhibiting, exciting, and forming a connected, communicating community.
Using modern brain research as a foundation, Jordan (the developer of RCT) described how empathic relationships can change clients:
“Empathy is not just a means to better understand the client; in mutually empathic exchanges, the isolation of the client is altered. The client feels less alone, more joined with the therapist. It is likely that in these moments of empathy and resonance, there is active brain resonance between therapist and client (Schore, 1994), which can alter the landscape and functioning of the brain. Thus, those areas of the brain that register isolation and exclusion fire less and those areas that indicate empathic responsiveness begin to activate.”
Jordan is talking about how therapist-client interactions change the brain. Many others have made the same point: “It is the power of being with others that shapes our brain” (Cozolino, 2006, p. 9). In her review of RCT theory and outcomes, Frey (2013) emphasized that “research on mirror neurons, the facial recognition system, lifelong neuroplasticity and neurogenesis, and the social functions of brain structures” (p. 181) supports feminist theory and feminist therapy process.
Neuroscience research supports feminist therapy in ways that are both real and metaphorical. There is unarguably great potential here. However, before we wax too positive, it’s important to heed a warning. Beginning with Plato (at least) and throughout history, physical (or brain) differences between the sexes have mainly been used to marginalize females and undercut their viability as equal partners in the human race (see Brain Box 10.2). With that caveat in mind, let’s respect feminism with some multitasking: Let’s celebrate the positive parallels between human neurology and feminist theory, while simultaneously keeping a watchful eye on how neuroscience is being used to limit or oppress girls and women.