
Evidence-Based Happiness for Teachers: Preliminary Results (and another opportunity)

We’ve been collecting outcomes data on our Evidence-Based Happiness for Teachers course. From last summer, we have pre-post data on 39 participants. We had VERY significant results on all of the following outcomes:

Less negative affect

More positive affect

Lower depression scores

Better sleep

Fewer headaches

Less gastrointestinal distress

Fewer colds

Increased hope

Increased mindfulness

If you’re a Montana Educator and you want to take the course THIS summer, it’s online, asynchronous, and only $195 for 3 Graduate Credits. You can register here: https://www.campusce.net/umextended/course/course.aspx?C=712&pc=13&mc=&sc=

If you’re not an educator, you must know one, and they deserve this, so share it, please!

Now for you researcher nerds. Over the past week, I’ve tried to fit in some manuscript writing time. If you’re following this blog, you’ll already know that I’ve experienced some rejections and frustrations in my efforts to publish our positive psychology/happiness outcomes. I’ve also emailed various editors and let them know what I think of their reviews and review processes. . . which means I may have destroyed my chances at publication. On the other hand, maybe sometimes the editors and reviewers need a testy review sent their way!

Yesterday, a friend from UC Santa Barbara sent me a fairly recent review of all the empirical research on College Happiness Course Outcomes. To summarize the review: There are HARDLY ANY good studies with positive outcomes that have been published. Specifically, if you look at U.S. published studies, only three studies with control groups and positive outcomes have been published. There’s one more I know of. If you want to read the article, here it is:

As always, thanks for reading. I’ll be posting a “teaching group counseling” update soon! JSF

Savor This!

As many of you who know me in person or through this blog can attest, I’m quite capable of backward-savoring. . . which might be why I find this week’s Montana Happiness Challenge activity especially compelling.

Savoring is defined as a deliberate effort to extend and expand positive experiences. Or, as I learned from Dr. Heidi Zetzer of the University of California, Santa Barbara, “Savoring is amplifying and extending positive emotions, by lingering, reveling, relishing, or something even more active like taking a victory lap!” I also stole this photo from one of Heidi’s happiness slides. Thanks Heidi!!

So, how can anyone—or me—do savoring backward? Enter another fun word: Rumination.

Dictionary.com defines rumination as (1) a deep or considered thought about something. Or, (2) the action of chewing the cud.

Essentially, to ruminate is to think hard. You may be ruminating right now, wondering, “What’s backward or bad about thinking hard?”

Well, in the domain of mental health, we focus on a particular type of rumination. For example, according to the American Psychiatric Association, “Rumination involves repetitive thinking or dwelling on negative feelings and distress and their causes and consequences.”

Thinking hard about negative things is precisely the opposite of savoring. And, despite my surface penchant for the positive, both my wife and I would attest to the fact that I’m also an excellent ruminator—as in the psychiatric sense, not so much in the cud-chewing sense.

As we like to say in academia, the research on savoring is damn good. Well, maybe we don’t really like to say “damn good,” but I’m sure someone has said that at some point in time, probably while savoring all the savoring research.  

How good is the research, you ask?

People instructed to savor, depending on the type of savoring, generally report improved mood, increased satisfaction, greater hope for positive life events, increased planning, and a greater likelihood of repeating a previously savor-worthy experience. Just savor that for an extra moment or two. For something so simple, savoring research has damn good outcomes.

This week, our Montana Happiness Challenge savoring activity provides you with a menu of different savoring activities to try out. You can read the details on the Montana Happiness Project website: https://montanahappinessproject.com/savoring

The summary: For this week, the plan is for you to pick one savoring assignment from a menu of research-based savoring activities (below). Each of these activities has research support; doing any of them might leave you feeling significantly happier or less depressed. Here are your options:

  • Engage in mutual reminiscence. Mutual reminiscence happens when you get together with someone and intentionally pull up and talk about fun, positive, or meaningful memories. I was on the phone with a friend last week and did a bit of this and it was nice. Now I have memories of us remembering our shared positive memories.
  • Make a list of positive memories. After making the list, transport yourself to reminisce on one of the memories. You can do this by yourself. Retrieve the memory. Play it back in your mind. Explore it. Feel it. Let your brain elaborate on the details.
  • Celebrate good news longer than you would. This is easy. You need to track/observe for a positive message or news in your life that feels good. Then, let your mind linger on it. Notice how you feel. What parts of the news are especially meaningful and pleasant to you? Extend and celebrate the good news.
  • Notice and observe beauty. This activity is mostly visual, but you can listen for beautiful sounds and smells too. Let yourself see color, patterns, and nuanced beauty in nature or in art. Linger with that visual and let its pleasant effects be in your eyes, brain, and body. Notice and feel those sensations and thoughts.

As usual, consider making your savoring public. . . and tag us, so we have more things to savor.

University of Montana Happiness Class Research Results Round 1 (again): The Structured Abstract

I’ve spent the morning learning. At this point in my life, learning requires simultaneous regulation of my snarky irreverence. Although I intellectually know I don’t know everything, when I discover, as I do ALL. THE. TIME., that I don’t know something, I have to humble myself unto the world.

Okay. I know I’m being a little dramatic.

After pushing “submit” on our latest effort to publish Round 1 of our happiness class data, less than an hour later I received a message from the very efficient editor that our manuscript had been “Unsubmitted.” Argh! The good news is that the editor was just letting us know that we needed to follow the manuscript submission guidelines and include a “Structured Abstract.” Who knew?

The best news is I wrote a structured abstract and discovered that I like structured abstracts way more than I like traditional abstracts. So, that’s cool.

And, here it is!

Abstract

Background: University counseling center services are inadequate to address current student mental health needs. Positive psychology courses may be scalable interventions that address student well-being and mental health.

Objective: The purpose of this study was to evaluate the effects of a multi-component positive psychology course on undergraduate student well-being, mental health, and physical health.

Method: We used a quantitative, quasi-experimental, pretest-posttest design. Participants in a multi-component positive psychology course (n = 38) were compared to a control condition (n = 41). All participants completed pre-post measures of well-being, physical health, and mental health.

Results: Positive psychology students reported significantly improved well-being and physical health on eight of 18 outcome measures. Although results on the depression scale were not statistically significant, a post-hoc analysis showed that positive psychology students who were severely depressed at pretest reported substantial depression symptom reduction at posttest, whereas severely depressed control group students showed no improvement.

Conclusion: Positive psychology courses may produce important salutary effects on student physical and mental health. Future research should include larger samples, random assignment, and greater diversity.

Teaching Implications: Psychology instructors should collaborate with student affairs to explore how positive psychology courses and interventions can facilitate student well-being, health, and mental health. 

Concerns about Science

As many of you know, over the past year or so I’ve been frustrated in my efforts to publish a couple of journal articles. I know I’m not the only one who has experienced this, but this morning we got another rejection (the third for this manuscript) that triggered me in a way that, as the feminists might say, raised my consciousness.

Three colleagues and I are trying to publish the outcomes from a short online “happiness workshop” I did a couple years ago for counseling students. Mostly the results were nonsignificant, except for the depression scale we used, which showed our workshop participants were less depressed than a non-random control group. Also, based on open-ended responses, participants seemed to find the workshop experience helpful and relevant to them in their lives.

Problems with the methodology in this study are obvious. In this most recent rejection, one reviewer noted the lack of “generalizability” of our data. I totally agree. The study has a relatively small n, nonrandom group assignment, yada, yada, yada. We acknowledge all this in the manuscript. Having a reviewer point out to us what we have readily acknowledged is annoying, but accurate. In fact, this rejection was accompanied by the most informed and reasonable reviews we’ve gotten yet.

Nevertheless, I immediately sent out a response email to the editor . . . which, because I’m partially all about entertainment, I’m sharing below. As you’ll see, for this rejection, my concerns are less with the reviews, and more about WHAT IS BEING PUBLISHED IN SO-CALLED SCIENTIFIC JOURNALS. Although I don’t think it’s necessary, I’ve anonymized my email so as to not incriminate anyone.

Dear Editor,

Thanks for your timely processing of our manuscript.

Overall, I believe your reviewers did a nice job of reading the manuscript, noting problems, and providing feedback. Being very familiar with the journal submission and feedback process, I want to compliment you and your reviewers on your evaluation of our manuscript. Compared to the quality of feedback I’ve obtained from other journals, you and your team did well.

Now I’d like to apologize in advance for the rest of this email because it’s a critique not only of your journal, but of counseling research more generally.

Despite your professional review, I have concerns about the decision, and rather than sit on them, I’m going to share them.

Although the reviews were accurate, and, as Reviewer 1 noted, there are generalizability concerns (but aren’t there always?), I looked at the most recent online articles published in [your journal], to get a feel for the journal’s standards for generalizability, among other issues. What I found was disturbing.

In the seven published 2023 articles from your most recent issue, none have data that are even close to generalizable, and yet all of the articles offer recommendations, as if there were generalizable data. In the [first] article there’s an n of 8; [the second article] has an n of 6 and uses a made-up questionnaire. I know these are qualitative studies, but, oh my, they don’t shy away from widely offering recommendations (is that not generalizing?), based on minimal data. Four of the articles in the most recent issue have no data; that’s okay, they’re interesting and may be useful. The only “empirical” study is a survey with n = 165, using a correlational analysis. But no information is provided on the % response to the survey, and so any justification for generalization is absent. Overall, some of these articles are interesting, and written by people I know and like. But none of them have anything close to what might be considered “generalizability.”

What’s most concerning to me is that none of the published articles employ an experimental design. My impression is that “Counselor Education and Preparation” (not just the journal, but the whole profession) mostly avoids experimental or quasi-experimental designs, and privileges qualitative research, or correlational designs that, of course, are really just open inquiries about the relationships among 2 or more variables.

This is the third rejection of this manuscript from counseling journals that, to be frank, essentially have no scientific impact factor. Maybe the manuscript is unpublishable. I would be open to that possibility if I hadn’t read any of the published articles from [your journal and other journals]. My best guess (hypothesis) is that counseling journals have double standards; they allow generalizing statements from qualitative studies, but they hold experimental designs to inappropriately high standards. I say inappropriate here because all experimental designs are flawed in one way or another, and finding those flaws is easier than understanding them.

I know I’m biased, but my last problem with the rejection of this manuscript has to do with relevance. We tried to offer counseling students a short workshop intervention to help them cope with their COVID-related distress and distress in general–something that I think more counseling programs should do, and something that I think is innately relevant and potentially very meaningful to counseling students and practitioners.

Sorry again for this email and its length, but I hope some of what I’ve shared is food for thought for you in your role as journal editor.

Thanks again for the timely review and feedback. I do appreciate the professionalism.

Sincerely,

John SF

If you’re still reading and following my incessant complaining, for your continued pleasure, now I’m pasting my email response to my coauthors, one of whom wrote us all this morning beginning with the word “Bummer.”

Hi There,

Yes! Another bummer.

For entertainment purposes, I kept you all on my email to the editor.

Although I’m clearly triggered, because I just read some articles in the [Journal], I now know more about self-care, because in their [most recent lead published article], the authors wrote:

“Most participants also offered some recommendations for self-care practices to process crisis counseling. One participant (R2) indicated, ‘I keep a journal with prayers, thoughts and feelings, complaints and poetry.’”

Now that I’ve done my complaining, I need to take time off to pray and write a poem or two, but then, yes . . .  I will continue to send this out into the world in hopes of eventual validation.

Happy Friday to you all,

John

I hope you all caught my clever utilization of recommendations from the offending journal to cope with this latest rejection. The good news is, like most rejections, this one was clarifying and inspired me with even more snark energy than I usually have.

Have a great weekend.

Please Participate in Our March Madness Research Study

In 2017, I collaborated with Dr. Charles Palmer and Daniel Salois (now Dr. Daniel Salois) on a creative, one-of-a-kind research study evaluating and comparing the effectiveness of an educational intervention vs. a hypnotic induction transporting people to the future for improving the accuracy of March Madness NCAA Basketball Tournament bracket picks. The results were stunning (but I can’t share them right now because I want to recruit anyone and everyone in the Missoula area to participate in our planned replication of this amazing study). The study has been approved by the University of Montana IRB.

To participate, follow these instructions:

1. Email Marchmadnessresearch2023@gmail.com and say “Yes, I’m in!”
2. We will email you back a confirmation.
3. Upon arrival at the study location, you will be randomly assigned to one of two “March Madness Bracket Training” groups:
   a. Hypnosis to enhance your natural intuitive powers
   b. Educational information from a UM professor
4. All participants will meet in room 123 of the Phyllis J. Washington College of Education building at 7pm on Tuesday, 3/14/23. Enter on the East end of the building. From there, we’ll send you to a room for either the education or hypnosis intervention.
5. When you arrive in your room, you will fill out an informed consent form, a March Madness bracket, and a short questionnaire.
6. Then you will participate in either the educational or hypnosis training.
7. After the training, you will complete another bracket and short questionnaire.
8. You will leave your completed packet and your brackets with the researchers; they will be uploaded into the ESPN Tournament Challenge website using the “Team Name” you provide. If you bring a device, we will provide a password so you can upload your own selections into the ESPN system.
9. You will receive information at the “Training” on how to login and track your bracket.
On or around April 15, we will post a summary of the research results at: https://johnsommersflanagan.com/

Once again, to sign up for this research project, email: Marchmadnessresearch2023@gmail.com

I’m posting this because we’re trying to recruit as many participants as possible. If you live near Missoula, please consider participating. If you know someone who might be interested, please share this with them.

Thanks for reading and have a fabulous day.

John S-F

The Delight of Scientific Discovery

Art historians point to images like John Henry Fuseli’s 1781 painting “The Nightmare” as early depictions of sleep paralysis.

Consensus among my family and friends is that I’m weird. I’m good with that. Being weird may explain why, on the Saturday morning of Thanksgiving weekend, I was delighted to be searching PsycINFO for citations to fit into the revised Mental Status Examination chapter of our Clinical Interviewing textbook.

One thing: I found a fantastic article on Foreign Accent Syndrome (FAS). If you’ve never heard of FAS, you’re certainly not alone. Here’s the excerpt from our chapter:   

Many other distinctive deviations from normal speech are possible, including a rare condition referred to as “foreign accent syndrome.” Individuals with this syndrome speak with a nonnative accent. Both neurological and psychogenic factors have been implicated in the development of foreign accent syndrome (Romö et al., 2021).

Romö’s article, cited above, described research indicating that some forms of FAS have clear neurological or brain-based etiologies, while others appear psychological in origin. Turns out they may be able to discriminate between the two based on “Schwa insertion and /r/ production.” How cool is that? To answer my own question: Very cool!

Not to be outdone, a research team from Oxford (Isham et al., 2021) reported on qualitative interviews with 15 patients who had grandiose delusions. They wrote: “All patients described the grandiose belief as highly meaningful: it provided a sense of purpose, belonging, or self-identity, or it made sense of unusual or difficult events.” Ever since I worked about 1.5 years in a psychiatric hospital back in 1980-81, I’ve had affection for people with psychotic disorders, and felt their grandiose delusions held meaning. Wow.  

One last delight, and then I’ll get back to my obsessive PsycINFO search-aholism.

Having experienced sleep paralysis when I was a frosh/soph attending Mount Hood Community College in 1975-1976, I’ve always been super-delighted to discover old and new information about multi-sensory (and bizarre) experiences linked to sleep paralysis episodes. Today I found two articles stunningly relevant to my 1970s SP experiences. One looked at over 300 people and their sleep paralysis/out-of-body experiences. They found that having out-of-body experiences during sleep paralysis reduced the usual distress linked to sleep paralysis. The other study surveyed 185 people with sleep paralysis and found that most of them, as I did in the 1970s, experienced hallucinations of people in the room, and many believed the “others” in the room to be supernatural. I find these results oddly confirming of my long-past sleep paralysis experiences.

All this delight at scientific discovery leads me to conclude that (a) knowledge exists, (b) we should seek out that knowledge, and (c) gaining knowledge can help us better understand our own experiences, as well as the experiences of others.

And another conclusion: We should all offer a BIG THANKS to all the scientists out there grinding out research and contributing to society . . . one study at a time.

For more: Here’s a link to a cool NPR story on sleep paralysis: https://www.npr.org/2019/11/21/781724874/seeing-monsters-it-could-be-the-nightmare-of-sleep-paralysis

References

Herrero, N. L., Gallo, F. T., Gasca‐Rolín, M., Gleiser, P. M., & Forcato, C. (2022). Spontaneous and induced out‐of‐body experiences during sleep paralysis: Emotions, “aura” recognition, and clinical implications. Journal of Sleep Research. https://doi.org/10.1111/jsr.13703

Isham, L., Griffith, L., Boylan, A., Hicks, A., Wilson, N., Byrne, R., . . . Freeman, D. (2021). Understanding, treating, and renaming grandiose delusions: A qualitative study. Psychology and Psychotherapy: Theory, Research and Practice, 94(1), 119-140. https://doi.org/10.1111/papt.12260

Romö, N., Miller, N., & Cardoso, A. (2021). Segmental diagnostics of neurogenic and functional foreign accent syndrome. Journal of Neurolinguistics, 58. https://doi.org/10.1016/j.jneuroling.2020.100983

Sharpless, B. A., & Kliková, M. (2019). Clinical features of isolated sleep paralysis. Sleep Medicine, 58, 102-106. https://doi.org/10.1016/j.sleep.2019.03.007

Research is Hard: Procrastination is Easy

Before and after a quick trip to NYC (see the photo), I’m teaching the research class in our Department of Counseling this year. This leads me to re-affirm a conclusion I reached long ago: Research is hard.

Research is hard for many reasons, not the least of which is that scientific language can look and feel opaque. If you don’t know the terminology, it’s easy to miss the point. Even worse, it’s easy to dismiss the point, just because the language feels different. I do that all the time. When I come upon terminology that I don’t recognize, one of my common responses is to be annoyed at the jargon and consequently dismiss the content. As my sister Peggy might have said, that’s like “throwing the baby out with the bathtub.”  

Teaching research to Master’s students who want to practice counseling and see research as a bothersome requirement is especially hard. It doesn’t help that my mastery of research design and statistics and qualitative methods is limited. Nevertheless, I’ve thrown myself into the teaching of research this semester; that’s a good thing, because it means I’m learning.

This week I shared a series of audio recordings of a woman bereaved by the suicide of her former husband. The content and affect in the recordings are incredible. Together, we all listened to the woman’s voice, intermittently cracking with pain and grief. We listened to each excerpt twice, pulling out meaning units and then building a theory around our observations and the content. More on the results from that in another blog.

During the class before, I got several volunteers, hypnotized them, and then used a single-case design to evaluate whether my hypnotic interventions improved or adversely affected their physical performance on a coin-tossing task. The results? Sort of and maybe. Before that, I gave them fake math quizzes (to evaluate math anxiety). I also used graphology and palmistry to conduct personality assessments and make behavioral and life predictions. I had written the names of four (out of 24 students) who would volunteer for the graphology and palmistry activities, placed them in an envelope, and got ¾ correct. Am I psychic? Nope. But I do know the basic rule of behavioral prediction: The best predictor of future behavior is past behavior.

Today is Friday, which means I don’t have many appointments, which means I’m working on some long overdue research reports. Two different happiness projects are burning a hole in my metaphorical research pocket. The first is a write-up of a short 2.5-hour happiness workshop on counseling students’ health and wellness. As it turns out, compared with the control group, students who completed the happiness workshop had immediately and significantly lower scores on the Center for Epidemiologic Studies Depression scale (p = .006). Even better, after six months, up to 81% of the participants believed they were still experiencing benefits from the workshop on at least one outcome variable (i.e., mindfulness). The point of writing this up is to emphasize that even brief workshops on evidence-based happiness interventions can have lasting positive effects on graduate students in counseling.
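For the researcher nerds: a between-group comparison like the one above often boils down to a two-sample t-test on posttest depression scores. Here’s a minimal sketch using only the Python standard library. To be clear, the scores below are invented for illustration, and I’m not claiming this is the exact analysis from our write-up:

```python
import math
import statistics as st

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (illustrative sketch)."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    # t = difference in means divided by its standard error
    return (st.mean(a) - st.mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical posttest CES-D scores (invented data; lower = less depressed)
workshop = [8, 11, 9, 14, 10, 12, 7, 13, 9, 11]
control = [15, 18, 13, 20, 16, 17, 14, 19, 15, 18]

t = two_sample_t(workshop, control)
print(round(t, 2))  # strongly negative here, i.e., workshop scores are lower
```

With real data you’d also want the p-value and an effect size, but the core logic of the comparison is just this: mean difference scaled by its standard error.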

Given that I’m on the cusp of writing up these workshop results, along with a second study of the outcomes of a semester-long happiness course, I’m stopping here so I can get back to work. Not surprisingly, as I mentioned in the beginning of this blog, research is hard; that means it’s much easier for me to write this blog than it is to force myself to do the work I need to do to get these studies published.

As my sister Peggy used to say, I need to stop procrastinating and “put my shoulder to the grindstone.”

The Efficacy of Antidepressant Medications with Youth: Part II

After posting (last Thursday) our 1996 article on the efficacy of antidepressant medications for treating depression in youth, several people have asked if I have updated information. Well, yes, but because I’m old, even my updated research review is old. However, IMHO, it’s still VERY informative.

In 2008, the editor of the Journal of Contemporary Psychotherapy invited Rita and me to publish an updated review on medication efficacy. Rita opted out, and so I recruited Duncan Campbell, a professor of psychology at the University of Montana, to join me.

Duncan and I discovered some parallels and some differences from our 1996 article. The parallels included the tendency for researchers to do whatever they could to demonstrate medication efficacy. That’s not surprising, because much of the antidepressant medication research is funded by pharmaceutical companies. Another parallel was the tendency for researchers to overstate or misstate or twist some of their conclusions in favor of antidepressants. Here’s the abstract:

Abstract

This article reviews existing research pertaining to antidepressant medications, psychotherapy, and their combined efficacy in the treatment of clinical depression in youth. Based on this review, we recommend that youth depression and its treatment can be readily understood from a social-psycho-bio model. We maintain that this model presents an alternative conceptualization to the dominant biopsychosocial model, which implies the primacy of biological contributors. Further, our review indicates that psychotherapy should be the frontline treatment for youth with depression and that little scientific evidence suggests that combined psychotherapy and medication treatment is more effective than psychotherapy alone. Due primarily to safety issues, selective serotonin reuptake inhibitors should be initiated only in conjunction with psychotherapy and/or supportive monitoring.

The main difference from our 1996 review was that in the late 1990s and early 2000s, there were several SSRI studies where SSRIs were reported as more efficacious than placebo. Overall, we found 6 of 10 reporting efficacy. An excerpt follows:

Our PsychInfo and PubMed database searches and cross- referencing strategies identified 10 published RCTs of SSRI efficacy. In total, these studies compared 1,223 SSRI treated patients to a similar number of placebo controls. Using the researchers’ own efficacy criteria, six studies returned significant results favoring SSRIs over placebo. These included 3 of 4 fluoxetine studies (Emslie et al. 1997, 2002; Simeon et al. 1990; The TADS Team 2004), 1 of 3 paroxetine studies (Berard et al. 2006; Emslie et al. 2006; Keller 2001), 1 of 1 sertraline study (Wagner et al. 2003), and 1 of 1 citalopram study (Wagner et al. 2004).

Despite these pharmaceutical-funded positive outcomes, medication-related side-effects were startling, and the methodological chicanery discouraging. Here’s an excerpt where we take a deep dive into the medication-related side effects and adverse events (N.B., the researchers should be lauded for their honest reporting of these numbers, but not for their “safe and effective” conclusions).

SSRI-related medication safety issues for young patients, in particular, deserve special scrutiny and articulation. For example, Emslie et al. (1997) published the first RCT to claim that fluoxetine is safe and efficacious for treating youth depression. Further inspection, however, uncovers not only methodological problems (such as the fact that psychiatrist ratings provided the sole outcome variable and the possibility that intent-to-treat analyses conferred an advantage for fluoxetine due to a 46% discontinuation rate in the placebo condition), but also, three (6.25%) fluoxetine patients developed manic symptoms, a finding that, when extrapolated, suggests the possibility of 6,250 mania conversions for every 100,000 treated youth.

Similarly, in the much-heralded Treatment of Adolescents with Depression Study (TADS), self-harming and suicidal adverse events occurred among 12% of fluoxetine treated youth and only 5% of Cognitive Behavioral Therapy (CBT) patients. Additionally, psychiatric adverse events were reported for 21% of fluoxetine patients and 1% of CBT patients (March et al. 2006; The TADS Team 2004, 2007). Keller et al. (2001), authors of the only positive paroxetine study, reported similar data regarding SSRI safety. In Keller et al.’s sample, 12% of paroxetine-treated adolescents experienced at least one adverse event, and 6% manifested increased suicidal ideation or behavior. Interestingly, in the TCA and placebo comparison groups, no participants evinced increased suicidality. Nonetheless, Keller et al. claimed paroxetine was safe and effective.

When it came to combination treatment, we found only two studies, one of which made a final recommendation that was nearly the opposite of their findings:

Other than TADS, only one other RCT has evaluated combination SSRI and psychotherapy treatment for youth with depression. Specifically, Melvin et al. (2006) directly compared sertraline, CBT, and their combination. They observed partial remission among 71% of CBT patients, 33% of sertraline patients, and 47% of patients receiving combined treatment. Consistent with previously reviewed research, Sertraline patients evidenced significantly more adverse events and side effects. Surprisingly and in contradiction with their own data, Melvin et al. recommended CBT and sertraline with equal strength.

As I summarize the content from our article, I’m aware that you might conclude that I’m completely against antidepressant medication use. That’s not the case. For me, the take-home points include: (a) SSRI antidepressants appear to be effective for some young people with depression, and (b) at the same time, as a general treatment, the risk of side effects, adverse effects, and minimal treatment effects makes SSRIs a bad bet for uniformly positive outcomes, but that doesn’t mean there won’t be any positive outcomes. In the end, for my money—and for the safety of children and adolescents—I’d go with counseling/psychotherapy or exercise as primary treatments for depressive symptoms in youth, both of which have comparable outcomes to SSRIs, with much less risk.

And here’s a link to the whole article:


Antidepressant Medications for Treating Depression in Youth: A 25-Year Flashback

About 25 years ago, Rita and I published an article titled "Efficacy of antidepressant medication with depressed youth: what psychologists should know." Although the article targeted psychologists and was published in the journal Professional Psychology, the content was relevant to all mental health professionals, as well as anyone who works closely with children.

Yesterday, when teaching my research class to a fantastic group of Master's students in the Department of Counseling at UM, I had a moment of reminiscence. Not surprisingly, along with the reminiscence came a resurgence of emotion and passion. I was sharing how it's possible to find an area of interest that hooks so much passion that you might end up tracking down literally everything ever published on that topic (as long as the topic is small enough!).

The motivation behind my interest in the efficacy of antidepressants with youth came about because of a confluence of factors. First, I was working with youth every day, many of whom were prescribed antidepressant medications. Second, I was in a sort of professional limbo—working in full-time private practice—but wishing to be in academia. Third, out of virtually nowhere, in 1994, Bob Deaton, a professor of social work at the University of Montana, asked Rita and me to do an all-day presentation for the Montana Chapter of the National Association of Social Workers. Bob's offer was not to be refused, and I've been in Bob Deaton's debt ever since. If you're out there reading this, thanks again Bob, for your confidence and the opportunity.

To prep, Rita and I split up the content. One of my tasks was to dive into all things related to antidepressant medications. Before embarking on the journey into the literature, I expected there would be modest evidence supporting the efficacy of antidepressants in treating depression in youth.

My expectations were completely wrong. Much to my shock, I discovered that not only was there not much "out there," but the prevailing research was riddled with methodological problems and, bottom line, there had NEVER been a published study indicating that antidepressants were more effective than placebo in treating depression in youth. I was gobsmacked.

Just to give you a taste, here’s the abstract:

Pharmacologic treatments for mental or emotional disorders are becoming increasingly popular, especially in managed care environments. Consequently, psychologists must remain cognizant of medication efficacy concerning specific mental disorders. This article reviews all double-blind, placebo-controlled efficacy trials of tricyclic antidepressants (TCAs) with depressed youth that were published in 1985-1994. Also, all group-treatment studies of depressed youth using fluoxetine, a serotonin-specific reuptake inhibitor (SSRI), are summarized. Results indicate that neither TCAs nor SSRIs have demonstrated greater efficacy than placebo in alleviating depressive symptoms in children and adolescents, despite the use of research strategies designed to give antidepressants an advantage over placebo. The implications of these findings for research and practice are discussed.

Early in my research class this semester, an astute young woman asked about the "rule" she had heard that you shouldn't cite research that's more than 10 years old. It was a great question. I hope I responded rationally, but my apoplectic-ness may have shown in my complexion and words. In my view, we cannot and should not ignore past research. As Samuel Clemens once wrote, "History doesn't repeat itself, it only rhymes." If we don't know the old stuff, we may miss out on the contemporary rhyming pattern. In our article, 25 years old now, we also discussed some medication research reporting shenanigans (although we used more professional language). Here's an excerpt of our discussion about dropout rates.

Dropout rates. Side effects and adverse events can significantly affect medication study outcomes by causing participants to discontinue medication treatment. For example, in the IMI [imipramine] study with children (Puig-Antich et al., 1987), 4 out of 20 (20%) of the medication group did not complete the study, whereas in the two DMI [desipramine] studies (Boulos et al., 1991; Kutcher et al., 1994), 6 out of 18 (33%) and 9 out of 30 (30%) medication participants dropped out because of side effects. For each of these studies, participants who dropped out of the treatment groups before completing the treatment protocol were eliminated from data analyses. The elimination of dropout participants from data analyses produced inappropriately inflated treatment-response rates. For example, although Puig-Antich et al. (1987) reported a treatment-response rate of 56% (9 of 16 participants), if all participants are included within the data analyses, the adjusted or intent-to-treat response rate is 45% (9/20). For the three studies that reported the number of medication protocol participants who dropped out of the study, the average reduction in response rate was 16.5%. Overall, intent-to-treat response rates ranged from less than 8% to 45% (see Table 2 for intent-to-treat response rates for all reviewed TCA studies).
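For readers who like to see the arithmetic spelled out, here's a minimal sketch (my own illustration, not from the article) of how excluding dropouts inflates a response rate, using the Puig-Antich et al. (1987) figures quoted above: 20 enrolled, 4 dropouts, and 9 responders among the 16 completers.

```python
# Completer-only vs. intent-to-treat (ITT) response rates.
# ITT counts every enrolled participant in the denominator,
# treating dropouts as non-responders.

def response_rates(enrolled: int, dropouts: int, responders: int) -> tuple[float, float]:
    """Return (completer_rate, itt_rate) as percentages."""
    completers = enrolled - dropouts
    completer_rate = 100 * responders / completers  # dropouts excluded
    itt_rate = 100 * responders / enrolled          # dropouts counted as failures
    return completer_rate, itt_rate

completer, itt = response_rates(enrolled=20, dropouts=4, responders=9)
print(f"Completer-only rate: {completer:.0f}%")   # 56%
print(f"Intent-to-treat rate: {itt:.0f}%")        # 45%
```

The gap between the two numbers (here, about 11 percentage points) is exactly the inflation the excerpt describes.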

What’s the value, you might wonder, of looking back 25 years at the methodology and outcomes related to tricyclic antidepressant medication use? You may disagree, but I think the rhyming pattern within antidepressant medication research for youth (and adults) remains. If you’re interested in expanding your historical knowledge about this rhyming, I’ve linked the article here.

Research can be boring; it can be opaque; it can be riddled with stats and numbers. Nevertheless, for me, research remains exciting, both as a source of amazing knowledge and as something to read with a critical eye.

The Hottest New Placebos for PTSD

Let’s do a thought experiment.

What if I owned a company and paid all my employees to conduct an intervention study on a drug my company profits from? After completing the study, I pay a journal about ten thousand British pounds to publish the results. That’s not to say the study wouldn’t have been published anyway, but the payment allows for publication on “open access,” which is quicker and gets me immediate media buzz.

My drug intervention targets a longstanding human and societal problem—post-traumatic stress disorder (PTSD). Of course, everyone with a soul wants to help people who have been physically or sexually assaulted or exposed to horrendous natural or military-related trauma. In the study, I compare the efficacy of my drug (plus counseling) with an inactive placebo (plus counseling). The results show that my drug is significantly more effective than an inactive placebo. The study is published. I get great media attention, with two New York Times (NYT) articles, one of which dubs my drug as one of the “hottest new therapeutics since Prozac.”  

In real life, there’s hardly anything I love more than a crackerjack scientific study. And, in real life, my thought experiment is a process that’s typical for large pharmaceutical companies. My problem with these studies is that they use the cover of science to market a financial investment. Having financially motivated individuals conduct research, analyze the results, and report their implications spoils the science.

Over the past month or so, my thought experiment scenario has played out with psilocybin and MDMA (aka ecstasy) in the treatment of PTSD. The company—actually a non-profit—is the Multidisciplinary Association for Psychedelic Studies (MAPS). They funded an elaborate research project, titled "MDMA-assisted therapy for severe PTSD: A randomized, double-blind, placebo-controlled phase 3 study," through private donations. That may sound innocent, but Andrew Jacobs of the NYT described MAPS as "a multimillion dollar research and advocacy empire that employs 130 neuroscientists, pharmacologists and regulatory specialists working to lay the groundwork for the coming psychedelics revolution." Well, that's not your average non-profit.

To be honest, I’m not terribly opposed to careful experimentation with psychedelics for treating PTSD. I suspect psychedelics will be no worse (and no better) than other pharmaceutically produced drugs used to treat PTSD. What I do oppose is dressing up marketing as science. Sadly, this pseudo-scientific approach has been used and perfected by pharmaceutical companies for decades. I’m familiar with promotional pieces impersonating science mostly from the literature on antidepressants for treating depression in youth. I can summarize the results of those studies simply: Mostly, antidepressants don’t work for treating depression in youth. Although some individual children and adolescents will experience benefits from antidepressants, separating the true, medication-based benefits from placebo responses is virtually impossible.

My best guess from reading medication studies for 30 years (and recent psychedelic research) is that the psychedelic drug results will end up about the same as antidepressants for youth. Why? Because placebo.

Placebos can, and usually do, produce powerful therapeutic responses. I’ll describe the details in a later blog post. For now, I just want to say that in the MDMA study, the researchers, despite reasonable efforts, were unable to keep study participants “blind” to whether they were taking MDMA or placebo. Unsurprisingly, 95.7% of patients in the MDMA group accurately guessed that they were in the MDMA group, and 84.1% of patients in the placebo group accurately guessed that they were receiving only the inactive placebo. Essentially, the patients knew what they were getting, and consequently, attributing a positive therapeutic response to MDMA (rather than an MDMA-induced placebo effect) is speculation. . . not science.

In his NYT article (May 9, 2021), Jacobs wrote, “Psilocybin and MDMA are poised to be the hottest new therapeutics since Prozac.” Alternatively, he might have written, “Psilocybin and MDMA are damn good placebos.” Even further, he also could have written, “The best therapeutics for PTSD are and always will be exercise, culturally meaningful and socially-connected processes like sweat lodge therapy, being outdoors, group support, and counseling or psychotherapy with a trusted and competent practitioner.” Had he been interested in prevention, rather than treatment, he would have written, “The even better solution to PTSD involves investing in peace over war, preventing sexual assault, and addressing poverty.”

Unfortunately, my revision of what Jacobs wrote won’t make anyone much money . . . and so you won’t see it published anywhere now or ever—other than right here on this beautiful (and free) blog—which is why you should pass it on.