Category Archives: Research

Today’s Rabbit Hole: What Constitutes Scientific Evidence for Psychotherapy Efficacy?

On July 24, in Helena, I attended a fun and fascinating meeting sponsored by the Carter Center. I spent the day with a group of incredibly smart people dedicated to improving mental health in Montana.

The focus was twofold: how do we promote and establish mental health parity in Montana, and how do we improve behavioral health in schools? Two worthy causes. The discussions were enlightening.

We haven’t solved these problems (yet!). In the meantime, we’re cogitating on the issues we discussed, with plans to coalesce around practical strategies for making progress.

During our daylong discussions, the term evidence-based treatments bounced around. I shared with the group that as an academic psychologist/counselor, I could go deep into a rabbit hole on terminology pertaining to treatment efficacy. Much to everyone’s relief, I exhibited a sort of superhuman inhibition and avoided taking the discussion down a hole lined with history and trivia. But now, much to everyone’s delight (I’m projecting here), I’m sharing part of my trip down that rabbit hole. If exploring the use of terms like evidence-based, best practice, and empirically supported treatment is your jam, read on!

The following content is excerpted from our forthcoming text, Counseling and Psychotherapy Theories in Context and Practice (4th edition). Our new co-author is Bryan Cochran. I’m reading one of his chapters right now . . . which is so good that you all should read it . . . eventually. This text is most often used with first-year students in graduate programs in counseling, psychology, and social work. Consequently, this is only a modestly deep rabbit hole.

Enjoy the trip.

*************************************

What Constitutes Evidence? Efficacy, Effectiveness, and Other Research Models

We like to think that when clients or patients walk into a mental health clinic or private practice, they will be offered an intervention that has research support. This statement, as bland as it may seem, would generate substantial controversy among academics, scientists, and people on the street. One person’s evidence may or may not meet another person’s standards. For example, several popular contemporary therapy approaches have minimal research support (e.g., polyvagal theory and therapy, somatic experiencing therapy).

Subjectivity is a palpable problem in scientific research. Humans are inherently subjective; humans design the studies, construct and administer assessment instruments, and conduct the statistical analyses. Consequently, measuring treatment outcomes always includes error and subjectivity. Despite this, we support and respect the scientific method and appreciate efforts to measure (as objectively as possible) psychotherapy outcomes.

There are two primary approaches to outcomes research: (1) efficacy research and (2) effectiveness research. These terms flow from the well-known experimental design concepts of internal and external validity (Campbell & Stanley, 1963). Efficacy research employs experimental designs that emphasize internal validity, allowing researchers to comment on causal mechanisms; effectiveness research uses experimental designs that emphasize external validity, allowing researchers to comment on generalizability of their findings.

Efficacy Research

Efficacy research involves tightly controlled experimental trials with high internal validity. Within medicine, psychology, counseling, and social work, randomized controlled trials (RCTs) are the gold standard for determining treatment efficacy. RCTs statistically compare outcomes between randomly assigned treatment and control groups. In medicine and psychiatry, the control group is usually administered an inert placebo (i.e., placebo pill). In the end, treatment is considered efficacious if the active medication relieves symptoms, on average, at a rate significantly higher than placebo. In psychotherapy research, treatment groups are compared with a waiting list, attention-placebo control group, or alternative treatment group.

To maximize researcher control over independent variables, RCTs require that participants meet specific inclusion and exclusion criteria prior to random assignment to a treatment or comparison group. This allows researchers to determine with greater certainty whether the treatment itself directly caused treatment outcomes.

In 1986, Gerald Klerman, then head of the National Institute of Mental Health, gave a keynote address to the Society for Psychotherapy Research. During his speech, he emphasized that psychotherapy should be evaluated through RCTs. He claimed:

We must come to view psychotherapy as we do aspirin. That is, each form of psychotherapy must have known ingredients, we must know what these ingredients are, they must be trainable and replicable across therapists, and they must be administered in a uniform and consistent way within a given study. (Quoted in Beutler, 2009, p. 308)

Klerman’s speech advocated for medicalizing psychotherapy. Klerman’s motivation for medicalizing psychotherapy partly reflected his awareness of heated competition for health care dollars. This is an important contextual factor. Events that ensued were an effort to place psychological interventions on par with medical interventions.

The strategy of using science to compete for health care dollars eventually coalesced into a movement within professional psychology. In 1993, Division 12 (the Society of Clinical Psychology) of the American Psychological Association (APA) formed a “Task Force on Promotion and Dissemination of Psychological Procedures.” This task force published an initial set of empirically validated treatments. To be considered empirically validated, treatments were required to be (a) manualized and (b) shown to be superior to a placebo or other treatment, or equivalent to an already established treatment in at least two “good” group design studies or in a series of single case design experiments conducted by different investigators (Chambless et al., 1998).

Division 12’s empirically validated treatments were instantly controversial. Critics protested that the process favored behavioral and cognitive behavioral treatments. Others complained that manualized treatment protocols destroyed authentic psychotherapy (Silverman, 1996). In response, Division 12 held to their procedures for identifying efficacious treatments but changed the name from empirically validated treatments to empirically supported treatments (ESTs).

Advocates of ESTs don’t view common factors in psychotherapy as “important” (Baker & McFall, 2014, p. 483). They view psychological interventions as medical procedures implemented by trained professionals. However, other researchers and practitioners complain that efficacy research outcomes do not translate well (aka generalize) to real-world clinical settings (Hoertel et al., 2021; Philips & Falkenström, 2021).

Effectiveness Research

Sternberg, Roediger, and Halpern (2007) described effectiveness studies:

An effectiveness study is one that considers the outcome of psychological treatment, as it is delivered in real-world settings. Effectiveness studies can be methodologically rigorous …, but they do not include random assignment to treatment conditions or placebo control groups. (p. 208)

Effectiveness research focuses on collecting data with external validity, usually in “real-world” settings. It can be scientifically rigorous, but it doesn’t involve random assignment to treatment and control conditions. Inclusion and exclusion criteria for client participation are less rigid and more like actual clinical practice, where clients come to therapy with a mix of different symptoms or diagnoses. Effectiveness research is sometimes referred to as using “real-world designs” or “pragmatic RCTs” (Remskar et al., 2024). In short, effectiveness research evaluates counseling and psychotherapy as practiced in the real world.

Other Research Models

Other research models also inform researchers and practitioners about therapy process and outcome. These models include survey research, single-case designs, and qualitative studies. However, based on current mental health care reimbursement practices and future trends, providers are increasingly expected to provide services consistent with findings from efficacy and effectiveness research (Cuijpers et al., 2023).

In Pursuit of Research-Supported Psychological Treatments

Procedure-oriented researchers and practitioners believe the active mechanism producing positive psychotherapy outcomes is therapy technique. Common factors proponents support the dodo bird verdict (the conclusion that different bona fide therapies produce roughly equivalent outcomes). To make matters more complex, prestigious researchers who don’t have allegiance to one side or the other typically conclude that we don’t have enough evidence to answer these difficult questions about what ingredients create change in psychotherapy (Cuijpers et al., 2019). Here’s what we know: Therapy usually works for most people. Here’s what we don’t know: What, exactly, produces positive changes.

For now, the question shouldn’t be, “Techniques or common factors?” Instead, we should be asking “How do techniques and common factors operate together to produce positive therapy outcomes?” We should also be asking, “Which approaches and techniques work most efficiently for which problems and populations?” To be broadly consistent with the research, we should combine principles and techniques from common factors and EST perspectives. We suspect that the best EST providers also use common factors, and the best common factors clinicians sometimes use empirically supported techniques.

Naming and Claiming What Works

When it comes to naming and claiming what works in psychotherapy, we have a naming problem. Every day, more research on psychotherapy efficacy and effectiveness rolls in. As a budding clinician, you should track as much of this new research as is reasonable. To help you navigate the language researchers and practitioners use to describe “what works,” here’s a short roadmap.

When Klerman (1986) stated, “We must come to view psychotherapy as we do aspirin” his analogy was ironic. Aspirin’s mechanisms and range of effects have been and continue to be complex and sometimes mysterious (Sommers-Flanagan, 2015). Such is also the case with counseling and psychotherapy.

Language matters, and researchers and practitioners have created many ways to describe therapy effectiveness.

  • D12 briefly used the phrase empirically validated psychotherapy. Given that psychotherapy outcomes vary, the word validated is generally avoided.
  • In the face of criticism, D12 blinked once, renaming their procedures as empirically supported psychotherapy. ESTs are manualized and designed to treat specific mental disorders or specific client problems. If it’s not manualized and doesn’t target a disorder/problem, it’s not an EST.
  • ESTs have proliferated. As of this moment (August 2025), 89 ESTs for 30 different psychological disorders and behavior problems are listed on the Division 12 website (https://div12.org/psychological-treatments/). You can search the website to find the research status of various treatments.
  • To become proficient in providing an EST requires professional training. Certification may be necessary. It’s impossible to obtain training to implement all the ESTs available.
  • In 2006, an APA Presidential Task Force loosened D12’s definition, shifting to a more flexible term, Evidence-Based Practice (EBP), and defining it as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (APA Presidential Task Force, 2006, p. 273).
  • In 2007, the Journal of Counseling and Development, the American Counseling Association’s flagship journal, inaugurated a new journal section, “Best Practices.” As we’ve written elsewhere, best practice has grown subjective and generic and is “often used so inconsistently that it is nearly meaningless” (Sommers-Flanagan, 2015, p. 98).
  • In 2011, D12 relaunched their website, relabeling ESTs as research-supported psychological treatments (n.b., most researchers and practitioners continue to refer to ESTs instead of research-supported psychological treatments).
  • As an alternative source of research updates, you can also track the prolific work of Pim Cuijpers and his research team for regular meta-analyses on psychological treatments (Cuijpers et al., 2023; Harrer et al., 2025).
  • Other naming variations, all designed to convey the message that specific treatments have research support, include evidence-based treatment, evidence-supported treatment, and other phrasings that, in contrast to ESTs and APA’s evidence-based practice definition, have no formal definition.

Manuals, Fidelity, and Creativity

Manualized treatments require therapist fidelity. In psychotherapy, fidelity means exactness or faithfulness to the published procedure—meaning you follow the manual. However, in the real world, when it comes to treatment fidelity, therapist practice varies. Some therapists follow manuals to the letter. Others use the manual as an outline. Still others read the manual, put it aside, and infuse their therapeutic creativity.

A seasoned therapist (Bernard) we know recently provided a short, informal description of his application of exposure therapy to adult and child clients diagnosed with obsessive-compulsive disorder. Bernard described interactions where his adult clients sobbed with relief upon getting a diagnosis. Most manuals don’t specify how to respond to clients sobbing, so he provided empathy, support, and encouragement. Bernard described a therapy scenario where the client’s final exposure trial involved the client standing behind Bernard and holding a sharp kitchen knife at Bernard’s neck. This level of risk-taking and intimacy also isn’t in the manual—but Bernard’s client benefited from Bernard trusting him and his impulse control.

During his presentation, Bernard’s colleagues chimed in, noting that Bernard was known for eliciting boisterous laughter from anxiety-plagued children and teenagers. There’s no manual available on using humor with clients, especially youth with overwhelming obsessional anxiety. Bernard used humor anyway. Although Bernard had read the manuals, his exposure treatments were laced with empathy, creativity, real-world relevance, and humor. Much to his clients’ benefit, Bernard’s approach was far outside the manualized box (B. Balleweg, personal communication, July 14, 2025).    

As Norcross and Lambert (2018) wrote: “Treatment methods are relational acts” (p. 5). The reverse is equally applicable, “Relational acts are treatment methods.” As you move into your therapeutic future, we hope you will take the more challenging path, learning how to apply BOTH the techniques AND the common factors. You might think of this—like Bernard—as practicing the science and art of psychotherapy.

**********************************

Note: This is a draft excerpt from Chapter 1 of our 4th edition, coming out in 2026. As a draft, your input is especially helpful. Please share whether the rabbit hole was too deep, not deep enough, or just right, along with anything else you’re inspired to share.

Thanks for reading!

That Time When I Found A Parallel Universe Where People Like Statistics

Earlier this week I found a parallel universe wherein I was able to convince three people that it would be terribly fun to sit with me in a classroom for 2+ hours and work through the post-course data from our most recent “Happiness for Educators” class. This involved me figuring out how to screencast my computer onto a big screen where I went through the process of accessing our Qualtrics file and exporting the data to SPSS. Then, while experiencing intermittent fits of joy, we cleaned the data, used the “recode” function to reverse score all the items requiring reverse scoring and then calculated our 16 different outcome variables.
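
For readers curious about what the reverse-scoring step actually does, here’s a minimal Python sketch, analogous to the SPSS “recode” function described above. The item names and the 1–5 scale are hypothetical illustrations, not the actual survey items.

```python
# Reverse-scoring Likert items before computing a scale total.
# On a 1-5 scale, reverse scoring maps 1->5, 2->4, 3->3, 4->2, 5->1.

LIKERT_MIN, LIKERT_MAX = 1, 5

def reverse_score(value):
    """Reverse a single 1-5 Likert response: (min + max) - value."""
    return (LIKERT_MIN + LIKERT_MAX) - value

def scale_total(responses, reverse_keyed):
    """Sum one respondent's items, reversing the reverse-keyed ones.

    responses: dict mapping item name -> raw score
    reverse_keyed: set of item names that require reverse scoring
    """
    return sum(
        reverse_score(score) if item in reverse_keyed else score
        for item, score in responses.items()
    )

respondent = {"q1": 5, "q2": 2, "q3": 4}   # q2 is reverse-keyed
print(scale_total(respondent, {"q2"}))     # 5 + (6-2) + 4 = 13
```

The same logic, applied item by item across a whole export, is what turns raw questionnaire columns into the outcome variables.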

In this parallel universe, the three people who joined me (you know who you are), asked great questions and acted interested the WHOLE time. Of course, one of the “people” is a well-established Missoula actor, so there’s the possibility that I was fooled by some excellent acting or feigning or pretending. That said, finding a parallel universe where people act interested in stats remains a feat to brag about.

We made it through all the post-test data. To maximize the fun and bring us all to a place of breathless excitement, I ran a quick descriptive analysis. At first glance, the data looked okay, but not great. Of course, we didn’t have the pretest outcome variables analyzed, and so we were forced to leave with bated breath.

Today, access to the parallel universe was briefly adversely affected by a slight temporal shift; nevertheless, I found one of the “people” and she enthusiastically embraced another 2 hours of stats. . . . At the end, she shouted from her office, “That was fun!”

I know at this point, I am, as Freud might say, “straining your credulity” but I speak the whole truth and nothing but the truth.

And the rest of the truth gets even better. Tammy (my new best stats friend) and I found the following statistical results.

  1. 89 of 100 students completed the pre-post questionnaires.
  2. We had statistical significance on ALL 16 outcomes—at the p < .01 level (or better).
  3. The effect sizes (Cohen’s d with Hedges adjustments) were among our best ever, with top outcomes being:
     • Improved positive affect (feeling more cheerful, etc.): d = .900 (a LARGE effect size)
     • Reduced negative affect (feeling fewer negative emotional states): d = .885 (a LARGE effect size)
     • Improved total self-reported physical health (a compilation of better sleep, reduced headaches, reduced gastrointestinal symptoms, and fewer respiratory symptoms): d = .821 (a LARGE effect size)
     • Reduced depression (as measured by the CES-D): d = .732 (an almost LARGE effect size)
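
If you’d like to see what’s behind those numbers, here’s a minimal Python sketch of Cohen’s d with the Hedges small-sample correction, using one standard two-group formula with a pooled standard deviation. The actual pre-post analyses may use a different variant (e.g., a standard deviation of change scores), and the data below are made up for illustration.

```python
import math

def hedges_g(group1, group2):
    """Cohen's d (pooled SD) with Hedges' small-sample bias correction."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    # Hedges' correction factor J shrinks d slightly for small samples
    j = 1 - 3 / (4 * (n1 + n2 - 2) - 1)
    return d * j

post = [2, 3, 4, 5, 6]   # hypothetical post-course scores
pre = [1, 2, 3, 4, 5]    # hypothetical pre-course scores
print(round(hedges_g(post, pre), 3))  # 0.571
```

By Cohen’s rough benchmarks, d values around .2, .5, and .8 are conventionally read as small, medium, and large, which is why the .7 to .9 values above count as large (or nearly large) effects.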

If you’re reading this, I hope you’re skeptical. Because if you’re skeptical, then I’m sure you’ll want to know whether this is the first, second, third, or fourth time we’ve found this pattern of results. Nope. It’s the FIFTH consecutive time we’ve had all significant outcomes or nearly all significant outcomes that appear to be happening as a function of our happiness for educators course.

Although I am in constant fear that, next time, the results will be less impressive, I’m getting to the point where I’m thinking: These results are not random error, because we now have data across five cohorts and 267 teachers.

If you’re reading this, I also hope you’re thinking what I’m thinking. That is: You should take this course (if you’re a Montana educator) or you should tell your Montana educator friends to take this course. If you happen to be thinking what I’m thinking, here’s the link to sign up for our summer sections.

It’s a pretty good deal. Only $95 to experience more positive emotions, fewer negative emotions, better physical health, reduced depression, and more!

Ten Things Everyone Should Know about Mental Health, Suicide, and Happiness

I’ve spent the better part of the past two weeks doing presentations in various locations and venues. I did five presentations in Nebraska, and found myself surprisingly fond of Lincoln and Kearney, Nebraska. On Thursday I was at a Wellness “Reason to Live” conference with CSKT Tribal Services at Kwataqnuk in Polson. Just now I finished an online talk with the Tex-Chip program. One common topic among these talks was the title of this blog post. I have found myself interestingly passionate about this particular content . . . so much so that I actually feel energized–rather than depleted–after talking for two hours.

Not surprisingly, I’ve had amazingly positive experiences throughout these talks. All the participants have been engaged, interesting, and working hard to be the best people they can be. Beginning with Mourning Hope’s annual breakfast fundraiser, extending into my time with Union Bank employees, then being with the wonderful Indigenous people in Polson, and finally the past two hours Zooming with counseling students in Texas . . . I have felt hope and inspiration for the good things people are doing despite the challenges they face in the current socio-political environment.

If you were at one of these talks (or are reading this post), thanks for being you, and thanks for contributing your unique gifts to the world.

For your viewing pleasure, the PowerPoint slides for this talk are linked here.

Why I’m Mostly Against Universal Suicide Screenings in Schools

I’ve been in repeated conversations with numerous concerned people about the risks and benefits of suicide screenings for youth in schools. Several years ago, I was in a one-on-one coffee shop discussion of suicide prevention with a local suicide prevention coordinator. She said, more as a statement than a question, “Who could be against school-based depression and suicide screenings?”

I slowly raised my hand, forced a smile, and confessed my position.

The question of why I’m not in favor of school-based mental health and suicide screenings is a complex one. On occasion, screenings will work: students at high risk will be identified, and tragedy will be averted. That’s obviously a great outcome. But I believe the mental health casualties from broad, school-based screenings tend to outweigh the benefits. Here’s why.

  1. Early identification of depression and suicide in youth will result in early labeling in school systems; even worse, young people will begin labeling themselves as being “ill” or “defective.” Those labels are sticky and won’t support positive outcomes.
  2. Most youth who experience depressive symptoms and suicide ideation are NOT likely to die by suicide. Odds are that students who don’t report suicidal ideation are just as likely to die by suicide. As the scientists put it, suicidal ideation is not a good predictor of suicide. Also, depression symptoms generally come and go among teenagers. Most teens will recover from depressive symptoms without intensive interventions.
  3. After a year or two of school-based screenings, the students will know the drill. They will realize that if they endorse depression symptoms and suicidal items, they’ll have to experience a pretty horrible assessment and referral process. When I talk to school personnel, they tell me that (a) they already know the students who are struggling, and (b) in year 2 of screenings, the rates of depression and suicidality plummet—because students are smart and they want to avoid the consequences of being open about their emotional state.
  4. About 10-15% of people who complete suicide screenings feel worse afterward. We don’t really want that outcome.
  5. There’s no evidence that school-based screenings are linked to reductions in suicide rates.   

For more info on this, you can check out a brief commentary I published in the American Psychologist with my University of Montana colleague, Maegan Rides At The Door. The commentary focuses on suicide assessment with youth of color, but our points work for all youth. And, citations supporting our perspective are included.

Here are a few excerpts from the commentary:

 Standardized questionnaires, although well-intended and sometimes helpful, can be emotionally activating and their use is not without risk (Bryan, 2022; de Beurs et al., 2016).

In their most recent recommendations, the United States Preventive Services Task Force (2022) concluded that the evidence supporting screening for suicide risk among children and adolescents was “insufficient” (p. 1534). Even screening proponents acknowledge, “There is currently little to no data to show that screening decreases suicide attempt or death rates” (Cwik et al., 2020, p. 255). . . . Across settings, little to no empirical evidence indicates that screening assessments provide accurate, predictive, or useful information for categorizing risk (Bryan, 2022).

And here’s the link to the commentary:

Publication Alert — Broadening and Amplifying the Effects of Positive Psychology Courses on College Student Well-Being, Mental Health, and Physical Health

We have more good news for 2025. At long last, we’ve published a research article based on Dr. Dan Salois’s doctoral dissertation. Congratulations Dan!

This article is part of growing empirical support for our particular approaches to teaching positive psychology, happiness, and how people can live their best lives. As always, I want to emphasize that our approach is NOT about toxic positivity, as we encourage people to deal with the deep conflicts, trauma, and societal issues that cause distress — while also teaching strategies for generating positive affect, joyspotting, and other practices derived from positive psychology.

One of the big takeaways from Dr. Dan’s dissertation is that our happiness class format may produce physical health benefits. Also, it’s important to note that this publication is from early on in our research, and that our later research (currently unpublished) continues to show physical health benefits. Exciting stuff!

Here’s a link to the article. My understanding from the publisher is that only the first 50 clicks on this link can read/view the whole article.
https://www.tandfonline.com/eprint/VXXD3ISCT7EUJ8WAM7UY/full?target=10.1080/07448481.2024.2446434

Here’s a new article published in The Conversation

Happiness class is helping clinically depressed school teachers become emotionally healthy − with a cheery assist from Aristotle

This course is more than just suggesting that you ‘cheer up’ and ‘look on the bright side.’

John Sommers-Flanagan, University of Montana

Uncommon Courses is an occasional series from The Conversation U.S. highlighting unconventional approaches to teaching.

Title of Course

Evidence-Based Happiness for Teachers

What prompted the idea for the course?

I was discouraged. For nearly three decades, as a clinical psychologist, I trained mental health professionals on suicide assessment. The work was good but difficult.

All the while, I watched in dismay as U.S. suicide rates relentlessly increased for 20 consecutive years, from 1999 to 2018, followed by a slight dip during the COVID-19 pandemic, and then a rise in 2021 and 2022 – this despite more local, state and national suicide prevention programming than ever.

I consulted my wife, Rita, who also happens to be my favorite clinical psychologist. We decided to explore the science of happiness. Together, we established the Montana Happiness Project and began offering evidence-based happiness workshops to complement our suicide prevention work.

In 2021, the Arthur M. Blank Family Foundation, through the University of Montana, awarded us a US$150,000 grant to support the state’s K-12 public school teachers, counselors and staff. We’re using the funds to offer these educators low-cost, online graduate courses on happiness. In spring 2023, the foundation awarded us another $150,000 so we could extend the program through December 2025.

What does the course explore?

Using the word “happiness” can be off-putting. Sometimes, people associate happiness with recommendations to just smile, cheer up and suppress negative emotions – which can lead to toxic positivity.

As mental health professionals, my wife and I reject that definition. Instead, we embrace Aristotle’s concept of “eudaimonic happiness”: the daily pursuit of meaning, mutually supportive relationships and becoming the best possible version of yourself.

The heart of the course is an academic, personal and experiential exploration of evidence-based positive psychology interventions. These are intentional practices that can improve mood, optimism, relationships and physical wellness and offer a sense of purpose. Examples include gratitude, acts of kindness, savoring, mindfulness, mood music, practicing forgiveness and journaling about your best possible future self.

Students are required to implement at least 10 of 14 positive psychology interventions, and then to talk and write about their experiences implementing them.

Why is this course relevant now?

Teachers are more distressed than ever before. They’re anxious, depressed and discouraged in ways that adversely affect their ability to teach effectively, which is one reason why so many of them leave the profession after a short period of time. It’s not just the low pay – educators need support, appreciation and coping tools; they also need to know they’re not alone.

Video: https://www.youtube.com/embed/ZOGAp9dw8Ac?wmode=transparent&start=0 – This exercise helps you focus on what goes right, rather than the things that go wrong.

What’s a critical lesson from the course?

The lesson on sleep is especially powerful for educators. A review of 33 studies from 15 countries reported that 36% to 61% of K-12 teachers suffered from insomnia. Although the rates varied across studies, sleep problems were generally worse when teachers were exposed to classroom violence, had low job satisfaction and were experiencing depressive symptoms.

The sleep lesson includes, along with sleep hygiene strategies, a happiness practice and insomnia intervention called Three Good Things, developed by the renowned positive psychologist Martin Seligman.

I describe the technique, in Seligman’s words: “Write down, for one week, before you go to sleep, three things that went well for you during the day, and then reflect on why they went well.”

Next, I make light of the concept: “I’ve always thought Three Good Things was hokey, simplistic and silly.” I show a video of Seligman saying, “I don’t need to recommend beyond a week, typically … because when you do this, you find you like it so much, most people just keep doing it.” At that point, I roll my eyes and say, “Maybe.”

Then I share that I often awakened for years at 4 a.m. with terribly dark thoughts. Then – funny thing – I tried using Three Good Things in the middle of the night. It wasn’t a perfect solution, but it was a vast improvement over lying helplessly in bed while negative thoughts pummeled me.

The Three Good Things lesson is emblematic of how we encourage teachers in our course – using science, playful cynicism and an open and experimental mindset to apply the evidence-based happiness practices in ways that work for them.

I also encourage students to understand that the strategies I offer are not universally effective. What works for others may not work for them, which is why they should experiment with many different approaches.

What will the course prepare students to do?

The educators leave the course with a written lesson plan they can implement at their school, if they wish. As they deepen their happiness practice, they can also share it with other teachers, their students and their families.

Over the past 16 months, we’ve taught this course to 156 K-12 educators and other school personnel. In a not-yet-published survey that we carried out, more than 30% of the participants scored as clinically depressed prior to starting the class, compared with just under 13% immediately after the class.

This improvement is similar to the results obtained by antidepressant medications and psychotherapy.

The educators also reported overall better health after taking the class. Along with improved sleep, they took fewer sick days, experienced fewer headaches and reported reductions in cold, flu and stomach symptoms.

As resources allow, we plan to tailor these courses to other people with high-stress jobs. Already, we are receiving requests from police officers, health care providers, veterinarians and construction workers.

John Sommers-Flanagan, Clinical Psychologist and Professor of Counseling, University of Montana

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Happy Workshop for Graduate Students Pub: Hot off the Digital Press

Good news. Yesterday, I got a mysterious email from ORCID, which stands for Open Researcher and Contributor ID. ORCID is a global, non-profit organization. Their vision is: “a world where all who participate in research, scholarship, and innovation are uniquely identified and connected to their contributions across disciplines, borders, and time.”

Cool.

Anyway, ORCID was notifying me of a change to my ORCID record. A few minutes later, I received an email from Wiley telling me that our Happy Workshop for Grad Students article was now officially published online.

As some of you know, I’ve complained about the journal publishing process, and, although I still think it’s a pretty broken and disturbing process, working with the editors and reviewers from the Journal of Humanistic Counseling was pretty smooth and pretty fabulous. Check them out: https://onlinelibrary.wiley.com/journal/21611939

And so, without further ado, here’s the Abstract, followed by methods to access the article. . .

Effects of a Single-Session, Online, Experiential Happiness Workshop on Graduate Student Mental Health and Wellness

John Sommers-Flanagan

Jayna Mumbauer-Pisano

Daniel Salois

Kristen Byrne

Abstract

Graduate students regularly experience anxiety, sleep disturbances, and depression, but little research exists on how to support their mental health. We evaluated the effects of a single-session, online, synchronous happiness workshop on graduate student well-being, mental health, and physical health. Forty-five students participated in a quasi-experimental study, either attending a synchronous 2.5-hour online happiness workshop or serving in a no-workshop control condition. After workshop completion and as compared with no-treatment controls, participants reported significant reductions in depression symptoms but no significant changes on seven other measures. At 6 months, participants reported further reductions in depression symptoms. Moreover, across four open-ended questions, 37.0%–48.1% of workshop participants (a) recalled workshop tools, (b) found them useful, (c) had been practicing them regularly, and (d) used them in sessions with clients. Despite study limitations, single-session, synchronous, online happiness workshops may have salutary effects on graduate student mental health. Additional research is needed.

Keywords: depression, graduate students, mental health, single-session, wellness

Here’s a link to the article online: https://onlinelibrary.wiley.com/share/author/UMKTTSPPECBTVXEQYRKX?target=10.1002/johc.12223

And here’s a pdf copy for your personal (non-commercial) use:

Evidence-Based Happiness for Teachers: Preliminary Results (and another opportunity)

We’ve been collecting outcomes data on our Evidence-Based Happiness course for Teachers. From last summer, we have pre-post data on 39 participants. We had VERY significant results on all of the following outcomes:

  • Less negative affect
  • More positive affect
  • Lower depression scores
  • Better sleep
  • Fewer headaches
  • Less gastrointestinal distress
  • Fewer colds
  • Increased hope
  • Increased mindfulness

If you’re a Montana Educator and you want to take the course THIS summer, it’s online, asynchronous, and only $195 for 3 Graduate Credits. You can register here: https://www.campusce.net/umextended/course/course.aspx?C=712&pc=13&mc=&sc=

If you’re not an educator, you must know one, and they deserve this, so share it, please!

Now for you researcher nerds. Over the past week, I’ve tried to fit in some manuscript writing time. If you’re following this blog, you’ll already know that I’ve experienced some rejections and frustrations in my efforts to publish our positive psychology/happiness outcomes. I’ve also emailed various editors to let them know what I think of their reviews and review processes. . . which means I may have destroyed my chances at publication. On the other hand, maybe sometimes editors and reviewers need a testy review sent their way!

Yesterday, a friend from UC Santa Barbara sent me a fairly recent review of all the empirical research on College Happiness Course Outcomes. To summarize the review: There are HARDLY ANY good studies with positive outcomes that have been published. Specifically, if you look at U.S. published studies, only three studies with control groups and positive outcomes have been published. There’s one more I know of. If you want to read the article, here it is:

As always, thanks for reading. I’ll be posting a “teaching group counseling” update soon! JSF

Savor This!

As many of you who know me in person or through this blog can attest, I’m quite capable of backward-savoring. . . which might be why I find this week’s Montana Happiness Challenge activity especially compelling.

Savoring is defined as a deliberate effort to extend and expand positive experiences. Or, as I learned from Dr. Heidi Zetzer of the University of California, Santa Barbara, “Savoring is amplifying and extending positive emotions, by lingering, reveling, relishing, or something even more active like taking a victory lap!” I also stole this photo from one of Heidi’s happiness slides. Thanks Heidi!!

So, how can anyone—or me—do savoring backward? Enter another fun word: Rumination.

Dictionary.com defines rumination as (1) a deep or considered thought about something. Or, (2) the action of chewing the cud.

Essentially, to ruminate is to think hard. You may be ruminating right now, wondering, “What’s backward or bad about thinking hard?”

Well, in the domain of mental health, we focus on a particular type of rumination. For example, according to the American Psychiatric Association, “Rumination involves repetitive thinking or dwelling on negative feelings and distress and their causes and consequences.”

Thinking hard about negative things is precisely the opposite of savoring. And, despite my surface penchant for the positive, both my wife and I would attest to the fact that I’m also an excellent ruminator—as in the psychiatric sense, not so much in the cud-chewing sense.

As we like to say in academia, the research on savoring is damn good. Well, maybe we don’t really like to say “damn good,” but I’m sure someone has said that at some point in time, probably while savoring all the savoring research.  

How good is the research, you ask?

People instructed to savor, depending on the type of savoring, generally report improved mood, increased satisfaction, greater hope for positive life events, increased planning, and a greater likelihood of repeating a previously savor-worthy experience. Just savor that for an extra moment or two. For something so simple, savoring research has damn good outcomes.

This week, our Montana Happiness Challenge savoring activity provides you with a menu of different savoring activities to try out. You can read the details on the Montana Happiness Project website: https://montanahappinessproject.com/savoring

In summary: this week, pick one savoring assignment from the menu of research-based savoring activities below. Each of these activities has research support; doing any of them might make you feel significantly more happiness or less depression. Here are your options:

  • Engage in mutual reminiscence. Mutual reminiscence happens when you get together with someone and intentionally pull up and talk about fun, positive, or meaningful memories. I was on the phone with a friend last week and did a bit of this and it was nice. Now I have memories of us remembering our shared positive memories.
  • Make a list of positive memories. After making the list, transport yourself to reminisce on one of the memories. You can do this by yourself. Retrieve the memory. Play it back in your mind. Explore it. Feel it. Let your brain elaborate on the details.
  • Celebrate good news longer than you normally would. This is easy. Watch for a positive message or piece of news in your life that feels good. Then, let your mind linger on it. Notice how you feel. What parts of the news are especially meaningful and pleasant to you? Extend and celebrate the good news.
  • Notice and observe beauty. This activity is mostly visual, but you can attend to beautiful sounds and smells too. Let yourself see color, patterns, and nuanced beauty in nature or in art. Linger with that visual and let its pleasant effects be in your eyes, brain, and body. Notice and feel those sensations and thoughts.

As usual, consider making your savoring public. . . and tag us, so we have more things to savor.

University of Montana Happiness Class Research Results Round 1 (again): The Structured Abstract

I’ve spent the morning learning. At this point in my life, learning requires simultaneous regulation of my snarky irreverence. Although I intellectually know I don’t know everything, when I discover, as I do ALL. THE. TIME., that I don’t know something, I have to humble myself unto the world.

Okay. I know I’m being a little dramatic.

Less than an hour after I pushed “submit” on our latest effort to publish Round 1 of our happiness class data, I received a message from the very efficient editor that our manuscript had been “Unsubmitted.” Argh! The good news is that the editor was just letting us know that we needed to follow the manuscript submission guidelines and include a “Structured Abstract.” Who knew?

The best news is I wrote a structured abstract and discovered that I like structured abstracts way more than I like traditional abstracts. So, that’s cool.

And, here it is!

Abstract

Background: University counseling center services are inadequate to address current student mental health needs. Positive psychology courses may be scalable interventions that address student well-being and mental health.

Objective: The purpose of this study was to evaluate the effects of a multi-component positive psychology course on undergraduate student well-being, mental health, and physical health.

Method: We used a quantitative, quasi-experimental, pretest-posttest design. Participants in a multi-component positive psychology course (n = 38) were compared to a control condition (n = 41). All participants completed pre-post measures of well-being, physical health, and mental health.

Results: Positive psychology students reported significantly improved well-being and physical health on eight of 18 outcome measures. Although results on the depression scale were not statistically significant, a post-hoc analysis showed that positive psychology students who were severely depressed at pretest reported substantial depression symptom reduction at posttest, whereas severely depressed control group students showed no improvement.

Conclusion: Positive psychology courses may produce important salutary effects on student physical and mental health. Future research should include larger samples, random assignment, and greater diversity.

Teaching Implications: Psychology instructors should collaborate with student affairs to explore how positive psychology courses and interventions can facilitate student well-being, health, and mental health.
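For the researcher nerds among you: the pretest-posttest comparisons described in the Method section above are typically analyzed with paired tests on each outcome measure. Here’s a rough sketch of what that looks like in Python. To be clear, this is not our actual analysis code, and the scores below are entirely made up for illustration:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pretest-posttest scores.

    A two-tailed test would compare |t| against the t distribution
    with n - 1 degrees of freedom.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    se = stdev(diffs) / math.sqrt(n)  # standard error of the mean difference
    return mean(diffs) / se

# Hypothetical depression scores (lower = better), illustration only
pre_scores = [10, 12, 14, 16]
post_scores = [8, 9, 12, 13]
t = paired_t(pre_scores, post_scores)  # negative t: scores dropped pre to post
```

In practice, each outcome measure gets its own test (or a correction for multiple comparisons), and the control group’s pre-post change is compared against the intervention group’s.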