Tag Archives: Science

Today’s Rabbit Hole: What Constitutes Scientific Evidence for Psychotherapy Efficacy?

On July 24, in Helena, I attended a fun and fascinating meeting sponsored by the Carter Center. I spent the day with a group of incredibly smart people dedicated to improving mental health in Montana.

The focus was twofold: how do we promote and establish mental health parity in Montana, and how do we improve behavioral health in schools? Two worthy causes. The discussions were enlightening.

We haven’t solved these problems (yet!). In the meantime, we’re cogitating on the issues we discussed, with plans to coalesce around practical strategies for making progress.

During our daylong discussions, the term evidence-based treatments bounced around. I shared with the group that as an academic psychologist/counselor, I could go deep into a rabbit hole on terminology pertaining to treatment efficacy. Much to everyone’s relief, I exhibited a sort of superhuman inhibition and avoided taking the discussion down a hole lined with history and trivia. But now, much to everyone’s delight (I’m projecting here), I’m sharing part of my trip down that rabbit hole. If exploring the use of terms like evidence-based, best practice, and empirically supported treatment is your jam, read on!

The following content is excerpted from our forthcoming text, Counseling and Psychotherapy Theories in Context and Practice (4th edition). Our new co-author is Bryan Cochran. I’m reading one of his chapters right now . . . which is so good that you all should read it . . . eventually. This text is most often used with first-year students in graduate programs in counseling, psychology, and social work. Consequently, this is only a modestly deep rabbit hole.

Enjoy the trip.

*************************************

What Constitutes Evidence? Efficacy, Effectiveness, and Other Research Models

We like to think that when clients or patients walk into a mental health clinic or private practice, they will be offered an intervention that has research support. This statement, as bland as it may seem, would generate substantial controversy among academics, scientists, and people on the street. One person’s evidence may or may not meet another person’s standards. For example, several popular contemporary therapy approaches have minimal research support (e.g., polyvagal theory and therapy, somatic experiencing therapy).

Subjectivity is a palpable problem in scientific research. Humans are inherently subjective; humans design the studies, construct and administer assessment instruments, and conduct the statistical analyses. Consequently, measuring treatment outcomes always includes error and subjectivity. Despite this, we support and respect the scientific method and appreciate efforts to measure (as objectively as possible) psychotherapy outcomes.

There are two primary approaches to outcomes research: (1) efficacy research and (2) effectiveness research. These terms flow from the well-known experimental design concepts of internal and external validity (Campbell et al., 1963). Efficacy research employs experimental designs that emphasize internal validity, allowing researchers to comment on causal mechanisms; effectiveness research uses experimental designs that emphasize external validity, allowing researchers to comment on generalizability of their findings.

Efficacy Research

Efficacy research involves tightly controlled experimental trials with high internal validity. Within medicine, psychology, counseling, and social work, randomized controlled trials (RCTs) are the gold standard for determining treatment efficacy. RCTs statistically compare outcomes between randomly assigned treatment and control groups. In medicine and psychiatry, the control group is usually administered an inert placebo (i.e., placebo pill). In the end, treatment is considered efficacious if the active medication relieves symptoms, on average, at a rate significantly higher than placebo. In psychotherapy research, treatment groups are compared with a waiting list, attention-placebo control group, or alternative treatment group.

To maximize researcher control over independent variables, RCTs require that participants meet specific inclusion and exclusion criteria prior to random assignment to a treatment or comparison group. This allows researchers to determine with greater certainty whether the treatment itself directly caused treatment outcomes.
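If it helps to see that logic in concrete form, here’s a minimal sketch of the core comparison an RCT boils down to. Every number below (group sizes, symptom scores, and the size of the treatment effect) is invented purely for illustration; this is not data from any real trial.

```python
# Minimal sketch of an RCT-style efficacy comparison.
# All numbers are invented for illustration; this is not data from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

# 80 hypothetical participants, randomly assigned to treatment or control.
participant_ids = rng.permutation(80)
treatment_ids, control_ids = participant_ids[:40], participant_ids[40:]

# Simulated post-treatment symptom scores (lower = fewer symptoms),
# assuming a modest average benefit in the treatment group.
treatment_scores = rng.normal(loc=12.0, scale=4.0, size=treatment_ids.size)
control_scores = rng.normal(loc=15.0, scale=4.0, size=control_ids.size)

# Independent-samples t-test: is the between-group difference bigger than chance?
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```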

In 1986, Gerald Klerman, then head of the National Institute of Mental Health, gave a keynote address to the Society for Psychotherapy Research. During his speech, he emphasized that psychotherapy should be evaluated through RCTs. He claimed:

We must come to view psychotherapy as we do aspirin. That is, each form of psychotherapy must have known ingredients, we must know what these ingredients are, they must be trainable and replicable across therapists, and they must be administered in a uniform and consistent way within a given study. (Quoted in Beutler, 2009, p. 308)

Klerman’s speech advocated for medicalizing psychotherapy. His motivation partly reflected his awareness of heated competition for health care dollars. This is an important contextual factor: the events that ensued were an effort to place psychological interventions on par with medical interventions.

The strategy of using science to compete for health care dollars eventually coalesced into a movement within professional psychology. In 1993, Division 12 (the Society of Clinical Psychology) of the American Psychological Association (APA) formed a “Task Force on Promotion and Dissemination of Psychological Procedures.” This task force published an initial set of empirically validated treatments. To be considered empirically validated, treatments were required to be (a) manualized and (b) shown to be superior to a placebo or other treatment, or equivalent to an already established treatment in at least two “good” group design studies or in a series of single case design experiments conducted by different investigators (Chambless et al., 1998).

Division 12’s empirically validated treatments were instantly controversial. Critics protested that the process favored behavioral and cognitive behavioral treatments. Others complained that manualized treatment protocols destroyed authentic psychotherapy (Silverman, 1996). In response, Division 12 held to their procedures for identifying efficacious treatments but changed the name from empirically validated treatments to empirically supported treatments (ESTs).

Advocates of ESTs don’t view common factors in psychotherapy as “important” (Baker & McFall, 2014, p. 483). They view psychological interventions as medical procedures implemented by trained professionals. However, other researchers and practitioners complain that efficacy research outcomes do not translate well (i.e., generalize) to real-world clinical settings (Hoertel et al., 2021; Philips & Falkenström, 2021).

Effectiveness Research

Sternberg, Roediger, and Halpern (2007) described effectiveness studies:

An effectiveness study is one that considers the outcome of psychological treatment, as it is delivered in real-world settings. Effectiveness studies can be methodologically rigorous …, but they do not include random assignment to treatment conditions or placebo control groups. (p. 208)

Effectiveness research focuses on collecting data with external validity, which usually means “real-world” settings. It can be scientifically rigorous, but it doesn’t involve random assignment to treatment and control conditions. Inclusion and exclusion criteria for client participation are less rigid and more like actual clinical practice, where clients come to therapy with a mix of different symptoms or diagnoses. Effectiveness research is sometimes referred to as “real-world designs” or “pragmatic RCTs” (Remskar et al., 2024). In short, effectiveness research evaluates counseling and psychotherapy as practiced in the real world.

Other Research Models

Other research models also inform researchers and practitioners about therapy process and outcome. These models include survey research, single-case designs, and qualitative studies. However, based on current mental health care reimbursement practices and future trends, providers are increasingly expected to provide services consistent with findings from efficacy and effectiveness research (Cuijpers et al., 2023).

In Pursuit of Research-Supported Psychological Treatments

Procedure-oriented researchers and practitioners believe the active mechanism producing positive psychotherapy outcomes is therapy technique. Common factors proponents support the dodo bird verdict (the conclusion that different therapies produce roughly equivalent outcomes, which implies that the factors therapies share, such as the therapeutic relationship, drive change). To make matters more complex, prestigious researchers who don’t have allegiance to one side or the other typically conclude that we don’t have enough evidence to answer these difficult questions about what ingredients create change in psychotherapy (Cuijpers et al., 2019). Here’s what we know: Therapy usually works for most people. Here’s what we don’t know: What, exactly, produces positive changes.

For now, the question shouldn’t be, “Techniques or common factors?” Instead, we should be asking “How do techniques and common factors operate together to produce positive therapy outcomes?” We should also be asking, “Which approaches and techniques work most efficiently for which problems and populations?” To be broadly consistent with the research, we should combine principles and techniques from common factors and EST perspectives. We suspect that the best EST providers also use common factors, and the best common factors clinicians sometimes use empirically supported techniques.

Naming and Claiming What Works

When it comes to naming and claiming what works in psychotherapy, we have a naming problem. Every day, more research information about psychotherapy efficacy and effectiveness rolls in. As a budding clinician, you should track as much of this new research as is reasonable. To help you navigate the language researchers and practitioners use to describe what works, here’s a short roadmap.

When Klerman (1986) stated, “We must come to view psychotherapy as we do aspirin,” his analogy was ironic. Aspirin’s mechanisms and range of effects have been and continue to be complex and sometimes mysterious (Sommers-Flanagan, 2015). Such is also the case with counseling and psychotherapy.

Language matters, and researchers and practitioners have created many ways to describe therapy effectiveness.

  • D12 briefly used the phrase empirically validated treatments. Given that psychotherapy outcomes vary, the word validated is now generally avoided.
  • In the face of criticism, D12 blinked once, renaming their procedures empirically supported treatments. ESTs are manualized and designed to treat specific mental disorders or specific client problems. If it’s not manualized and doesn’t target a disorder/problem, it’s not an EST.
  • ESTs have proliferated. As of this moment (August 2025), 89 ESTs for 30 different psychological disorders and behavior problems are listed on the Division 12 website (https://div12.org/psychological-treatments/). You can search the website to find the research status of various treatments.
  • To become proficient in providing an EST requires professional training. Certification may be necessary. It’s impossible to obtain training to implement all the ESTs available.
  • An APA Presidential Task Force (2006) loosened D12’s definition, shifting to a more flexible term, Evidence-Based Practice (EBP), and defining it as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 273).
  • In 2007, the Journal of Counseling and Development, the American Counseling Association’s flagship journal, inaugurated a new journal section, “Best Practices.” As we’ve written elsewhere, best practice has grown subjective and generic and is “often used so inconsistently that it is nearly meaningless” (Sommers-Flanagan, 2015, p. 98).
  • In 2011, D12 relaunched their website, relabeling ESTs as research-supported psychological treatments (n.b., most researchers and practitioners continue to refer to ESTs instead of research-supported psychological treatments).
  • As an alternative source of research updates, you can also track the prolific work of Pim Cuijpers and his research team for regular meta-analyses on psychological treatments (Cuijpers et al., 2023; Harrer et al., 2025).
  • Other naming variations, all designed to convey the message that specific treatments have research support, include evidence-based treatment, evidence-supported treatment, and other phrasings that, in contrast to ESTs and APA’s evidence-based practice definition, have no formal definition.

Manuals, Fidelity, and Creativity

Manualized treatments require therapist fidelity. In psychotherapy, fidelity means exactness or faithfulness to the published procedure—meaning you follow the manual. However, in the real world, when it comes to treatment fidelity, therapist practice varies. Some therapists follow manuals to the letter. Others use the manual as an outline. Still others read the manual, put it aside, and infuse their therapeutic creativity.

A seasoned therapist (Bernard) we know recently provided a short, informal description of his application of exposure therapy to adult and child clients diagnosed with obsessive-compulsive disorder. Bernard described interactions where his adult clients sobbed with relief upon getting a diagnosis. Most manuals don’t specify how to respond to clients sobbing, so he provided empathy, support, and encouragement. Bernard described a therapy scenario where the client’s final exposure trial involved the client standing behind Bernard and holding a sharp kitchen knife at Bernard’s neck. This level of risk-taking and intimacy also isn’t in the manual—but Bernard’s client benefited from Bernard trusting him and his impulse control.

During his presentation, Bernard’s colleagues chimed in, noting that Bernard was known for eliciting boisterous laughter from anxiety-plagued children and teenagers. There’s no manual available on using humor with clients, especially youth with overwhelming obsessional anxiety. Bernard used humor anyway. Although Bernard had read the manuals, his exposure treatments were laced with empathy, creativity, real-world relevance, and humor. Much to his clients’ benefit, Bernard’s approach was far outside the manualized box (B. Balleweg, personal communication, July 14, 2025).    

As Norcross and Lambert (2018) wrote: “Treatment methods are relational acts” (p. 5). The reverse is equally applicable, “Relational acts are treatment methods.” As you move into your therapeutic future, we hope you will take the more challenging path, learning how to apply BOTH the techniques AND the common factors. You might think of this—like Bernard—as practicing the science and art of psychotherapy.

**********************************

Note: This is a draft excerpt from Chapter 1 of our 4th edition, coming out in 2026. Because it’s a draft, your input is especially helpful. Please share whether the rabbit hole was too deep, not deep enough, or just right, along with anything else you’re inspired to share.

Thanks for reading!

Concerns about Science

As many of you know, over the past year or so I’ve been frustrated in my efforts to publish a couple of journal articles. I know I’m not the only one who has experienced this, but this morning we got another rejection (the third for this manuscript) that triggered me in a way that, as the feminists might say, raised my consciousness.

Three colleagues and I are trying to publish the outcomes from a short online “happiness workshop” I did a couple years ago for counseling students. Mostly the results were nonsignificant, except for the depression scale we used, which showed our workshop participants were less depressed than a non-random control group. Also, based on open-ended responses, participants seemed to find the workshop experience helpful and relevant to their lives.

Problems with the methodology in this study are obvious. In this most recent rejection, one reviewer noted the lack of “generalizability” of our data. I totally agree. The study has a relatively small n, nonrandom group assignment, yada, yada, yada. We acknowledge all this in the manuscript. Having a reviewer point out to us what we have readily acknowledged is annoying, but accurate. In fact, this rejection was accompanied by the most informed and reasonable reviews we’ve gotten yet.

Nevertheless, I immediately sent out a response email to the editor . . . which, because I’m partially all about entertainment, I’m sharing below. As you’ll see, for this rejection, my concerns are less with the reviews, and more about WHAT IS BEING PUBLISHED IN SO-CALLED SCIENTIFIC JOURNALS. Although I don’t think it’s necessary, I’ve anonymized my email so as to not incriminate anyone.

Dear Editor,

Thanks for your timely processing of our manuscript.

Overall, I believe your reviewers did a nice job of reading the manuscript, noting problems, and providing feedback. Being very familiar with the journal submission and feedback process, I want to compliment you and your reviewers on your evaluation of our manuscript. Compared to the quality of feedback I’ve obtained from other journals, you and your team did well.

Now I’d like to apologize in advance for the rest of this email because it’s a critique not only of your journal, but of counseling research more generally.

Despite your professional review, I have concerns about the decision, and rather than sit on them, I’m going to share them.

Although the reviews were accurate, and, as Reviewer 1 noted, there are generalizability concerns (but aren’t there always?), I looked at the most recent online articles published in [your journal] to get a feel for the journal’s standards for generalizability, among other issues. What I found was disturbing.

In the seven published 2023 articles from your most recent issue, none have data that are even close to generalizable, and yet all of the articles offer recommendations, as if there were generalizable data. In the [first] article there’s an n of 8; [the second article] has an n of 6 and uses a made-up questionnaire. I know these are qualitative studies, but, oh my, they don’t shy away from widely offering recommendations (is that not generalizing?), based on minimal data. Four of the articles in the most recent issue have no data; that’s okay, they’re interesting and may be useful. The only “empirical” study is a survey with n = 165, using a correlational analysis. But no information is provided on the survey response rate, and so any justification for generalization is absent. Overall, some of these articles are interesting, and written by people I know and like. But none of them have anything close to what might be considered “generalizability.”

What’s most concerning to me is that none of the published articles employ an experimental design. My impression is that “Counselor Education and Preparation” (not just the journal, but the whole profession) mostly avoids experimental or quasi-experimental designs and privileges qualitative research or correlational designs that, of course, are really just open inquiries about the relationships among two or more variables.

This is the third rejection of this manuscript from counseling journals that, to be frank, essentially have no scientific impact factor. Maybe the manuscript is unpublishable. I would be more open to that possibility had I not read the published articles from [your journal and other journals]. My best guess (hypothesis) is that counseling journals have double standards; they allow generalizing statements from qualitative studies, but they hold experimental designs to inappropriately high standards. I say inappropriate here because all experimental designs are flawed in one way or another, and finding those flaws is easier than understanding them.

I know I’m biased, but my last problem with the rejection of this manuscript has to do with relevance. We tried to offer counseling students a short workshop intervention to help them cope with their COVID-related distress and distress in general–something that I think more counseling programs should do, and something that I think is innately relevant and potentially very meaningful to counseling students and practitioners.

Sorry again for this email and its length, but I hope some of what I’ve shared is food for thought for you in your role as journal editor.

Thanks again for the timely review and feedback. I do appreciate the professionalism.

Sincerely,

John SF

If you’re still reading and following my incessant complaining, for your continued pleasure, now I’m pasting my email response to my coauthors, one of whom wrote us all this morning beginning with the word “Bummer.”

Hi There,

Yes! Another bummer.

For entertainment purposes, I kept you all on my email to the editor.

Although I’m clearly triggered, I now know more about self-care, because I just read some articles in the [Journal]. In their [most recent lead published article], the authors wrote:

“Most participants also offered some recommendations for self-care practices to process crisis counseling. One participant (R2) indicated, ‘I keep a journal with prayers, thoughts and feelings, complaints and poetry.’”

Now that I’ve done my complaining, I need to take time off to pray and write a poem or two, but then, yes . . .  I will continue to send this out into the world in hopes of eventual validation.

Happy Friday to you all,

John

I hope you all caught my clever utilization of recommendations from the offending journal to cope with this latest rejection. The good news is, like most rejections, this one was clarifying and inspired me with even more snark energy than I usually have.

Have a great weekend.

The Delight of Scientific Discovery

Art historians point to images like Henry Fuseli’s 1781 painting “The Nightmare” as early depictions of sleep paralysis.

Consensus among my family and friends is that I’m weird. I’m good with that. Being weird may explain why, on the Saturday morning of Thanksgiving weekend, I was delighted to be searching PsycINFO for citations to fit into the revised Mental Status Examination chapter of our Clinical Interviewing textbook.

One thing: I found a fantastic article on Foreign Accent Syndrome (FAS). If you’ve never heard of FAS, you’re certainly not alone. Here’s the excerpt from our chapter:   

Many other distinctive deviations from normal speech are possible, including a rare condition referred to as “foreign accent syndrome.” Individuals with this syndrome speak with a nonnative accent. Both neurological and psychogenic factors have been implicated in the development of foreign accent syndrome (Romö et al., 2021).

Romö’s article, cited above, described research indicating that some forms of FAS have clear neurological or brain-based etiologies, while others appear psychological in origin. Turns out researchers may be able to discriminate between the two based on “Schwa insertion and /r/ production.” How cool is that? To answer my own question: Very cool!

Not to be outdone, a research team from Oxford (Isham et al., 2021) reported on qualitative interviews with 15 patients who had grandiose delusions. They wrote: “All patients described the grandiose belief as highly meaningful: it provided a sense of purpose, belonging, or self-identity, or it made sense of unusual or difficult events.” Ever since I worked about 1.5 years in a psychiatric hospital back in 1980-81, I’ve had affection for people with psychotic disorders, and felt their grandiose delusions held meaning. Wow.  

One last delight, and then I’ll get back to my obsessive PsycINFO search-aholism.

Having experienced sleep paralysis when I was a frosh/soph attending Mount Hood Community College in 1975-1976, I’ve always been super-delighted to discover old and new information about multi-sensory (and bizarre) experiences linked to sleep paralysis episodes. Today I found two articles stunningly relevant to my 1970s SP experiences. One looked at over 300 people and their sleep paralysis/out-of-body experiences. They found that having out-of-body experiences during sleep paralysis reduced the usual distress linked to sleep paralysis. The other study surveyed 185 people with sleep paralysis and found that most of them, as I did in the 1970s, experienced hallucinations of people in the room, and many believed the “others” in the room to be supernatural. I find these results oddly confirming of my long-past sleep paralysis experiences.

All this delight at scientific discovery leads me to conclude that (a) knowledge exists, (b) we should seek out that knowledge, and (c) gaining knowledge can help us better understand our own experiences, as well as the experiences of others.

And another conclusion: We should all offer a BIG THANKS to all the scientists out there grinding out research and contributing to society . . . one study at a time.

For more: Here’s a link to a cool NPR story on sleep paralysis: https://www.npr.org/2019/11/21/781724874/seeing-monsters-it-could-be-the-nightmare-of-sleep-paralysis

References

Herrero, N. L., Gallo, F. T., Gasca‐Rolín, M., Gleiser, P. M., & Forcato, C. (2022). Spontaneous and induced out‐of‐body experiences during sleep paralysis: Emotions, “aura” recognition, and clinical implications. Journal of Sleep Research. https://doi.org/10.1111/jsr.13703

Isham, L., Griffith, L., Boylan, A., Hicks, A., Wilson, N., Byrne, R., . . . Freeman, D. (2021). Understanding, treating, and renaming grandiose delusions: A qualitative study. Psychology and Psychotherapy: Theory, Research and Practice, 94(1), 119-140. https://doi.org/10.1111/papt.12260

Romö, N., Miller, N., & Cardoso, A. (2021). Segmental diagnostics of neurogenic and functional foreign accent syndrome. Journal of Neurolinguistics, 58, 100983. https://doi.org/10.1016/j.jneuroling.2020.100983

Sharpless, B. A., & Kliková, M. (2019). Clinical features of isolated sleep paralysis. Sleep Medicine, 58, 102-106. https://doi.org/10.1016/j.sleep.2019.03.007

The Hottest New Placebos for PTSD

Let’s do a thought experiment.

What if I owned a company and paid all my employees to conduct an intervention study on a drug my company profits from? After completing the study, I pay a journal about ten thousand British pounds to publish the results. That’s not to say the study wouldn’t have been published anyway, but the payment allows for publication on “open access,” which is quicker and gets me immediate media buzz.

My drug intervention targets a longstanding human and societal problem—post-traumatic stress disorder (PTSD). Of course, everyone with a soul wants to help people who have been physically or sexually assaulted or exposed to horrendous natural or military-related trauma. In the study, I compare the efficacy of my drug (plus counseling) with an inactive placebo (plus counseling). The results show that my drug is significantly more effective than an inactive placebo. The study is published. I get great media attention, with two New York Times (NYT) articles, one of which dubs my drug as one of the “hottest new therapeutics since Prozac.”  

In real life, there’s hardly anything I love much more than a cracker-jack scientific study. And, in real life, my thought experiment is a process that’s typical for large pharmaceutical companies. My problem with these studies is that they use the cover of science to market a financial investment. Having financially motivated individuals conduct research, analyze the results, and report their implications spoils the science.

Over the past month or so, my thought experiment scenario has played out with psilocybin and MDMA (aka ecstasy) in the treatment of PTSD. The company—actually a non-profit—is the Multidisciplinary Association for Psychedelic Studies (MAPS). They funded, through private donations, an elaborate research project titled “MDMA-assisted therapy for severe PTSD: A randomized, double-blind, placebo-controlled phase 3 study.” That may sound innocent, but Andrew Jacobs of the NYT described MAPS as “a multimillion dollar research and advocacy empire that employs 130 neuroscientists, pharmacologists and regulatory specialists working to lay the groundwork for the coming psychedelics revolution.” Well, that’s not your average non-profit.

To be honest, I’m not terribly opposed to careful experimentation with psychedelics for treating PTSD. I suspect psychedelics will be no worse (and no better) than other pharmaceutical drugs used to treat PTSD. What I do oppose is dressing up marketing as science. Sadly, this pseudo-scientific approach has been used and perfected by pharmaceutical companies for decades. I’m familiar with promotional pieces impersonating science mostly from the literature on antidepressants for treating depression in youth. I can summarize the results of those studies simply: Mostly, antidepressants don’t work for treating depression in youth. Although some individual children and adolescents will experience benefits from antidepressants, separating the true, medication-based benefits from placebo responses is virtually impossible.

My best guess from reading medication studies for 30 years (and recent psychedelic research) is that the psychedelic drug results will end up about the same as antidepressants for youth. Why? Because placebo.

Placebos can, and usually do, produce powerful therapeutic responses. I’ll describe the details in a later blog post. For now, I just want to say that in the MDMA study, the researchers, despite reasonable efforts, were unable to keep study participants “blind” to whether they were taking MDMA vs. placebo. Unsurprisingly, 95.7% of patients in the MDMA group accurately guessed that they were in the MDMA group, and 84.1% of patients in the placebo group accurately guessed they were only receiving inactive placebos. Essentially, the patients knew what they were getting, and consequently, attributing a positive therapeutic response to MDMA (rather than an MDMA-induced placebo effect) is speculation . . . not science.

In his NYT article (May 9, 2021), Jacobs wrote, “Psilocybin and MDMA are poised to be the hottest new therapeutics since Prozac.” Alternatively, he might have written, “Psilocybin and MDMA are damn good placebos.” Even further, he also could have written, “The best therapeutics for PTSD are and always will be exercise, culturally meaningful and socially-connected processes like sweat lodge therapy, being outdoors, group support, and counseling or psychotherapy with a trusted and competent practitioner.” Had he been interested in prevention, rather than treatment, he would have written, “The even better solution to PTSD involves investing in peace over war, preventing sexual assault, and addressing poverty.”

Unfortunately, my revision of what Jacobs wrote won’t make anyone much money . . . and so you won’t see it published anywhere now or ever—other than right here on this beautiful (and free) blog—which is why you should pass it on.

A Letter to My Happiness Class on Why I Called BS on the So-Called Law of Attraction


[This is a letter to my happiness class]

Hello Happy People,

When happiness class ends, sometimes I wish we could continue in conversation. You may not feel that way. You might be thinking, “Thank you, Universe! Class is finally over.” But as a long-time professor-type, on many days I wish we could keep on talking and learning. I know it may not surprise you to hear that I’m feeling like I’ve got more to say :).

This week (Tuesday, February 11) was one of those days. Many of you made great comments and asked big questions. But, given that time is a pesky driver of everything, I/we couldn’t go as deep as we might have. Here’s an example of a question I loved, but that I felt I didn’t go deep enough with:

“Do you believe in the Law of Attraction?”

This is a fascinating question with profound contemporary relevance. At the time, if you recall, I had dissed inspirational statements like “If you can imagine it, you can achieve it. If you can dream it, you can become it” as “just bullshit.” Then, in response to the question of whether I believe in the so-called Law of Attraction, I said something like “I don’t completely disbelieve it” . . . and then pretended that I was in possession of a scientific mental calculator and said something like, “I believe things like imagining the positive can have a positive effect, but it might contribute about 3% of the variation to what happens to people in the future.”

Not surprisingly, upon reflection, I’m thinking that my use of the word “bullshit” and my overconfident estimation of “3% of the variation” deserve further explanation. Why? Because if I don’t back up what I say with at least a little science, then I’m doing no better than the folks who write wacky stuff like, “You can if you think you can.” In other words, how can you know if what I say isn’t “just bullshit” too?

At this point I’d like to express my apologies to Dr. Norman Vincent Peale for referring to one of his book titles as “wacky stuff.” However, in my defense, I read the book and I still can’t do whatever I think I can do . . . so there’s that . . . but that’s only a personal anecdote.

Okay. Back to science. Here’s why I said that positive thinking, as in the so-called “Law of Attraction,” might account for only about 3% of the variation in life outcomes.

Back in the 1980s, I did my thesis and dissertation on personality and prediction. At the time I had four roommates, and I felt I could predict their behaviors quite easily on the basis of their personalities. However, much to my surprise, I discovered that social psychology research didn’t support personality as a very good predictor of behavior. Turns out, personality only correlates with behavioral outcomes at about r = 0.3 or r = 0.4. You might think that sounds big, because you might think that r = 0.3 means 30%. But that’s not how it works. If you do the math and multiply the correlation coefficient by itself (0.3 x 0.3 or 0.4 x 0.4), you get what statisticians call the coefficient of determination (in this case, 0.3 x 0.3 = 0.09, or 9%, and 0.4 x 0.4 = 0.16, or 16%). The coefficient of determination tells you how much of the variation in an outcome a predictor accounts for; in other words, if your r = 0.3, then personality explains only about 9% of the variation in the behavior you’re trying to predict.
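If you’d like to check my mental math, here’s the whole r-to-r-squared calculation in a few lines of code (a toy illustration of mine, not numbers from any particular study):

```python
# Toy illustration of the correlation (r) to coefficient of determination (r squared) math.
for r in (0.3, 0.4):
    r_squared = r ** 2
    print(f"r = {r}: explains about {r_squared:.0%} of the variation")
# Prints: r = 0.3 explains about 9%; r = 0.4 explains about 16%
```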

Please note that everything is error-filled, including science, and including me and my shoot from the hip efforts at estimation and prediction. When I say error-filled, I’m not disrespecting science, I’m just acknowledging its limitations.

Okay. Back to the so-called Law of Attraction. In class I was calculating in my mind that if well-measured personality traits like extraversion or introversion only account for about 9-16% of the variation in behavioral outcomes, then the so-called Law (which I’m inclined to rename as the Hypothesis of Attraction) would likely account for significantly less variation . . . so I quickly did some mental math and “3%” popped out of my mouth. What I should have said is that humans are remarkably unpredictable and that personality barely predicts behavior and situations barely predict behavior and so when we hypothesize what might influence our future, we should be careful and underestimate, lest we appear foolishly overconfident, like many television pundits.

Somewhere around this time, someone asked if I thought the authors of books who advocated things like the law of attraction really believed in what they wrote or just wrote their books for profit. My response there was something like, “I don’t know. Maybe a bit of both.” To be perfectly honest—which I’m trying to be—one of my big concerns about things like the law of attraction is that they’re used to increase hope and expectations and typically come at a price. I don’t like the idea of people with profit-driven motives luring vulnerable people with big hopes into paying and then being disappointed. Sometimes I ask myself, “If someone has their life together so much that they discovered a secret to becoming wealthy by visualizing wealth, then they should already be so damn rich that they should just share their secret for free with everyone in an effort to improve people’s lives and the state of the planet!” The corollary to that thought is that if somebody says they’ve got a powerful secret AND THEY WANT TO CHARGE YOU FOR IT, my bullshit spidey sense sounds an alarm. Go ahead, call me suspicious and cynical.

Now. Don’t get me wrong. I’m a big fan of positive thinking. A huge fan. I believe positive thinking can give you an edge, and I believe it can make you happier. But I also think life is deeper than that and multiple factors are involved in how our lives turn out. I don’t want to pretend I’ve got a secret that I can share with you that will result in you living happily ever after with all the money you ever wished for. On the other hand, I do want to encourage everyone to embrace as much as you can the positivity and gratitude and kindness and visions of your best self that we’re talking about and reading about for our happiness class. I want you to have that edge or advantage. I want you to harness that 3% (okay, maybe it could be 7%) and make your lives more like your hopes and dreams.

Later, another student asked how we can know if we’re just fooling ourselves with irrational positivity. Wow. What an amazing question. At the time, I said, we need to scrutinize ourselves and bounce our self-statements or beliefs off of other people—people whom we trust—so we can get feedback. One thing I’d add to what I said in class is that we should also gather scientific information to help us determine whether we’re off in the tulips or thinking rationally. Self-scrutiny, feedback from trusted others, and pursuit of science. . . I think that’s a pretty good recipe for lots of things. It reminds me of what Alfred Adler once wrote about love. . . something like, “Follow your heart, but don’t forget your brain!”

Tomorrow is Valentine’s Day. I hope your weekend is a fabulous mix of following your heart, and hanging onto the science.

John SF