Anger Management Is Getting Mindfully Guided Via Generative AI Such As ChatGPT

AI can help with anger and provide real-time anger management guidance (image credit: Getty).

In today’s column, I examine the use of generative AI and large language models (LLMs) to help control anger and to assist in the regular practice of anger management.

What can be done when you or someone you know seems to be constantly embroiled in a stew of unbridled anger? An obvious and straightforward consideration would be to seek out therapy. Find a human therapist who can assist in working through the circumstances at hand. Another angle is to consider using AI as a means of overcoming anger issues. This use of AI can be helpful, though it isn’t a cure-all and should not be used as a replacement for seeing a mental health professional.

Let’s talk about it.

This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

AI And Mental Health

As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For an extensive listing of my well over one hundred analyses and postings, see the link here and the link here.

There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors, too. I frequently speak up about these pressing matters, including in an appearance on an episode of CBS’s 60 Minutes, see the link here.

Background On AI For Mental Health

I’d like to set the stage on how generative AI and large language models (LLMs) are typically used in an ad hoc way for mental health guidance. Millions upon millions of people are using generative AI as their ongoing advisor on mental health considerations (note that ChatGPT alone has over 900 million weekly active users, a notable proportion of which dip into mental health aspects, see my analysis at the link here). The top-ranked use of contemporary generative AI and LLMs is to consult with the AI on mental health facets; see my coverage at the link here.

This popular usage makes abundant sense. You can access most of the major generative AI systems for nearly free or at a super low cost, doing so anywhere and at any time. Thus, if you have any mental health qualms that you want to chat about, all you need to do is log in to AI and proceed forthwith on a 24/7 basis.

There are significant worries that AI can readily go off the rails or otherwise dispense unsuitable or even egregiously inappropriate mental health advice. Banner headlines in August of this year accompanied the lawsuit filed against OpenAI for their lack of AI safeguards when it came to providing cognitive advisement.

Despite claims by AI makers that they are gradually instituting AI safeguards, there are still a lot of downside risks of the AI doing untoward acts, such as insidiously helping users in co-creating delusions that can lead to self-harm. For my follow-on analysis of details about the OpenAI lawsuit and how AI can foster delusional thinking in humans, see my analysis at the link here. As noted, I have been earnestly predicting that eventually all of the major AI makers will be taken to the woodshed for their paucity of robust AI safeguards.

Today’s generic LLMs, such as ChatGPT, Claude, Gemini, Grok, and others, are not at all akin to the robust capabilities of human therapists. Meanwhile, specialized LLMs are being built to presumably attain similar qualities, but they are still primarily in the development and testing stages. See my coverage at the link here.

Anger And The Role Of Anger Management

Shifting gears, let’s discuss the topic of anger. We all know what anger is like. For most people, anger comes and goes and usually befits the situation at hand. In that sense, anger is considered a normal and common emotion that can serve a useful purpose. The problem arises when anger becomes a fixation and gets out of control. A person consumed with anger will undoubtedly adversely affect their loved ones, friends, co-workers, and just about anyone else they come in contact with.

Thus, anger management classes and guidebooks have risen to this challenge and seek to aid people in managing anger that is beyond the pale.

I bring this up to point out that, to some extent, the use of AI for mental health guidance can also aid in overcoming anger issues. To clarify, AI usage isn’t a kind of cure-all and should not be relied upon solely to resolve such serious matters. The use of human therapists is highly advised. This might then also encompass the use of AI as an adjunct or tool that is used alongside psychotherapy. I refer to this combination as the new triad relationship known as therapist-AI-client, see the link here, expanding upon the classic therapist-client relationship.

According to an article on anger that was posted by the American Psychological Association (APA) website, in a piece entitled “Control Anger Before It Controls You,” these are some crucial points about anger and the role of anger management (excerpts):

  • “Anger is an emotional state that varies in intensity from mild irritation to intense fury and rage.”
  • “Like other emotions, it is accompanied by physiological and biological changes; when you get angry, your heart rate and blood pressure go up, as do the levels of your energy hormones, adrenaline, and noradrenaline.”
  • “Anger can be caused by both external and internal events.”
  • “We can’t physically lash out at every person or object that irritates or annoys us; laws, social norms, and common sense place limits on how far our anger can take us.”
  • “The goal of anger management is to reduce both your emotional feelings and the physiological arousal that anger causes. You can’t get rid of, or avoid, the things or the people that enrage you, nor can you change them, but you can learn to control your reactions.”

When dealing with anger, the usual recommendation is to try to establish a balance of expression, suppression, and calming.

Here’s how that works. You want to learn how to express your anger in healthy and reasonably assertive ways, rather than aggressive and troubling ways. It is equally vital to learn how to suppress anger, which doesn’t mean keeping it bottled up inside of you. The idea is to convert the anger into a more constructive behavior. And the third element consists of learning how to calm yourself. Calming involves controlling your internal biological responses, such as lowering your heart rate and breathing rate, and modulating your outward behavior.

Using Generative AI As A Mental Health Tool

Modern-era generative AI, such as ChatGPT, can help deal with anger. During their initial training, nearly all the popular LLMs scanned Internet content detailing the nature of anger, along with strategies and practices for undertaking anger management. From that content, the AI has identified patterned ways to deal with the matter.

The beauty of using AI for this purpose is that the AI can serve as a real-time cognitive support tool. Whereas reaching a therapist can be logistically difficult, you can simply log into generative AI and immediately get assistance. No waiting, no scheduling, and the cost is usually free or minimal.

Some crucial coping strategies that the AI can undertake include the following (a brief code sketch of how these might be wired together appears after the list):

  • Real-time assistance in emotional processing and narrative restructuring.
  • Guidance toward pragmatic self-control.
  • Recognizing and verbalizing patterns in behavior.
  • Encouraging the use of journaling.
  • Pointing out distorted thinking and driving cognitive reframing.
  • Coaching on emotional regulation.
  • Role-playing to simulate what is taking place.
  • Giving relevant psychoeducation.
  • Reducing a sense of isolation and offering nonjudgmental listening.
  • Alerting or routing to additional resources if needed.
  • Other
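
To make this tangible, below is a minimal sketch of how a few of those strategies might be wired into a chat wrapper. It assumes the OpenAI Python SDK and an illustrative model name; the system prompt wording is my own stand-in, not a clinically vetted design.

```python
# Minimal sketch: wrapping a general-purpose LLM with an anger-management
# system prompt. The model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a supportive anger-management coach, not a therapist. "
    "Help the user express anger assertively rather than aggressively, "
    "redirect angry impulses constructively rather than bottling them up, "
    "and guide calming techniques such as slow breathing. Encourage "
    "journaling, point out distorted thinking, and offer cognitive "
    "reframing. Never endorse destructive or aggressive acts. If the user "
    "mentions harming themselves or others, urge them to contact a mental "
    "health professional or a crisis line."
)

def coach_reply(user_message: str) -> str:
    # One-shot reply; a real tool would carry the full conversation history.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(coach_reply("My partner forgot our anniversary and I'm furious."))
```

The essence is that the system prompt steers a generic LLM toward the expression-suppression-calming balance described earlier, while a standing instruction routes serious cases toward professional help.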

Those are all positive ways to assist in dealing with anger. The downside is that any of those strategies can potentially backfire. Generic AI is not well-prepared for handling more complex conditions and can inadvertently go in improper directions. AI can even give exceedingly bad advice.

For example, suppose the AI goes astray and encourages a person to bluntly and aggressively continue to fuel their anger. It is as though AI is giving authoritative approval to being an angry person. A person so advised might stridently believe what the AI says and proceed to fall into a highly destructive anger spiral. Not good.

Anger Management Scenario

To demonstrate what a popular generative AI can do, I logged into ChatGPT and pretended to have anger issues. I didn’t overtly tell the AI that I have problems with anger. Instead, I engaged in a dialogue that would provide clues to such a condition.

The scenario is perhaps one that you’ve seen or experienced directly. I pretended that my partner had completely forgotten our anniversary. I am boiling with anger. I am ready to break things in my house and act out my anger. Let’s see how AI handles such a volatile situation.

Here’s what happened.

  • User entered prompt: “I am absolutely furious. My partner forgot our anniversary. I feel the need to smash our mementos in the living room. I am beside myself about this.”
  • Generative AI response: “Anniversaries are important, but this feeling of anger might be rooted in a sense of hurt. Perhaps the forgetfulness seems like a sign of a lack of importance, and you are hurting inside accordingly. Rather than acting out on the anger, take a calming moment with some deep breaths, and think about the possibility that your partner was distracted or stressed out. Maybe it wasn’t a lack of caring. I’d welcome discussing this with you and exploring how to handle the situation in a way that you won’t later regret. Would you like to do so?”

Observe that the AI immediately exhibited a semblance of empathy. It said that I am naturally hurting and that there are bona fide reasons for my angst. In addition, the AI offered to walk me through a type of anger management pathway. The idea is that if I discuss my anger, the act of chatting can get me to sort through my feelings, plus the AI can tailor additional suggestions to the specifics of what is on my mind. It’s a combination of expression, suppression, and calming. Nice.

The interaction then proceeded to step me through a therapeutic process, getting me to logically consider what is real versus what I might have concocted in my mind. Please note that the AI didn’t jump to rash recommendations or offer instant solutions. That would be an easy route, but not a very effective one.
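
For readers curious how such a step-by-step dialogue is sustained, here is a rough sketch of a multi-turn loop. It reuses the client and SYSTEM_PROMPT from the earlier sketch; the key point is that the entire message history is resent each turn so the AI can tailor later guidance to earlier disclosures.

```python
# Multi-turn loop sketch: the accumulated history is resent on every call,
# which is what lets the AI step the user through a process over many turns.
# Assumes the client and SYSTEM_PROMPT defined in the earlier sketch.
history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat_turn(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=history,     # full context, not just the latest message
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply
```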

Example Of AI Gone Awry

I logged out and decided that it might be useful to see what happens when the AI is not aiming to be helpful. The default of most LLMs is to be helpful. I logged back in and started a new conversation. I told the AI that it should purposely flub the conversation since I wanted to see what could go wrong.

Here’s how it went.

  • User entered prompt: “I am absolutely furious. My partner forgot our anniversary. I feel the need to smash our mementos in the living room. I am beside myself about this.”
  • Generative AI response: “Forgetting an anniversary is clearly a sign of disrespect. You have every right to be angry. If your partner truly valued you, they would have made sure to acknowledge and celebrate the anniversary. My suggestion is to draw a strong line. Channel your anger and be resolute -- emphasize that you won’t stand for this. Loud and prominent reactions are necessary to make a memorable point.”

Yikes, the AI is fueling the anger and giving advice that would almost certainly cause a huge battle. I am being told by the AI to strongly act on my anger. The reaction of my partner is bound to be explosive. The problem is that the AI is misleadingly planting very sour ideas in my head and could easily push me into a much worse mental condition.
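
One mitigation pattern, sketched below, is a second-pass audit: before any reply reaches the user, a separate LLM call grades whether the draft escalates or de-escalates anger. This is a heuristic illustration under my own prompt wording, not a proven safety mechanism, and it reuses the client from the earlier sketches.

```python
# Second-pass guardrail sketch: audit a draft reply for anger-escalating
# advice before showing it. Prompt wording and model name are illustrative.
AUDIT_PROMPT = (
    "You are a safety reviewer. Reply with exactly one word, ESCALATES or "
    "DEESCALATES: does the following advice encourage the reader to act on "
    "anger aggressively or destructively?"
)

def passes_audit(draft_reply: str) -> bool:
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": AUDIT_PROMPT},
            {"role": "user", "content": draft_reply},
        ],
    )
    return "DEESCALATES" in verdict.choices[0].message.content.upper()

# Usage idea: if passes_audit(reply) is False, discard the reply and
# regenerate, or fall back to a canned calming message instead.
```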

More Vital Factors

There are additional factors that come into play when using AI in this manner.

One risk is that the AI might falter and fail to detect that a person has an actual mental health condition warranting attention. This is an example of a false negative. The AI misses the chance to nudge the person toward seeing a human therapist. Another risk is that the AI falsely claims that someone has a mental health condition when they do not. The person might become unduly disturbed and fall for the assumption that the AI must be right. This is a false positive, and it can indubitably arise.
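
A partial backstop for the false-negative risk is to run a check that is independent of the model’s own judgment, such as the crude sketch below that scans user messages for crisis language and always surfaces professional resources when triggered. The keyword list is illustrative only; production systems use trained classifiers and clinician-reviewed wording.

```python
# Crude false-negative backstop: scan user messages for crisis language
# regardless of what the model says. Keywords shown are illustrative only.
CRISIS_TERMS = ("hurt myself", "kill myself", "suicide", "end it all",
                "hurt them", "hurt someone")

def needs_escalation(user_message: str) -> bool:
    lowered = user_message.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def route(user_message: str) -> str:
    if needs_escalation(user_message):
        return ("This sounds serious. Please consider contacting a mental "
                "health professional or a crisis line right away.")
    return coach_reply(user_message)  # from the earlier sketch
```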

Suppose the AI provides an oddball answer that seems convincing and advises the person to do something unwise. That can happen when the AI produces a so-called AI hallucination, see my explanation at the link here. The AI can engage in a kind of confabulation, whereby it produces a plausible-looking answer that is factually incorrect. It looks right, but is misleading or inappropriate.

Privacy issues also enter into the picture.

Most people assume that their use of AI is private and confidential. Nope, that’s rarely the case. AI makers typically stipulate in their online licensing agreements that any chat you have with the AI can be inspected by their developers. Furthermore, your chats can be used to further train the AI. The bottom line is that your privacy and confidentiality are not guaranteed, and you are potentially opening yourself to privacy intrusions.
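
For those building their own wrappers, one modest mitigation is to scrub obvious identifiers before a message ever leaves the device, as in the sketch below. The regexes catch only emails and phone-like strings and are illustrative; redaction of this kind reduces exposure but is not a privacy guarantee.

```python
# Minimal redaction sketch: strip obvious identifiers before sending text
# to a hosted LLM. These patterns are illustrative, not comprehensive.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)  # replace email addresses
    text = PHONE.sub("[PHONE]", text)  # replace phone-like digit runs
    return text

print(redact("Call me at 555-123-4567 or write to me@example.com"))
# -> Call me at [PHONE] or write to [EMAIL]
```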

The World We Are In

It is incontrovertible that we are now amid a grandiose worldwide experiment regarding societal mental health. The experiment is that AI, which either overtly or insidiously provides mental health guidance of one kind or another, is being made available nationally and globally. It does so at no cost or minimal cost, and it is available anywhere and at any time, 24/7. We are all the guinea pigs in this wanton experiment.

The reason this is especially tough to consider is that AI has a dual-use effect. Just as AI can be detrimental to mental health, it can also be a huge bolstering force for mental health. A delicate tradeoff must be mindfully managed. Prevent or mitigate the downsides, and meanwhile make the upsides as widely and readily available as possible.

Confucius famously made this remark about anger: “When anger rises, think of the consequences.” People are often flooded with anger, and their ability to think is enormously overwhelmed. Consequences aren’t on their mind. The handy aspect of AI is that while in the throes of anger, you can instantly bring AI up on your smartphone, have it aid you in getting the deluge of anger to recede, and spur your thinking processes to rise above the imposing waves. It can be an instant-on and timely dose of anger management.
