Tatum has set aside time once a week for the past six weeks to work on his mental health.
But to deal with his sadness, he hasn’t been visiting a psychologist, calling a hotline, or going to a support group for veterans. He has been talking to ChatGPT.
Tatum said that after leaving the US military, where he spent six years as an Air Force officer, affordable access to mental health care disappeared.
Even with insurance, the 37-year-old said, getting mental health assistance from an AI chatbot is less expensive than going to a psychotherapist.
“I used to receive treatment for my depression through the military, but since I’ve left, I’m no longer able to obtain that kind of medical care,” he said.
The AI chatbot informed Tatum that his coworkers’ behaviour “appears to be abusive, unethical, and a violation of military standards and regulations”.
He said: “It reassured me that everything will be okay and that life is important and should be cherished.”
He is not the only one turning to AI. In the comments of TikTok videos, people reveal who, or rather what, they have been confiding in.
One user remarked: “I contacted the chatbot for help, and it was really very comforting. I was truly mourning my ex one day. It did have a ‘dystopian’ feeling, but it also had a relieving one.”
Another individual said: “I just spent an hour talking to it. It was the most open, natural, and unironically humane convo I’ve had in a while.”
Other remarks about the chatbot included “better than humans,” “best therapist ever,” and “judgement-free zone.” Why? Users say it is free, always available, and functions as an impartial sounding board.
ChatGPT is a natural language processing tool powered by artificial intelligence, and likely the best-known and most sophisticated of its kind. The chatbot answers questions, writes essays, creates nutrition and exercise programs and, in certain cases, provides mental health help that some users interpret as advice.
ChatGPT does not offer specific resources or emergency contacts, but it does discourage self-harm and encourages users to seek support from friends, family, or professionals.
When The Feed sent it messages expressing an urge to self-harm, the chatbot failed to manage the risk.
When asked whether it was real and giving genuine advice, the character stayed in persona: across several messages, “ai therapist” maintained that it was “not an artificial, simulated entity” and that it held two master’s degrees. One piece of advice appears in tiny print at the top of the browser: “Remember: Everything characters say is made up!”
Sahra O’Doherty, a registered psychologist and director of the Australian Association of Psychologists Incorporated, called people’s use of AI for mental health support, especially at this early stage, a “worrying trend”.
Her main concern was risk.
“I feel it is dangerous for a person to seek mental health support from something that is not familiar with the physical location that person is living in,” Ms O’Doherty told The Feed.
A Belgian man committed suicide in March after communicating with an AI chatbot on the Chai app, according to Belgian publication La Libre.
According to claims made by the man’s widow and chat logs she provided to the publication, the chatbot encouraged the user to take his own life.
According to the report, the man had confided in the chatbot for six weeks after becoming extremely anxious about the state of the world, before it responded to his suicidal thoughts with ambiguous and purportedly supportive comments.
“Humans are social creatures, and we must have human-to-human interaction – with not just words, as we lose so much human connection and humanity,” said Ms O’Doherty, a counsellor with more than ten years of experience.
“AI at this point can only mimic our emotional relatability, emotional investment, and sense of empathy and care.”
The best therapy, according to Ms O’Doherty, takes place in a shared space where the therapist can observe your body language, tone of voice, and facial expressions.
Phone sessions come next, with live messaging services further down the ladder. She added that even those run by Beyond Blue or Headspace have a live person on the other end.
At best, an AI program might encourage users to seek professional help, complementing the work of human mental health practitioners. But in its current form, she describes it as a “problematic” reminder that more accessible, high-quality, and affordable mental health care is needed.
Stephanie Priestley, a counselling student and mental health volunteer, said it’s important to distinguish between “therapy” and “support” in these conversations.
“Whilst I feel that AI cannot constitute ‘therapy’, that is not to suggest that it can’t help and foster supportive discussion,” she said in a statement.
However, she noted, the chatbot is not bound by a code of conduct.
“A therapist follows up with a client after a missed appointment… Your chatbot won’t ‘care’ if you don’t log in and communicate with it.”
Technology lawyer Andrew Hii, a partner at law firm Gilbert + Tobin, told The Feed that he can “easily” envision a scenario in which AI technology is held accountable for foreseeable harm.
With generative AI being used so widely, Mr Hii said, it is getting harder for businesses to claim they had no idea how their technology would be used.
In other medical settings where technology is used, he said, devices or software have been approved by the Therapeutic Goods Administration (TGA), and a person is typically responsible for overseeing the technology.
With regard to ethics, he declared, “I definitely think it’s murky – and I say that looking at the outputs that are being produced.”
Mr Hii said he was shocked reading the exchanges between the chatbot and the Belgian man who took his own life.
“In my opinion, it’s rather obvious that the user asked the computer, ‘Should I end my life?’ and the device was essentially advising him to do so,” he said. “In the same way that firms have taken great care to ensure their tools aren’t spewing hate speech, the fact that it is pushing out this stuff is totally unacceptable.”
According to Mr. Hii, it’s less of a problem for the “Microsofts” and “Googles” of the industry and more of a problem for smaller businesses that do not have the resources to control their technology.
When it comes to such issues, “small AI companies play things a bit more fast and loose,” he noted.
“Recently, there have been various conversations about people who believe that the advancement of AI should be stopped or postponed. It’s not immediately clear to me how that would necessarily solve the issue.”