Pulling the plug on AI therapy?
November 23, 2022
The COVID-19 pandemic’s global quarantine left millions isolated. Unable to leave their houses, people were forced to find safer alternatives for everything, from mundane tasks like ordering food to major ones like going to work.
Among these many changes was a massive shift in how people approached therapy. The pandemic took a major toll on people’s mental health, causing an uptick in individuals seeking virtual guidance. However, with therapists becoming less and less accessible throughout isolation, people turned to a more convenient option: AI therapy.
With its virtual availability, AI proved to be a useful tool when people struggled to find human therapists. However, AI therapy also has its drawbacks, since many claim robots are incapable of understanding social cues and emotional reactions.
Sophomore Cate Stanziola of Ocean believes AI therapy does not offer the same experience as in-person therapy, since robots are unable to react to even the most basic human emotions.
“Humans experience first-hand hardships, mental health issues and have relationships with others who have similar issues. AI has generated responses and no actual feelings attached,” Stanziola said.
According to the media company SYFY, Dr. Damien Dupre, a professor at Dublin City University, led a new study that “proved robots can’t even figure out what the looks on our faces mean.”
The study found that robots could correctly interpret human emotions based on facial cues no more than 48% of the time, compared to a 72% rate for human subjects. The findings suggest that humans remain the only ones equipped to respond to our own emotions.
Another issue with AI therapy is robots’ tendency to harbor biases against people of color and sometimes express blatantly racist remarks. In an interview, computer scientist Abeba Birhane explained that “it’s nearly impossible to have artificial intelligence use data sets that aren’t biased,” since AI combs through the racist sentiments on the internet and stores that information.
According to the Brookings Institution, AI systems generally gave Black basketball players worse scores than white basketball players. In light of such discrimination, some have moved to ban affect-detecting devices such as AI therapists. This technology is not beneficial if it harbors racist rhetoric against potential clients.
Furthermore, there are many unsettling unknowns about AI. People tend to be distrustful of AI due to the widespread belief that the expansion of technology brings surveillance into our personal lives.
Sophomore Phineaus Whedon Wall does not fully trust AI therapy.
“AI technology seems like an invasion of privacy,” Whedon said.
People also often decry robots taking over human jobs, arguing that we cannot allow AI to push people out of practicing positions. However, most therapists believe it is unlikely that AI will replace human therapy.
According to InDatalabs.com, because therapy is such a personal experience, people will want to confide in a human with whom they form a connection, not a robot.
While AI therapy gave many an accessible option for mental health support during the height of COVID-19, it is too unreliable to be considered “the next big thing” in the therapy industry. AI has the potential to do amazing things: create art, help in the medical field and even cause a plant to wield a machete. But it cannot recreate the human bond that was perfected two million years ago.