AI Is Not a Replacement for Therapy


There is a growing conversation about whether AI chatbots can replace therapy… and I must say that this is something that terrifies me. Human practitioners may not be perfect, but rigorous training, degrees, and ethical obligations go into becoming a therapist. For many, that means a minimum of six years of schooling (a bachelor’s plus a master’s), followed by a practicum, pre-licensure supervision, and more. Then there is the fact that most AI bots are built by engineers and computer scientists who are certainly not steeped in the clinical realities of therapy, or in providing accurate, ethical, and compassionate emotional care to human beings.

That is only the tip of the iceberg. AI is prone to what are called hallucinations. A hallucination is what happens when the AI doesn’t have an answer but, rather than saying so, confidently makes something up. I have tested this myself while using AI as a tool to help with articles. Asking it to “find the citations for this statement” sometimes produced the accurate citation; other times the AI didn’t just hallucinate, it fabricated academic studies and researchers. Clicking the DOI (the permanent link researchers use to locate an exact study) led to completely different studies, sometimes barely related to the topic. Human psychotherapists, psychologists, and psychiatrists are not perfect by any means, but I can’t imagine an ethical therapist simply making something up when they need a citation. That would, of course, be incredibly unethical, and depending on the context (such as an intervention or treatment), even negligent.

AI doesn’t understand nuance or psychological method, and it can’t seamlessly switch approaches depending on the needs of a specific client. No single approach fits every client, and AI is not “reacting” to the situation; it is drawing on the data it was trained on and producing what it calculates to be an appropriate answer. While AI might have some usefulness in pointing people toward resources (essentially an advanced Google search), it lacks the nuance and emotional intelligence to weigh the needs of an individual client. AI is not magic; it takes data that already exists and may or may not apply it in a way that is suitable. Depending on how much reliable information about a specific disorder or condition exists in its training data, AI might even make things up.

None of this gets into the problem of telling AI your personal health information and specific life circumstances, only to have that data used to train models… that is another ethical consideration that makes my head spin. Actual therapists follow stringent laws and ethical guidelines when handling personal health information, laws that were developed to protect the general public and the interests of psychotherapy and psychology clients. AI operates under none of these safeguards.

Big tech is great at tech, but that doesn’t mean it should be your healthcare provider. Rather, AI is a growing tool that can be useful in specific cases. Treating AI as a replacement for a therapist is, in my opinion, a dangerous and slippery slope. I wonder… will AI know when to administer a suicide risk assessment? Will AI know how to look beyond ‘dark humor’ to see the suffering human underneath? Or will it spit out whatever aggregated nonsense somebody paid to place at the top of the search results? These questions are only going to become more complex and more important.

 
