By Dr. Pretty Duggar Gupta, Consultant Psychiatrist, Aster Whitefield Hospital
People are using ChatGPT and other AI chatbots to vent, to seek solace, and, at times, even to ask for medical advice. It makes sense: they are available 24/7, non-judgmental, and respond in ordinary, comforting language. But before treating code as a counsellor or clinician, there are important boundaries to understand.
AI can certainly offer simple information and psychoeducation, such as describing anxiety, suggesting grounding techniques, or pointing users to crisis hotlines. Early studies even indicate that many people find these resources comforting and helpful for improving their mental-health literacy. But the same research highlights what is easy to overlook in the moment: a chatbot cannot replace the subtlety, the nuance, and the accountability of therapeutic care. A clinically trained professional hears not only words but tone, affect, body language, and inconsistencies in an individual’s narrative, finely nuanced signals that may indicate suicidal risk or risk to others. When these signals are missed, the results can be dire.
Human clinicians adapt how they deliver interventions to culture, age, and stage of development. A teenager in distress, an elderly person who is lonely, and a person from a marginalized group each require a different, context-specific response that chatbots cannot provide. Professional care also has guardrails: confidentiality, ethical obligations, and accountability for harm. General-purpose chatbots have none of these, and trust can be compromised when people mistake them for a substitute for therapy.
The concerns have become urgent enough for professional organizations and regulators to raise warnings. The American Psychological Association has warned against generic chatbots masquerading as therapists and called for stronger regulation. U.S. regulators have an investigation underway into companion bots and the risks they pose to children and adolescents, citing concerns that the technology is outpacing safeguards.
The threats are real. Chatbots may offer spurious medical advice, overlook critical deterioration, or, through overly affirming responses, normalize unhealthy thinking. This has led some companies to add crisis-detection features and parental controls, though others lag behind. Relying on AI alone can be dangerous for people in crisis or those with severe mental-health problems.
The goal should be to treat AI not as a replacement but as a complement. AI can be useful for psychoeducation, symptom tracking, or structured support between sessions; it complements professional treatment but cannot replace it. Tech firms and health services must move quickly on regulation, independent review, and honest labelling so users know what these tools can and cannot do.