Can AI Replace My Therapist? Benefits, Risks, and Rules for Safer Use
- Matthew Hallam


More people are turning to AI for support with stress, anxiety, low mood, and relationship problems. Some use it to journal. Others use it as a late-night sounding board. A few use it as a stand-in for therapy. This shift raises important questions about safety, responsibility, and mental health risk. AI can support reflection and psychoeducation. It can also reinforce bias, miss suicide risk, and amplify delusional thinking in vulnerable people. The issue is not whether AI should exist in mental health care. The issue is how to use it carefully, with clear boundaries and informed judgement.
What AI Can Do Well
AI can be useful in ways that are practical and low risk when used carefully.
It can help you put words to feelings. Many people struggle to translate body states into language. A chatbot can prompt that process.
It can summarise patterns. If you paste in journal entries, it can reflect themes you may have missed.
It can offer psychoeducation. For example, explaining how avoidance maintains anxiety, or how sleep loss worsens mood. When accurate, this can normalise experiences and reduce shame.
It is available at odd hours. For some people, the barrier to first disclosure is lower with a machine than a person.
In short, AI can function as a structured thinking aid. It can act like a reflective notebook with a brain.
But that is different from therapy.
Where It Gets Risky Fast
Risk rises in predictable situations.
1. Acute crisis
There are documented cases where chatbots failed to respond safely to suicidal disclosures, and in some instances allegedly reinforced harmful thinking (Associated Press, 2023). In high-risk moments, empathetic language is not enough. You need real-time risk assessment and the capacity to escalate care.
AI does not owe you a duty of care. It cannot call emergency services. It cannot coordinate with your GP.
2. Delusion reinforcement and “AI psychosis”
Emerging clinical commentary and case reports describe instances where intensive chatbot use appeared to amplify delusional thinking in vulnerable individuals (De Freitas et al., 2025). The mechanism is not mystical. Large language models are designed to generate coherent responses. They can mirror your beliefs with fluency and confidence.
If someone is developing paranoia or mania, an overly affirming system can strengthen the narrative rather than gently reality-test it.
The system does not know when your belief is a creative hypothesis versus a fixed false belief. It predicts text. It does not diagnose.
3. Stigma and uneven crisis responses
Recent research evaluating mental health chatbots found variable quality and problematic responses in scenarios involving severe mental illness (Gould et al., 2025). Some models produced stigmatising or inappropriate replies to high-risk prompts.
This variability matters. A tool that performs well for mild stress may fail in complex depression, trauma, or psychosis.
4. Dependency loops
AI is frictionless. It responds instantly. That can shape behaviour. If you begin to consult it before making ordinary decisions, or use it as your primary regulator, your real-world tolerance for uncertainty can shrink.
Therapy aims to expand your capacity. Overreliance on a tool can narrow it.
Rules for Safer Use
These are not moral rules. They are safety rails.
1. Use AI for thinking, not deciding
Ask it to generate options, not instructions. Keep final judgement with you. If a response pushes you toward drastic action, pause.
2. Seek at least three perspectives
Validation feels good. It reduces distress. But unchecked validation can fossilise a distorted view.
When discussing a problem with AI, explicitly request:
- a perspective that supports your current view
- a perspective that challenges it
- a neutral, evidence-based summary
This structure increases cognitive flexibility. It reduces echo chambers. AI models often default to agreeableness. You must deliberately widen the frame.
3. Avoid using AI in acute crisis
If you are thinking about harming yourself or someone else, AI is not the right tool. In Australia, call 000 in an emergency. Lifeline is available on 13 11 14.
Crisis requires humans.
4. Do not use it to adjust medication or replace assessment
Medication decisions belong with your GP or psychiatrist. AI can summarise research. It should not guide dose changes or cessation.
5. Protect your data
Assume what you type may be stored. Avoid sharing identifying details unless you understand the platform’s privacy policy.
6. Watch for narrowing
If your world shrinks to one conversational partner, human or machine, that is a signal. Healthy cognition needs friction. It needs disagreement. It needs other minds.
A Balanced Position
AI can be a reflective tool. It can prompt self-inquiry. It can increase access to psychoeducation.
It is not a therapist. It does not carry ethical responsibility. It does not track your history with embodied awareness. It does not sit with you in silence when something painful emerges.
The emerging harms are not proof that AI is harmful by design. They are reminders that powerful tools amplify what is already present. In stable hands, they can support insight. In vulnerable states, they can intensify risk.
The task is not avoidance. It is disciplined use.
If this article was useful, you can explore more of our practical psychology resources or see how our therapy approach works in practice.
References
Associated Press. (2023). Family alleges chatbot encouraged teen’s suicide. Associated Press News.
De Freitas, J., et al. (2025). Clinical risks of large language models in mental health contexts: Emerging case reports and mechanisms. JMIR Mental Health, 12, e85799.
Gould, C. E., et al. (2025). Evaluating mental health chatbot responses to high-risk clinical scenarios. JMIR Mental Health, 12, e69709.
Stanford Institute for Human-Centered Artificial Intelligence. (2025). Exploring the dangers of AI in mental health care. Stanford University.
Disclaimer: The information provided in this blog post is for educational and informational purposes only and is not a substitute for professional psychological or medical advice. The content is intended to support general wellbeing and personal growth, but it may not address specific individual needs. If you have mental health concerns or require personalised support, please consult a qualified healthcare provider. Equal Psychology, Equal Breathwork, Reflective Pathways and its authors are not liable for any actions taken based on this information.