Friend or Foe? AI’s potential impact on our Mental Health
- Dr Daniel Martins
I am Dr Daniel Martins, a clinician-scientist at the Institute of Psychiatry, Psychology, and Neuroscience, King’s College London (Department of Neuroimaging), working at the intersection of psychiatry, neuroscience, and technology. In this piece, I reflect on the evolving relationship between artificial intelligence and mental health, exploring both its promise and its perils in today’s rapidly changing landscape.

Late one night, overwhelmed by racing thoughts, I found myself sitting awake in bed. Instead of scrolling endlessly through social media or waiting weeks for my next therapy session, I turned to ChatGPT to express what I was feeling. The response was immediate, warm, and practical - offering validation and guidance through calming breathing exercises. To my surprise, the tension began to ease. Artificial intelligence (AI) is rapidly becoming a surprising and increasingly trusted ally in mental health care. But alongside these promising moments come unsettling reflections: the endless doomscrolling at 2AM, the creeping loneliness fueled by algorithmic bubbles, and the nagging sense that our most private thoughts might not be so private after all.
"Is AI here to help us heal, or might it be silently reshaping our emotional lives in ways we barely understand?" From AI-driven therapy tools and mindfulness apps to the hidden risks of digital addiction and biased algorithms, this is a moment to pause and ask: how do we ensure AI supports - not supplants - our mental health?
AI as a New Ally
Just a few years ago, the idea of discussing depression with a machine felt absurd. Today, it’s quietly becoming routine. With mental health needs far exceeding available support, AI is stepping in - offering a voice at 2AM when no one else is awake. These tools don’t replace therapists, but they remove common barriers: shame, cost, waiting rooms, and geography. For many, tapping an app becomes the first step toward healing - a chance to speak what has long been kept inside.
AI also offers forms of personalized support that traditional therapy often cannot provide. Studies suggest AI chatbots can reduce symptoms of anxiety and depression, with users reporting a sense of being heard and helped. Some tools even analyze speech, text, and wearable data to automatically detect stress and suggest small interventions - a gentle prompt to breathe or go for a walk. These nudges, while simple, create a sense of companionship. The idea isn’t to automate care but to extend it: let AI handle the check-ins so that humans can focus on connection. For many, these interactions feel surprisingly human. People describe late-night chats with bots as moments of clarity, relief, even joy. That sense of emotional safety comes from design: tools built by those who understand distress firsthand. The goal isn’t to mechanize emotion, but to widen access. Sometimes, just knowing there’s a place to speak - even digitally - can be transformative.
Yet, it is fair to say that the evidence base remains in its infancy. Most studies to date are small-scale, short-term, and focused on specific populations, limiting their generalizability. We do not yet fully understand the long-term effects of AI-based mental health tools, nor how well they perform across diverse cultural and clinical contexts. The promise is real, but the science must catch up. In mental health, quick fixes are always tempting. Yet all interventions must be grounded in rigorous, ethical, and inclusive scientific evidence. It is our responsibility as the scientific community to scrutinize both the bright promises and the black-box risks of these technologies - with curiosity, but also with caution.

The Hidden Costs
With every promise comes a shadow. AI also powers the social media algorithms that erode our peace - endlessly feeding us content we didn’t ask for and can’t stop consuming. These systems aren’t built for mental well-being. They’re built for engagement, and what captures us most is often what leaves us unsettled.
There’s also the risk of emotional over-reliance. Some users begin treating bots as confidants, turning to them for comfort at the expense of real human connection. The appeal is obvious: a tool that always listens, never interrupts, and offers polished empathy. However, AI doesn’t really understand us. It simulates care without sharing it. Over time, this can dull our capacity for real intimacy - making us less patient with the imperfections of human relationships, or less willing to engage in difficult conversations. We may start judging others by a standard of interaction that was never real.
Another concern is misinformation. AI outputs are probabilistic, not wise. In moments of vulnerability, that distinction matters. A user might receive advice that sounds plausible but is misguided - or even harmful. Without the discernment of a trained human, AI may validate distorted thoughts or overlook signs of crisis. Algorithmic bias compounds these risks. If training data reflects social inequalities, then AI tools can replicate or amplify those disparities. Users from marginalized communities may receive less relevant or culturally insensitive responses, further alienating them from care.
And then there’s identity. As we outsource emotional reflection to algorithms, our sense of narrative agency may blur. If an AI companion becomes our most attentive listener, best editor, or constant motivator, where do we end and where does it begin? For some, this fragmentation can lead to confusion, derealization, or a subtle erosion of self.
Lastly, privacy looms large. We pour our thoughts into these platforms, often unaware of where that data goes or how it’s used. Are our disclosures truly confidential, or feeding corporate algorithms? Even when responses feel caring, the infrastructure behind them may be far less benign.

Building a Future in Partnership
So, where do we go from here? The goal isn’t to reject AI, nor to embrace it blindly, but to shape it wisely. That means ethical standards that extend beyond technical fixes, design rooted in empathy, and accountability for emotional impact. We need collaboration across psychiatry, psychology, neuroscience, philosophy, and - vitally - lived experience. We need policies that protect not just data, but mental well-being. Above all, we must educate ourselves on what AI can and cannot be. No app, no matter how advanced, can replace the experience of being seen and held by another human being. And yet, there’s beauty in what’s emerging. That someone sitting alone in the dark might find comfort through a digital exchange - that matters. It reflects a collective effort to meet suffering with creativity, not indifference.
Let us proceed with curiosity, compassion, and caution. Let us build technologies that serve our humanity - not substitute for it. And let’s keep the conversation going. Because the future of mental health will - and must - be deeply, irreducibly human.
Image credits: All illustrations were generated using OpenAI’s DALL·E based on prompts and direction provided by the author.