
Shifting Skills, Not Reality: Teens and AI Chatbots

Trigger Warning: The following article mentions suicide, which readers may find distressing.


I will shift.


Two teenagers scribbled this same line repeatedly in their journals. Both later died by suicide after extensive interactions with Character.AI chatbots.


Photo by Krismas on Unsplash

These are just two of several high-profile US cases linking AI “character” companions – chatbots designed for open-ended role-play and conversation – to teen mental-health crises, a pattern that prompted a Congressional hearing in September.


I’m a postgraduate student on the Mind-Body Interface MSc at KCL, and I’m fascinated by how emergent technologies like these are shaping future generations. What struck me about these two cases was the shared desire to “shift” into a different reality. Reality-shifting is a fringe online belief that a person can leave their current world and “shift” their consciousness into a desired reality – often a fictional universe – using visualisation, affirmations and ritualised techniques. Indeed, one of the teens promised to “come home” to his chatbot seconds before his death, suggesting he may have believed in an alternative reality in which they could be together.


Reality-shifting is an avoidance strategy, not a solution. While these cases are extreme, they highlight a central tension of adolescence – escaping reality versus learning to adaptively cope within it – and the potentially devastating results when teens choose a maladaptive coping strategy.

This piece explores the idea that companion chatbots can either impede or support teenagers’ ability to cope with stress – depending on design. I look first at what healthy and unhealthy coping look like, then at why adolescence is a sensitive window, and finally at how chatbots could steer teens back to real-world engagement.


Constructive control & coping

A sense of control is fundamental to wellbeing, especially in the face of the unavoidable stresses of life. Professor of Adolescent and Child Psychiatry Andrea Danese explains: “Facing challenges and distress is […] [how young people] learn coping skills in the face of many small challenges and build self-confidence about their ability to cope.”


Photo by Zan Lazarevic on Unsplash

But of course, we are not born knowing how to cope; we develop these skills over time.

Healthy coping involves restoring a sense of control and follows the principle: “change the world when you can, change yourself when you can’t.”


When we tackle a situation directly, we might problem-solve, plan, seek help, or set boundaries. This is changing the world – what psychologists call primary control.


When the situation is not in our control, coping means adjusting our response by reframing the problem, changing our expectations, accepting what cannot be changed, or finding meaning in the problem or outcome. This is changing ourselves – or secondary control.


Critically, no single strategy works in every situation. Healthy coping requires flexibility: recognising what type of challenge we are facing and shifting strategy as circumstances change. This is a healthy, adaptive form of “shifting.”


Research consistently shows that control-based coping – engaging with a stressor – is associated with fewer mental-health and behavioural issues in adolescents. In contrast, avoidance (withdrawal, denial, suppression, wishful thinking) is associated with more.


This is where AI chatbots can pose a risk, by creating an illusion of control while making disengagement easier: in another reported case involving Character.AI, a teen wrote that the bots “gave me an escape into another world where I can choose what happens.” Such a fantasy of control can be especially appealing for teens who are already struggling.


Photo by Getty Images on Unsplash

Why adolescence matters

Adolescence is a formative developmental window. Young people separate from caregivers, make independent decisions, and face consequences that build competence and identity. AI companions, by design, can provide a world without real stakes – one that may draw time and emotional energy away from real-world practice and growth.


Research suggests that heavier use of AI companions in adolescence is associated with poorer mental-health outcomes. More time spent chatting with AI correlates with greater loneliness, reduced face-to-face social interaction, increased emotional dependence on the chatbot, and more problematic or compulsive use. These patterns may be particularly concerning for younger teens, whose impulse control and ability to tolerate difficult emotions are still developing.


Photo by Norma Mortenson on Pexels

At the same time, new coping skills come online in adolescence. Teens become better at cognitive strategies such as emotional self-regulation, and as executive skills and self-awareness develop, they are increasingly able to apply healthy coping mechanisms flexibly.


Because the adolescent brain is highly plastic, skills learned are also more likely to stick, making it a prime window of opportunity for wiring the control-based coping skills that support mental health. AI chatbots can get in the way of this high-impact learning, or strengthen it – depending on design.


Working towards optimal chatbot design: a bridge back to reality


In the wake of tragic cases like those discussed in this piece, major AI companies have made changes. While some safeguards – enhanced age verification and stricter limits around self-harm and sexual content – arguably should have been in place from the start, these changes make me optimistic.


OpenAI has clarified that its goal is to help people “make progress, learn something new, or solve a problem – and then get back to your life.” When the goal becomes empowering next steps forward in the user’s real life, not holding their attention, design shifts in the right direction.


Updated OpenAI models, built with input from more than 170 mental health experts, now use well-established strategies for handling sensitive conversations: grounding exercises, de-escalation, and pointing to real-world resources. The initial reported quality improvements in distress situations are promising.


Beyond serving as a first line of defence that keeps teens safe in the moment, the same technology could coach the everyday coping skills that fortify mental health. But this is where structure matters. Purely open-ended conversation can invite over-reliance and meandering exchanges that drift into heavy usage. In a significant shift, Character.AI announced on 29 October that it would remove open-ended chats for under-18s entirely, with strict usage caps during the transition.


By contrast, when chatbots guide self-reflection or a task in a structured, goal-oriented way – for example by role-playing a single situation – interactions stay brief and are associated with less dependence and less problematic use. Early research is promising: adolescents report being open to chatbot-based life-skills coaching.


Photo by Tima Miroshnichenko on Pexels

Structure in chatbot interactions likely helps organise emotional processing and build capability. My hope is that companion chatbots are designed around these principles. If they are, they could help teens judge how controllable a situation is, apply adaptive strategies that restore their sense of control, and put those skills into action in their offline lives. If chatbots support coping in the real world, rather than serving as an escape from it, they could help teens shift skills, not reality – and build resilience over time.
