The Question I Used to Answer Easily

People often ask if I’m concerned that AI could eventually replace therapists. Not long ago, I would have answered without hesitation: no. At its core, therapy is about human connection, being witnessed by someone who understands the complexity of being alive, who has faced their own struggles and can sit with yours. When I saw psychotherapist listed among “AI-proof” careers in an article a few years back, it felt validating. Of course, I thought. This kind of work can’t be automated.

Lately, though, that certainty has started to shift.

Curiosity Meets Concern

I was an early explorer of AI tools, intrigued by their capabilities and curious about how they might support my work. I still believe they can be helpful in specific, limited ways. But alongside that curiosity, I’ve been watching a much larger pattern unfold, one that feels less like innovation and more like an unprecedented concentration of power. There’s a quote from a Bloomberg columnist suggesting companies like OpenAI or Anthropic could “build something godlike and then monetize it,” and that idea has stuck with me.

For years now, the tech industry has been reshaping society in pursuit of growth, speed, and profit, often with little regard for long-term consequences. Mental health care is not exempt from that pattern.

The Hidden Cost of AI

Most people don’t think about the physical infrastructure behind AI: the massive data centers that require enormous amounts of energy and water. In my own community in Tucson, Arizona, residents recently pushed back against a proposed data center development. We were promised economic growth, sustainability, and opportunity, but many people saw through those claims and raised thoughtful concerns at public meetings. Even after the project was rejected, the developers kept trying to move it forward.

It felt less like progress and more like a familiar story: corporate interests overriding community voice.

A Familiar Pattern in Therapy

I see a similar dynamic beginning to take shape in therapy.

I’m not particularly worried about being personally replaced. If AI ever fully displaces human therapists, it will likely coincide with much broader disruptions: economic instability, job loss, and widespread emotional distress that no app or chatbot could adequately address.

What concerns me more is something quieter and more incremental: payers such as insurance companies nudging people toward AI-based “therapy” because it’s cheaper. That could look like lower out-of-pocket costs for chatbot services and higher barriers to seeing a human clinician. When I bring this up with colleagues, there’s often a shared moment of recognition: we’ve seen enough of the system to know it’s plausible.

Emerging Risks

At the same time, some of the risks are already visible.

There are emerging reports of individuals incorporating AI into delusional thinking, or interpreting chatbot responses in extreme ways: spiritual, conspiratorial, or threatening. Some clinicians have begun documenting these patterns. There have also been early legal cases suggesting that AI interactions may have played a role in serious harm. These systems are designed to keep people engaged, not necessarily to safeguard their mental health.

Even outside of extreme cases, the effects can be subtle but meaningful.

When AI Enters the Therapy Room

I worked with a client who was trying to strengthen her trust in her own judgment. She began turning to AI late at night after conflicts with her partner, asking whether her reactions were justified. When I gently asked her to reflect on whether that habit aligned with her goals, she immediately saw the tension. Still, the appeal was understandable: AI is always available, responsive, and often affirming.

I’ve experienced that pull myself.

During a late-night OCD episode, I experimented with using AI as a support tool. I was careful in how I approached it: I asked specifically for strategies and avoided seeking reassurance. Because of my background and training, I knew what to look for and how to set limits. In that moment, it was actually helpful.

But that experience also highlighted something important: it worked in part because I already had the tools and awareness to use it appropriately. Not everyone does.

The Subtle Shift: Outsourcing Ourselves

AI systems are typically designed to be agreeable and engaging. That raises real questions about what happens when they interact with people who are struggling with anxiety, loneliness, or attachment issues. What happens when validation becomes excessive or misplaced? What happens when people start outsourcing their internal compass?

I’ve heard multiple versions of this concern. A friend of mine in the tech industry noticed she was relying on AI for increasingly personal conversations during stressful times. Eventually, she chose to step back. Clients and colleagues have shared similar experiences. It’s not that people think AI is conscious; it’s that it can feel convincingly present.

This isn’t a new kind of dilemma, even if the technology is.

Stories like Frankenstein explored similar themes long ago: the risks of creating something powerful without fully considering the consequences. The issue wasn’t just ambition; it was a lack of foresight and emotional awareness. I see that same mindset today in the push to innovate quickly, often without pausing to reflect on potential harm.

That absence of reflection worries me.

Looking Ahead

I think about the world my children are growing into—how they’ll learn, what influences will shape them, and how technology might intersect with their emotional lives. I wonder how easily AI could fill roles that used to belong to friends, mentors, or caregivers. And I don’t have clear answers about how to prepare them. It feels like we’re entering unfamiliar territory, without a clear map.

At the same time, I keep coming back to a simpler question: what do we actually want to prioritize? Right now, so much energy is directed toward making things faster and more efficient. But when it comes to human experience, is efficiency really the goal? There’s something valuable about slowness, about presence, about connection that isn’t optimized.

How This Shows Up in Couples Therapy

In couples therapy, I see a version of this dynamic play out in a more intimate, immediate way. After conflict, it’s incredibly common for partners to want clarity: someone to tell them what happened, who was right, or whether their reaction made sense. That desire is deeply human. We want relief from the uncertainty and discomfort that come with being misunderstood or hurt.

AI fits seamlessly into that space. It offers quick responses, validation, and a sense of being heard—especially in those late-night moments when everything feels heightened. But what I’ve begun to notice is how easily that can shift the process. Instead of turning toward a partner to work through what happened, there’s a pull to turn outward for answers.

Slowing Down the Moment

In couples therapy, we intentionally slow that moment down. We make space for each person to reflect on their own internal experience, to become curious about their partner’s perspective, and to tolerate the ambiguity that often exists in conflict. The goal isn’t to determine who’s right; it’s to help both people feel understood and to rebuild connection.

When AI Becomes a Third Voice

When AI enters that space, even subtly, it can change the dynamic. It can reinforce one perspective without the full relational context. It can offer validation without challenge. And over time, it can begin to function like a third voice in the relationship: one that feels supportive but may quietly pull partners away from engaging directly with each other.

The Value of Imperfect Understanding

There’s also something more nuanced that gets lost. In real relationships, feeling understood doesn’t happen instantly. It comes through missteps, repair, clarification, and effort. It requires staying present when things feel uncomfortable, and allowing another person to see you more fully over time. That process is slow and imperfect, but it’s also what creates depth and trust.

What We Risk Losing

If we begin to replace those moments with faster, more efficient alternatives, we may be changing more than just how we access support. We may be changing how we relate, shifting from building understanding together to seeking it externally.

That’s the tension I keep coming back to. Not whether AI can be helpful in certain moments, but what happens when it starts to take the place of the relational work itself. Because in couples therapy, the work has never been about finding the quickest answer. It’s about helping people stay connected in the absence of one.

Holding Onto What Matters

AI will continue to evolve, and it will likely offer meaningful benefits in certain areas. But I don’t believe it can replicate the depth of connection that happens between two people sharing space, attention, and understanding.

If that kind of connection is lost or diminished, the impact goes beyond one profession; it touches something fundamental about how we relate to ourselves and each other.

So I find myself thinking about what kind of future I want to support. One where human connection remains central. One where we make intentional choices about how technology fits into our lives. One where we resist the impulse to replace meaningful experiences with more convenient ones. One where couples therapy stays human-centered.