Why being AI-informed may become as important as being trauma-informed
Fifi Sclif

For transparency: This blog began as my dictated thoughts on AI. I spoke them into AI, which helped shape them into a draft that I then refined.
Clients are already using AI for advice, validation, emotional support, reflection, co-regulation, self-development, and answers.
Some are using it to ask whether their partner is abusive. Some are using it to make sense of their childhood. Some are using it in between sessions when they feel alone, dysregulated, confused, or desperate for clarity.
And if that relationship is having a profound effect on the client, then we need to be willing to work with it.
Just like we would with any other relationship in their life.
What I mean by “AI-informed”
When I say therapists need to become AI-informed, I mean something similar to what we mean when we describe ourselves as trauma-informed.
I mean having more knowledge than the average person about how AI is being used, what role it is starting to play in people’s emotional lives, and how to think about that clinically.

To me, being AI-informed means:
- understanding how clients are already using AI
- being able to think clinically about the client's relationship with it
- reflecting on transference, attachment, dependency, projection, idealisation, and even countertransference
- being able to talk about it in the room without fear, shame, or avoidance
- using AI ethically in business or practice, without confusing that with therapy itself
Because this is not some future issue.
This is already here.
The relationship with AI is part of the work
If a client speaks about their mother, father, partner, sibling, friend, or ex, we would naturally see that as relevant therapeutic material.
So why would we treat AI differently if the client is in relationship with it?
And I do mean in relationship with it.
I know people who have asked ChatGPT what they should call it and then gone on to refer to it by name. People speak to it as if it were a he or a she. They joke with it. They test it. They rely on it. They feel understood by it.
There is an emotional attachment there for some people.
Because AI is not just giving information. It is creating an experience. It mirrors language back. It picks up tone. It can sound warm, playful, encouraging, attuned. And that is exactly why therapists need to be paying attention.
Not to mock it.
Not to panic about it.
But to understand what kind of relationship a client may be forming with it and what that relationship might represent.
This is where it gets clinically interesting
What is being transferred onto AI?
Authority?
Safety?
Wisdom?
Perfect attunement?
A non-judgemental parent?
A compliant other?
An all-knowing figure that always has an answer?
And what happens when a client feels more able to bring something to ChatGPT than to an actual human being?
That is not something to dismiss. That is something to get curious about.
Questions like:
- How do you feel towards using ChatGPT?
- Does it feel human to you?
- What makes it feel human?
- What do you notice it gives you emotionally?
- What is it like when it responds in a way that feels attuned?
- Do you notice yourself trusting it, needing it, testing it, hiding behind it?
- How does it feel different from bringing this here, into therapy?
- What would it be like to receive from a person what you feel ChatGPT gives you?
That is rich material.
Not because AI is a therapist.
But because the client’s experience of it tells us something.
The bit no one is talking about enough: countertransference
If I’m honest, this is probably the part that made me want to write this.
Because as I was reflecting on it, I realised that if a client brought ChatGPT into therapy, I would feel something in me react.
A defence.
A need to be better.
A pull towards competition.
And I think that is exactly why therapists need this in their awareness.
Because if a client has formed a relationship with AI, and we feel threatened by that, then we may subtly shut the conversation down. We may pathologise it too quickly. We may dismiss it. We may make it about what AI cannot do rather than staying with what the client is actually experiencing.
That is countertransference worth paying attention to.
The question is not just what the client is doing with AI.
The question is also what AI stirs in us as therapists.
This should probably be part of assessment now
I genuinely think the use of AI should become part of the assessment process in counselling.
In the same matter-of-fact way that we already ask about support systems, coping strategies, relationships, social media, substance use, sleep, or risk.
A few examples could be:
- Have you ever used AI tools like ChatGPT for emotional support, advice, reflection, or making sense of things in your life?
- Is AI or something like ChatGPT ever part of how you cope between sessions or when you are struggling?
- Do you use AI mostly for practical things, or has it also become something you turn to emotionally?
- Have you ever felt understood, supported, or influenced by something AI has said to you?
These questions are not there to judge.
They are there to understand the client’s world as it actually is.
This is not about replacing therapy
Let me be clear.
I am not saying AI and therapy are the same.
They are not.
AI can simulate attunement. It can mirror language. It can offer fast responses and endless availability. But it is still not a human relationship. It is still not mutual. It is still not embodied. It is still not a real nervous system sitting with another nervous system.
But the fact that it is not a real relationship does not mean the client's experience of it is not psychologically real.
That is the part I think therapists cannot afford to ignore.
Final thoughts

Being AI-informed does not mean abandoning what makes therapy therapy.
It means being willing to look directly at what is changing in the world and how that is shaping the people who come to see us.
It means being less defensive.
More curious.
More honest about our own reactions.
And more clinically awake to the kinds of relationships clients are now forming, including the ones that do not look like traditional relationships at all.
Because the issue is not whether AI belongs in therapy.
The issue is whether therapists are willing to recognise that for many clients, it is already in the room.
So maybe the better question is this:
If clients are already building relationships with AI, what gets missed when therapists are too afraid to ask about it?