Verke Editorial

What happens when AI therapy makes a mistake? Memory, timing, and tone


What happens when AI therapy makes a mistake? Three patterns, mostly. The coach forgets context you shared. The coach mistimes a check-in. The coach's response lands off-tone — too peppy, too clinical, too cautious, too anything. Each one feels closer to a friend missing a beat than to a software error, which is exactly why it can sting more than a regular bug. The hurt is real, even when the cause is technical.

The good news: each of the three has a recognizable shape, a known cause, and a recovery move that works. This article walks through all three — what they look like, why they happen, what Verke does about them, and what you can do in the moment to get the conversation back on track.

Failure mode 1

When the coach forgets

Memory has limits. Long-running context — months of sessions, dozens of threads, hundreds of small details about your life — can't all stay equally vivid; some of it gets compressed into summaries that hold the shape but lose the specifics. Recurring topics tend to survive that compression well. Rarely-mentioned details — the name of your sister's cat, the specific phrase your boss used in that one meeting last month — may surface less reliably. The hurt, when it happens, isn't the technical lapse on its own. It's the implication: "you weren't important enough to keep." The brain reads attentive recall as care, and forgetting as its absence.

Verke's response is structural rather than apologetic. Coaches keep a long-running summary of what you've worked on and can re-read it on request. You can pin a piece of context explicitly — "please remember that I'm working on the conversation with my brother" — and the coach treats pinned context as load-bearing across sessions. When something does slip, the recovery loop is designed to be fast: you name what was missed, the coach re-anchors, the work continues. The persistence improves over time as the underlying systems improve, but the recovery move is the part that matters in the moment.

Frustrated by an AI that didn't remember what mattered?

Try a CBT exercise with Judith — 2 minutes, no email needed.

Chat with Judith →

Failure mode 2

When the timing is off

A check-in lands in the middle of a meeting. A "how did it go" arrives an hour after a thing you decided not to do. A gentle nudge shows up at 5 a.m. when you wanted to be asleep. The mismatch isn't about the message — it's about the context, which the AI can't fully see from where it sits. The frustration is reasonable; the fix is mostly mechanical.

Quiet hours, do-not-disturb settings, and the option to mute proactive messages handle the bulk of it. If you'd rather the coach not initiate at all and only respond when you reach out first, that's a setting too. The product is built so you set the cadence, not the AI. If the timing pattern keeps missing in a way the settings can't fix, name it directly: "please don't check in on this until I bring it up." The coach treats explicit constraints like that as binding.

Failure mode 3

When the tone lands wrong

Too peppy when you wanted blunt. Too cautious when you wanted direct. A "have you tried gratitude journaling" response to a serious thing, or a clinical-feeling reframe when you wanted plain-language warmth. Tone is the part of language that's easiest for an AI to get wrong because there's no single right answer — what reads as caring to one person reads as cloying to another, and the coach has to read context to calibrate.

The pushback move is short and direct: "that landed flat — try again with [more direct / less reassuring / blunter / softer / more practical]." Coaches recalibrate inside the session and carry the preference forward — once you've told a coach "please don't reassure me, just help me think it through," that holds. The fastest way to a coach that fits your tone is to tell it directly when it doesn't. The AI can't take it personally, and the result lands faster than waiting for the right register to emerge on its own.

What to do when it happens

Name it directly

"You forgot what we talked about last week, and that mattered" is enough. The coach can re-anchor on the missed context once you point at it. Naming the gap is faster than rebuilding the conversation around it, and it preserves the working trust — the same way it would in any relationship where someone needs a moment to catch up. There's no penalty for flagging a miss; the AI is built to take that input as useful data, not as criticism.

Use the recovery prompt

"Let's restart this thread; here's the context you should remember" is a clean reset. List the three or four load-bearing facts — who's involved, what's at stake, what you've already tried, what you want from this conversation. The coach treats explicit context-set as the new baseline and works from there. Most ruptures repair inside one recovery prompt; the rest tend to need a second pass with more specifics.

Adjust your settings

Quiet hours, proactive-message preferences, tone preferences, and how often the coach can initiate are all settings — not fixed behaviors. If the same kind of miss keeps happening, the right move is usually a settings change, not a workaround. Open the coach preferences, make the adjustment, and the future shape of the conversation changes. This is the kind of friction that should decrease over time, not require ongoing management.

Switch coaches if needed

If the fit was wrong from the start — the modality doesn't match how you process, the tone register is consistently off, the method isn't the one you actually need — a different specialist may suit better. Verke's coaches are differentiated on purpose. Judith for cognitive-behavioral practical work; Anna for psychodynamic depth; Amanda for acceptance-and-commitment flexibility; Marie for relational and couples work; Mikkel for executive and leadership focus. Switching is a normal part of finding the right fit, not a failure.

When to seek more help

Self-help and AI coaching can do a lot, but they have limits. If you're experiencing severe depression that hasn't lifted, panic attacks that interrupt daily life, thoughts of self-harm, active trauma processing, or substance dependence — those are signals to work with a licensed clinician, not signals to push harder on a coaching tool. You can find low-cost options at opencounseling.com or international helplines via findahelpline.com. There's no prize for waiting longer than you need to.

Work with Judith

The skill of naming what landed wrong and asking for what you need is exactly the work Judith's cognitive-behavioral approach builds. It's the same move you'd use with a coworker, a partner, or a friend who said something that didn't fit — and the practice transfers. If the AI mistakes are frustrating you, that frustration is also a useful clue about where the name-it-and-ask muscle could use exercise. For more on the method, see Cognitive Behavioral Therapy.

Try a CBT exercise with Judith — no account needed

FAQ

Common questions

Why does the AI sometimes forget what I told it?

Long context gets compressed for performance — rarely-mentioned details may not survive that compression as cleanly as recurring ones. The coach can re-anchor when you flag what was missed. Verke is investing in stronger persistence and recall, and the recovery loop (you name it, the coach re-grounds) is built to make these moments easy to fix in real time.

Is it normal to feel hurt when the AI forgets something?

Yes — this is one of the most common reactions. The hurt is real even though the cause is technical. It maps onto how human memory failures feel (“you didn’t even remember”) because the brain processes attentive listening as care. Naming the feeling and asking the coach to re-anchor the context tends to repair the moment, the same way it would in a human conversation.

Can the AI lie to me on purpose?

No — it has no intentions, no agenda, no goals of its own. What looks like a lie is almost always fabrication (sometimes called “hallucination”) — the model generates a plausible-sounding answer that isn’t grounded in real information. Verke’s guardrails catch the obvious risks; for everything else, treat AI output the way you’d treat a confident friend: useful starting point, not the final word.

What if the AI gives me advice that doesn’t fit my situation?

Push back. Tell the coach “that doesn’t fit because [reason]” — the response gets re-grounded around what you’ve added. Generic advice is usually a sign the coach hasn’t fully understood the specifics yet. Adding context, examples, or constraints almost always corrects it. If the advice keeps missing, that’s your signal to switch coaches or escalate to a human.

Should I trust an AI that makes mistakes?

Calibrated trust is the right move. Trust the AI for thinking-partner-level support — exploring options, naming patterns, building skills, holding context across a project. Verify when stakes are high. Double-check anything medical, legal, or financial with a qualified human. The same posture you’d take with any friend who’s well-meaning, curious, and helpful but not omniscient.

Verke provides coaching, not therapy or medical care. Results vary by individual. If you're in crisis, call 988 (US), 116 123 (UK/EU, Samaritans), or your local emergency services. Visit findahelpline.com for international resources.