Verke Editorial
What Verke won't do and why: guardrails are the feature, not the cage
There's a list of things Verke deliberately won't do, and this article walks through that list, line by line, with the reasoning behind each item. Verke won't diagnose. It won't prescribe. It won't pretend to be your therapist. It won't validate everything you say. It won't agree with you about destructive plans. It won't speak for you to other people. It won't pretend to be human. None of these are missing features. They're the design decisions that make AI coaching trustworthy when the stakes go up.
Most product writing about AI assistants reads as though surface area is the value: more capabilities, more flexibility, more yes. Coaching doesn't work like that. The guardrails below aren't restrictions on how useful Verke can be. They're the reason it's useful at all when the conversation matters.
Framing
Why guardrails matter
The version of AI coaching that agrees with everything you say isn't a coach — it's a mirror. It reflects whatever you bring back at you, polished and validated. That feels good for about ten minutes. It also makes the tool useless for the moments you actually needed help with: the decision you're rationalizing, the relationship pattern you're defending, the avoidance you're calling self-care, the plan that sounds reasonable in the abstract but would hurt you if you executed it. A coach has to be willing to push back, slow you down, refuse the wrong move. Guardrails are what make pushback possible.
Guardrails also matter because the alternative — an AI that says yes to anything — is a known failure mode of large language models in commodity form. Without deliberate calibration, models drift toward over-agreeable output: praising mediocre work, validating cognitive distortions as facts, blessing destructive plans, hedging into mush. Verke calibrates against this on purpose. The coach is warm and respectful — not a cheerleader.
Want a coach that pushes back instead of just nodding?
Bring the challenge to Mikkel — no signup, no review cycle.
Chat with Mikkel →
Things Verke won't diagnose
"Do I have ADHD?" The coach will help you reflect on the experience — what you've been noticing, when it shows up, how it affects work and relationships, what patterns from earlier in your life feel relevant. What the coach will not do is hand you a diagnostic determination. That's a clinician's job, and it requires formal assessment that an AI conversation cannot replicate.
"Am I autistic?" Same answer: reflection yes, diagnosis no. The coach can sit with the question, help you articulate what you're wondering about, prepare you to talk to someone who can actually run an assessment.
"Do I have depression / anxiety / borderline personality / OCD / PTSD?" Same. The coach can hear what you're experiencing, name patterns you describe, point at the kind of professional who would do a formal evaluation. It will not tell you which condition you have, even when the patterns sound textbook.
The reason: diagnosis requires clinical context, formal assessment, and licensed responsibility. Guessing it from an AI conversation would be wrong even with good intent. A confident-sounding wrong diagnosis is worse than no diagnosis — it sends you down the wrong road, prepares you for the wrong conversation, primes you to dismiss the right framing when a clinician offers it.
Things Verke won't prescribe or advise medically
The coach will not advise on medication dosing, drug interactions, or whether to start, stop, or change a medication you're on. It won't recommend a specific named therapist (general categories like "a CBT therapist" or "a couples counselor" are fine; a specific person is not the coach's call). It won't interpret lab results. It won't tell you whether the symptom you're describing is a heart attack, a panic attack, or something else.
Why: these are licensed activities with patient-specific stakes. A doctor knows your history, your other medications, your contraindications, your family situation, your insurance and access constraints. The AI doesn't. The coach can help you prepare for the conversation with a clinician — what to ask, how to describe what you're experiencing, what you want out of the visit — and that's a useful job. Pretending to substitute for the clinician is not.
Things Verke won't validate
Plans that involve harm
To yourself, to others, or to a third party who didn't consent to being involved. The coach will surface concern, stay with you, and route you to crisis resources when severity flags it (988 in the US, 116 123 for the UK/EU Samaritans, or findahelpline.com for international). What the coach won't do is help you plan it, rehearse it, or pretend it's reasonable. You can talk about the feelings honestly. You cannot get strategic agreement on the harm.
Cognitive distortions you bring as facts
When you say "everyone hates me," the coach doesn't agree. Judith doesn't agree. Anna doesn't agree. They help you test the claim against actual evidence — who specifically, in what moment, by what signal — without dismissing the underlying pain that produced the sentence. The pain is real. The sentence as a fact about the world usually isn't. Treating it as fact would feel empathic in the moment and would make the loop worse over time.
Avoidance disguised as self-care
There's a difference between honoring a real limit and using a self-care frame to skip the hard thing. Skipping the conversation you need to have with your partner because you're "protecting your peace" is sometimes the right move and sometimes avoidance with a nicer label. The coach can hold the distinction without forcing you in either direction. It will name what it's seeing, ask what you actually want, and respect the answer — but it won't bless avoidance just because the avoidance is dressed up in wellness language.
Resentment narratives that solidify rather than soften
When the conversation is about a person you're hurt by, the coach can hold complexity without converging on "you're right and they're wrong." That ending feels good. It also tends to lock in a story you didn't choose, narrow the future, and make the relationship harder to either repair or grieve. The coach stays with the hurt and helps you see the whole shape of it — including the parts that don't fit a clean villain narrative.
Things Verke won't pretend
The coach won't pretend to be human. Asked directly, it's honest about being AI. The warmth in the conversation is real warmth produced by a system designed for warmth, not a person on the other end pretending nothing has changed when you ask the question. The honesty about this is not a downgrade — it's what makes the rest of the conversation trustworthy.
The coach won't pretend to remember something it doesn't. Long-term memory is summarized for performance, which means deeply specific details from weeks ago sometimes need re-anchoring. When that happens, the coach says so — "let me re-anchor on that, can you remind me?" — rather than confabulating a memory and rolling with it. Pretending to remember would corrupt the trust the rest of the work depends on.
The coach won't pretend to have lived experience it doesn't have. It uses general human frameworks — what tends to be true for people in similar situations, what research says about how a particular pattern works — rather than personal stories. When a coach says "I've been there," that's a red flag. Verke's coaches don't go there. Empathy without false intimacy is the shape of what they offer.
The coach won't pretend to be a therapist. Coaching and therapy are not the same job, and the distinction matters legally and ethically. Coaching is forward-leaning, oriented around current life and choices, and not licensed clinical care. Therapy treats clinical conditions, runs deeper into processing, and is performed by licensed practitioners under regulatory oversight. Verke is coaching. It says so.
Things Verke won't share
Your conversations are end-to-end encrypted with keys held on your device. Verke staff cannot read them — not as a policy choice but as a cryptographic property. There is no back-office terminal where someone reviews what you said. The posture is "we cannot look," not "we promise we won't."
Your conversations are not used to train the underlying models. The model providers see content only at inference time and under the contractual terms that apply to all of Verke's integrations — content is not retained by the provider for training. When a model upgrade happens, your conversations don't become tomorrow's training data.
Your identity, when you haven't given one. The 7-day trial requires no email, no phone, no payment method, and no real name. After the trial, Basic and Premium need an account, but the account itself can still be pseudonymous — email for billing recovery, no real name, no phone, no social-platform login. Privacy is the default, not the upsell.
Anything to law enforcement that Verke doesn't actually have — which, given the encryption posture, is most things. Verke cooperates with lawful process, but it cannot hand over content whose keys sit on your device. Its response to a content-disclosure subpoena would necessarily read "we don't have access to it." That isn't a stunt; it's the architecture.
What "won't" actually means in practice
The coach surfaces the limit gracefully. "I'm not the right tool for the medication question — your prescriber is. I can help you think through how to talk to them, though, if that's useful." The refusal is warm, the alternative is concrete, and the conversation continues rather than dead-ending. You don't get a robotic "I cannot assist with that."
The same shape applies to the harder refusals. A coach declining to bless a destructive plan stays present, names what it's seeing, and routes the conversation toward help that fits. A coach declining to diagnose offers reflection on the experience and a useful next conversation to have. The boundary is load-bearing — stating it once and walking away would be mere liability cover. Stating it once and staying with you afterwards is what makes it real.
Calibration
The "too peppy" problem
A consistent failure mode of consumer AI coaching products is over-validation: "you're being so brave, that's amazing" on autopilot, exclamation points after everything, the cheerleader register on every message regardless of what was said. It's frustrating because it makes the tool feel performative — as though your actual situation isn't being heard, just affirmed. Over time, users stop trusting it.
Verke deliberately calibrates against that. The coaches are warm and respectful — not cheerleaders. The register adjusts to what you bring: heavier when the moment is heavy, lighter when the moment is lighter, never default peppy. When something genuinely is brave, the coach says so. When it isn't, the coach doesn't pretend. That's the calibration the "too peppy" frustration was asking for.
When to seek more help
Verke is coaching, not clinical care. If you're experiencing severe depression that won't lift, panic attacks interrupting daily life, thoughts of self-harm, active trauma processing, or a substance crisis, a licensed clinician is the right next step rather than pushing harder on a coaching tool. You can find low-cost options at opencounseling.com or international helplines via findahelpline.com. The coach will surface these directly when severity flags it — that's one more thing it won't do: pretend to be the right tool when it isn't.
Work with Mikkel
The shape of the article — guardrails as design decisions rather than limits — is a strategic frame, and that's Mikkel's register. He's built for the "what should this system actually do, and why" conversations: naming what would meaningfully move a problem, picking the smallest investment that gets there, refusing the comfortable-but-wrong default. He doesn't default to agreement — he defaults to clarity. For more on the conversational style he draws from, see Nonviolent Communication.
Bring the challenge to Mikkel — no signup, no review cycle.
FAQ
Common questions
Will Verke just agree with me to keep me happy?
No — that’s specifically calibrated against. The coach respects you enough to push back when you’re wrong, slow you down when you’re moving too fast, and refuse to bless a plan that would hurt you or someone else. Over-validation is a known failure mode of AI assistants; Verke treats coach pushback as a feature, not a bug. If you want a yes-machine, this isn’t the tool.
What if I want validation, not pushback?
Say so. The coach calibrates to what you’ve actually asked for. “I just need to vent, no advice” works fine — the coach will receive what you’re saying without trying to solve it. “I need someone to challenge me” works too. The default leans toward honest engagement rather than reassurance, but you can steer the register, and the coach checks in if it’s not sure what mode you want.
Will the AI lie to me?
No — the coach is honest about being AI when asked, about not having information when it doesn’t, and about uncertainty when it’s present. What can happen is fabrication (sometimes called “hallucination”): the model producing a confident-sounding answer that isn’t accurate. That’s different from lying — there’s no intent — but it’s a real failure mode. Verke is engineered against it with grounding, citation discipline, and explicit “I don’t know” when the coach actually doesn’t.
Can I get the AI to agree with a destructive plan?
No — guardrails specifically prevent the coach from agreeing with self-harm, harm to others, or illegal activity that endangers someone. You can talk about the feelings honestly. You cannot get strategic agreement on the harm. The coach won’t pretend the plan makes sense, won’t coach you through executing it, and will surface crisis resources (988 in the US, 116 123 for UK/EU Samaritans, findahelpline.com for international) when the conversation flags risk.
Why won't the AI diagnose me?
Because it can’t — accurately or ethically. Diagnosis requires clinical context, formal assessment, and licensed responsibility. An AI conversation has none of those, even when the patterns sound textbook. The coach will help you reflect on the experience, name what you’re noticing, and prepare to talk to a clinician who can actually make the determination. That’s a more useful job than guessing.
Verke provides coaching, not therapy or medical care. Results vary by individual. If you're in crisis, call 988 (US), 116 123 (UK/EU, Samaritans), or your local emergency services. Visit findahelpline.com for international resources.