Patent Pending · Clinical AI Safety Evaluation
Metonym is a patent-pending clinical methodology for evaluating how AI systems detect — or fail to detect — suicide risk in real conversations. Built by a clinical psychologist with two decades of practice, for the systems mental health users now depend on.
The Problem
Millions of users now turn to AI chatbots and conversational agents during moments of acute distress. The systems answering them were not built with clinical evaluation embedded in their development pipelines — and the evidence is increasingly clear that they miss the moments that matter most.
Generic safety filters catch the obvious phrases. They routinely miss the subtler, clinically meaningful shifts that experienced clinicians recognize as escalating risk — decision-state transitions, narrowing of perceived options, temporal constriction, or the calm that follows a settled plan. The gap between "passes content moderation" and "would be recognized by a competent clinician" is where harm lives.
Regulatory pressure on AI mental health systems is building rapidly. Companies deploying these tools need evaluation that meets a clinical bar — not just a technical one.
The Methodology
Metonym combines two patent-pending frameworks that translate decades of clinical expertise into systematic, repeatable evaluation of AI behavior.
Framework 01
A clinical model of how meaningful distress signals present in real conversations — and which signals general-purpose AI systems most often miss. SDM defines what an evaluator should look for, grounded in clinical practice rather than keyword heuristics.
Framework 02
A structured scoring approach that converts clinical judgment into a reproducible measurement of how an AI system performs across a calibrated set of risk scenarios. MSS enables systematic comparison across systems, versions, and deployment configurations.
Who Metonym Serves
About the Author
Licensed clinical psychologist with 20+ years of practice, developer of the Metonym methodology, and AI risk consultant working at the intersection of clinical psychology, forensic evaluation, and AI safety.
Get in Touch
Metonym engages selectively with healthcare AI companies, regulators, and mental health technology developers. Use the form to introduce your organization and the question you're trying to answer. We'll follow up within two business days.
Inquiries from press and academic researchers welcome.