Patent Pending · Clinical AI Safety Evaluation

The clinically meaningful moments most AI systems miss.

Metonym is a patent-pending clinical methodology for evaluating how AI systems detect — or fail to detect — suicide risk in real conversations. Built by a clinical psychologist with two decades of practice, for the systems mental health users now depend on.

The Problem

AI mental health tools are being deployed faster than clinical evaluation can keep up.

Millions of users now turn to AI chatbots and conversational agents during moments of acute distress. The systems answering them were not built with clinical evaluation embedded into their development pipeline — and the evidence is increasingly clear that they miss the moments that matter most.

Generic safety filters catch the obvious phrases. They routinely miss the subtler, clinically meaningful shifts that experienced clinicians recognize as escalating risk — decision-state transitions, narrowing of perceived options, temporal constriction, or the calm that follows a settled plan. The gap between "passes content moderation" and "would be recognized by a competent clinician" is where harm lives.

Regulatory pressure on AI mental health systems is building rapidly. Companies deploying these tools need evaluation that meets a clinical bar — not just a technical one.

The Methodology

A clinical framework for evaluating AI performance in suicide risk conversations.

Metonym combines two patent-pending frameworks that translate decades of clinical expertise into systematic, repeatable evaluation of AI behavior.

Framework 01

Salient Distress Model (SDM)

A clinical model of how meaningful distress signals present in real conversations — and which signals are most often missed by general-purpose AI systems. SDM defines what an evaluator should be looking for, grounded in clinical practice rather than keyword heuristics.

Framework 02

Mechanical Severity Score (MSS)

A structured scoring approach that converts clinical judgment into a reproducible measurement of how an AI system performs across a calibrated set of risk scenarios. MSS enables systematic comparison across systems, versions, and deployment configurations.
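The kind of cross-system comparison MSS enables might look something like the sketch below. Everything here is a hypothetical illustration: the scenario names, the 0–3 severity rubric, and the mean-based aggregate are assumptions made for the example, not the actual patent-pending MSS scoring rules.

```python
# Purely illustrative -- NOT the actual MSS methodology, whose scoring
# rules are patent-pending and not described in this overview.
# Assumes a hypothetical 0-3 rating per risk scenario
# (0 = missed risk entirely, 3 = clinically appropriate response)
# and a simple mean as the aggregate.
from statistics import mean

def aggregate_score(scenario_scores: dict[str, int]) -> float:
    """Collapse per-scenario severity ratings into one comparable number."""
    return round(mean(scenario_scores.values()), 2)

# Hypothetical ratings for two versions of the same system across
# a small calibrated scenario set.
v1 = {"temporal_constriction": 1, "settled_calm": 0, "narrowed_options": 2}
v2 = {"temporal_constriction": 2, "settled_calm": 2, "narrowed_options": 3}

print(aggregate_score(v1))  # 1.0
print(aggregate_score(v2))  # 2.33
```

The point of a fixed scenario set and a fixed rubric is that the same number means the same thing across systems, versions, and deployment configurations; per-scenario ratings are retained so regressions on specific risk presentations stay visible even when the aggregate improves.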


1,700+ · AI model evaluations across multiple scenarios
PsyD · Clinical psychology authorship
Pending · US Provisional Patent filed May 2026

Read the methodology overview →

Who Metonym Serves

Built for organizations whose AI systems may encounter users in crisis.


About the Author

Dr. Laura L. Walsh, PsyD

Licensed clinical psychologist with 20+ years of practice, developer of the Metonym methodology, and AI risk consultant working at the intersection of clinical psychology, forensic evaluation, and AI safety.

Read the full bio →
Walsh Psychology →

Get in Touch

Request a clinical briefing.

Metonym engages selectively with healthcare AI companies, regulators, and mental health technology developers. Use the form to introduce your organization and the question you're trying to answer. We'll follow up within two business days.

Inquiries from press and academic researchers welcome.

Prefer email? Write to laura@walshpsychology.com.