A clinical AI co-pilot that tripled diagnostic throughput and cut report errors by 93%.
Role
Tech Lead & Senior Product Designer
Timeline
2023 — Present
Team
Solo designer, partnering with engineering and the clinical team
Platform
Web app (clinician-facing)
Cover image — Lucent AI dashboard
The Problem
BeaconHealth's core value proposition was not just diagnostics. It was interpretation. Unlike most labs, they gave patients actual recommendations alongside their results. But that required doctors, and they had very few.
Each doctor could get through only a limited number of diagnostic results per day, working manually: reading values, writing findings, and generating recommendations from scratch. Results backed up. Patients waited. Doctors burned out.
The team had already built something valuable: an internal clinical playbook, a model of interpretation rules calibrated specifically for the Nigerian patient population. The question was how to turn that playbook into something that could scale without compromising clinical accuracy or doctor authority.
Before Lucent AI: the manual review workflow
The Approach
The core design principle was non-negotiable from the start: the AI processes, the doctor approves. We could not design a system that felt like it was replacing clinical judgment. That would fail adoption immediately.
Instead, the interaction model positioned the AI as a first-pass analyst. It processes the values fed in by the diagnostic technician through Olewerk, generates its findings and recommendations, and surfaces them to the doctor in a review interface. The doctor's job shifts from writing to verifying.
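The draft-and-approve model above can be sketched as a simple data flow. This is an illustrative sketch only: the type and function names (`AiDraft`, `ReviewDecision`, `finalizeReport`) are hypothetical and not the production Lucent AI or Olewerk API.

```typescript
// Illustrative sketch of the "AI drafts, doctor approves" flow.
// All names here are hypothetical, not the actual implementation.

type AiDraft = {
  findings: string[];
  recommendations: string[];
};

// Nothing reaches the patient without an explicit doctor decision.
type ReviewDecision =
  | { status: "approved" }
  | { status: "edited"; revised: AiDraft }
  | { status: "rejected"; reason: string };

// The doctor's job shifts from writing to verifying: the AI's draft
// is only ever released through one of the three decisions above.
function finalizeReport(draft: AiDraft, decision: ReviewDecision): AiDraft | null {
  switch (decision.status) {
    case "approved":
      return draft;
    case "edited":
      return decision.revised; // the doctor's edits always win
    case "rejected":
      return null; // falls back to fully manual review
  }
}
```

Modeling the decision as a discriminated union makes "silent auto-publish" unrepresentable, which mirrors the non-negotiable principle: the AI processes, the doctor approves.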
Annotated review interface
This changed the design challenge entirely. The question was not how to display lab results. It was how to design a review experience fast enough to matter but thorough enough to catch errors. Speed and trust had to coexist in the same interface.
I ran multiple rounds of iteration with the clinical team, observing their existing workflows, identifying where errors crept in, and designing review states that made anomalies impossible to miss. Every decision was tested against one question: would a doctor trust this?
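One way to think about those review states is as a bucketing rule that renders anomalous values in a visually distinct state the doctor must acknowledge. The sketch below is a hypothetical illustration; the state names and the thresholds are placeholders, not clinical reference ranges or the shipped logic.

```typescript
// Hypothetical review-state model: every value lands in exactly one
// state, so an anomaly can never render as an ordinary row.
type ReviewState = "normal" | "out-of-range" | "critical";

// Thresholds are illustrative placeholders, not clinical reference ranges.
function classify(value: number, low: number, high: number): ReviewState {
  if (value >= low && value <= high) return "normal";
  // Escalate to "critical" when the value sits more than half the
  // range width beyond either bound.
  const margin = 0.5 * (high - low);
  if (value > high + margin || value < low - margin) return "critical";
  return "out-of-range";
}
```

Because the function is total over its inputs, the interface never has an "unclassified" value to quietly skip, which is one way to make anomalies impossible to miss.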
Early wireframe, result review flow
Refined design — approved state
Outcomes
3×
Diagnostic throughput per doctor per day
93%
Reduction in report errors
Days → Hours
Patient result turnaround time
5
Doctors actively on the platform
The Lucent AI review interface — production
Reflection
Designing AI tools for clinical settings taught me that trust is a design output, not a given. Doctors do not adopt tools that feel like they are competing with their judgment. They adopt tools that make their judgment faster and more accurate.
Every interaction, every label, every error state had to reinforce that the human was still in control. Getting that balance right was the hardest and most rewarding design problem I have worked on.