Top 10 AI Prompts and Use Cases in the Healthcare Industry in San Diego
Last Updated: August 26th, 2025

Too Long; Didn't Read:
San Diego health systems are moving AI from pilots to ROI: top use cases include sepsis prediction (COMPOSER - 17% relative mortality reduction, >6,000 admissions), ambient scribes (~2 hours/day saved), MyChart drafting (~30s per reply), and a 130‑patient BP pilot (−14 mmHg in 12 weeks).
San Diego health systems are at a practical inflection point: AI tools that trim paperwork, predict deterioration, and streamline operations are moving from pilot projects to real ROI‑driven deployments, as noted in a roundup of 2025 trends that sees rising risk tolerance for AI paired with demands for measurable value (2025 AI trends in healthcare - HealthTech Magazine); local reporting highlights how these technologies are already helping San Diego providers cut costs and improve efficiency (How AI is helping San Diego healthcare providers cut costs and improve efficiency).
That matters for California leaders juggling equity, regulation and clinician burnout - doctors now face an avalanche of data (about 1,300 data points per ICU patient, versus roughly seven data points decades ago), so practical upskilling like the 15‑week AI Essentials for Work program can help administrators and clinicians turn AI promise into safer, more efficient care.
Program | Length | Early‑bird Cost | Registration |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | AI Essentials for Work registration and syllabus |
“In 2025, we expect healthcare organizations to have more risk tolerance for AI initiatives, which will lead to increased adoption.”
Table of Contents
- Methodology: How we selected the Top 10 Prompts and Use Cases
- Sepsis prediction and early intervention - UC San Diego Health
- Ambient clinical documentation scribe - UC San Francisco Health
- EHR-integrated generative decision support - Epic / UC San Diego Health pilot
- Diagnostic image and report augmentation - Mayo Clinic
- Clinical trial recruitment and screening - Mass General Brigham
- Health equity and bias detection - Duke Health
- Workflow optimization and operational AI - Kaiser Permanente
- Patient-facing conversational triage agent - Stanford Health
- Drug repurposing and research ideation - Vanderbilt Health
- Clinical AI model monitoring and algorithmovigilance - Vanderbilt Health
- Conclusion: Next steps for San Diego clinicians and administrators
- Frequently Asked Questions
Methodology: How we selected the Top 10 Prompts and Use Cases
Selection for the Top 10 prompts and use cases focused on real-world impact in California health systems: priority went to applications already piloted at UC San Diego that delivered measurable gains (for example, an LLM pilot that achieved about 90% agreement with manual quality reporting and can collapse a 63‑step SEP‑1 abstraction into seconds) and to projects governed by clear safety, equity and transparency rules; methods therefore weighted accuracy, time‑savings, clinician workflow reduction, scalability across hospitals, and safeguards for privacy and consent.
Case studies and pilots informed the shortlist - from the UC San Diego pilot that proved LLMs can streamline CMS quality measures (UC San Diego study on AI improving hospital quality reporting) to operational pilots that automatically draft patient portal messages under clinician review (UC San Diego Epic–Microsoft patient messaging pilot) - and each candidate was cross‑checked against UC San Diego Health's ethical, fairness and accountability principles to ensure equity, governance and monitoring (UC San Diego Health AI principles for ethical AI deployment).
Final ranking also favored use cases with built‑in clinician oversight, patient notification, and measurable evaluation plans so leaders can spot benefits, correct bias, and scale tools that actually save time without sacrificing safety.
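To make that weighting concrete, here is a minimal Python sketch of a weighted scoring rubric. The criteria names mirror the methodology above, but the specific weights and 0–10 ratings are illustrative assumptions, not the actual rubric used for this list.

```python
# Hypothetical scoring rubric for ranking use cases; the criteria names mirror
# the methodology, but the weights and 0-10 ratings are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "accuracy": 0.25,
    "time_savings": 0.20,
    "workflow_reduction": 0.20,
    "scalability": 0.15,
    "privacy_safeguards": 0.20,
}

def score_use_case(ratings: dict) -> float:
    """Weighted sum of 0-10 ratings across the methodology's criteria."""
    return sum(w * ratings.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())

candidates = {
    "sepsis_prediction": {"accuracy": 9, "time_savings": 8, "workflow_reduction": 7,
                          "scalability": 8, "privacy_safeguards": 9},
    "ambient_scribe": {"accuracy": 7, "time_savings": 9, "workflow_reduction": 9,
                       "scalability": 7, "privacy_safeguards": 8},
}

for name in sorted(candidates, key=lambda n: score_use_case(candidates[n]), reverse=True):
    print(f"{name}: {score_use_case(candidates[name]):.2f}")
```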
“The integration of LLMs into hospital workflows holds the promise of transforming health care delivery by making the process more real‑time, which can enhance personalized care and improve patient access to quality data.” - Aaron Boussina, lead author
Sepsis prediction and early intervention - UC San Diego Health
UC San Diego Health's COMPOSER shows how local AI can translate into lives saved: activated in December 2022 in the Hillcrest and Jacobs Medical Center emergency departments, this deep‑learning surveillance tool continuously sifts more than 150 EHR variables to flag patients at imminent risk of sepsis - often 4–6 hours before clinicians would typically recognize deterioration - and a before‑and‑after study of over 6,000 admissions found a 17% relative reduction in sepsis mortality.
By sending discreet EHR alerts to nursing teams for bedside review, COMPOSER turns subtle, multi‑signal patterns into timely treatment decisions and scalable workflows, now expanding beyond EDs into inpatient units with a planned East Campus rollout; it's a practical example of AI reducing harm without replacing clinician judgment.
For the full study, see the UC San Diego report on COMPOSER sepsis surveillance and the Mayo Clinic Platform's review of sepsis prediction tools for broader context.
Activated | Deployment sites | Model type | Monitored variables | Outcome | Study size | Rollout |
---|---|---|---|---|---|---|
December 2022 | UC San Diego Medical Center (Hillcrest) & Jacobs Medical Center (La Jolla) EDs | Deep‑learning (artificial neural networks) | More than 150 (labs, vitals, meds, history) | 17% relative reduction in sepsis mortality | Examined more than 6,000 patient admissions | Expanded to inpatient units; East Campus upcoming |
“Our COMPOSER model uses real-time data in order to predict sepsis before obvious clinical manifestations.” - Gabriel Wardi, MD
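COMPOSER's internals aren't published beyond the study, but the deployment pattern - continuous scoring of EHR variables with a threshold that triggers a nursing alert for bedside review - can be sketched. Everything below (the threshold, the names, the placeholder model) is a toy assumption, not UC San Diego's code:

```python
import random  # stand-in for live EHR data in this toy example

RISK_THRESHOLD = 0.80  # illustrative; real deployments tune this against alert fatigue

def predict_sepsis_risk(ehr_snapshot: dict) -> float:
    """Placeholder for a trained deep-learning model scoring 150+ EHR variables."""
    return random.random()

def surveillance_pass(patients: list) -> list:
    """One scoring sweep: flag patients whose risk crosses the alert threshold."""
    alerts = []
    for patient in patients:
        if predict_sepsis_risk(patient["ehr"]) >= RISK_THRESHOLD:
            # In production this posts a discreet EHR alert for bedside nursing
            # review - it informs clinicians, it does not order treatment.
            alerts.append(patient["mrn"])
    return alerts

print(surveillance_pass([{"mrn": "A123", "ehr": {}}, {"mrn": "B456", "ehr": {}}]))
```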
Ambient clinical documentation scribe - UC San Francisco Health
UC San Francisco Health is piloting an “ambient” AI scribe that listens to clinical encounters and drafts notes in real time - Ambience Healthcare returned “impressive draft notes quickly” across demos and UCSF is rolling the tool out carefully to about 100 ambulatory physicians and pediatric ED teams in Oakland and Mission Bay so clinicians can review and edit before signing; the program is overseen by a multidisciplinary AI Scribe team and an AI Governance Committee to manage safety, privacy and clinician workflow integration (UCSF revolutionizing healthcare documentation, UCSF AI Scribe Program overview).
Early literature and reviews flag real gains - less after‑hours EHR time and potential burnout reduction - while also urging rigorous evaluation for hallucinations, omissions and medicolegal risk (JMIR Medical Informatics review of ambient AI scribes); in short, ambient scribes promise to free clinician attention back to the patient, but only if governance, clinician oversight, and careful measurement travel with deployment.
Vendor | Pilot size | Launch sites | Clinician oversight |
---|---|---|---|
Ambience Healthcare | ~100 physicians (initial) | Pediatrics EDs (Oakland, Mission Bay) and ambulatory clinics | Physician review and edit required before signing |
“Scribes have had an incredibly positive impact on our doctors and patients at UCSF. Having a scribe allows doctors to focus their attention where it belongs during a visit – with the patient – and not have to stare at the screen and type the whole time during visits.” - Dr. Tom Chi
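The safety pattern here is a hard human-in-the-loop gate: the AI may draft, but only a clinician can sign. A minimal sketch of that gate follows; it is a hypothetical structure, not UCSF's or Ambience's actual software.

```python
# Hypothetical human-in-the-loop gate: nothing enters the chart until a
# clinician reviews, edits, and signs the AI-drafted note.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DraftNote:
    text: str
    signed_by: Optional[str] = None
    edits: list = field(default_factory=list)

    def sign(self, clinician: str, reviewed: bool) -> None:
        if not reviewed:
            raise ValueError("Draft must be reviewed and edited before signing.")
        self.signed_by = clinician

note = DraftNote(text="HPI: 8-year-old with 2 days of cough and low-grade fever...")
note.edits.append("Corrected amoxicillin dose in the plan.")
note.sign(clinician="attending_md", reviewed=True)
print(note.signed_by)
```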
EHR-integrated generative decision support - Epic / UC San Diego Health pilot
San Diego clinicians are already testing what happens when generative AI lives inside the EHR: UC San Diego Health joined an early Epic–Microsoft pilot that uses Azure OpenAI to draft MyChart replies and other patient‑facing messages so clinicians start with a polished draft they can edit, helping to shave routine typing from the day; the pilot's early feedback has been positive and every auto‑generated message carries a clear disclaimer and clinician review before it's sent (UC San Diego Health pilot of Epic–Microsoft MyChart drafting).
At the platform level, Epic and Microsoft are rolling ambient and generative tools - Nuance/DAX and the broader Dragon Copilot - deep into workflows to automate notes, surface evidence, and even capture orders from conversations, which promises to return time to bedside care (Microsoft Dragon Copilot features and EHR integration).
Epic's Launchpad is accelerating adoption across hospitals with ready‑to‑run workflows (the MyChart drafting assistant is reported to trim roughly 30 seconds per patient reply), so San Diego leaders can pilot carefully, measure safety and bias, and scale solutions that demonstrably reduce documentation burden without sacrificing oversight (Epic Launchpad's MyChart drafting assistant).
“Across the country, doctors are being inundated with messages, and it's a real problem we need to solve.” - Dr. Christopher Longhurst, UC San Diego Health
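Since this is a prompts-and-use-cases playbook, here is a hedged sketch of what a draft-then-review portal reply could look like. The prompt wording, guardrail phrasing, and disclaimer text are illustrative assumptions, not the actual Epic–Azure OpenAI implementation:

```python
# Illustrative only: the real Epic-Azure OpenAI prompt and API wiring are not
# public. This sketch shows the general draft-then-review shape of the pilot.
DISCLAIMER = (
    "This message was drafted with the help of automation and was reviewed "
    "and approved by your care team before sending."
)

def build_draft_prompt(patient_message: str, chart_context: str) -> str:
    """Assemble a hypothetical drafting prompt for a clinician-reviewed reply."""
    return (
        "You are drafting a reply to a patient portal message for a clinician "
        "to review and edit before sending. Be accurate, empathetic, and "
        "concise. Do not offer new diagnoses or medication changes.\n\n"
        f"Chart context: {chart_context}\n"
        f"Patient message: {patient_message}\n\n"
        "Draft reply:"
    )

def finalize_reply(clinician_approved_text: str) -> str:
    """Per the pilot, every auto-generated message carries a clear disclaimer."""
    return f"{clinician_approved_text}\n\n{DISCLAIMER}"

print(build_draft_prompt(
    "Is it safe to take ibuprofen with my new prescription?",
    "Patient on lisinopril; last BMP normal.",
))
```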
Diagnostic image and report augmentation - Mayo Clinic
Diagnostic image and report augmentation is advancing fast thanks to Mayo Clinic's work on multimodal foundation models that fuse images and text to speed radiology workflows: the Mayo Clinic–Microsoft Research collaboration is explicitly targeting chest X‑rays to automatically generate reports, flag tube and line placement, and detect interval changes from prior films - functions that could trim routine reads and surface urgent problems faster for California hospitals (Mayo Clinic multimodal foundation models with Microsoft Research).
In parallel, Mayo's partnership with Cerebras is training a genomic foundation model at scale using Cerebras' wafer‑scale engines (the famously large “dinner‑plate” AI chips) to accelerate personalized treatment signals; early clinical benchmarks show strong performance for tasks like cancer predisposition and drug‑response prediction, hinting that image and genomic augmentation together could shorten diagnostic timelines and reduce repetitive reads for busy imaging departments in San Diego and statewide (Mayo Clinic and Cerebras genomic foundation model - BiopharmaTrend).
These tools won't replace radiologists but promise pragmatic gains - faster, more consistent reports and clearer triage for emergent findings that matter at the bedside.
Partner | Focus | Key capabilities / metrics |
---|---|---|
Mayo Clinic + Microsoft Research | Radiology foundation model (chest X‑ray) | Auto‑reporting; tube/line placement evaluation; detect interval change |
Mayo Clinic + Cerebras Systems | Genomic foundation model | Trained at scale on exome + reference genome; reported metrics include cancer predisposition 96%, cardiovascular risk 83%, RA drug response 87% |
“Multimodal foundation models hold immense promise in tackling significant roadblocks across the radiology ecosystem. The innovations we're creating with Microsoft Research will help unlock valuable insights for the future of medical imaging to improve how radiologists work and how patients are cared for.” - Matthew Callstrom, M.D., Ph.D.
Clinical trial recruitment and screening - Mass General Brigham
Mass General Brigham's work shows how AI can shave friction from the hardest part of trials - finding the right people: a newsroom write‑up reports that generative models can accurately screen heart‑failure patients for trial eligibility, improving screening efficiency and helping teams surface candidates faster (Mass General Brigham AI screens heart‑failure patients for clinical trial eligibility).
Their “clinical‑trial” approach to evaluating AI - small, measured pilots that scale only after safety, monitoring and workflow checks - helped move ambient documentation from a 20‑physician test to broader rollout while tracking concrete outcomes, a useful model for San Diego systems balancing innovation with patient safety (Mass General Brigham clinical‑trial approach to AI - Becker's).
Recruitment tools at scale matter: Rally lists roughly 3,700 ongoing trials and lets prospective volunteers save searches or get weekly alerts, and practical improvements such as Advarra participant payments (reloadable Visa cards credited within five minutes) cut real barriers to participation - one small operational tweak that can lift enrollment rates and participant satisfaction (Rally clinical trial matching and alerts - Mass General Brigham, Mass General Brigham participate in research information), offering San Diego leaders a concrete playbook for faster, fairer screening and recruitment.
“When it comes to the technology, one of the biggest challenges we're facing right now is how do we evaluate these technologies? Do we continue to use, and I think we should, our clinical trials-informed approach and how robust should that change be based on the risk of the application?” - Rebecca Mishuris, MD
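One way to picture LLM-assisted eligibility screening is criterion-by-criterion extraction with an explicit abstain option, routed to a coordinator for confirmation. The criteria and prompt below are illustrative, not Mass General Brigham's pipeline:

```python
# Hypothetical shape of LLM-assisted trial screening; the criteria and prompt
# are illustrative, not Mass General Brigham's actual pipeline.
TRIAL_CRITERIA = [
    "Age 18 or older",
    "Heart failure with reduced ejection fraction (LVEF below 40%)",
    "No stroke hospitalization in the past 90 days",
]

def build_screening_prompt(clinical_note: str) -> str:
    """Ask the model to judge each criterion separately, with an abstain option."""
    criteria = "\n".join(f"- {c}" for c in TRIAL_CRITERIA)
    return (
        "For each eligibility criterion below, answer MET, NOT MET, or "
        "INSUFFICIENT INFORMATION, quoting the supporting text from the note.\n\n"
        f"Criteria:\n{criteria}\n\n"
        f"Clinical note:\n{clinical_note}"
    )

# In practice the output routes to a study coordinator for confirmation -
# the model surfaces candidates; humans decide enrollment.
print(build_screening_prompt("72M with HFrEF (LVEF 30%), admitted for dyspnea..."))
```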
Health equity and bias detection - Duke Health
Duke Health's practical work on bias detection and governance offers a playbook California systems can use to keep AI fair and local: tools like the SCRIBE evaluation framework and complementary JAMIA methods give hospitals structured ways to test ambient scribes and LLMs for accuracy, fairness, and resilience before they touch a chart (Duke Health AI evaluation and governance publications), while the BE FAIR framework explicitly empowers nurses to spot and mitigate algorithmic bias at the bedside so equity is designed in, not bolted on.
Duke leaders also argue for recurring local validation - an MLOps‑style rhythm of site‑specific checks - instead of one‑time external validation, and participate in national efforts like TRAIN to share governance best practices; that matters for San Diego because local data shifts and diverse populations can make a once‑usable model dangerous if left unchecked.
These approaches are practical: ambient scribing pilots have reclaimed clinicians' “pajama time” (some report roughly two hours back per clinical day), but Duke's research emphasizes that measurable evaluation, ongoing monitoring, and nurse‑led equity checks are the guardrails that turn time savings into trustworthy, equitable care (Duke innovates on implementing and assessing AI, Duke frameworks for evaluating large language models - HCIT).
“Ambient AI holds real promise in reducing documentation workload for clinicians,” said Chuan Hong, PhD.
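A nurse-led equity spot check can start as simply as slicing a model's sensitivity by demographic subgroup. This toy example uses synthetic records and is not Duke's SCRIBE or BE FAIR tooling:

```python
# Minimal subgroup performance check of the kind nurse-led equity reviews use;
# the records are synthetic stand-ins.
from collections import defaultdict

records = [  # (demographic_group, model_prediction, true_label)
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 1, 1),
]

caught, positives = defaultdict(int), defaultdict(int)
for group, pred, label in records:
    if label == 1:  # sensitivity: of the true positives, how many were caught?
        positives[group] += 1
        caught[group] += int(pred == 1)

for group in sorted(positives):
    print(f"{group}: sensitivity = {caught[group] / positives[group]:.2f}")
# A large gap between subgroups is a signal to pause, investigate, and remediate
# before scaling - exactly the recurring local validation Duke recommends.
```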
Workflow optimization and operational AI - Kaiser Permanente
Kaiser Permanente Northern California offers a clear blueprint for operational AI that matters to San Diego hospitals: predictive systems embedded in the EHR are already trimming avoidable admissions, speeding triage, and giving clinicians actionable lead time.
Their Advance Alert Monitor (AAM) scans nearly 100 data elements hourly across 21 hospitals to give virtual nurse teams about a 12‑hour heads‑up on patients likely to deteriorate - work that helped avert roughly 520 deaths per year in a major analysis and triggers more than 16,000 alerts a year for bedside review (Kaiser Permanente Advance Alert Monitor program).
ED risk tools like STRIDE‑HF were prospectively validated on 13,274 acute heart‑failure visits (identifying ~11% very‑low‑risk patients with <1% 30‑day mortality) and became available systemwide in January 2025, showing how bedside decision support can safely reduce admissions (STRIDE‑HF emergency department risk tool study).
Complementary CREST projects tie EMS data to triage and test algorithms like AGES for geriatric surgery risk, illustrating practical, measurable ways AI can unclog EDs and return time to direct patient care.
Tool / Project | Scope | Key metric / outcome |
---|---|---|
Advance Alert Monitor (AAM) | 21 KPNC hospitals | ~12‑hour lead time; ~520 deaths prevented/year; >16,000 alerts/year |
STRIDE‑HF | KPNC EDs (prospective 2023) | 13,274 patients; 11.4% very‑low‑risk (<1% 30‑day death) |
AGES model / CREST projects | ED risk stratification, EMS data integration | Predictive value ≈0.8 (AGES); improved triage and throughput |
“Analytics tools allow us to use complex patient data to improve our care in real-time.” - Vincent Liu, MD, MS
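The STRIDE‑HF idea - flag very‑low‑risk patients so clinicians can consider outpatient management - reduces to a probability gate over a validated risk model. A minimal sketch follows; the placeholder model and feature handling are assumptions, and only the <1% 30‑day mortality cutoff comes from the study described above:

```python
# Illustrative gate in the spirit of STRIDE-HF; the placeholder model is an
# assumption, not the validated model.
VERY_LOW_RISK_CUTOFF = 0.01  # predicted 30-day mortality under 1%

def predicted_30day_mortality(features: dict) -> float:
    """Stand-in for the validated risk model's probability output."""
    return 0.006  # toy value for demonstration

def disposition_hint(features: dict) -> str:
    """Decision support only: the clinician, not the score, decides disposition."""
    if predicted_30day_mortality(features) < VERY_LOW_RISK_CUTOFF:
        return "very low risk: consider outpatient management"
    return "not very low risk: standard ED workup and disposition"

print(disposition_hint({"age": 68, "bnp": 450, "systolic_bp": 132}))
```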
Patient-facing conversational triage agent - Stanford Health
Patient‑facing conversational triage agents are moving from curious demos to operational tools that could reshape how Californians access urgent care: Stanford's Nature Medicine‑reported experiments show a chatbot can outperform doctors who rely only on internet searches, and clinicians paired with a chatbot match that performance - suggesting a practical human+AI workflow that augments decision‑making rather than replaces it (Stanford Medicine chatbot study showing chatbot vs. clinicians).
For crisis and mental‑health pathways, Stanford teams built CMD‑1, an NLP triage layer that reduced time to reach patients from over 10 hours to about 10 minutes and achieved 97% sensitivity and specificity by surfacing high‑risk messages into human workflows (Stanford CMD-1 NLP triage for mental health crises).
Those operational wins matter for San Diego systems wrestling with ED crowding and rural access: rigorous, clinically grounded evaluation frameworks like Stanford's MedHELM - designed to test LLMs on 121 real‑world tasks with mapped datasets - give leaders a way to measure safety, bias, and utility before broad rollout (MedHELM benchmark and evaluation methods for medical LLMs).
The bottom line: well‑governed conversational triage can speed help for the sickest patients while keeping clinicians squarely in the loop.
Study / Tool | Key result |
---|---|
Stanford chatbot study | Chatbot outperformed doctors with only internet/resources; doctors + chatbot matched chatbot performance |
CMD‑1 (crisis detector) | Reduced response time from >10 hours to ~10 minutes; 97% sensitivity & 97% specificity |
MedHELM (evaluation) | Benchmarked LLMs across 121 real‑world clinical tasks using 31 datasets |
“For years I've said that, when combined, human plus computer is going to do better than either one by itself.”
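CMD‑1's value comes from routing: an NLP risk score decides whether a message waits in a queue or pages a human immediately. The toy router below illustrates the pattern; the keyword scoring stands in for a trained classifier and is not the actual CMD‑1 model:

```python
# Toy triage router in the spirit of CMD-1: a risk score decides whether a
# message waits in the queue or pages a human now. The keyword scoring below
# stands in for a trained NLP classifier.
HIGH_RISK_TERMS = ("suicide", "overdose", "hurt myself", "can't go on")

def crisis_score(message: str) -> float:
    """Placeholder for a classifier's risk probability."""
    text = message.lower()
    return 0.95 if any(term in text for term in HIGH_RISK_TERMS) else 0.05

def route(message: str) -> str:
    if crisis_score(message) >= 0.5:
        return "page on-call clinician immediately"  # minutes, not hours
    return "standard message queue"

print(route("I feel like I can't go on anymore"))  # -> page on-call clinician immediately
```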
Drug repurposing and research ideation - Vanderbilt Health
Vanderbilt's pilot shows how generative AI can turn a mountain of biomedical literature into a short, testable list of drug‑repurposing hypotheses - a practical approach San Diego researchers can adapt by pairing LLM‑driven ideation with local EHR checks.
In the npj Digital Medicine–style study, investigators used sequential ChatGPT prompts to nominate top candidates for Alzheimer's repurposing and then validated signals by measuring Alzheimer's incidence in patients over 65 across VUMC and the All of Us Research Program; notable hits included metformin, losartan, and simvastatin, all supported by meta‑analytic signals.
The upside is speed: LLMs can rapidly triage thousands of leads so teams spend lab time only on the most plausible pairs, while the downside is familiar - models can miss under‑reported drugs and need continuous benchmarking.
For health systems in California, that means a cost‑effective pipeline for early hypothesis testing that combines AI's literature‑scale recall with rigorous, local dataset validation rather than leaping straight to costly trials (see the Vanderbilt pilot on AI‑driven repurposing and Vanderbilt's CKM drug‑repurposing incubator for methods and governance).
Study | Method | Validation datasets | Example candidates |
---|---|---|---|
Vanderbilt LLM pilot (May 2024) | Sequential ChatGPT prompts to list top drugs | VUMC clinical data & All of Us Research Program (patients >65) | Metformin; Losartan; Simvastatin |
“When I first saw the list of candidates, I was shocked. The list was surprisingly rational, with some of the drugs already being studied for potential use in treating Alzheimer's.”
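The two-stage pattern - LLM ideation first, local EHR validation second - can be sketched briefly. The prompt text and the counts below are illustrative stand-ins for the study's sequential prompting and adjusted analyses:

```python
# Sketch of the two-stage pattern: LLM ideation, then local EHR validation.
# The prompt wording and counts are illustrative; a real analysis would
# adjust for confounders rather than use a crude ratio.
IDEATION_PROMPT = (
    "List the 10 approved drugs most plausible for repurposing against "
    "Alzheimer's disease, with a one-sentence mechanistic rationale for each."
)

def incidence_ratio(cases_exposed: int, n_exposed: int,
                    cases_unexposed: int, n_unexposed: int) -> float:
    """Crude incidence ratio of Alzheimer's in drug-exposed vs. unexposed patients."""
    return (cases_exposed / n_exposed) / (cases_unexposed / n_unexposed)

# Hypothetical counts for one candidate drug in patients over 65:
ratio = incidence_ratio(80, 10_000, 120, 10_000)
print(f"{ratio:.2f}")  # values well below 1.0 suggest a protective signal worth study
```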
Clinical AI model monitoring and algorithmovigilance - Vanderbilt Health
Clinical AI model monitoring and algorithmovigilance are the guardrails that turn pilot projects into safe, scalable tools for San Diego hospitals: teams must watch for silent failures - data drift, concept drift and training‑serving skew - that can slowly erode models' usefulness and patient safety, and set up triggerable alerts, cohort‑level slices and retraining workflows before problems cascade.
Practical playbooks from ML observability emphasize a mix of direct performance metrics (accuracy, precision/recall where labels exist), proxy signals like prediction drift, and statistical checks (KS tests, PSI, Jensen‑Shannon) to catch input shifts early; see the Evidently ML in-production guide and open-source library for concrete dashboards and test suites, and Datadog's recommendations for pairing model metrics with system telemetry and alerting.
A model can work brilliantly on day one and still “decay” invisibly tomorrow. For San Diego clinical leaders juggling equity and compliance, the takeaway is simple but vivid: embed monitoring, human review paths, and clear action triggers into any deployment so clinicians keep the final say and hospitals keep patients safe.
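For the statistical checks named above, here is a minimal drift monitor using a standard two-sample KS test (scipy) and a hand-rolled PSI. The feature values, thresholds, and alert rule are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp  # standard two-sample Kolmogorov-Smirnov test

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and live inputs."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5_000)  # e.g., a feature's distribution at validation
live = rng.normal(0.4, 1.2, 5_000)       # e.g., this month's production inputs

drift_psi = psi(reference, live)
ks_stat, ks_p = ks_2samp(reference, live)
# Common rule of thumb: PSI above 0.2 (or a significant KS shift) warrants review.
if drift_psi > 0.2 or ks_p < 0.01:
    print(f"Drift alert: PSI={drift_psi:.2f}, KS p={ks_p:.3g} -> route to human review")
```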
Conclusion: Next steps for San Diego clinicians and administrators
San Diego clinicians and administrators should treat the Top 10 use cases as a playbook: start with small, measurable pilots that pair clinician oversight with clear equity and monitoring plans, then use local seed funding and university mechanisms to scale what proves safe and effective.
Practical next steps include hunting UC San Diego research funding opportunities to underwrite feasibility work (UC San Diego research funding opportunities), pursuing targeted pilot grants like ACTRI's one‑year Pilot Project Awards (up to $30,000) to get early data, and partnering with community systems and vendors to test interventions that already show promise - for example, Neighborhood Healthcare's CIPRA.ai pilot (130 patients) produced an average 14 mmHg systolic drop in 12 weeks without adding medications, a vivid reminder that well‑designed AI pilots can change care quickly (Neighborhood Healthcare AI blood pressure pilot results).
Invest in workforce readiness so clinicians can lead safe deployments (consider practical upskilling like the Nucamp AI Essentials for Work 15-week program to teach usable prompts, tools, and governance: Nucamp AI Essentials for Work syllabus and details) and catalog operational metrics from day one so San Diego systems can move from pilots to equitable, monitored production.
Opportunity | Size / Note |
---|---|
UC San Diego Find Funding | Internal & external pilot/seed opportunities; commercialization support |
ACTRI Pilot Project Awards | One‑year awards up to $30,000; 2025 call opens Aug 4; deadline Sept 18, 2025 |
PanKbase / AI pilot programs | Research & partner awards (Research Teams ≤$150k; Partner Programs ≤$200k) |
Neighborhood Healthcare AI pilot | Enrolled 130 patients; avg systolic BP −14 mmHg over 12 weeks |
“The positive results we witnessed came without needing to add more medication to most treatment plans.” - Michelle Hughes, PharmD
Frequently Asked Questions
What are the top AI use cases transforming healthcare in San Diego?
Key use cases include sepsis prediction and early intervention (e.g., UC San Diego's COMPOSER), ambient clinical documentation scribes (UCSF pilots), EHR‑integrated generative decision support (Epic/UC San Diego pilot), diagnostic image and report augmentation (Mayo Clinic partnerships), clinical trial recruitment and screening (Mass General Brigham), health equity and bias detection frameworks (Duke Health), workflow and operational AI (Kaiser Permanente AAM and STRIDE‑HF), patient‑facing conversational triage agents (Stanford), drug repurposing and research ideation (Vanderbilt), and clinical AI model monitoring/algorithmovigilance (Vanderbilt/Evidently ML practices). These were chosen for measurable time savings, clinical impact, clinician oversight, scalability, and governance safeguards.
How were the Top 10 prompts and use cases selected and evaluated?
Selection prioritized real‑world impact in California systems with pilots that produced measurable gains (accuracy, time‑savings, workflow reduction). Candidates were informed by UC San Diego and other case studies, weighted for scalability across hospitals, clinician oversight, privacy/consent safeguards, and cross‑checking against ethical, fairness and accountability principles. Final ranking favored projects with built‑in clinician review, patient notification, and measurable evaluation plans to detect bias and ensure safety.
What measurable outcomes have local San Diego or partner pilots achieved?
Examples include COMPOSER's ~17% relative reduction in sepsis mortality across >6,000 admissions and earlier detection 4–6 hours before clinical recognition; Epic/MyChart drafting pilots reported roughly 30 seconds saved per patient reply; ambient scribe pilots reclaimed clinician after‑hours time (reports of ~2 hours/day saved in some pilots); Neighborhood Healthcare's CIPRA.ai pilot showed an average −14 mmHg systolic drop across 130 patients in 12 weeks; Kaiser Permanente AAM projects ~12‑hour lead time and analysis suggesting ~520 deaths averted per year systemwide. Other partners reported high sensitivity/specificity for triage tools (CMD‑1 ~97%/97%) and promising genomic/radiology augmentation metrics in lab benchmarks.
What governance, equity, and monitoring practices are recommended before scaling AI?
Recommended practices: run small, measured pilots with clinician oversight and patient notification; adopt local recurring validation (MLOps‑style) to catch data/concept drift; use evaluation frameworks (e.g., SCRIBE, BE FAIR, MedHELM) to test accuracy and bias; instrument model monitoring dashboards (accuracy, precision/recall, prediction drift, PSI/KS/Jensen‑Shannon) and trigger retraining or human review when thresholds breach; form multidisciplinary governance committees; require clinician review before sending AI‑drafted messages or notes; and prioritize equity checks (nurse‑led spot checks, subgroup analyses) to ensure local safety and fairness.
What are practical next steps for San Diego clinicians and administrators who want to adopt these AI solutions?
Start with targeted, measurable pilots tied to local operational metrics and clinician oversight; pursue seed funding and pilot grants (UC San Diego internal funding, ACTRI Pilot Project Awards up to $30,000, PanKbase/partner awards); partner with university researchers and vetted vendors; embed monitoring and human action triggers from day one; invest in workforce readiness and pragmatic upskilling (e.g., 15‑week AI Essentials for Work) so clinicians can lead deployments; and scale only after demonstrating safety, measurable benefit, and equity protections.