Top 10 AI Prompts and Use Cases in the Healthcare Industry in Raleigh
Last Updated: August 24, 2025

Too Long; Didn't Read:
Raleigh healthcare integrates AI across diagnostics, staffing and patient outreach: top use cases include Sepsis Watch (≈50,000 records, 5‑hour lead, doubled SEP‑1 compliance), 70,000+ CT nodule scoring, OR scheduling (33,000 cases, 13% accuracy gain), and portal triage cutting 12–15 messages/day.
Raleigh sits at the center of the Research Triangle's surge in healthcare innovation, where hospitals, radiology groups and startups are layering AI into diagnostics, predictive staffing and patient outreach to cut costs and speed care - a trend captured in coverage of the region's growing health‑tech hub (Research Triangle healthcare innovation and health‑tech hub).
Local examples range from generative tools that accelerate radiology reporting to RTP teams building apps that help patients appeal insurance denials quickly; one RTP app's founders report faster reversals and real relief for families who once spent hours on hold (RTP AI app to fight health insurance denials).
For clinicians and staff looking to join this wave, practical programs such as Nucamp's AI Essentials for Work 15‑week bootcamp teach usable prompt‑writing and workflow skills that translate directly to hospitals and clinics.
“The amount of stress it adds to your life right when somebody's already sick, I literally think it's killing people.”
Table of Contents
- Methodology: How we selected the Top 10
- Duke Health - Sepsis Watch: Sepsis detection and rapid-response prompts
- UNC Health - Generative AI Staff Chatbot: Internal knowledge and triage prompts
- WakeMed - AI-drafted Patient Portal Messages: Empathetic response prompts
- Atrium Health Wake Forest Baptist - Virtual Nodule Clinic: Lung nodule scoring prompts
- Novant Health & Viz.ai - Acute Imaging Triage: Stroke and ER triage prompts
- OrthoCarolina - Medical Brain: Post-surgical follow-up and digital assistant prompts
- Duke Health - OR Scheduling Model: Surgery duration prediction prompts
- Novant Health - Behavioral Health Acuity Risk Model: Suicide-risk triage prompts
- Wake Forest University School of Medicine - Cognitive Health Index: Alzheimer's treatment selection prompts
- Glean - Enterprise Knowledge: Onboarding, summarization and agent prompts for healthcare operations
- Conclusion: Getting started with AI prompts in Raleigh healthcare - safety, governance, next steps
- Frequently Asked Questions
Check out next:
Explore concrete outcomes in our Wake County AI case studies that demonstrate measurable clinical and operational gains.
Methodology: How we selected the Top 10
(Up)Selections for the Top 10 prioritized real-world benefit in North Carolina across four criteria. First, demonstrated clinical impact and measurable outcomes: examples from the North Carolina Health News survey (like Duke's Sepsis Watch, trained on over 42,000 encounters and tied to roughly a 31% drop in sepsis mortality) showed which prompts and workflows moved the needle. Second, operational value and fit with existing workflows, echoing industry guidance that AI must solve a specific problem rather than be used for its own sake (NCACPA guidance on adopting AI in health care: Embracing the AI Wave). Third, safety, evaluation and governance: only systems with clear evaluation frameworks or alignment with state guidance earned higher ranking (see the North Carolina government's NCDIT Responsible Use of AI framework and Duke's SCRIBE/JAMIA evaluation work on clinical LLMs and ambient scribing via Duke Health SCRIBE evaluation for safe, scalable AI in health care). Finally, local research and scalability potential (UNC and Carolina labs provide a talent and validation pipeline).
The methodology favored replicable, clinician-reviewed prompts and those with governance, measurable workflows, or clear patient-safety nets - so readers can see which ideas are proven in practice, not just promising in theory.
Selection Criterion | Evidence Source |
---|---|
Clinical impact & metrics | North Carolina Health News (Sepsis Watch outcomes) |
Problem‑tool fit & operations | NCACPA “Embracing the AI Wave” |
Safety & evaluation frameworks | NCDIT Responsible Use; Duke SCRIBE/JAMIA |
Local research & scalability | UNC research & campus AI initiatives |
“Ambient AI holds real promise in reducing documentation workload for clinicians. But thoughtful evaluation is essential. Without it, we risk implementing tools that might unintentionally introduce bias, omit critical information, or diminish the quality of care. SCRIBE is designed to help prevent that.” - Chuan Hong, Ph.D.
Duke Health - Sepsis Watch: Sepsis detection and rapid-response prompts
(Up)Duke Health's Sepsis Watch turned a deep‑learning model into a practical, nurse‑centered early‑warning system that pulls EHR data into a dashboard and risk‑stratification app so rapid‑response team nurses and ED clinicians can act before patients crash; the project - built and deployed at Duke - analyzed roughly 50,000 records (tens of millions of data points) and delivered a median five‑hour prediction lead time, helped double 3‑hour SEP‑1 bundle compliance, and was rolled into ED workflows via a bespoke iPad interface and close clinician collaboration (Duke Institute for Health Innovation Sepsis Watch project details).
Independent analysis of the program's real‑world rollout highlights that the tool scores risk regularly and required intensive social integration - training, new communication patterns, and leadership support - to realize mortality reductions and operational gains (MIT Technology Review analysis of Sepsis Watch real‑world rollout), a vivid reminder that smart models must meet clinical workflows to save lives.
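The workflow above - a model scoring EHR data on a regular cadence, with high-risk alerts routed to rapid-response nurses - can be sketched as a simple threshold rule. Everything below is illustrative: the function names, thresholds, and action labels are hypothetical, not Duke's actual model.

```python
# Illustrative sketch of a periodic sepsis risk check.
# Thresholds and action names are hypothetical, not Duke's model.

def triage_sepsis_risk(score: float, alert_threshold: float = 0.6,
                       watch_threshold: float = 0.3) -> str:
    """Map a model risk score in [0, 1] to a rapid-response action."""
    if score >= alert_threshold:
        return "page_rrt_nurse"      # high risk: page the rapid-response team
    if score >= watch_threshold:
        return "add_to_watch_list"   # moderate risk: closer monitoring
    return "no_action"

def poll_patients(latest_scores: dict) -> dict:
    """Apply the triage rule to the latest score for every monitored patient."""
    return {pid: triage_sepsis_risk(s) for pid, s in latest_scores.items()}
```

The point of the sketch is the separation of concerns Duke's rollout emphasized: the model produces a score, but the workflow - who gets paged, and when - is what determines whether the score saves lives.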
Metric | Value |
---|---|
Launch | November 2018 |
Training data | ~50,000 records / ~32 million data points |
Monitoring / scoring | Monitors EHR every 5 minutes; hourly scoring reported |
Median prediction lead time | 5 hours before clinical presentation |
SEP‑1 compliance | Doubled 3‑hour bundle compliance |
Estimated impact | ~8 lives saved per month (estimated) |
Primary users | RRT nurses + ED attendings, integrated into bedside workflow |
“Sepsis is very common but very hard to detect because it has no clear time of onset and no single diagnostic biomarker.” - Mark Sendak, MD, MPP
UNC Health - Generative AI Staff Chatbot: Internal knowledge and triage prompts
(Up)UNC Health has been piloting a secure, governed generative AI staff chatbot - built with Azure OpenAI Service and part of its Epic collaboration - to help teammates find UNC‑specific guidance, surface training materials, and draft routine patient messages so clinicians can spend more time with patients and less time buried in screens; the initial rollout began with a small group of clinicians and administrators (five to ten physicians in early phases) and is designed to scale across UNC's 15 hospitals and 900+ clinics as teams identify new use cases and safety checks during testing (UNC Health piloting secure internal generative AI tool built with Azure OpenAI, UNC Clinical Informatics coverage of the generative AI chatbot pilot).
Early evaluations emphasize clinician-in-the-loop workflows and prompt engineering so drafts are helpful for routine tasks - medication refills, work excuses - without replacing judgment, a pragmatic approach that aims to shave minutes off admin chores and return dozens of those minutes to bedside care each day.
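A clinician-in-the-loop drafting workflow of the kind described above can be sketched as a prompt template plus an explicit approval gate. The template text, function names, and policy field below are hypothetical, not UNC Health's implementation.

```python
from typing import Optional

# Hypothetical prompt template for drafting a routine portal reply.
# The draft is always returned for clinician review, never auto-sent.
REFILL_PROMPT = (
    "You are drafting a reply for a clinician to review.\n"
    "Patient request: {request}\n"
    "Clinic policy: {policy}\n"
    "Draft a brief, empathetic response. Do not give new medical advice."
)

def build_refill_prompt(request: str, policy: str) -> str:
    """Fill the template with the patient's request and the relevant policy."""
    return REFILL_PROMPT.format(request=request, policy=policy)

def clinician_gate(draft: str, approved: bool) -> Optional[str]:
    """Human-in-the-loop gate: only an approved draft is released to the patient."""
    return draft if approved else None
```

The gate is the design choice that matters: the generative model proposes, but nothing reaches a patient without a clinician's sign-off.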
“This is just one example of an innovative way to use this technology so that teammates can spend more time with patients and less time in front of a computer.” - Dr. David McSwain, Chief Medical Informatics Officer, UNC Health
WakeMed - AI-drafted Patient Portal Messages: Empathetic response prompts
(Up)WakeMed has leaned on generative AI to draft MyChart responses and triage incoming messages, cutting roughly 12–15 patient portal items per provider each day by filtering unnecessary notes, routing inquiries to appropriate staff, and partnering on streamlined medication‑refill handling (North Carolina Health News article on how NC providers are harnessing AI, Becker's Hospital Review analysis of reducing patient portal messages); the changes have helped WakeMed scale patient access through MyChart while easing inbox burden for clinicians (WakeMed MyChart patient portal information).
Early research shows AI‑drafted messages can read as more detailed and empathetic, but disclosure that a message was AI‑generated may reduce patient satisfaction - an important reminder that human review and clear governance should stay central as systems automate routine communications.
“Given current expectations that response messages are written by physicians, a disclosure of AI authorship may feel like a deception.”
Atrium Health Wake Forest Baptist - Virtual Nodule Clinic: Lung nodule scoring prompts
(Up)Atrium Health Wake Forest Baptist has put an AI-powered Virtual Nodule Clinic into real practice to help pulmonologists and radiologists triage incidental lung nodules with greater confidence, using a model trained on more than 70,000 CT scans to sort nodules into high-, intermediate- and low‑risk buckets and flag who needs timely biopsy or closer surveillance - an important step in North Carolina where early detection can shift five‑year survival for small tumors from roughly 20% to as high as 90% (Wake Forest Baptist Optellum Virtual Nodule Clinic news release).
The system is woven into Atrium's lung nodule program's workflow - where teams already assess 500+ nodule patients a year - and pairs AI scoring with robotic bronchoscopy and multidisciplinary review so fewer patients undergo unnecessary biopsies while high‑risk cases move faster to treatment (Atrium Health Lung Nodule Program details, Optellum Virtual Nodule Clinic overview).
The memorable payoff: catching a growing nodule early can change a likely terminal trajectory into a curable one, which is precisely why clinician oversight and clear risk categories matter so much.
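The high/intermediate/low triage described above amounts to bucketing a continuous malignancy score. A minimal sketch, with cut points that are purely illustrative (not Optellum's):

```python
# Illustrative bucketing of a malignancy score into the three risk
# categories described above. Cut points are hypothetical.

def nodule_risk_bucket(malignancy_score: float) -> str:
    """Sort a nodule's model score into a triage bucket."""
    if not 0.0 <= malignancy_score <= 1.0:
        raise ValueError("score must be in [0, 1]")
    if malignancy_score >= 0.65:
        return "high"          # e.g., expedite biopsy review
    if malignancy_score >= 0.25:
        return "intermediate"  # e.g., short-interval surveillance CT
    return "low"               # e.g., routine follow-up
```

Explicit, auditable cut points like these are what let a multidisciplinary team review and tune the triage policy separately from the model itself.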
Metric | Value |
---|---|
Training data | >70,000 CT scans |
Risk categories | High / Intermediate / Low |
Annual nodule assessments | ~500+ patients at Wake Forest Baptist |
Primary benefits | Earlier detection, fewer unnecessary biopsies, faster triage to treatment |
“We are proud to be an early adopter of proven technologies that enable our clinicians to identify and treat lung cancer at early stages when survival opportunities are high.” - Christina Bellinger, MD
Novant Health & Viz.ai - Acute Imaging Triage: Stroke and ER triage prompts
(Up)Novant Health has paired its Carolina stroke network with Viz.ai's AI-powered care coordination to shave critical minutes - and sometimes hours - off triage, diagnosis and treatment for suspected large vessel occlusion (LVO) strokes, a difference measured in millions of brain cells lost per minute; the system auto‑analyzes CT images, pushes alerts and high‑fidelity imaging to stroke teams' phones, and streamlines communication so specialists can act faster across tertiary and community hospitals (Novant Health AI stroke care coordination).
Viz.ai's platform has shown faster time‑to‑notification and shorter treatment‑decision windows in real‑world settings, and Novant - one of the few systems with two Joint Commission advanced comprehensive stroke centers in the region - has reported measurable reductions in door‑to‑needle times and faster transfers across its network; that practical, workflow‑first approach is why leaders describe AI here as a coordination tool, not just a detector, and why hospitals in the region are layering validated algorithms into ED and radiology workflows to protect patients in the Stroke Belt (Viz.ai AI for radiology platform overview).
Metric | Value |
---|---|
Brain cells lost | Up to 2 million per 60 seconds |
U.S. strokes per year | ~795,000 |
NC stroke deaths (2017) | 5,098 |
Novant annual stroke patients | Forsyth ~1,100; Presbyterian ~800 |
Viz.ai reported impacts | 73% faster notification; 24% faster treatment decision; 96% sensitivity / 94% specificity (2,544 pts) |
Door‑to‑needle improvement (Novant) | From ~38 to ~28 minutes (sometimes as low as 10 min) |
“Time is very critical for the brain and we need to shave off minutes every opportunity we can.” - Dr. Laurie McWilliams, Novant Health
OrthoCarolina - Medical Brain: Post-surgical follow-up and digital assistant prompts
(Up)OrthoCarolina's partnership to deploy the Medical Brain AI platform brings a smartphone-based digital assistant into post‑surgical care that asks recovery questions, delivers tailored guidance, and routes anything it can't answer to a human triage line - an approach that married automated follow‑up with clinician oversight so every conversation is reviewed by a medical team.
In a pilot the app interacted with roughly 200 patients (averaging 30–60 messages per patient) and cut traditional messages and phone calls by about 70%, freeing staff time while using continuous monitoring to identify emerging care gaps and risks; the move builds on OrthoCarolina's patient‑centered recovery workflows and appointment tools and scales across a network of more than 300 providers at nearly 40 locations.
For clinicians and administrators in North Carolina, Medical Brain is a concrete example of an AI prompt set - automated check‑ins, red‑flag triage rules, and prewritten clinician responses - that reduces inbox burden without removing the clinician from the loop (see the OrthoCarolina recovery guidance, the Medical Brain announcement, and NCMS reporting for pilot details).
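The red-flag triage rules mentioned above can be approximated as simple keyword routing between the automated assistant and the human triage line. The flag list and names here are illustrative, not Medical Brain's actual logic:

```python
# Sketch of keyword-based red-flag routing; the flag list is illustrative.
RED_FLAGS = ("fever", "chest pain", "shortness of breath", "wound drainage")

def route_message(text: str) -> str:
    """Escalate messages containing a red-flag phrase to a human."""
    lowered = text.lower()
    if any(flag in lowered for flag in RED_FLAGS):
        return "human_triage_line"
    return "automated_assistant"
```

Routine questions stay automated while anything matching an escalation phrase goes straight to staff - the pattern that let the pilot cut message volume without removing clinicians from the loop.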
Metric | Value |
---|---|
Pilot patients | ~200 |
Average messages per patient | 30–60 |
Reduction in messages & calls | ~70% |
OrthoCarolina network | >300 providers; nearly 40 locations |
“For decades, OrthoCarolina has been committed to providing patient-first comprehensive care across a wide array of orthopedic specialties, and the integration of Medical Brain® into our care continuum will help us to better meet patients' real-time needs while also accelerating our organizational value-based care goals.” - Dr. Bruce Cohen
Duke Health - OR Scheduling Model: Surgery duration prediction prompts
(Up)Duke Health has applied machine‑learning to one of the hospital's priciest bottlenecks - the operating room - deploying models that are about 13% more accurate than human schedulers at predicting case length and now in use across more than 33,000 cases. The change cut scheduling errors, reduced overtime and, by one estimate, could shave roughly $79,000 off overtime expenses over a four‑month span; the work - reported in Duke's newsroom and published in Annals of Surgery - shows how even modest accuracy gains translate into smoother workflows and faster access to the OR for patients (Duke Health algorithm improves accuracy of surgery scheduling).
These scheduling models sit alongside other Duke tools - like the Pythia risk calculator - that use local EHR data to help teams pick the right patients and timing for complex operations, reinforcing a systems‑level approach where precise time estimates and outcome prediction work together to reduce delays and improve throughput (Duke Pythia surgical risk prediction tool amplifies surgical expertise).
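Duke's published models are ML-based, but the scheduling task they improve on can be illustrated with a naive baseline: predict each procedure's duration as the median of its historical case lengths. The procedure codes, durations, and default below are hypothetical.

```python
from statistics import median

# Naive per-procedure median baseline for case-length prediction.
# Procedure codes, durations, and the default are hypothetical.

def build_estimator(history):
    """history: iterable of (procedure_code, duration_minutes) pairs."""
    by_proc = {}
    for code, minutes in history:
        by_proc.setdefault(code, []).append(minutes)
    medians = {code: median(vals) for code, vals in by_proc.items()}

    def predict(code, default=120):
        """Predict minutes for a case; fall back to a default for unseen codes."""
        return medians.get(code, default)
    return predict
```

A real model would add surgeon, patient, and case-mix features; the baseline simply shows why even a 13% accuracy gain over such estimates compounds across tens of thousands of cases.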
Metric | Value |
---|---|
Cases analyzed / in use | >33,000 |
Accuracy improvement vs. human schedulers | 13% |
Estimated overtime savings | ~$79,000 over 4 months |
Models trained | Three ML models on thousands of cases |
Published | June 26, 2023 (Annals of Surgery) |
“One of the most remarkable things about this finding is that we've been able to apply it immediately and connect patients with the surgical care they need more quickly.” - Daniel Buckland, M.D., Ph.D.
Novant Health - Behavioral Health Acuity Risk Model: Suicide-risk triage prompts
(Up)Novant Health's Behavioral Health Acuity Risk (BHAR) model brings a clinician‑facing, color‑coded suicide‑risk flag into the electronic medical record so teams can see and act on hidden warning signs during routine care: developed by Novant mental‑health, emergency‑medicine and psychiatry experts, the model examines data already in the chart and outputs a simple red/yellow/green risk that appears at the point of care, helping prioritize patients who need immediate evaluation or follow‑up (North Carolina Medical Society overview of AI uses in North Carolina healthcare).
Technically, BHAR uses a random‑forest approach and was validated with strong discrimination (area under the ROC ≈ 0.84), designed to run near‑real‑time and be hosted natively in the EHR so results are available to clinicians as they work (BHAR model preprint and validation study, Novant Health technical overview of the BHAR implementation).
The memorable payoff is practical: what used to be buried signals in notes can now flash as an explicit chart alert, turning an otherwise missed risk into a triage prompt that clinicians and care teams can act on immediately.
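To make the reported AUC ≈ 0.84 concrete: AUC is the probability that the model scores a randomly chosen at-risk patient above a randomly chosen patient who is not at risk. A minimal stdlib computation, using illustrative scores rather than BHAR data:

```python
# Minimal AUC: fraction of (positive, negative) pairs where the positive
# case gets the higher score; ties count half. Scores are illustrative.

def auc(scores_pos, scores_neg):
    """Pairwise-ranking AUC over positive and negative model scores."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.84 means the model ranks a truly at-risk patient above a not-at-risk patient about 84% of the time - strong discrimination, though threshold choice still determines how many red flags clinicians actually see.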
Metric | Value |
---|---|
Model | Behavioral Health Acuity Risk (BHAR) |
Technique | Random forests |
Performance | AUC ≈ 0.84 |
Age range | Patients aged 7 and older |
Deployment | Near‑real‑time; natively hostable in the EHR; color‑coded risk visible in chart |
Wake Forest University School of Medicine - Cognitive Health Index: Alzheimer's treatment selection prompts
(Up)Wake Forest University School of Medicine's Alzheimer's Disease Research Center (ADRC) supplies the region's richest set of clinical signals for a practical “Cognitive Health Index” - think cardiometabolic measures, blood biomarkers and social‑determinant data that could drive treatment‑selection prompts and early‑intervention workflows - because the Center explicitly studies vascular and metabolic contributors to Alzheimer's and works to boost enrollment of populations at increased risk (Wake Forest ADRC Alzheimer's Disease Research Center - early intervention and vascular/metabolic focus).
Local studies show the payoff: an ADRC analysis of 537 older adults found that neighborhood disadvantage maps to higher blood pressure and lower cognition even among those without diagnosed MCI, a vivid reminder that “where you live” can show up in a chart as an actionable risk factor (ADRC study: neighborhood disadvantage linked to higher blood pressure and lower cognition), and Wake Forest's role in national trials such as U.S. POINTER - where lifestyle interventions produced measurable cognitive gains across thousands of participants - supplies tested intervention options that a prompt‑driven index could recommend to clinicians and care teams (U.S. POINTER trial showing lifestyle changes improve brain health in older adults).
Metric | Value |
---|---|
ADRC national designation | Awarded by NIA (2016) |
Neighborhood study sample | 537 adults (ADRC Healthy Brain Study) |
U.S. POINTER participants (multi‑site) | 2,111 older adults |
Relevant biomarker publication | JAMA Network Open (Feb 3, 2025) |
“These findings show that living in a disadvantaged neighborhood has a bigger impact on heart health and brain function in people without preexisting cognitive issues.” - James R. Bateman, M.D.
Glean - Enterprise Knowledge: Onboarding, summarization and agent prompts for healthcare operations
(Up)For Raleigh health systems juggling prior authorizations, credentialing and fractured knowledge across EHRs, payer portals and SharePoint, Glean offers a practical way to turn scattered documents into actionable prompts - indexing SOPs, payer policies and training materials so clinicians and revenue teams can pull precise answers where they already work; see the Glean healthcare AI workflows overview for how the platform fits into provider and payer workflows and enforces HIPAA and SOC‑2 controls (Glean healthcare AI workflows overview).
By embedding permission‑aware search, summarization and deployable AI agents into tools like Teams or ServiceNow, organizations can shave routine friction (Glean reports up to 10 hours saved per user per year and big onboarding gains) and scale repeatable agent prompts - examples include prior‑auth assistants and chart‑gap trackers that reduce denials and speed billing (Glean blog on 10 AI agents transforming healthcare workflows).
The practical payoff is simple: less time hunting for answers, more time on patients and faster, audit‑ready operational decisions.
Metric | Value / Note |
---|---|
Native connectors | 100+ (Epic, SharePoint, ServiceNow, Salesforce, etc.) |
Compliance | HIPAA compliant; SOC 2 |
Time savings | Up to 10 hours per user per year |
Onboarding impact | ~36 hours saved per employee (onboarding) |
Agent scale | 100M+ agent actions annually (platform-wide) |
Conclusion: Getting started with AI prompts in Raleigh healthcare - safety, governance, next steps
(Up)Raleigh providers ready to turn these Top 10 prompts into safe, scalable practice should start with governance not guesswork: follow North Carolina's NCDIT Principles for Responsible Use of AI framework (seven guiding principles that make privacy, human oversight, transparency and auditing mandatory rather than optional) and use tools like the OPDP's AI/GenAI questionnaire to perform an early Privacy Threshold Analysis (Privacy's Role in AI Governance).
Treat privacy as the default - lock the front door before inviting innovation - and design prompts with clinician review, explainability and a rollback path. Governance takes work and cross‑functional expertise (the Duke/Margolis briefing walks through why health‑system governance needs clear roles, auditing and training), so start small with pilot prompts that solve a single workflow problem, measure outcomes, and scale only after safety checks and vendor assessments pass muster (AI Governance in Health Systems).
For teams that need practical prompt skills and workplace-ready workflows, structured training like Nucamp AI Essentials for Work 15-week bootcamp teaches prompt writing, human‑in‑the‑loop design, and operational application - so clinicians and administrators can lead safe adoption, not just consume it.
A clear governance plan plus focused upskilling turns promising prompts into reliable, patient‑centered tools.
Program | Length | Core focus | Cost (early bird) | Registration |
---|---|---|---|---|
AI Essentials for Work | 15 Weeks | AI tools, prompt writing, workplace workflows | $3,582 | Register for Nucamp AI Essentials for Work (15-week bootcamp) |
Frequently Asked Questions
(Up)What are the top AI use cases and prompt types being used in Raleigh healthcare?
Raleigh health systems use a range of prompt-driven AI applications: 1) Early-warning sepsis detection prompts (Duke Sepsis Watch); 2) Secure generative AI staff chatbots for internal knowledge and triage (UNC Health); 3) AI-drafted patient portal and MyChart message prompts (WakeMed); 4) Lung nodule scoring and triage prompts (Atrium Health Wake Forest Baptist); 5) Acute imaging triage and stroke-alert prompts (Novant Health + Viz.ai); 6) Post-surgical follow-up and digital assistant prompts (OrthoCarolina/Medical Brain); 7) OR case-length and scheduling prediction prompts (Duke OR scheduling model); 8) Behavioral health acuity/suicide-risk triage prompts (Novant BHAR); 9) Cognitive Health Index treatment-selection prompts (Wake Forest ADRC); and 10) Enterprise knowledge/agent prompts for operational tasks (Glean).
How were the Top 10 AI prompts and use cases selected?
Selection prioritized real-world benefit in North Carolina using four criteria: demonstrated clinical impact and measurable outcomes (e.g., Sepsis Watch metrics), operational fit with existing workflows, safety/evaluation and governance alignment (state guidance, Duke SCRIBE/JAMIA work), and local research or scalability potential (UNC and Carolina labs). Preference was given to clinician-reviewed, replicable prompts with governance and measurable workflows.
What measurable impacts and safety considerations have local AI deployments shown?
Examples of measurable impacts: Duke Sepsis Watch provided a median 5-hour prediction lead time, doubled 3-hour SEP‑1 bundle compliance and is estimated to save ~8 lives/month; Duke OR scheduling improved accuracy by ~13% across >33,000 cases and reduced overtime costs; Viz.ai integrations showed faster notifications (73%) and quicker treatment decisions (24%) with high sensitivity/specificity. Safety considerations include clinician-in-the-loop review, disclosure policies (e.g., AI-drafted messages), evaluation frameworks (SCRIBE/JAMIA), privacy controls, and governance aligned with NCDIT and state guidance to prevent bias, omission, or patient-harm.
How should Raleigh health teams get started safely with AI prompts?
Start with governance: follow North Carolina's seven guiding principles (privacy, human oversight, transparency, auditing), conduct Privacy Threshold Analyses (OPDP AI/GenAI questionnaire), and require vendor assessments and explainability. Pilot small, measure outcomes, keep clinicians in the loop, design rollback paths, and scale only after safety checks pass. Invest in structured training that teaches prompt writing, human-in-the-loop design, and workflow integration so clinical teams can lead safe adoption.
What practical benefits can organizations expect from operational AI tools and training?
Operational AI and knowledge agents can reduce administrative burden (e.g., WakeMed reduced ~12–15 portal items per provider daily; OrthoCarolina cut messages and calls by ~70%), improve onboarding and search (Glean reports up to 10 hours saved per user annually and ~36 hours saved per employee onboarding), and improve throughput and outcomes (faster triage, fewer unnecessary biopsies, better OR utilization). Training programs (example: 15-week AI Essentials for Work) teach prompt-writing and workflow skills so staff can implement these gains while maintaining safety and governance.
You may be interested in the following topics as well:
Hospitals need staff for AI oversight and data-quality roles as automation scales across Raleigh health systems.
Tracking metrics like reduced length of stay and fewer readmissions helps make measuring ROI from AI pilots in Raleigh hospitals an organizational priority.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Microsoft's Senior Director of Digital Learning, he led the development of the first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations - INSEAD, Wharton, London Business School, and Accenture, to name a few. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.