Top 10 AI Prompts and Use Cases in the Healthcare Industry in Richmond
Last Updated: August 24th 2025

Too Long; Didn't Read:
Richmond healthcare teams should use clear, specific AI prompts for triage, documentation, staffing, trial matching, surveillance, and compliance. Pilots cut documentation time by up to 72% (Suki) and speed literature review by ~70%; the recommended 15‑week training (AI Essentials for Work) costs $3,582.
Richmond health leaders must treat prompts as clinical tools: clear, specific prompt design reduces dangerous “hallucinations” (made-up facts or citations), helps guard patient privacy and mitigate dataset bias, and steers energy‑hungry models hosted in Virginia's many data centers that “gulp” cooling water and power; see the University of Richmond Library's guide on generative AI for local considerations.
Prompt engineering best practices - give context, specify output format, iterate - translate directly into safer triage notes, discharge summaries, and staffing workflows, and local research hubs like the VCU Human‑AI ColLab are already supporting that work.
For teams ready to build prompt skills, a 15‑week course like Nucamp's AI Essentials for Work bootcamp teaches practical prompt writing and workplace applications so Richmond clinicians and staff can harness AI while managing risk; registration and program details are available online.
Program: AI Essentials for Work. Length: 15 Weeks. Cost (early bird): $3,582. Includes: AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills.
Syllabus and full program overview: AI Essentials for Work syllabus and course details.
Table of Contents
- Methodology: How We Selected the Top 10 Prompts and Use Cases
- Epic: Pre-visit Patient Summary Prompt
- Google Cloud: AI Doctor Assistant Documentation Prompt
- IQVIA: Literature Review and Trial Protocol Refinement Prompt
- Workday: Staffing and Scheduling Optimization Prompt
- Zoom: Voice-based Handoff and Escalation Prompt
- FDA PreCheck: Medical Device Submission Support Prompt
- State Privacy Laws: Patient Consent and Data Access Prompt (Virginia-specific)
- OIG Advisory: Compliance Review Prompt
- GAO/NIH: Grant Application and Reporting Prompt
- Local Public Health: Richmond City Health Department Surveillance Prompt
- Conclusion: Next Steps for Richmond Healthcare Teams
- Frequently Asked Questions
Check out next:
Learn practical bias and privacy safeguards for Richmond providers to meet HIPAA and state obligations while running fairness audits.
Methodology: How We Selected the Top 10 Prompts and Use Cases
Selection prioritized prompts that are practical, auditable, and rooted in Richmond's existing AI momentum: candidates had to demonstrably assist clinicians, protect patient data, and leverage high‑quality local datasets, reflecting VCU Health's emphasis on data readiness and deliberate AI adoption; see VCU Health's guidance on integrating AI.
Use cases that accelerated accurate diagnosis or trial matching scored highly - VCU's TACIT tool, which analyzes millions of cells in minutes and tightens the link between tissue data and personalized treatment, inspired several diagnostic and biomarker prompts (VCU TACIT research).
Finally, feasibility and workforce readiness were weighted through local research and training partnerships, including the VCU Human‑AI ColLab, so each prompt maps to an implementation pathway - clinical validation, audit trails, and staff upskilling - rather than speculative promise, producing a top‑10 list that aims to reduce risk while delivering faster, more personalized care.
“AI is an assistive technology, meaning it is there to assist and to help but it cannot replace people, especially in the health care setting.”
Epic: Pre-visit Patient Summary Prompt
An Epic-focused “Pre‑visit Patient Summary” prompt turns scattered chart data into a compact, clinic‑ready briefing that Richmond teams can use to speed triage and reduce error: instruct the model to pull the side‑by‑side trackboard fields (past medical history, meds, vitals, triage note, time‑stamped ED course) and any linked outside records via CareEverywhere, then output a one‑paragraph problem list plus three prioritized actions and suggested orders using SmartPhrases/SmartText syntax so the note can drop straight into Epic; ACEP's user guide explains how SmartPhrases, SmartText, and the SxS trackboard surface exactly these elements for seamless documentation.
Pilot work on Epic previsit questionnaires also highlights a practical caveat for Richmond practices - few patients printed or brought summaries in one study - so the prompt should include a clinician‑facing summary and a separate, brief patient‑facing checklist to boost uptake and reduce the “didn't bring paperwork” scramble at check‑in.
Pairing this prompt with local workflow training channels ensures summaries become clinical tools, not extra chores, and keeps the focus on safer, faster patient care in Virginia clinics.
ACEP guide to Epic SmartPhrases and trackboard integration • JMIR pilot study on Epic previsit questionnaires (2024)
Epic Feature | Role in Pre‑visit Summary |
---|---|
SmartPhrases / SmartText | Auto‑populate standardized clinician and patient summaries |
Side‑by‑side (SxS) Trackboard | Surface PMH, meds, vitals, triage note without leaving the board |
CareEverywhere | Pull linked records from other Epic organizations for completeness |
Previsit Questionnaire (pilot) | Patient-entered goals/data - low printing/bring rates noted in evaluation |
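As a minimal sketch of how such a prompt could be assembled from trackboard excerpts, consider the Python template below. The field names, the `.PREVISIT`-style SmartPhrase mention, and the provenance tags are illustrative placeholders, not actual Epic or SmartPhrase syntax:

```python
# Sketch: assembling a pre-visit summary prompt from chart excerpts.
# All field names and tags are hypothetical examples, not Epic APIs.

PROMPT_TEMPLATE = """You are drafting a pre-visit briefing for a clinician.
Using ONLY the chart excerpts below, produce:
1. A one-paragraph problem list.
2. Three prioritized actions with suggested orders (SmartPhrase-style, e.g. .PREVISIT).
3. A brief patient-facing checklist in plain language.
Cite the source tag for every claim (e.g. [PMH-2]).

Past medical history:
{pmh}

Active medications:
{meds}

Vitals:
{vitals}

Triage note:
{triage}
"""

def build_previsit_prompt(chart: dict) -> str:
    """Fill the template from a dict of trackboard excerpts, tagging each
    line so the model can cite provenance back to the chart."""
    return PROMPT_TEMPLATE.format(
        pmh="\n".join(f"[PMH-{i+1}] {x}" for i, x in enumerate(chart["pmh"])),
        meds="\n".join(f"[MED-{i+1}] {x}" for i, x in enumerate(chart["meds"])),
        vitals=chart["vitals"],
        triage=chart["triage"],
    )
```

The provenance tags give the human reviewer a way to spot unsupported claims: any statement without a `[PMH-n]` or `[MED-n]` citation is a candidate hallucination.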
Google Cloud: AI Doctor Assistant Documentation Prompt
For a Google Cloud–backed “AI Doctor Assistant” documentation prompt, Richmond teams should treat prompt design as an iterative engineering task: Vertex AI Studio is the recommended sandbox to craft and test MedLM prompts (summarization, question‑answering) and to export working code for the Vertex AI REST or Python SDK - examples even show POST calls to the us‑central1 endpoint - while keeping clinician review gates in place because MedLM outputs are explicitly flagged as drafts that may be incorrect or biased; see Google Cloud Vertex AI MedLM prompts guide for model choices and prompt tips.
Note the important lifecycle notices: MedLM is deprecated (access ends 2025‑09‑29) and only medlm‑medium and medlm‑large are listed, so plan any pilot with replacement strategies or Vertex alternatives.
Practical integrations already in market illustrate the payoff: Suki's Assistant uses Google Cloud Vertex AI to deliver patient summaries and clinical Q&A that speed documentation and cut cognitive load - Suki reports clinicians complete notes up to 72% faster - so a carefully scoped prompt that returns a concise patient summary, three prioritized clinician actions, and an auditable provenance snippet (source chart lines or references) can move from prototype to clinic‑ready with local validation and EHR review workflows.
Google Cloud Vertex AI MedLM prompts guide - MedLM prompt design and examples • Suki patient summarization and clinical Q&A with Google Cloud Vertex AI - case study
Key item | Detail |
---|---|
MedLM access | Deprecated - access not available on or after 2025‑09‑29 |
Supported models | medlm‑medium, medlm‑large |
Interfaces | Vertex AI REST API, Vertex AI SDK for Python, Vertex AI Studio |
Safety notes | Outputs may be unreliable; treat as drafts and include human review/audit trails |
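To illustrate the POST-call shape the MedLM guide describes, here is a hedged Python sketch that only constructs the request (no network call). The project ID, parameter values, and payload schema are assumptions to verify against the current Vertex AI reference, and any pilot must account for the 2025‑09‑29 deprecation:

```python
# Sketch of the us-central1 predict request shape described in the guide.
# The payload schema and parameters are assumptions; verify against the
# current Vertex AI REST reference before use. MedLM access ends 2025-09-29.
import json

def build_medlm_request(project_id: str, prompt: str, model: str = "medlm-medium"):
    """Return (url, json_body) for a Vertex AI predict-style POST."""
    url = (
        "https://us-central1-aiplatform.googleapis.com/v1/"
        f"projects/{project_id}/locations/us-central1/"
        f"publishers/google/models/{model}:predict"
    )
    body = {
        "instances": [{"content": prompt}],
        # Low temperature keeps clinical drafts conservative (assumed knobs).
        "parameters": {"temperature": 0.2, "maxOutputTokens": 512},
    }
    return url, json.dumps(body)
```

Keeping request construction in one auditable function makes it easy to log every prompt sent, which supports the human-review and provenance requirements above.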
IQVIA: Literature Review and Trial Protocol Refinement Prompt
IQVIA's AI‑assisted literature review and agentic‑AI toolset promise practical gains for Virginia research teams and trial sponsors who need faster, auditable evidence to refine protocols and match patients: the company advertises 70% faster per‑document extraction with its Literature AI, along with case studies where AstraZeneca cut an initial systematic review screen from 20 days to less than 3, and a top‑10 pharma trimmed per‑document processing from 12 to 4 hours while reviewing 8,000 papers in four weeks.
Richmond clinical researchers can use a declarative, human‑in‑the‑loop prompt that scopes searches, extracts study design and outcome details, and returns a prioritized list of protocol edits plus provenance snippets for audit - letting teams move weeks‑long background work into days without losing traceability.
For teams building pilots, IQVIA's materials on accelerating literature reviews and their webinar on agentic AI outline architectures and governance considerations to keep oversight central as automation scales.
IQVIA Accelerate Scientific Literature Reviews platform • IQVIA webinar: AI Agents and the Future of Literature Reviews
Metric | Result / Example |
---|---|
Per‑document extraction speed | ~70% faster (IQVIA claim) |
Systematic review screening (AstraZeneca) | 20 days → <3 days |
Manual extraction reduction (Top‑10 pharma) | 12 → 4 hours per document (≈70% reduction) |
Scale example | 8,000 papers reviewed in 4 weeks |
Workday: Staffing and Scheduling Optimization Prompt
A Workday‑focused “Staffing and Scheduling Optimization” prompt helps Richmond teams turn messy headcount and shift data into actionable plans: ask the model to ingest HCM and VMS summaries, flag where contingent workers (often ~40% of payor workforces) have access to PHI or critical systems, forecast staffing needs from patient‑volume drivers, and output a short prioritized playbook - near‑term schedule fills, cross‑training needs, and provisioning/deprovisioning checks - that integrates with Workday Adaptive Planning and a vendor management system for auditable change logs; see Workday's contingent workforce guidance for context.
For shift optimization, include clock-in/out telemetry and employee preferences so the prompt can propose swap/bid options that respect labor rules and improve retention - tools like CloudApper's Workday TimeClock illustrate how auto‑bidding and intelligent shift fills reduce last‑minute gaps.
Pair the prompt with a staffing partner plan for implementation and a human review gate so recommendations become safe, auditable schedule changes rather than risky automation.
See Workday's VMS guidance and CloudApper's Workday TimeClock for integration ideas.
30‑Second Test Question | Why it matters |
---|---|
How much are you spending on contractors? | Cost control and budgeting |
Which systems do contractors access (PHI exposure)? | Privacy and training risks |
Where are extended workers located? | Coverage and scheduling logistics |
What are contingent workers doing / logging? | Productivity and role alignment |
Are contractors provisioned/deprovisioned properly? | Security and audit readiness |
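As a rough illustration of the PHI-exposure check in the test questions above, a small Python sketch over a hypothetical roster; the roster schema and system names are assumptions, not a Workday API:

```python
# Illustrative pre-processing step for the staffing prompt: flag contingent
# workers whose system access includes PHI-bearing systems before the model
# ever sees the roster. Schema and system names are hypothetical.

PHI_SYSTEMS = {"ehr", "billing", "lab"}  # assumed PHI-bearing systems

def flag_phi_access(roster):
    """Return contingent workers with access to any PHI-bearing system,
    preserving roster order for auditable review."""
    flagged = []
    for worker in roster:
        if worker["type"] != "contingent":
            continue
        exposed = PHI_SYSTEMS & set(worker["systems"])
        if exposed:
            flagged.append({"id": worker["id"], "exposed": sorted(exposed)})
    return flagged
```

Running a deterministic check like this before prompting keeps the privacy-critical decision out of the model entirely, which simplifies the audit trail.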
“Now we have significant tools in place, and we're all aligned on the processes... it's understanding everything that our VMS can do...”
Workday contingent workforce guidance for HCM and VMS integration | Workday Adaptive Planning product information | CloudApper Workday TimeClock integration and features | Workday vendor management system (VMS) guidance
Zoom: Voice-based Handoff and Escalation Prompt
A Zoom‑focused “Voice‑based Handoff and Escalation” prompt lets Richmond teams turn hurried verbal updates into auditable, HIPAA‑aware handoffs: configure Realtime Media Streams to capture call audio and transcripts, instruct the agent to produce a concise SBAR‑style summary with a required read‑back verification field (matching ACOG's handoff principles), and build an escalation rule that converts the call into a one‑click Zoom Meeting to summon specialists or a rapid‑response panel so everyone joins with the same context.
Pairing Zoom's agentic AI integrations (so the prompt can page an on‑call attending in ServiceNow or log the event in an incident tracker) with custom audio greetings and prompts keeps workflows consistent and patient‑facing messages clear, while the “Workplace for Clinicians” toolkit highlights compliance‑ready templates and EHR integration points for clinical documentation.
In practice, this prompt reduces missed details at handoff by forcing verification, creates an auditable transcript trail for follow‑up, and turns escalations from fragmented phone trees into coordinated, on‑the‑record clinical responses.
Zoom Realtime Media Streams and agentic AI updates • Zoom Workplace for Clinicians compliance-ready tools • ACOG communication strategies for patient handoffs
FDA PreCheck: Medical Device Submission Support Prompt
Richmond device teams can use an “FDA PreCheck / PCCP submission support” prompt to turn messy protocol drafts, device descriptions, and site readiness notes into audit‑ready chunks that map directly to federal requirements: instruct the model to draft a precise Description of Modifications, a Modification Protocol with traceability steps, and an Impact Assessment aligned with FDA PCCP principles (only a few verifiable modifications, per the Ropes & Gray PCCP summary), while flagging when an IDE under 21 CFR 812 is required or when IRB action suffices using UVA's investigational device guidance - this prevents the costly error of recruiting before FDA and IRB approvals.
Pair the prompt with PreCheck's two‑phase framework (Facility Readiness + Application Submission) so responses include Type V DMF elements and CMC‑focused language for early FDA engagement, and surface dates for stakeholders to weigh in at the public meeting and comment window.
The result: a compact, reviewer‑friendly submission storyboard that helps Virginia teams move from scattered notes to a compliance‑ready filing without losing provenance or clinical‑engineering checks.
See UVA's investigational device guidance, the Ropes & Gray PCCP alert on draft guidance, and Sidley's overview of FDA PreCheck for program context and timelines.
Prompt task | Regulatory deliverable |
---|---|
Summarize device and proposed changes | Description of Modifications (PCCP) |
Draft validation/trace tables | Modification Protocol with traceability |
Assess benefits/risks and mitigation | Impact Assessment (PCCP) |
Check IDE status and IRB needs | 21 CFR 812 / IRB notification checklist (UVA guidance) |
State Privacy Laws: Patient Consent and Data Access Prompt (Virginia-specific)
A Virginia‑specific “Patient Consent & Data Access” prompt should make compliance a first‑class output: instruct the model to detect and flag reproductive or sexual health information (including inferences from location or app data), check whether the record is covered by HIPAA, surface the signed consent form or the need for one (or a legally authorized representative for research subjects), and generate an auditable consent checklist plus a plain‑language patient notice ready for clinic workflows.
Recent state changes matter here - SB 754 (effective July 1, 2025) bars collection, use, or sharing of reproductive/sexual health data without consent and creates a private right of action (statutory damages start at $500 per violation, higher for willful breaches), so the prompt should also surface potential exposure windows and date‑stamped provenance to support legal review; see a detailed analysis of Virginia's new PRA for reproductive and sexual health information.
For research workflows, include a built‑in LAR and informed‑consent verifier that follows Virginia's informed‑consent rules (elements, assent, and documentation requirements under state regs and Code §32.1‑162.18) so the system never lets recruitment or dataset assembly proceed without the appropriate signed authorization.
That simple guardrail - one click to show the consent chain and the exact clause relied on - turns an abstract privacy rule into an operational safety net and prevents a single mis‑shared cookie or inferred datapoint from becoming a costly compliance headline.
Virginia informed‑consent regulations (22VAC30‑40‑100) - Virginia Administrative Code • Analysis of Virginia's new PRA for protecting reproductive and sexual health information (SB 754) - ByteBackLaw
Item | Key point |
---|---|
SB 754 (PRA) | Prohibits collection/use/sharing of reproductive/sexual health info without consent; effective 7/1/2025 |
Private damages | Statutory damages from $500/violation (higher if willful) |
HIPAA | HIPAA‑covered PHI generally excluded from the PRA's reach |
Informed consent (research) | State regs and Code §32.1‑162.18 require documented, voluntary consent or LAR signatures |
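The consent guardrail described above could be sketched roughly as follows; the category labels and record fields are illustrative placeholders and no substitute for legal review:

```python
# Minimal sketch of the SB 754 consent guardrail: block record use when
# sensitive (reproductive/sexual health) data lacks a signed consent.
# Categories and record fields are hypothetical, not a legal checklist.
from datetime import date

SENSITIVE_CATEGORIES = {"reproductive_health", "sexual_health"}
SB754_EFFECTIVE = date(2025, 7, 1)

def consent_check(record, today=None):
    """Return an allow/deny decision with a reason and, when available,
    a pointer to the consent document for the provenance log."""
    today = today or date.today()
    sensitive = SENSITIVE_CATEGORIES & set(record.get("categories", []))
    if not sensitive or today < SB754_EFFECTIVE:
        return {"allowed": True, "reason": "no SB 754 exposure"}
    if record.get("consent_signed"):
        return {"allowed": True, "reason": "signed consent on file",
                "provenance": record.get("consent_doc")}
    return {"allowed": False,
            "reason": f"SB 754: consent required for {sorted(sensitive)}"}
```

The point of the one-click "show the consent chain" guardrail is exactly this shape: every decision carries its reason and its document pointer, so reviewers never have to reconstruct why access was granted.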
OIG Advisory: Compliance Review Prompt
Richmond compliance teams can turn a high‑stakes legal review into a practical AI prompt: an “OIG Advisory: Compliance Review” prompt should ingest ownership tables, investor agreements, revenue breakdowns and distribution terms, then map each fact to the small‑entity investment safe harbor tests so clinicians, counsel, and executives see a clear red‑amber‑green view.
The OIG's Advisory Opinion 25‑09 (posted Aug. 12, 2025) ruled favorably where physician owners held ≈35% and the requestor met all eight safe‑harbor conditions, so the prompt should flag the crucial 40% investor and revenue thresholds, identical investor terms, absence of loans/guarantees, no divestiture tied to referrals, and whether distributions are strictly pro‑rata; when any element fails, surface practicable mitigations drawn from the OIG analysis - broaden customer mix, document fair‑market‑value terms, add disclosure language, or restructure distributions - so fixes are operational, not theoretical.
Build in the legal caveat the OIG itself notes (the opinion applies only to the facts provided and does not resolve other laws) and link reviewers to the source documents for auditability; see the OIG Advisory Opinion 25-09 and a practical legal read-out for implementation guidance.
Safe‑harbor element | What the prompt should check |
---|---|
Investor test | Percent of investment held by referral‑capable investors (≤40%) |
Revenue test | Percent gross revenue from investor‑generated business (≤40%) |
Investment‑offer test & parity | Terms equal across passive investors; not tied to prior/expected referrals |
Financial safeguards | No loans/guarantees to investors; pro‑rata distributions |
Applicability note | AO applies to the requestor/facts only; does not address other statutes |
Posted | Advisory Opinion 25‑09 - Aug. 12, 2025 |
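The red‑amber‑green mapping for the two 40% tests can be sketched as below; the "amber" buffer is an assumed review margin to surface near-limit cases, not part of the safe‑harbor text:

```python
# Sketch of the red-amber-green view for the two 40% safe-harbor tests
# (investor holdings and investor-generated revenue). The amber margin is
# an illustrative review buffer, not regulatory language.

def safe_harbor_rag(investor_pct: float, revenue_pct: float, margin: float = 5.0):
    """Classify each 40% test: green (clear), amber (near limit), red (fails)."""
    def classify(pct: float) -> str:
        if pct > 40.0:
            return "red"          # exceeds the safe-harbor threshold
        if pct > 40.0 - margin:
            return "amber"        # within the assumed review buffer
        return "green"
    return {"investor_test": classify(investor_pct),
            "revenue_test": classify(revenue_pct)}
```

An amber result is the cue to surface the mitigations the OIG analysis suggests (broaden customer mix, document fair-market-value terms) before the threshold is actually breached.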
GAO/NIH: Grant Application and Reporting Prompt
A GAO/NIH “Grant Application and Reporting” prompt for Richmond research teams should produce reviewer‑ready drafts and an auditable checklist that maps every narrative section to NIH rules - turning scattered aims, budgets, and facility descriptions into a one‑page Specific Aims that a busy reviewer can immediately grasp and a Research Strategy organized into Significance, Innovation, and Approach.
Have the prompt validate formatting (page limits, fonts, attachment labels), flag when budgets hit the $500,000+ prior‑approval threshold, and assemble required plans (Data Management & Sharing, resource‑sharing, human subjects/clinical trial forms) so nothing gets dropped at submission.
Build in reproducibility checks that prompt inclusion of authentication plans, biological variables, and transparency language per NIH rigor guidance, and export a plain‑language Project Summary/Abstract suitable for RePORTER. Finally, include pre‑submission steps - SciENcv biosketch prep, eRA Commons/Government portal readiness, and a prereader review list - so local sponsors move from draft to compliant submission without last‑minute scramble; see NIH general grant writing tips for applicants • NIH guidance on rigor and reproducibility for section‑by‑section examples and checklists.
Local Public Health: Richmond City Health Department Surveillance Prompt
Richmond's public‑health AI prompt should act like a vigilant, local intelligence analyst - ingesting the Virginia Department of Health's syndromic ED feeds, reportable‑disease reports, immunization surveys, vital statistics, and environmental‑health alerts so the City Health Department can spot emerging clusters, prioritize investigations, and produce auditable situational briefs for clinicians and policymakers; VDH's Office of Epidemiology explains how provider, hospital, and lab reports feed standardized surveillance case definitions and monthly morbidity snapshots.
Pairing those streams with community indicators (county health rankings and social‑determinant dashboards) and interoperable syndromic systems proved invaluable in a DoD–VDH NSSP pilot that let military and civilian ESSENCE data appear side‑by‑side - leading to an early influenza alert at a military elementary school and a coordinated response.
A well‑scoped prompt should return a one‑page incident summary, geotagged case map, priority actions (testing, outreach, vaccine clinics), and a provenance log so Richmond teams move from noisy data to timely, defensible action.
Virginia Department of Health communicable disease data and monthly reports • CDC NSSP DoD–Virginia syndromic data‑sharing pilot details
VDH data stream | Surveillance use |
---|---|
Emergency Department (Syndromic) Data | Early outbreak detection and trend monitoring |
Reportable Disease Surveillance | Case counts, demographics, and classification |
Immunization Surveys | Coverage gaps and clinic prioritization |
Vital Statistics | Cause‑of‑death signals and maternal/infant surveillance |
Environmental Health | Food, water, rabies, and recreational water alerts |
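A toy Python sketch of the cluster-flagging step is shown below; real ESSENCE/NSSP detectors are far more sophisticated, and this only illustrates the input shape and a simple z-score trigger:

```python
# Toy surveillance trigger: flag a syndrome when today's ED count exceeds
# a recent baseline by a z-score threshold. Illustrative only; production
# syndromic detectors (ESSENCE/NSSP) use more robust aberration methods.
from statistics import mean, stdev

def flag_clusters(daily_counts, today, threshold=2.0):
    """daily_counts: syndrome -> list of recent daily counts (baseline).
    today: syndrome -> today's count. Returns alerts with z-scores."""
    alerts = []
    for syndrome, history in daily_counts.items():
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat baseline: z-score undefined, skip
        z = (today[syndrome] - mu) / sigma
        if z >= threshold:
            alerts.append({"syndrome": syndrome, "z": round(z, 2)})
    return alerts
```

Returning the z-score alongside the syndrome name gives the one-page incident summary a defensible number, and the baseline window itself belongs in the provenance log.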
Conclusion: Next Steps for Richmond Healthcare Teams
Richmond teams facing a fast‑moving AI moment should turn this momentum into disciplined action: establish an AI steering committee and acceptable‑use policies (city and system leaders already urging strategic adoption), prioritize a short list of high‑impact prompts from this top‑10 playbook for near‑term pilots, and pair each pilot with clear guardrails - privacy checks, human review gates, and ROI metrics - so work moves from experimentation to production rather than stalling (only a minority of pilots make that leap without governance).
Anchor decisions in clinician priorities - reducing caregiver burden and improving satisfaction remains the top organizational goal in recent surveys - and measure outcomes that matter locally (time saved on documentation, faster trial matching, fewer avoidable denials).
Use trusted local partners and training pipelines to build capacity: practical courses and bootcamps can upskill staff quickly while preserving clinical oversight.
For playbooks and local strategy advice, see the Richmond business guide to strategic AI adoption and the health‑system adoption survey that highlights workforce and governance priorities; teams ready to train prompt writers and operationalize safe prompts can enroll in a 15‑week practical program like AI Essentials for Work registration to seed that capability.
Program | Length | Early‑bird cost | Registration / Syllabus |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | Register for AI Essentials for Work • AI Essentials for Work syllabus |
“It's important to avoid resisting change merely due to discomfort with the unfamiliar, just as it is important not to implement changes without a clear purpose.”
Frequently Asked Questions
What are the top practical AI prompts Richmond healthcare teams should pilot?
Prioritize prompts that are practical, auditable, and locally feasible: (1) Epic Pre-visit Patient Summary to produce clinician and patient checklists; (2) Google Cloud Vertex AI Doctor Assistant for concise patient summaries with provenance; (3) IQVIA literature-review and trial-protocol refinement prompts for rapid evidence extraction; (4) Workday Staffing and Scheduling Optimization to forecast needs and manage contingent workers; and (5) Zoom Voice-based Handoff and Escalation to produce SBAR-style, auditable handoffs. Each prompt should include human review gates, provenance snippets, and local workflow integration.
How should Richmond teams design prompts to reduce AI risk (hallucinations, bias, privacy) and comply with local laws?
Use prompt-engineering best practices: give explicit context, specify output format (e.g., one-paragraph problem list + three prioritized actions), and iterate in a sandbox (Vertex AI Studio or equivalent). Include provenance traces that cite source chart lines or document snippets; require human-in-the-loop review; detect and flag sensitive categories (e.g., reproductive or sexual health) and surface consent status. For Virginia-specific compliance, build a Patient Consent & Data Access prompt that checks for SB 754 exposure, surfaces signed consent or need for legally authorized representative, and date-stamps provenance to support legal review.
Which operational safeguards and implementation steps are recommended before moving prompts into production?
Establish an AI steering committee, acceptable-use policies, and clear guardrails: privacy checks, audit trails, human review gates, and ROI metrics (time saved, faster trial matching, fewer denials). Pair each pilot with clinical validation, staff upskilling, and vendor/EHR integration plans (e.g., SmartPhrases for Epic, Vertex AI SDK for Google Cloud, Workday integrations). Use local partners and training pipelines - such as a 15-week AI Essentials for Work program - to build prompt-writing capacity and workforce readiness.
What measurable benefits have been documented for these AI use cases and prompts?
Documented and vendor-cited benefits include faster documentation (Suki reports up to 72% faster note completion), IQVIA claims ~70% faster per-document extraction and dramatic systematic-review speedups (e.g., 20 days to <3 days), and improved trial-matching and diagnostic workflows inspired by tools like VCU TACIT. Operational gains also include reduced handoff errors through verified SBAR summaries and staffing optimizations that lower last-minute gaps. Local pilots should capture these outcomes with baseline measures and audit logs.
What local data and governance considerations are specific to Richmond that teams must account for?
Leverage Richmond and Virginia resources: use high-quality local datasets (VCU, VDH syndromic feeds, immunization surveys), connect Epic CareEverywhere for cross-organization records, and integrate with state surveillance channels. Account for Virginia law changes - SB 754 (effective 7/1/2025) restricts collection/use/sharing of reproductive/sexual health data without consent and creates statutory damages - so prompts must surface consent status and sensitive exposures. Finally, weight feasibility and workforce readiness through local research hubs, legal review (OIG/Advisory guidance), and formal pilot governance to ensure explainability and auditability.
You may be interested in the following topics as well:
Don't miss the section on HIPAA and FDA considerations in Virginia that shape AI adoption in Richmond.
Understand the role of clinicians and administrators in protecting equity and privacy amidst AI adoption.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning there, he led the development of the first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful Corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.