Top 10 AI Prompts and Use Cases in the Education Industry in Durham

By Ludo Fourrage

Last Updated: August 17th 2025

Teachers and students in Durham using AI tools for personalized learning, guided by NC DPI and local resources.

Too Long; Didn't Read:

Durham education leaders should run targeted AI pilots (2-week classroom tests; Amira reading dosage of 15–30 minutes/week), teach prompt literacy via short PD, and require GovAI vendor FactSheets - expect gains like the 25% math boost reported by Squirrel AI and reduced support loads.

Durham's K–20 classrooms and campuses are already seeing practical wins from generative AI - from AI-powered study tools boosting retention at Duke and NCCU to administrators using chatbots for faster student support - and national analysis shows these tools can accelerate lesson planning, research, and student engagement while raising integrity and data-quality questions (see the EDUCAUSE review of generative AI in education).

Local practitioners should pair small, measurable pilots with staff training: a concrete option is Nucamp's 15-week AI Essentials for Work syllabus, which teaches prompt design and evaluation for real tasks, so districts can pilot tutor/chatbot use cases without overbuying vendor solutions (see the Nucamp AI Essentials for Work syllabus and course details).

For Durham leaders, the immediate “so what” is clear: run targeted pilots, teach prompt literacy, and measure retention and support-load improvements before scaling.

Additional local example: AI-powered study tools in Durham case study.

Program | Length | Key focus | Early bird cost
AI Essentials for Work | 15 weeks | Prompt design, practical AI skills | $3,582

“Shouldn't higher education institutions be preparing graduates to work in a world where generative AI is becoming ubiquitous?”

Table of Contents

  • Methodology: How we selected the Top 10 Prompts and Use Cases
  • Pendo.io - In-app Guides and Product Analytics for Ed Tech Adoption
  • Squirrel AI Learning - Personalization & Adaptive Learning
  • Amira Learning - AI-enabled Reading & Speech Practice
  • University of Michigan U-M GPT - Campus-owned LLM Platforms
  • Automated Content Creation with tools like GPT-4o and DALL·E 3 - Teacher Copilots
  • Intelligent Assessment & Feedback - British University Vietnam Case
  • NC State Extension Guidance - Safe Prompting and Approved Tools
  • GovAI Coalition / ncIMPACT Recommendations - Policy & Vendor Vetting
  • Papers & Practical Prompts - Ready-to-use Templates for Durham Practitioners
  • Student-Centered Equity Use Case: Multilingual Materials & Accessibility (example: Safran University practices)
  • Conclusion: Next Steps for Durham Educators - Pilots, PD, and Governance
  • Frequently Asked Questions

Methodology: How we selected the Top 10 Prompts and Use Cases

Selection prioritized prompts and use cases that Durham educators can run quickly, safely, and measurably. Each candidate was mapped to a concrete instructional or operational goal, vetted against NC State's AI Guidance and Best Practices for Responsible AI Use in Education (including data‑classification limits and the recommendation to treat AI outputs as drafts), and checked for compatibility with NC State's Approved AI Products List so pilots avoid unnecessary IT Purchase Compliance (ITPC) delays; prompts that operate only on “green” (non‑sensitive) data were elevated to the Top 10 to enable faster trials.

Prompt-quality criteria followed NC State's “write effective prompts” checklist - specificity, role framing, and iterative refinement - and were validated against regional practitioner discussions, such as the AIM22 conference sessions on artificial intelligence for education.

Prioritization also weighed training needs (can staff learn prompt literacy via short PD) and measurable outcomes (retention, support-load, or grading-time reductions).

The result: a compact, risk‑aware Top 10 that aligns with state guidance, approved tools, and real classroom priorities.
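
To make these criteria concrete, here is a minimal sketch of a prompt builder that bakes in role framing, specificity, and an explicit invitation to iterate. The template wording, field names, and example standard are our own illustration, not NC State's.

```python
# Illustrative prompt builder following the NC State checklist criteria:
# role framing, specificity, and room for iterative refinement.
# Template wording and field names are our own, not NC State's.

def build_prompt(role: str, task: str, audience: str, constraints: list[str]) -> str:
    """Assemble a role-framed, specific prompt for a classroom pilot."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Constraints:\n{constraint_lines}\n"
        "Treat your answer as a draft; ask one clarifying question "
        "if any requirement is ambiguous."
    )

prompt = build_prompt(
    role="an experienced 4th-grade math teacher in North Carolina",
    task="draft a 30-minute lesson on equivalent fractions",
    audience="students reading at grade levels 2-5",
    constraints=["align to NC standard NC.4.NF.1",
                 "include one formative check",
                 "use only non-sensitive (green) example data"],
)
print(prompt)
```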

Step | Primary source | Intended result
Map use cases to goals | NC State AI Guidance and Best Practices for Responsible AI Use in Education | Clear pilot metrics
Vetting data & tools | NC State Approved AI Products List for Campus Tools | Green-data prompts to avoid ITPC
Stakeholder validation | AIM22 Conference Schedule and Session Descriptions | Practitioner-aligned prompts

Pendo.io - In-app Guides and Product Analytics for Ed Tech Adoption

For Durham districts and campus ed‑tech teams looking to raise adoption without blowing the budget, Pendo's all‑in‑one SXM platform combines product analytics, in‑app Guides, and visual Session Replay, so teams can spot where students and faculty stall, push targeted help, and measure impact - all from a vendor headquartered in Raleigh, NC, that already supports global customers. See how Pendo ties visual replays to analytics in the Pendo Session Replay guide and analytics overview.

Pendo's Orchestrate and AI features accelerate multi‑channel onboarding (email + in‑app), and AI localization has cut guide-translation time by 80% for customers like Nelnet - a concrete “so what” for Durham: translate and deploy multilingual, in‑app help far faster to reduce support loads and boost equity.

For districts worried about student data and model use, Pendo documents per‑customer model training and opt‑in AI settings that keep behavioral data scoped to a subscription, helping IT and compliance teams evaluate risk before piloting.

Fact | Detail
Headquarters | Raleigh, NC
Core features | Analytics, In‑app Guides, Session Replay, Orchestrate, Pendo AI
Notable impact | Nelnet: 80% faster guide translation (Pendo AI)
Scale | 14,400+ customers; 23 trillion interactions (all-time)

“AI localization saves us SO much time. Before, it would take us 10 minutes per 1-step guide. Pendo AI helped us translate content in 2 minutes or less. Plus, we don't need additional programs to use and save .xliff files, so more PMs can translate.” - Liz Feller, Manager of Design and Analytics, Nelnet

Squirrel AI Learning - Personalization & Adaptive Learning

Squirrel AI's intelligent adaptive learning platform combines smart‑learning tablets, a Large Adaptive Model (LAM), and an Intelligent Adaptive Learning System (IALS) to deliver nano‑level, personalized learning paths that adjust in real time to each pupil's gaps and strengths. For North Carolina classrooms this matters because the system reports a 25% improvement in math scores in one semester, offers 24/7 parent access for family engagement, and - for U.S. deployments - stores data within the U.S. under stated compliance practices, easing district vetting.

LAM (launched Jan 2024) raised question‑accuracy rates from 78% to 93% by leveraging exclusive data from 24 million students and 10 billion learning behaviors, making Squirrel AI a practical pilot option for PreK–5 math interventions in Durham districts.

Review adaptive‑learning market context and vendor implications in the EY white paper on AI adaptive learning, and see the Squirrel AI adaptive learning platform overview for technical and privacy details.

Key metric | Reported value
Math improvement | 25% in one semester
Learning behaviors | 10 billion
Student data set | 24 million students
Learning centers | 3,000+
Nano‑level objectives | 10,000+

Amira Learning - AI-enabled Reading & Speech Practice

Amira Learning pairs advanced speech recognition with Science of Reading pedagogy to give every early reader a 1:1, AI‑guided oral reading tutor that listens, diagnoses, and delivers micro‑interventions in real time - making it practical for Durham districts to scale individualized practice without hiring large tutoring teams. Independent and state studies show measurable gains at modest dosages: Amira's research summary reports results matching human tutoring after 30 sessions and an ESSA Tier‑1 rating, and state reviews (North Dakota, Utah, Texas, Louisiana) link 15–30 minutes/week of use to substantial score improvements. The concrete “so what” for NC school leaders: a low‑cost pilot of 15–30 minutes/week per student can generate reliable screening data and faster tiered interventions.

Review the aggregated evidence on Amira's efficacy at the Amira research hub, and read the state efficacy summaries for details on usage and outcomes.

Metric | Reported result
Recommended student dosage | ~15–30 minutes/week (peak: 2–4 short sessions/week)
North Dakota | High users grew roughly 2× as fast as non‑users
Utah effect size | ~0.40–0.43 for recommended usage
Texas STAAR impact | Avg. ≈36 STAAR points per student; ~9 percentile ranks for dosage users
Evidence rating | ESSA Tier 1; matched human tutoring after ~30 sessions
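
To operationalize the dosage figures above, a district data lead could flag students falling below the recommended floor each week. A minimal sketch, assuming usage is exported as a CSV with student_id, week, and minutes columns (the layout is hypothetical; adapt it to Amira's actual reports):

```python
# Flag students whose weekly Amira usage falls below the recommended
# 15-30 minutes/week dosage cited above. The CSV layout (student_id,
# week, minutes) is hypothetical; adapt to your actual usage export.
import csv
from collections import defaultdict

MIN_WEEKLY_MINUTES = 15

def flag_under_dosed(path: str) -> dict[str, list[str]]:
    minutes = defaultdict(lambda: defaultdict(float))
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            minutes[row["student_id"]][row["week"]] += float(row["minutes"])
    under = defaultdict(list)
    for student, weeks in minutes.items():
        for week, total in weeks.items():
            if total < MIN_WEEKLY_MINUTES:
                under[student].append(week)
    return dict(under)

if __name__ == "__main__":
    for student, weeks in flag_under_dosed("amira_usage.csv").items():
        print(f"{student}: below 15 min/week in {len(weeks)} week(s)")
```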

“I have seen firsthand that teachers and students who use Amira with fidelity achieve remarkable reading growth, reinforcing the impact of consistent, research-based literacy instruction.”

University of Michigan U-M GPT - Campus-owned LLM Platforms

University of Michigan's campus-owned U‑M GPT offers a concrete template for Durham-area institutions weighing an in‑house LLM: ITS and U‑M's GenAI team combined licensed commercial models with internal controls to create a “closed AI” portal that emphasizes privacy, accessibility, and affordability, supports file uploads for summarization and extraction tasks, and enforces usage limits to curb misuse. Details and usage guidance are documented in the University of Michigan U‑M GPT in‑depth guide, and the EDUCAUSE case study on how and why the University of Michigan built closed generative AI tools explains why a campus-built approach helps protect user data while broadening access.

The practical takeaway for North Carolina colleges and districts: a secured, campus‑owned gateway can preserve FERPA‑level uses and still scale. U‑M reported heavy uptake (roughly 15,000 users/day) and a mix of foundational and reasoning models serving teaching, advising, and operational workflows, so pilot designs should budget for model selection, prompt limits, and clear review processes to ensure accuracy and equity.

Fact | Detail
Text prompt limits | ≈75 prompts/hour (text models)
Image prompt limits | ≈10 prompts/hour (DALL·E 3 / image models)
File upload caps | Up to 10 files/chat; PDFs: 50 pages or 10 files
Adoption snapshot | ~15,000 users/day reported
Data sensitivity | Approved for “moderately sensitive”/FERPA uses
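
Districts standing up their own gateway could approximate the same per-user caps client-side with a sliding-window limiter. A minimal sketch using the ~75 prompts/hour figure from the table; U‑M enforces its limits server-side, so this shows the general technique, not their implementation:

```python
# Sliding-window rate limiter approximating per-user prompt caps like
# the ~75 text prompts/hour shown above. Illustrative only - U-M GPT
# enforces its limits server-side.
import time
from collections import defaultdict, deque

class PromptRateLimiter:
    def __init__(self, max_prompts: int = 75, window_seconds: int = 3600):
        self.max_prompts = max_prompts
        self.window = window_seconds
        self.history: dict[str, deque] = defaultdict(deque)

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        q = self.history[user_id]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_prompts:
            return False
        q.append(now)
        return True

limiter = PromptRateLimiter()
if limiter.allow("instructor@example.edu"):
    print("prompt accepted")
else:
    print("hourly limit reached; try again later")
```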

“AI will not take jobs away from you. But people who know how to use AI might.” - Ravi Pendse, U‑M VP for IT and CIO

Automated Content Creation with tools like GPT-4o and DALL·E 3 - Teacher Copilots

Teacher copilots like GPT‑4o for text and DALL·E 3 for images can automate first drafts of lesson plans, handouts, differentiated activities, and classroom visuals - turning hours of prep into editable starting points - so Durham teachers can reclaim time from the oft‑cited “10 hours, 40 minutes” workday by automating routine drafting and material creation (Edutopia article on ChatGPT time savings for teachers).

Practical entry points include using an editable lesson‑plan prompt template to produce objectives, standards alignment, assessments, and accommodations, then refining outputs for local curricula and student needs (TCEA lesson-planning prompts and editable ChatGPT template).

Pair text generation with campus policies: image generators can speed visual creation but may be rate‑limited in campus deployments (see image prompt caps in campus LLM examples), so pilot scope and review workflows should be defined before classroom rollout (University of Michigan campus LLM and GPT guidance).

The concrete “so what”: use AI to cut planning time, then spend the saved minutes on targeted interventions or equity‑focused differentiation - always review and align AI outputs to standards and student data.
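
As a starting point, here is a minimal sketch of a lesson‑plan copilot call using the OpenAI Python SDK (v1+). It assumes an OPENAI_API_KEY environment variable and a district‑approved paid tier; the prompt wording and model choice are illustrative.

```python
# Minimal teacher-copilot sketch with the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set and your district uses an approved,
# paid tier; prompt wording and model choice are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a curriculum assistant. Produce editable "
                    "drafts only; flag anything needing teacher review."},
        {"role": "user",
         "content": "Draft a 45-minute 7th-grade science lesson on "
                    "photosynthesis: objectives, standards alignment, "
                    "one formative assessment, and two accommodations "
                    "for multilingual learners."},
    ],
)

# Treat the output as a starting point, never a finished plan.
print(response.choices[0].message.content)
```

Keeping the “editable drafts only” instruction in the system message mirrors the review-first workflow described above: the model produces a starting point, and the teacher aligns it to standards and student data.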

Intelligent Assessment & Feedback - British University Vietnam Case

The British University Vietnam's Artificial Intelligence Assessment Scale (AIAS) shows how a clear, tiered policy can turn GenAI from an integrity risk into an instructional asset. BUV defined a five‑point scale (Level 1 “No AI” → Level 5 “Full AI”) so faculty can specify permitted AI uses; after launching the AIAS, academic‑misconduct cases tied to GenAI fell from over 100 to zero while pass rates rose 33.3% and mean grades climbed 5.9%. That is a concrete “so what” for Durham schools seeking both integrity and equity gains, since the scale also helped English‑as‑additional‑language students demonstrate understanding via multimodal, AI‑enhanced submissions. Read the BUV AI Assessment Scale case study for details and implementation notes, and compare this approach with local pilots such as Nucamp's overview of AI‑powered study tools in Durham when designing an NC‑friendly rubric that clarifies expectations, supports multilingual learners, and reduces complaint and adjudication workload.

Metric | BUV result
GenAI academic violations | Over 100 → 0 (post‑AIAS)
Pass rate change | +33.3%
Mean grade change | +5.9%
Assessment scale | 5 levels (No AI → Full AI)
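
A syllabus or LMS template can encode the scale so every assignment states its permitted level. A minimal sketch; Levels 1 and 5 match the case study's endpoints, while the intermediate descriptions are paraphrased assumptions rather than BUV's exact wording:

```python
# Syllabus helper mapping AIAS levels to an assignment statement.
# Levels 1 and 5 match the case study above; the intermediate
# descriptions are paraphrased assumptions, not BUV's exact wording.
AIAS_LEVELS = {
    1: "No AI: all work must be completed without generative AI.",
    2: "AI for planning: brainstorming/outlining allowed; final text is your own.",
    3: "AI assistance: drafting help allowed; you must edit and cite AI use.",
    4: "AI with evaluation: AI output allowed if you critique and verify it.",
    5: "Full AI: AI may be used throughout; document prompts and process.",
}

def assignment_policy(level: int) -> str:
    """Return the AI-use statement to paste into an assignment brief."""
    return f"AI Assessment Scale - Level {level}. {AIAS_LEVELS[level]}"

print(assignment_policy(3))
```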

NC State Extension Guidance - Safe Prompting and Approved Tools

NC State Extension's practical playbook for safe prompting makes two things non‑negotiable for Durham educators: run pilots on campus‑approved products using your NC State account, and never paste sensitive or “red/purple” data into a public model.

The Extension guidance catalogs approved generative tools (examples include OpenAI ChatGPT Teams/Enterprise, Google Gemini, Microsoft Copilot, and Grammarly) and stresses that many approved tiers require paid accounts - free versions are not recommended because they may use university inputs to train public models - so plan budgets for campus‑sanctioned subscriptions and update data‑sharing settings if a free account is used.

Prompting best practices are equally concrete: treat AI outputs as editable drafts, ask the model to iterate and cite sources, avoid personal identifiers in custom instructions, and consult OIT's data classification before any pilot.

For implementation resources and the campus approved‑products list, see NC State Extension's AI Guidance and the NC State Approved AI Products overview.

Approved tool (examples) | Account requirement | Data allowed
OpenAI ChatGPT (Teams/Enterprise) | Use NC State account; paid tiers recommended | Green (non‑sensitive) only; red/purple needs ITPC
Google Gemini (Chat / Workspace) | Campus‑approved; follow OIT settings | Green/Yellow; sensitive use requires review
Microsoft Copilot (M365) | Campus subscription; configured by IT | Green/Yellow; higher sensitivity requires ITPC
Grammarly Pro / Education, Zoom AI Companion | Paid/education licenses preferred | Green only unless approved
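
A lightweight pre‑flight screen can catch obvious red/purple identifiers before a prompt ever reaches an external model. A minimal sketch with illustrative patterns; it is no substitute for OIT's data‑classification review or human judgment:

```python
# Pre-flight screen for obvious sensitive identifiers before a prompt
# is sent to any external model. Patterns are illustrative; this does
# not replace OIT data-classification review or human judgment.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "student_id": re.compile(r"\b(?:ID|id)[#: ]?\d{6,9}\b"),  # hypothetical format
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

hits = screen_prompt("Summarize feedback for jdoe@school.k12.nc.us, ID 1234567")
if hits:
    print(f"Blocked: possible sensitive data ({', '.join(hits)})")
else:
    print("No obvious identifiers found; proceed per OIT guidance.")
```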

“AI is a tool to augment your expertise and ingenuity, not replace them.”

GovAI Coalition / ncIMPACT Recommendations - Policy & Vendor Vetting

Durham and other North Carolina education purchasers can accelerate safe AI adoption by borrowing the GovAI Coalition's practitioner-ready governance kit: policy templates aligned to the NIST AI Risk Management Framework, an AI Policy and Governance Handbook, an AI Incident Response Plan, and standardized vendor tools such as the AI FactSheet and Vendor Agreement. With the kit, districts avoid reinventing procurement language and can require consistent vendor disclosures (vendors may register and submit AI FactSheets via the Coalition's registration process).

Because the Coalition already counts North Carolina members (the City of Durham, City of Raleigh, County of Wake, and the State of North Carolina Department of Information Technology, among others), local teams can use the GovAI deliverables to compare real contracts in the AI Contract Hub, require FactSheets up front, and shorten legal and privacy review cycles - practical wins when pilots must protect student data and meet FERPA‑level expectations.

These resources are intended as templates, not legal advice, so pair them with district counsel before signing vendor agreements.

Resource | Purpose
GovAI templates and resources for AI policy, governance, and incident response | AI Policy, Governance Handbook, and Incident Response templates (NIST‑aligned)
GovAI Coalition overview | Membership, deliverables, and vendor registry information
AI FactSheet and standard Vendor Agreement templates | Standardized vendor disclosures to support procurement vetting
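
To put FactSheets to work in procurement, a district could normalize each submission into a simple record and auto‑generate review flags for counsel. A minimal sketch; the field names are our own illustration, not the Coalition's actual schema:

```python
# Minimal intake record for vendor AI FactSheets during procurement.
# Field names are illustrative, not the GovAI Coalition's actual schema.
from dataclasses import dataclass

@dataclass
class AIFactSheet:
    vendor: str
    product: str
    model_types: list[str]          # e.g., ["LLM", "speech recognition"]
    trains_on_customer_data: bool
    data_stored_in_us: bool
    ferpa_reviewed: bool
    incident_contact: str
    notes: str = ""

def procurement_flags(fs: AIFactSheet) -> list[str]:
    """Return review flags counsel should see before signing."""
    flags = []
    if fs.trains_on_customer_data:
        flags.append("vendor trains on customer data - require opt-out")
    if not fs.data_stored_in_us:
        flags.append("data stored outside the U.S. - check district policy")
    if not fs.ferpa_reviewed:
        flags.append("no FERPA review on file - route to legal/privacy")
    return flags

fs = AIFactSheet("Acme EdAI", "TutorBot", ["LLM"], True, True, False,
                 "security@acme-edai.example")
for flag in procurement_flags(fs):
    print(flag)
```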

Papers & Practical Prompts - Ready-to-use Templates for Durham Practitioners

Durham practitioners can skip the blank‑page problem by pairing curated prompt libraries with state PD. Use ready, field‑tested templates for lesson planning, rubrics, accessibility scaffolds, parent communications, and assessment - examples include the aiforeducation K‑12 prompt collection for lesson plans and rubrics, Panorama's classroom‑focused prompt sets and AI Roadmap with 100+ prompts for district leaders, and the iLearnNH K‑12 educator prompt library, a concise resource that teaches prompt‑engineering basics.

Pair these templates with NCDPI's living generative‑AI guidance and Wednesday webinar series - practitioners who attend live webinars (certificates available) can earn PD while testing turnkey prompts. A concrete “so what”: borrow a rubric or lesson‑plan prompt today, run a 2‑week pilot, capture time‑saved and student‑engagement gains, then scale with district PD and vendor vetting through statewide resources.

Source | What it offers | NC relevance
aiforeducation K‑12 prompt collection | Curated K‑12 prompts (lesson plans, rubrics, accessibility) | Ready templates for classroom pilots
Panorama AI prompts and district AI Roadmap | 30+ prompts plus an AI Roadmap (100+ prompts) | District‑scale prompt collections and privacy-conscious product options
NCDPI generative AI resources and webinar series | Generative AI guidance, webinar series, and cohort materials | State PD, recorded webinars, and implementation templates for NC schools

Student-Centered Equity Use Case: Multilingual Materials & Accessibility (example: Safran University practices)

Centering students means making core materials truly accessible: use AI to generate timely multilingual translations, plain‑language summaries of assignments, accurate captions and transcripts, and descriptive alt text so non‑English-speaking families and students with disabilities can engage on equal terms. Run a short, measurable pilot (for example, translate back‑to‑school packets and track a two‑week change in family engagement or appointment attendance) before scaling to district systems.
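
Here is a minimal sketch of the translation step using the OpenAI Python SDK (v1+); it assumes the packet text contains no student identifiers, an OPENAI_API_KEY is configured, and the district is on an approved paid tier.

```python
# Translation + plain-language step for a back-to-school packet,
# using the OpenAI Python SDK (v1+). Assumes the packet text holds no
# student identifiers and the district uses an approved, paid tier.
from openai import OpenAI

client = OpenAI()

def translate_packet(text: str, language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Translate school-to-family communications. "
                        "Use plain language at roughly a 6th-grade "
                        "reading level; keep dates and names unchanged."},
            {"role": "user",
             "content": f"Translate into {language}:\n\n{text}"},
        ],
    )
    return response.choices[0].message.content

spanish = translate_packet("Open House is Thursday, Sept 4, 6-8 pm "
                           "in the main gym. Families welcome.", "Spanish")
print(spanish)  # have a bilingual staff member review before sending
```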

Local evidence already shows practical impact - AI‑powered student study tools are boosting retention at Duke and NCCU - so pairing small translation pilots with the state's guidance and research-backed inclusive practices is a pragmatic equity play for Durham leaders (see open access social & behavioral sciences research for inclusive pedagogy and local AI partnership recommendations).

Useful starting resources: a research compendium of social and behavioral sciences on inclusive education, a local case summary of AI‑powered student study tools in Durham schools, and a guide to building cross‑sector AI pilot programs in Durham.

Conclusion: Next Steps for Durham Educators - Pilots, PD, and Governance

Durham educators should sequence three practical moves now: run short, measurable pilots; invest in job‑embedded PD; and adopt clear governance before scaling.

Start with a 2‑week classroom pilot using vetted prompt templates or an accessibility translation test, and capture concrete metrics - teacher planning time saved and student engagement, or, for reading tutors, learning gains at Amira's recommended ~15–30 minutes/week dosage - then use those results to justify subscriptions.

Pair pilots with NCDPI's living guidance and webinar PD so staff earn certificates while building AI literacy, and shorten procurement timelines by requiring vendor FactSheets and contract templates from practitioner kits.

Together, these steps turn policy into action: small pilots produce evidence, PD builds staff capacity, and GovAI‑style procurement templates protect student data while accelerating useful tools into classrooms.

Next step | Quick action | Resource
Pilot classroom use | Run a 2‑week prompt/rubric pilot; track teacher time saved and student engagement; for reading tools, measure 15–30 min/week usage | NCDPI Generative AI Guidebook for K‑12 schools
Professional development | Attend NCDPI webinars (live or on‑demand) and claim PD certificates to build prompt literacy | NCDPI AI resources and webinar series for educator professional development
Governance & procurement | Require AI FactSheets and use standardized contract templates to speed legal/privacy review | GovAI Coalition policy templates and vendor vetting resources

“Generative artificial intelligence is playing a growing and significant role in our society.”

Frequently Asked Questions

What are the top AI use cases Durham educators should pilot first?

Prioritize small, measurable pilots that use only non‑sensitive (green) data: examples include AI‑guided reading tutors (Amira), adaptive math practice (Squirrel AI), teacher copilots for lesson planning and materials (GPT‑4o/DALL·E 3), multilingual/accessible materials generation, and campus‑hosted LLM gateways for safe summarization and file analysis (on the U‑M GPT model). Measure teacher time saved, student engagement, and learning gains (e.g., Amira: ~15–30 minutes/week).

How should Durham schools manage data privacy and vendor risk when piloting AI?

Follow campus approved‑tool lists (examples: OpenAI ChatGPT Teams/Enterprise, Google Gemini, Microsoft Copilot) and NC State/NC Extension guidance: run pilots on approved accounts, avoid pasting red/purple sensitive data into public models, budget for paid/enterprise tiers, require vendor AI FactSheets and standardized contract templates (GovAI Coalition) to speed procurement and vendor vetting, and consult IT/Legal before scaling.

What practical steps and metrics should leaders use to pilot AI in Durham?

Run time‑boxed pilots (e.g., 2 weeks for lesson‑plan/rubric prompts or 15–30 minutes/week dosing for reading tutors). Capture concrete metrics: teacher planning time saved, change in student engagement, support‑load reductions, screening/assessment accuracy, and learning gains (use district/state assessment comparators). Pair pilots with job‑embedded PD (prompt literacy) and require vendor FactSheets for procurement.

How can schools ensure equity when adopting AI tools?

Center pilots on student access and inclusion: use AI to produce multilingual translations, plain‑language assignment summaries, accurate captions and transcripts, and descriptive alt text. Run short pilots (e.g., translate back‑to‑school packets) and track family engagement or appointment attendance. Use research‑backed tools and vendor compliance (U.S. data storage where required) to ease vetting and protect student privacy.

What training or curriculum should staff complete before scaling AI initiatives?

Invest in prompt literacy and practical AI skills through short PD and job‑embedded courses (for example, Nucamp's 15‑week AI Essentials for Work covers prompt design and evaluation). Combine PD with state resources (NCDPI webinars, certificates) and curated prompt libraries so staff can use field‑tested templates, iterate on prompts, and treat AI outputs as editable drafts aligned to standards.

Ludo Fourrage

Founder and CEO

Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning there, he led the development of a first‑of‑its‑kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.