Top 10 AI Prompts and Use Cases in the Education Industry in New York City
Last Updated: August 23rd, 2025

Too Long; Didn't Read:
NYC schools can pilot 10 practical, ethics‑aware AI uses - including personalized learning, Khanmigo tutoring, Eklavvya grading, Perplexity feedback, DeepL translation, admin bots, mental‑health triage, and creative tools - across 50 classrooms (80% Title I) with ERMA privacy checks and $1.9M contract caution.
New York City education is at a crossroads where classroom pilots and citywide scrutiny meet: the New York Academy of Sciences reports that teachers in its Scientist‑in‑Residence program are integrating GenAI to elevate STEM inquiry across 50 classrooms in all five boroughs (80% Title I), even as Comptroller Brad Lander urged the DOE to pull a $1.9 million contract for an AI reading aid pending efficacy and policy review. That mix of promise and caution means schools need practical, ethics‑aware training and short, job‑focused pathways for educators and administrators.
Programs like Nucamp's AI Essentials for Work bootcamp (15 weeks of AI training for educators) teach prompt writing and classroom workflows, offering a concrete route to build safe, effective AI practices alongside district policy discussions - bridging pilot success and the accountability NYC leaders are demanding.
See the Comptroller's full statement and the Academy's pilot findings for context.
Program | Length | Courses | Early-bird cost | Register |
---|---|---|---|---|
AI Essentials for Work | 15 Weeks | AI at Work: Foundations; Writing AI Prompts; Job-Based Practical AI Skills | $3,582 | Register for AI Essentials for Work (Nucamp) |
“Before we spend millions on an AI program that could shape our kindergartners' reading abilities, let's make sure we're doing this right. While AI presents innovative opportunities to help teachers be more effective and improve student outcomes, the DOE's plans to approve a contract for AI technology in our classrooms before studying its effects or developing any guidelines is premature... The DOE should immediately pull this item from the PEP meeting agenda while it evaluates the effectiveness of AI classroom aids and develops a citywide policy in dialogue with educators, parents, and students for the appropriate use of AI in classrooms.”
Table of Contents
- Methodology: How we selected these top 10 prompts and use cases
- Personalized learning pathways
- AI tutoring & virtual assistants (Khanmigo)
- Automated assessment generation & grading (Eklavvya)
- AI-assisted feedback and revision workflows (Perplexity)
- Content creation and course design (Canva and Gamma)
- Administrative automation & operational agents (playlab.ai)
- Mental health & student support tools (Panorama Solara / TEAMMAIT-inspired)
- Multilingual & accessibility supports (DeepL and Speechify)
- Research, literature synthesis & teacher professional learning (Perplexity / Johns Hopkins Agent Lab methods)
- Creative arts & career-preparation workflows (Canva, Adobe Express, Suno)
- Conclusion: Next steps for NYC teachers and districts
- Frequently Asked Questions
Check out next:
Learn which generative AI tools used by schools are powering content creation and student support across NYC campuses.
Methodology: How we selected these top 10 prompts and use cases
Selection prioritized safety, scalability, and classroom impact by applying three policy‑grounded filters: hard compliance with the NYC DOE Data Privacy and Security Compliance Process (vendors must disclose AI features, obtain ERMA approval, and may not train models on NYCPS PII), adoption of the Future of Privacy Forum's risk‑based, equity‑focused recommendations for school AI, and alignment with state‑level transparency and impact‑assessment trends documented in the NCSL 2025 AI legislation summary.
Prompts and use cases were chosen only if they (a) reduce teacher workload without increasing student‑data exposure, (b) produce explainable outputs that teachers can review and contest, and (c) fit procurement and auditing workflows so pilots can move to district‑wide use.
The net result is a prioritized list of classroom and administrative prompts that are practical for NYC pilots and ready for ERMA vetting and public disclosure.
NYC DOE Data Privacy and Security Compliance Process, Future of Privacy Forum recommendations for NYC classrooms, and the NCSL 2025 AI legislation summary guided every inclusion decision.
Selection Criterion | Primary Source |
---|---|
Policy & compliance (no PII for model training) | NYC DOE Data Privacy and Security Compliance Process |
Privacy, equity, risk-based targeting | Future of Privacy Forum recommendations |
Transparency, provenance, impact assessments | NCSL 2025 AI legislation summary |
Personalized learning pathways
Personalized learning pathways let NYC educators turn assessment data into daily, actionable instruction. NYS‑aligned providers like Savvas supply grade‑banded, standards‑aligned mathematics modules for K–12 so math interventions map directly to state expectations; adaptive platforms such as DreamBox Math use Intelligent Adaptive Learning™ to adjust lessons in real time for K–9 students; and city systems can stitch these tools to MAP Growth via the NWEA Instructional Connections diagnostic‑to‑instruction pipeline so placement and pacing happen without extra baseline tests.
The practical payoff for NYC: teachers get daily-updated, standards-tied learning paths and reports that surface a single high-impact next step for each student, reducing time spent on manual diagnostics and helping small-group instruction target the precise skill that moves a learner to grade level.
Districts can pilot combinations of NYS-aligned curriculum, adaptive practice, and diagnostic-to-instruction pipelines to accelerate recovery and keep enrichment meaningful for diverse, multilingual classrooms.
Program | Grades | Key feature |
---|---|---|
Savvas Math | K–12 | NYS standards-aligned curriculum and personalized assessments |
DreamBox Math | K–9 | Real-time adaptive lessons with continuous formative assessment |
Exact Path | K–12 | Diagnostic-driven learning paths and tiered interventions |
Do The Math | 1–5 (and MS intervention) | Research-based, classroom-tested intervention modules |
“It's a connected experience delivering teacher-led core and supplemental instruction across mathematics and literacy.”
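The "single high-impact next step" idea above can be sketched in a few lines. This is a toy illustration only - the function name, mastery cutoff, and scoring scheme are assumptions, and real adaptive platforms use far richer models than picking the lowest skill score:

```python
def next_step(skill_scores, mastery_cutoff=0.8):
    """Return the single lowest-mastery skill below the cutoff, or None.

    Toy sketch of 'surface one high-impact next step per student' from
    diagnostic data; names and thresholds are illustrative assumptions.
    """
    gaps = {skill: score for skill, score in skill_scores.items()
            if score < mastery_cutoff}
    if not gaps:
        return None  # student meets grade-level expectations
    return min(gaps, key=gaps.get)  # weakest skill first

# A student strong in fractions but weak in place value gets a
# place-value small-group recommendation.
step = next_step({"fractions": 0.9, "place_value": 0.45, "ratios": 0.7})
```

The point of the design is that teachers see one defensible recommendation per student, which they can review and override, rather than an opaque ranking.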
AI tutoring & virtual assistants (Khanmigo)
Khan Academy's Khanmigo is emerging as a practical AI tutoring and teacher‑assistant option districts can pilot in urban systems: the platform - described on the Khanmigo site as an “always‑available teaching assistant” - supports student‑mode tutoring and a teacher mode for lesson planning and real‑time progress signals (Khanmigo AI tutor from Khan Academy). Nearby Newark Public Schools moved from a classroom pilot to a broader rollout after testing Khanmigo in North Ward schools (a third‑grader completed math work at a 6th–8th grade level while using the tool) and won a $25,000 Gates Foundation grant to expand the program, illustrating how city districts can combine philanthropy, training, and targeted pilots to scale safely (Newark expansion with Gates Foundation grant coverage).
Evidence reviews of AI‑enhanced, high‑dose tutoring show AI can increase reach and efficiency when paired with human tutoring and clear implementation goals, making Khanmigo a tool for districts that aim to boost dosage without hiring large numbers of new tutors (NORC report on AI‑enhanced high‑dose tutoring).
The crucial “so what?” for NYC: districts can pilot Khanmigo to extend personalized practice and free teacher time, but must pair it with teacher training, privacy agreements, and local evaluation before scaling.
Pilot | Cost (reported) | Reach | Notable outcome |
---|---|---|---|
Newark Public Schools (North Ward) | $35/student (district pricing start) | Used by 400+ districts (Khan Academy partners) | Third‑grader completed 6–8 grade math work in pilot |
“In an ideal world every student would have a human tutor.” - Khan Academy spokesperson
Automated assessment generation & grading (Eklavvya)
Eklavvya's AI-powered on‑screen evaluation offers New York schools a practical path to speed and fairness in large-scale descriptive assessment: the platform digitizes handwritten scripts with advanced OCR, evaluates answers against model solutions and rubrics using fine‑tuned LLMs, and generates point‑by‑point feedback teachers can reuse for small‑group instruction. That translates to research‑backed time savings of about 31% per response and 33% per answer sheet, and grading content rather than penmanship removes handwriting bias.
That mix of scalability, transparency (audit trails and rubric-based scoring), and human-in-the-loop moderation (recommended spot checks) makes it suitable for district pilots that need fast turnaround without sacrificing appealability or equity; explore Eklavvya's product page and feature brochure to review security, proctoring, and reporting details before ERMA-style procurement and privacy review.
Eklavvya AI answer sheet grading product page | Eklavvya online assessment feature list and brochure
Metric | Value / Capability |
---|---|
Grading time reduction (per response) | 31% |
Grading time reduction (per answer sheet) | 33% |
Bias reduction | Handwriting transcription removes penmanship influence |
Evaluation pipeline | OCR + LLM scoring + human QA (~10% oversight) |
“At Devbhoomi University, we used to have a hard time making the answer sheet checking process smooth. But with Eklavvya's onscreen marking system, things got a lot easier. Examiners and moderators now have a simpler time, and the whole process of grading answer sheets has gotten better all because of Eklavvya's system.”
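The "~10% oversight" row in the table above implies a human-in-the-loop spot check over AI-graded sheets. A minimal, reproducible sampler for that step might look like this - a hypothetical helper for illustration, not part of Eklavvya's product or API:

```python
import random

def select_for_human_review(sheet_ids, oversight_rate=0.10, seed=42):
    """Pick a reproducible ~10% sample of AI-graded sheets for human QA.

    Hypothetical sketch of the human-in-the-loop moderation step: a
    fixed seed makes the sample auditable, and sorting keeps the
    reviewer queue stable across reruns.
    """
    rng = random.Random(seed)
    k = max(1, round(len(sheet_ids) * oversight_rate))
    return sorted(rng.sample(list(sheet_ids), k))

# A batch of 200 graded answer sheets yields a 20-sheet QA sample.
sample = select_for_human_review(range(200))
```

Seeding the generator matters for district audits: the same batch always produces the same QA sample, so an appeal can verify whether a given sheet was in the human-reviewed subset.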
AI-assisted feedback and revision workflows (Perplexity)
Perplexity AI can streamline NYC classroom feedback by turning multi‑source research and rubric language into transparent, citation‑backed revision prompts teachers can edit and deliver: its Academic mode and real‑time web search help synthesize evidence, generate inline citations, and suggest clearer sentence structures so feedback stays readable for multilingual students and defensible during district reviews.
Pair its outputs with rubric templates from the TCEA rubric generator to auto‑populate criterion‑aligned comments, then follow Writable's best practices - send AI suggestions early, keep revision cycles front and center, and always review suggested grades - so human oversight preserves equity and FERPA‑aligned data practices.
With features built for citation and synthesis and roughly 15 million monthly users as of early 2024, Perplexity offers a practical research-to-feedback pipeline for NYC pilots that need explainable outputs and audit trails teachers can trust, whether in a single classroom or across citywide professional learning communities.
Feature | Classroom use |
---|---|
Perplexity AI academic mode and citation generation for research-backed feedback | Build evidence‑backed feedback and student revision prompts with source links |
TCEA rubric generator for creating educator rubrics and criteria | Generate rubric criteria and descriptors to align AI comments to standards |
Writable AI feedback best practices for teachers and classrooms | Operationalize review, timing, and revision workflows to maximize student uptake |
“Optimal perplexity enhances reader understanding by making complex ideas accessible and engaging. It also helps maintain academic standards by ensuring that the content is both informative and readable.”
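The rubric-to-feedback workflow described above boils down to a reusable prompt template. The sketch below is an illustrative template only (the function name and wording are assumptions, not TCEA or Writable output), and it bakes in the section's guardrails: cite the rubric, keep language accessible, and leave the teacher as final editor:

```python
def feedback_prompt(criterion, excerpt, grade_band="6-8"):
    """Build a rubric-aligned revision prompt for an AI assistant.

    Illustrative template; a teacher reviews and edits every AI
    suggestion before it reaches a student.
    """
    return (
        f"You are helping a grade {grade_band} student revise their writing. "
        f"Rubric criterion: {criterion}. "
        f'Student excerpt: "{excerpt}". '
        "Give two specific, encouraging revision suggestions, quote the "
        "rubric language you are applying, and keep sentences short so "
        "multilingual readers can follow."
    )

prompt = feedback_prompt("Uses evidence from the text", "Dogs are good pets.")
```

Keeping the criterion verbatim in the prompt is what makes the resulting comments defensible: the output can be traced back to a specific rubric line during district review.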
Content creation and course design (Canva and Gamma)
City classrooms benefit when visual courseware and unit templates are produced faster and with equity in mind: design tools such as Canva and Gamma can accelerate lesson packaging, multilingual handouts, and engaging slide decks so educators spend less time on formatting and more on targeted instruction - but adoption must follow the same equity and procurement guardrails NYC expects.
District leaders should pair tool rollouts with equity-focused AI initiatives for NYC education, embed practical retraining steps for NYC educators so staff can design with new workflows instead of being displaced, and evaluate vendor claims against current market trends and VC funding shaping local edtech adoption.
When design tools are paired with policy-aligned training, schools can convert routine prep into extra small-group minutes and culturally responsive materials for diverse NYC classrooms.
Administrative automation & operational agents (playlab.ai)
Administrative automation in NYC districts can start small and stay ethical by using community‑driven labs to prototype operational agents: Playlab's hands‑on workshops let educators and leaders build and test bots that draft empathetic attendance nudges, summarize incident reports, or automate routine scheduling and report generation in days rather than months (Playlab AI education workshops and prototyping); those prototypes should follow Playlab's own implementation guidance on privacy, bias mitigation, and school‑wide rollout so pilots integrate with district procurement and FERPA/COPPA expectations (Playlab AI implementation guidelines for schools).
Combine those lightweight prototypes with proven attendance automation to move from prototype to impact - attendance platforms and features can automate truancy notices and analytics so teams spend less time on paperwork and more on targeted interventions (see attendance automation examples like NudgeK12 attendance automation features).
The practical payoff for NYC: small, school‑built agents that free counselors and clerks for student contact work while conforming to district privacy and equity rules.
Administrative use case | How Playlab helps |
---|---|
Attendance nudges & outreach | Prototype empathetic nudge apps and templates, then pair with district attendance platforms |
Scheduling, reports, paperwork | Build small automation agents to draft and summarize routine documents, saving staff time |
Responsible rollout | Follow Playlab guidelines for privacy, PD, pilots, and community engagement |
“Everything I do is operating under the belief that all students CAN, with the right support.” - LeAnita Garner, Instructional Coach
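A first attendance-nudge prototype can be as small as a template function that keeps all merge fields inside district systems so no student data ever reaches a model. This is a toy sketch, not Playlab's API; the school name and wording are placeholders:

```python
def attendance_nudge(student_first, absences, school="PS 123"):
    """Draft an empathetic attendance nudge for a caregiver.

    Toy template (school name and copy are illustrative): the merge
    fields are filled locally, so no PII is sent to an AI model.
    """
    return (
        f"Hi, this is {school}. We noticed {student_first} has missed "
        f"{absences} day(s) recently, and we miss them! Is anything "
        "making it hard to get to school? Reply here to connect with "
        "our family outreach team."
    )

msg = attendance_nudge("Jordan", 3)
```

In practice a Playlab-style bot would draft the copy and a clerk would approve it; the key design choice shown here is that personalization happens after the AI step, inside district systems.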
Mental health & student support tools (Panorama Solara / TEAMMAIT-inspired)
AI-driven mental‑health tools can help New York City schools triage scarce counseling resources, meet students where they are, and surface concerns adults might otherwise miss. A large EdSurge analysis of more than 250,000 student–bot messages across 19 states found the same top stressors - balancing extracurriculars and school, sleep struggles, loneliness, test anxiety, and procrastination - and showed chatbots can prompt students to disclose high‑risk thoughts (about 2% of conversations were high risk, with roughly 38% of those admitting suicidal ideation) and increase follow‑up with adults (41% of students shared chat summaries with a counselor).
Pairing those insights with district monitoring and triage platforms helps overwhelmed staff prioritize cases - Securly notes many counselors manage roughly 385 students each - while clinical safeguards (evidence‑based training, crisis protocols, human escalation, and careful data governance) must remain non‑negotiable.
Pilot designs for NYC should treat bots as workforce multipliers and data sources for program planning, not replacements for clinicians, and follow the APA and school‑safety guidance embedded in district procurement and privacy reviews; see the EdSurge report on what students tell AI and Securly's overview of AI supports for resource‑constrained schools for practical implementation points.
Top 10 chat topics (students) |
---|
1. Balancing extracurricular activities and school |
2. Sleep struggles |
3. Finding a relationship / loneliness |
4. Interpersonal conflict |
5. Lack of motivation |
6. Test anxiety |
7. Focus and procrastination |
8. How to reach out for support |
9. Having a bad day |
10. Poor grades |
“I hope we move the conversation away from telling kids what they struggle with to being a partner. It's, ‘I know you know you're struggling. How are you dealing with it?' and not just a top down, ‘I know you're not sleeping.'” - Elsa Friis, Alongside
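For capacity planning, the rates cited above translate into concrete escalation volumes per counselor. The back-of-envelope sketch below uses the article's figures as illustrative point estimates (the conversations-per-student assumption is hypothetical, not from the source):

```python
STUDENTS_PER_COUNSELOR = 385   # Securly's cited average caseload
HIGH_RISK_RATE = 0.02          # ~2% of student-bot conversations

def expected_escalations(active_students, convs_per_student=1):
    """Estimate high-risk conversations a caseload may surface.

    Planning arithmetic only; convs_per_student is an assumed input,
    and every flagged conversation still requires human clinical review.
    """
    return active_students * convs_per_student * HIGH_RISK_RATE

# One full caseload, one conversation per student: ~7.7 high-risk chats.
per_caseload = expected_escalations(STUDENTS_PER_COUNSELOR)
```

Even at one conversation per student, a single counselor's caseload could surface several high-risk disclosures, which is why crisis protocols and human escalation paths must be in place before any pilot launches.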
Multilingual & accessibility supports (DeepL and Speechify)
Multilingual and accessibility supports in NYC classrooms gain practical traction when tools handle whole documents, speech, and tone as easily as short messages. DeepL now translates text, PDFs, .docx and .pptx files and offers style/tone presets and an API that districts can integrate into parent‑communication workflows and IEP materials to reach Spanish, Chinese, Arabic and other families without lengthy manual rework. DeepL's July 2025 New York Tech Hub announcement highlights live speech translation updates that matter for multilingual parent conferences and school events (DeepL July 2025 New York Tech Hub press release), while independent reviews report high accuracy (about 89% in a Centus study cited by Smartling) - a useful point for districts that must preserve nuance in legal or special‑education texts (Smartling DeepL accuracy review).
For districts concerned about privacy and auditability, DeepL Pro advertises unlimited translation with “maximum data security,” letting schools pilot multilingual outreach and accessible course materials with documented quality and shorter turnaround than human‑only workflows (DeepL Translator Pro security and features).
Capability | Classroom or district benefit |
---|---|
Text & document translation (.pdf, .docx, .pptx) | Fast, formatted parent letters, IEPs, slide decks |
Live speech translation | Real‑time multilingual meetings and conferences |
Style/tone presets & DeepL Write | Accessible, culturally attuned communications |
DeepL Pro (security features) | Districts can pilot with stronger data protections |
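A district integration of the translation API mentioned above can separate payload construction from the network call, so every request can be logged and audited before any text leaves district systems. The sketch below follows DeepL's public REST API (free-tier endpoint, `DeepL-Auth-Key` header, `text` and `target_lang` parameters); the function name and key are placeholders:

```python
import json

DEEPL_URL = "https://api-free.deepl.com/v2/translate"  # free-tier endpoint

def build_translate_request(texts, target_lang, api_key):
    """Assemble URL, headers, and JSON body for a DeepL translate call.

    Illustrative wrapper: returning the pieces (rather than POSTing
    directly, e.g. via requests.post(url, headers=headers, data=body))
    lets a district log and review payloads before sending.
    """
    headers = {
        "Authorization": f"DeepL-Auth-Key {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"text": list(texts), "target_lang": target_lang})
    return DEEPL_URL, headers, body

# Example: a parent letter translated to Spanish (key is a placeholder).
url, headers, body = build_translate_request(
    ["School closes early on Friday."], "ES", "YOUR-KEY"
)
```

Document translation (.docx, .pptx) uses a separate upload endpoint, so a real pilot would add a parallel helper for files and route both through the district's data-privacy review.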
Research, literature synthesis & teacher professional learning (Perplexity / Johns Hopkins Agent Lab methods)
Rigorous, transparent evidence synthesis can reshape NYC teacher professional learning by turning scattered pilot results into clear implementation guidance: Yale's Education Collaboratory has advanced pre-registered, fully transparent meta‑analysis methods - including the first Registered Report of a meta‑analysis published in Child Development - that set a reproducible standard for weighing intervention effects and implementation features (Yale Education Collaboratory evidence synthesis methods).
Complementary work from a Local Evidence Synthesis on Teacher Learning synthesizes 62 studies to show what actually changes classroom practice: collaborative research, teacher agency, attention to emotional well‑being, and formative instructional practices produce durable teacher learning (local evidence synthesis on teacher learning (62 studies)).
The so‑what for NYC: adopt pre‑registered synthesis and fund school‑based collaborative inquiries that measure both pedagogical fidelity and staff well‑being, then align district PD investments to those high‑leverage practices; practical retraining and job‑focused pathways can translate these findings into classroom routines (practical retraining and job-focused pathways for NYC educators).
Source | Key contribution for NYC PD |
---|---|
Yale Education Collaboratory | Pre‑registered, transparent meta‑analysis methods for reproducible evidence and policy alignment |
Local Evidence Synthesis on Teacher Learning | 62‑study synthesis: collaborative inquiry, teacher agency, well‑being, and formative practices drive teacher learning |
Creative arts & career-preparation workflows (Canva, Adobe Express, Suno)
Creative-arts and career-prep workflows in NYC classrooms gain traction when visual and generative tools cut busywork and let students focus on craft. Adobe Express's commercially safe generative AI, 220,000+ templates, bulk resize, and branding kits let teachers and CTE programs produce polished portfolios, audition flyers, and employer-ready one-pagers far faster - Adobe cites a 70% faster time to market for clients - while built-in PDF editing and Photoshop/Illustrator sync reduce file wrangling during portfolio review cycles; see Adobe Express's feature overview and a practical deep-dive on integrating the app into school pipelines in the Adobe Max session on creative workflows.
District leaders should pair these tool capabilities with local equity strategies and retraining so that savings on formatting translate into more lab time, coached revisions, and industry-aligned artifacts for graduates (review market trends and VC funding shaping NYC edtech adoption for procurement context).
“I love using Adobe Express to create flyers for my small business. All the tools in Adobe Express, such as the different fonts and graphics help bring my vision to life.” - Erica X., Owner at Photoshoots by Erica
Conclusion: Next steps for NYC teachers and districts
Pilot with policy first, scale with people. Start every classroom or admin pilot by routing vendors through the NYC DOE Data Privacy and Security Compliance Process (ERMA approval, vendor AI disclosure, and explicit prohibitions on using NYCPS PII to train models), engage the city's K12 AI Policy Lab to co‑design local guardrails guided by EDSAFE benchmarks, and require human‑in‑the‑loop workflows, audit logs, and pre-registered evaluation plans so outcomes - not marketing - drive procurement decisions.
Comptroller Lander's call to pause the $1.9M reading‑AI contract reinforces the need for short, measurable pilots that include teacher training and clear privacy contracts; pair those pilots with job‑focused upskilling (for example, cohort programs that teach prompt writing and classroom AI workflows) so educators can operate, assess, and contest AI outputs before tools reach students.
Small, transparent pilots that meet ERMA rules and invest in teacher capacity give NYC districts a path to scale real gains in instructional time and equity without exposing student data or buying unproven systems.
NYC DOE Data Privacy & Security Compliance Process, NYC K12 AI Policy Lab, AI Essentials for Work bootcamp (Nucamp)
Program | Length | Early-bird cost | Registration Link |
---|---|---|---|
AI Essentials for Work | 15 Weeks | $3,582 | Register for AI Essentials for Work (Nucamp) |
Frequently Asked Questions
What are the top AI use cases and prompts recommended for New York City schools?
The article highlights ten practical, policy‑aligned AI use cases and classroom prompts: personalized learning pathway generation from assessment data, AI tutoring/virtual assistants (e.g., Khanmigo), automated assessment generation and grading (e.g., Eklavvya), AI‑assisted feedback and revision workflows (e.g., Perplexity), content creation and course design (Canva, Gamma), administrative automation/operational agents (Playlab prototypes), mental health and student support triage tools, multilingual and accessibility supports (DeepL, Speechify), research and evidence synthesis for teacher professional learning, and creative arts/career‑prep workflows (Adobe Express, Suno). Each use case is selected for safety, explainability, and fit with NYC procurement and privacy workflows.
How were these prompts and use cases selected - what methodology and policy filters were used?
Selection prioritized safety, scalability, and classroom impact using three policy‑grounded filters: strict compliance with the NYC DOE Data Privacy and Security Compliance Process (vendors must disclose AI features, obtain ERMA approval, and may not train models on NYCPS PII); adoption of the Future of Privacy Forum's risk‑based, equity‑focused recommendations; and alignment with state transparency and impact‑assessment trends from the NCSL 2025 AI legislation summary. Candidate prompts had to (a) reduce teacher workload without increasing student‑data exposure, (b) produce explainable outputs teachers can review and contest, and (c) fit procurement and auditing workflows for district pilots.
What practical steps should NYC districts and schools take before piloting or scaling AI tools?
Start pilots with policy-first requirements: route vendors through the NYC DOE Data Privacy and Security Compliance Process (ERMA approval, vendor AI disclosures, prohibitions on using NYCPS PII for training), require human‑in‑the‑loop workflows, audit logs, pre‑registered evaluation plans, and public disclosures of pilot outcomes. Pair pilots with teacher training and job‑focused upskilling (e.g., prompt writing and classroom workflows), community engagement, and equity‑focused implementation guidance (EDSAFE/Future of Privacy Forum) so scaling follows measured impact rather than marketing claims.
What are the benefits and risks of specific AI tools cited (e.g., Khanmigo, Eklavvya, Perplexity, DeepL) in NYC classrooms?
Benefits: Khanmigo can expand personalized tutoring dosage and support teacher planning when paired with training and evaluation; Eklavvya reduces grading time (~31% per response, ~33% per answer sheet) while offering OCR and rubric‑based scoring with audit trails; Perplexity supports citation‑backed, research‑synthesized feedback and revision prompts; DeepL and Speechify enable fast, formatted multilingual and accessibility supports (document translation, live speech translation, style presets). Risks: potential student data exposure if vendors train on PII, opaque model outputs without explainability, overreliance on bots for clinical tasks (mental‑health tools must have human escalation), and procurement or equity mismatches - all mitigated by ERMA review, data contracts, human oversight, and pre‑registered evaluations.
How can districts measure impact and ensure equity when using AI in classrooms?
Use pre‑registered evidence synthesis and transparent evaluation methods (e.g., Registered Report approaches used in higher‑ed meta‑analyses), define measurable short pilots with clear outcomes (student learning gains, time saved for teachers, engagement metrics), require explainable outputs and audit logs, monitor differential impacts across multilingual and Title I populations, and align evaluation to local procurement and privacy rules. Invest in teacher PD tied to classroom fidelity, collect qualitative data on student and family experience, and only scale once pilots demonstrate efficacy, equitable outcomes, and compliance with NYC DOE data protections.
You may be interested in the following topics as well:
Instructional designers should pivot toward project-based and culturally responsive design that AI struggles to generate authentically.
Local educators are using adaptive tutoring platforms to personalize lessons and reduce long-term tutoring expenses.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning there, Ludo led the development of the first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful Corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.