Top 10 AI Prompts and Use Cases in the Government Industry in Portland

By Ludo Fourrage

Last Updated: August 25th 2025

City of Portland staff using GenAI tools for permitting and public engagement, illustrated with permit forms and chatbot icons.

Too Long; Didn't Read:

Portland's government AI pilots trained chatbots and prompt libraries on 2,400+ real help‑desk interactions plus ~200 synthetic examples, improving 15‑minute permit booking accuracy and staff confidence while producing benchmarking workflows, reusable toolkits, human‑in‑the‑loop review, and measurable pass rates.

Portland is fast becoming a practical testbed for government AI prompts: the City's Digital Services team partnered with US Digital Response to pilot a generative AI chatbot that helped residents stop booking the wrong 15‑minute permitting appointments (many people were “booking multiple, different kinds of appointments just to cover their bases”). Trained on over 2,400 real help‑desk interactions and about 200 synthetic examples to improve accuracy and staff confidence, the project paired hands‑on prompt work with human‑centered design and produced a reusable toolkit for future pilots (see the City's pilot writeup and the Smart City PDX ADS work on responsible use).

These Portland experiments - and national guidance on barriers from MetroLab - show why practical skills like writing effective prompts and evaluating outputs matter, which is exactly what the AI Essentials for Work bootcamp teaches for workplace AI adoption.

Program: AI Essentials for Work
Length: 15 weeks
Courses: AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills
Cost: $3,582 early bird / $3,942 after
Registration: Register for AI Essentials for Work (15-week bootcamp)

“If your content is confusing or conflicting or poorly structured, AI doesn't have a solid foundation to work from.” - Evan Bowers

Table of Contents

  • Methodology - How we built this list
  • Smarter Permitting Assistant - City of Portland Digital Services
  • Chatbot Triage for Permit Help Desk - US Digital Response (USDR)
  • Synthetic Training-Data Generation - Portland Digital Services
  • Prompt-Edit Collaboration Workflow - US Digital Response (USDR) & Christopher Fan
  • Benchmarks & Evaluation Prompts - Portland Reusable Toolkit
  • Human-Centered Content-Cleaning Prompts - Evan Bowers / Portland Digital Services
  • Responsible-AI Compliance Prompts - InnovateUS & NASPO resources
  • Intelligent Document Processing Prompts - Sergio Ortega / AWS reference
  • Public Engagement Synthesis Prompts - InnovateUS workshops / Eric Gordon
  • Literacy & Accessibility Prompts - Portland Digital Services accessibility efforts
  • Conclusion - Practical next steps for Portland agencies
  • Frequently Asked Questions

Methodology - How we built this list

This list was built from the hands‑on practices Portland's Digital Services team and partners shared publicly: scoping interviews with permitting technicians, analysis of more than 2,400 real help‑desk interactions, and the creation of roughly 200 SME‑labeled synthetic examples that served as the “source of truth” for training and testing the chatbot; details of that pipeline and the pilot's internal Dialogflow prototype are documented in the City's pilot writeup and the InnovateUS session on Portland's GenAI permitting pilot.

Methodology highlights included resident and staff engagement to define failure modes (residents often booked multiple 15‑minute appointments “to cover their bases”), iterative prompt editing with team‑wide edit suggestions, embedding the prototype behind a login for controlled staff testing with built‑in feedback tools, and benchmarking approaches that fed a reusable toolkit of prompt libraries and evaluation methods for future Oregon pilots.

The result is a practical, replicable set of prompts and checks aimed at improving booking accuracy while preserving human oversight in public service workflows; watch the workshop recording and read the City writeup for the step‑by‑step playbook.

Smarter Permitting Assistant - City of Portland Digital Services

The City of Portland's “Smarter Permitting Assistant” turned a common friction point - residents booking multiple, different kinds of 15‑minute appointments “to cover their bases” - into a focused, human-centered pilot that combined user research with iterative prompt work and concrete data: the Digital Services team trained a Dialogflow prototype on more than 2,400 real help‑desk interactions plus roughly 200 SME‑labeled synthetic examples, embedded the bot behind a staff login with built‑in feedback and edit suggestions, and used prompts and benchmarks from that process to raise booking accuracy and staff confidence. Read the City of Portland Smarter Permitting Assistant pilot writeup for details, and the US Digital Response case study for how partners helped operationalize continuous evaluation and prompt libraries for reuse across Oregon agencies.

Chatbot Triage for Permit Help Desk - US Digital Response (USDR)

US Digital Response (USDR) helped turn Portland's permitting pain point into a disciplined chatbot‑triage workflow: pairing the City's Dialogflow prototype with a testing system of 177 real‑world scenarios and the dataset of 2,400 help‑desk interactions plus ~200 SME‑crafted synthetic examples, the team iteratively wrote and rewrote prompts, embedded the bot behind a staff login with built‑in feedback, and measured improvements in booking accuracy and staff confidence. Read the City's pilot writeup for the workshop context, and the USDR case study for how that controlled testing and prompt‑editing loop produced reusable prompt libraries and benchmarking methods that other Oregon agencies can adopt for safer, more reliable triage.
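To make that testing loop concrete, here is a minimal sketch of a scenario-based harness in Python: it assumes a hypothetical list of SME-labeled scenarios and a caller-supplied ask_chatbot function (a toy keyword stub stands in for the Dialogflow prototype) and reports the share of scenarios answered with the right appointment type. It illustrates the approach, not the City's actual test system.

```python
# Sketch of a triage test harness: score a chatbot against SME-labeled scenarios.
from typing import Callable

def run_triage_suite(scenarios: list[dict], ask_chatbot: Callable[[str], str]) -> float:
    """Return the fraction of scenarios answered with the expected appointment type."""
    correct = 0
    for s in scenarios:
        predicted = ask_chatbot(s["resident_question"]).strip().lower()
        if predicted == s["expected_appointment"].strip().lower():
            correct += 1
    return correct / len(scenarios) if scenarios else 0.0

# Toy stand-in for the real prototype, just to make the harness runnable end to end.
def keyword_stub(question: str) -> str:
    return "residential building permit" if "remodel" in question.lower() else "zoning question"

if __name__ == "__main__":
    sample = [
        {"resident_question": "I want to remodel my kitchen, which appointment do I book?",
         "expected_appointment": "Residential building permit"},
        {"resident_question": "Can I ask about zoning rules for a food cart?",
         "expected_appointment": "Zoning question"},
    ]
    print(f"Booking-accuracy pass rate: {run_triage_suite(sample, keyword_stub):.1%}")
```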

Synthetic Training-Data Generation - Portland Digital Services

Portland Digital Services leaned on synthetic training-data generation to fill the gaps that real help‑desk logs couldn't cover, augmenting the City's 2,400+ real interactions with roughly 200 SME‑crafted synthetic examples so the Dialogflow prototype could see rare, confusing booking patterns that otherwise never appear in training sets (for example, unusual combinations of permit questions that drove residents to “book multiple, different kinds of appointments”). This aligns with industry best practices around creating diverse, privacy‑preserving synthetic text and dialogue, validating fidelity with statistical and task‑specific tests, and mixing synthetic with real examples so models don't overfit to generation artifacts, as outlined in Digital Divide Data's guide to synthetic data generation and StateTech's overview of municipal use cases.

Those practices - clear objectives for generation, privacy‑by‑design safeguards, repeated validation, and thorough documentation of pipelines - help Portland keep models reliable and auditable while protecting constituent data, and they mirror federal interest in verified, privacy‑preserving synthetic solutions from DHS as cities and counties scale AI responsibly.

Synthetic data types and when to use them:

  • Partial synthetic data - protects sensitive fields within otherwise real datasets
  • Full synthetic data - used when no usable real data exists or to simulate entire scenarios
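As a rough illustration of the mixing practice described above, the sketch below blends real help-desk records with SME-labeled synthetic examples while capping the synthetic share and tagging provenance for later audits; the field names, the 10% cap, and the toy records are assumptions, not details from the Portland pipeline.

```python
# Sketch: blend real and synthetic training examples with a cap on the synthetic share.
import random

def blend_training_set(real: list[dict], synthetic: list[dict],
                       max_synthetic_share: float = 0.10, seed: int = 42) -> list[dict]:
    """Combine real and synthetic examples, limiting how much synthetic data enters training."""
    rng = random.Random(seed)
    # Largest synthetic count that keeps synthetic/(real + synthetic) <= max_synthetic_share.
    cap = int(len(real) * max_synthetic_share / (1 - max_synthetic_share))
    sampled_synth = rng.sample(synthetic, min(cap, len(synthetic)))
    blended = (
        [{**r, "source": "real"} for r in real] +
        [{**s, "source": "synthetic"} for s in sampled_synth]
    )
    rng.shuffle(blended)
    return blended

# Toy data sized like the pilot's dataset (2,400 real + ~200 synthetic examples).
real_logs = [{"question": "How do I book a deck permit appointment?", "intent": "building"}] * 2400
synthetic = [{"question": "I have a deck AND a sewer question, which slot?", "intent": "combined"}] * 200
training_set = blend_training_set(real_logs, synthetic)
print(sum(1 for ex in training_set if ex["source"] == "synthetic"), "synthetic examples included")
```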

“A synthetic dataset is only as valuable as its ability to support accurate, generalizable model performance.”

Prompt-Edit Collaboration Workflow - US Digital Response (USDR) & Christopher Fan

Portland's prompt‑edit collaboration workflow, built with USDR and City teams, treated prompts like living policy documents: a centralized repository with reusable templates, clear versioning, and a low‑friction way for any staffer to suggest edits so iterations happen quickly and transparently.

Tools and practices that emphasize organization and repeatability - organizing prompts into libraries and templates as recommended by the guide How to Organize AI Prompt Workflows, and using visual, team‑friendly authoring and evaluation flows such as the Azure Prompt Flow documentation - make it possible to run controlled A/B prompt tests, track changes, and compare variants against the 177 test scenarios USDR used.

The result is a human‑centered loop: propose edits, run them behind a staff login against representative cases, collect in‑tool feedback, and promote only the prompts that meet benchmark criteria - leaving a clear audit trail so future teams can see not just the final prompt but the decisions that led to it. The payoff is small but vivid: residents stop booking redundant appointments because someone on the team could press “suggest edit” and fix the wording that confused them.
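A minimal sketch of that “suggest edit” loop might look like the following: a small in-memory registry where any teammate can propose a prompt version, benchmark results get recorded against it, and promotion to production only happens past a pass-rate threshold. The class names, statuses, and the 0.90 threshold are illustrative assumptions, not the City's or USDR's actual tooling.

```python
# Sketch of a versioned prompt registry with a low-friction "suggest edit" workflow.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PromptVersion:
    text: str
    author: str
    status: str = "proposed"          # proposed -> tested -> production
    pass_rate: Optional[float] = None
    created: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

class PromptRegistry:
    def __init__(self) -> None:
        self.history: list[PromptVersion] = []   # full audit trail of every suggestion

    def suggest_edit(self, text: str, author: str) -> PromptVersion:
        """Anyone on the team can propose a new prompt wording."""
        version = PromptVersion(text=text, author=author)
        self.history.append(version)
        return version

    def record_benchmark(self, version: PromptVersion, pass_rate: float) -> None:
        version.pass_rate = pass_rate
        version.status = "tested"

    def promote(self, version: PromptVersion, threshold: float = 0.90) -> bool:
        """Promote to production only if the benchmark pass rate clears the threshold."""
        if version.pass_rate is not None and version.pass_rate >= threshold:
            version.status = "production"
            return True
        return False

registry = PromptRegistry()
draft = registry.suggest_edit("Ask which permit topic the resident needs before offering slots.", "staff@portland")
registry.record_benchmark(draft, pass_rate=0.93)
print("Promoted:", registry.promote(draft), "| audit trail length:", len(registry.history))
```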

“The key thing here is to have some place where anybody within the team could suggest edits to the prompt.” - Christopher Fan, USDR

Benchmarks & Evaluation Prompts - Portland Reusable Toolkit

Portland's reusable toolkit turns prompt writing from guesswork into repeatable engineering by pairing curated prompt libraries with systematic evaluation: teams run batch tests that pull from historical logs, SME-labeled cases, and generated synthetic examples, then score outputs against concrete criteria like JSON correctness, semantic similarity, and response quality, so a passing‑score threshold and an “accuracy” number live in the run history for easy audits. See Microsoft's batch‑testing guidance for how to upload test sets, define evaluation rules, and compare runs.

That empirical layer gets married to prompt‑governance practices - version control, access restrictions, monitoring, and layered system prompts - that treat prompts as critical control points rather than ad hoc text, as explained in VerityAI's governance writeup.

The practical payoff in Portland: a prompt edit either clears a 177‑case benchmark or it doesn't, so staff stop guessing and point to a measurable pass rate when they promote a prompt to production.

Benchmarks and tools, with their purpose:

  • Azure AI Builder batch testing prompts documentation - validate prompts across datasets; compute accuracy scores and track run history
  • Evaluation criteria - JSON correctness, semantic similarity, response quality; configurable passing‑score thresholds
  • VerityAI system prompt governance blog post - version control, access control, monitoring, and layered governance to enforce compliance
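To ground those criteria, here is a minimal scoring sketch: it checks JSON correctness with the standard library, uses token overlap as a crude stand-in for semantic similarity (a real pipeline would use an embedding model or the platform's built-in evaluators), and reports the pass rate against a configurable threshold. The threshold and sample cases are illustrative.

```python
# Sketch of batch evaluation: JSON correctness, a similarity proxy, and a pass-rate report.
import json

def json_correct(output: str) -> bool:
    """Does the model output parse as JSON at all?"""
    try:
        json.loads(output)
        return True
    except json.JSONDecodeError:
        return False

def similarity(a: str, b: str) -> float:
    """Token-overlap proxy for semantic similarity (swap in embeddings in practice)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def score_run(cases: list[dict], passing_score: float = 0.8) -> float:
    """Return the pass rate across a batch of {output, expected} cases."""
    passed = 0
    for case in cases:
        ok = json_correct(case["output"]) and similarity(case["output"], case["expected"]) >= passing_score
        passed += ok
    return passed / len(cases)

batch = [
    {"output": '{"appointment": "residential building permit"}',
     "expected": '{"appointment": "residential building permit"}'},
    {"output": "book a zoning slot",            # fails the JSON-correctness check
     "expected": '{"appointment": "zoning"}'},
]
print(f"Pass rate: {score_run(batch):.0%}")     # 50% for this toy batch
```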

Human-Centered Content-Cleaning Prompts - Evan Bowers / Portland Digital Services

Human-centered content‑cleaning prompts helped Portland Digital Services turn messy permit pages into clear, actionable guidance by treating content like data: teams ran quarterly audits, applied plain‑language edits, removed duplicate or outdated FAQs, and standardized labels so residents find the right appointment the first time instead of “booking multiple, different kinds of appointments.” This approach follows federal human‑centered design principles - see the Digital.gov human‑centered design guide - and practical web content tips like editing for plain language and pruning obsolete pages from the Spring Cleaning playbook, which notes a striking stat: 90% of data is never accessed again 90 days after it's stored.

Pairing content cleanup with data readiness practices (for example, standard formats and privacy checks highlighted in Getting Your Data Ready for AI) makes prompts more reliable, improves searchability through better metadata, and creates a small but vivid payoff: a visitor finds the exact permit instructions in one session, not four.
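As a rough sketch of what “treating content like data” can look like in practice, the snippet below flags FAQ entries that haven't been reviewed within a quarter, plus pairs of near-duplicate questions, for editors to prune or merge; the 90-day window and similarity cutoff are illustrative assumptions, not the City's audit rules.

```python
# Sketch of a quarterly content audit: flag stale FAQs and likely duplicate questions.
from datetime import date, timedelta
from difflib import SequenceMatcher

def audit_faqs(faqs: list[dict], max_age_days: int = 90, dup_threshold: float = 0.85) -> dict:
    """Return stale entries and likely duplicate pairs for human review."""
    today = date.today()
    stale = [f["question"] for f in faqs
             if today - f["last_reviewed"] > timedelta(days=max_age_days)]
    duplicates = []
    for i, a in enumerate(faqs):
        for b in faqs[i + 1:]:
            ratio = SequenceMatcher(None, a["question"].lower(), b["question"].lower()).ratio()
            if ratio >= dup_threshold:
                duplicates.append((a["question"], b["question"]))
    return {"stale": stale, "duplicates": duplicates}

faqs = [
    {"question": "How do I book a 15-minute permit appointment?", "last_reviewed": date(2024, 1, 10)},
    {"question": "How do I book a 15 minute permit appointment?", "last_reviewed": date(2025, 6, 2)},
]
print(audit_faqs(faqs))
```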

“Dispose of anything that does not fall into one of three categories: currently in use, needed for a limited period of time, or must be kept indefinitely.” - Marie Kondo

Responsible-AI Compliance Prompts - InnovateUS & NASPO resources

Oregon agencies looking to move from experiments to dependable operations can lean on practical, government‑focused compliance guidance from InnovateUS: workshops like “How to Write a Generative AI Policy for Your Jurisdiction” walk teams through a “responsible experimentation” approach used by cities such as Boston, while the broader Artificial Intelligence for the Public Sector workshop series from InnovateUS offers free, self‑paced modules (including a five‑part prompting framework) and targeted courses such as the AI procurement training developed with NASPO to help procurement and legal teams ask the right questions of vendors. Concrete, repeatable prompts for compliance center on three quick rules taught in these sessions - fact‑check and review all AI outputs, disclose when content is AI‑generated, and never include sensitive or private data in prompts - so policy stays usable rather than prohibitive.

The payoff is tangible: treat generative tools as everyday assistants (think “spreadsheet or spellcheck”), design a lightweight sandbox plus clear disclosure rules, and agencies get safer, auditable AI that Portland teams can plug into existing prompt libraries and bench tests without risking constituent trust.
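The three rules lend themselves to a lightweight pre-send check; the sketch below flags obvious sensitive-data patterns before a prompt goes out and appends a plain-language disclosure to AI-drafted text headed for human review. The regex patterns and disclosure wording are illustrative assumptions, not an official City guardrail.

```python
# Sketch of a pre-send compliance check: keep sensitive data out of prompts and disclose AI use.
import re

SENSITIVE_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found; an empty list means OK to send."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def add_disclosure(ai_output: str) -> str:
    """Append a plain-language disclosure before the draft goes to a human reviewer."""
    return ai_output + "\n\n[Draft generated with AI assistance; reviewed by City staff before release.]"

draft_prompt = "Summarize the permit question from resident jane.doe@example.com about her deck."
violations = check_prompt(draft_prompt)
if violations:
    print("Do not send; remove:", ", ".join(violations))   # flags the email address
else:
    print(add_disclosure("Summary: resident asks which appointment covers deck permits."))
```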

“[AI] is a tool,” Garces said. “Public servants are responsible for the good and the bad that comes with the usage of the tool. If Bard or ChatGPT puts out something that you use, it's you using it. You are going to be held responsible.”

Intelligent Document Processing Prompts - Sergio Ortega / AWS reference

Portland agencies facing stacks of multi‑page permit packets and legacy paper records can make a measurable dent in backlogs by adopting intelligent document processing (IDP) patterns from the AWS reference architecture - think Amazon Textract to pull text and tables, Amazon Comprehend to classify content, serverless Lambda orchestration, S3/SQS/SNS for scalable pipelines, DynamoDB for indexed storage, and Amazon A2I to route low‑confidence items to a human reviewer - so automation handles routine extraction while people focus on exceptions. The same “human in the loop” design also powers real‑time notifications and audit trails that residents and staff can trust (AWS Intelligent Document Processing reference architecture).

Federal examples underline the payoff: NARA's push to make billions of paper records searchable - using Textract and related tools to process census and handwritten files - shows how IDP can unlock archival and operational value for government teams.

Practical targets help teams avoid overreach - benchmarks like processing ~80% of documents without human touch are useful success criteria - so Portland pilots can balance speed, accuracy, and compliance when building prompt workflows for classification, redaction, routing, and review.
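A minimal sketch of that routing decision, assuming documents land in S3 and a hypothetical 90% confidence cutoff: Textract's detected lines are inspected, and anything below the bar goes to a human reviewer rather than straight-through processing. The example at the bottom uses canned blocks so the logic can be exercised without AWS credentials.

```python
# Sketch of confidence-based routing for scanned permit documents (human-in-the-loop).

def extract_blocks(bucket: str, key: str) -> list[dict]:
    """Run Textract text detection on an S3 object and return its blocks."""
    import boto3  # imported here so the routing demo below runs without AWS set up
    textract = boto3.client("textract")
    response = textract.detect_document_text(
        Document={"S3Object": {"Bucket": bucket, "Name": key}}
    )
    return response["Blocks"]

def route_document(blocks: list[dict], min_confidence: float = 90.0) -> str:
    """Return 'auto' when every detected line clears the confidence bar, else 'human_review'."""
    lines = [b for b in blocks if b.get("BlockType") == "LINE"]
    if not lines:
        return "human_review"
    return "auto" if min(b["Confidence"] for b in lines) >= min_confidence else "human_review"

# Canned Textract-style blocks: a clean printed line plus a shaky handwritten one.
sample_blocks = [
    {"BlockType": "LINE", "Text": "Residential building permit application", "Confidence": 99.1},
    {"BlockType": "LINE", "Text": "Handwritten note about setback variance", "Confidence": 71.4},
]
print(route_document(sample_blocks))  # -> human_review
```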

“We needed a solution that could not only learn different document types but also continuously learn the variations within each type and get better over time,” says Aaron Seamans, PharmaCord's VP of IT.

Public Engagement Synthesis Prompts - InnovateUS workshops / Eric Gordon

Public engagement synthesis prompts - taught in InnovateUS workshops such as the "Artificial Intelligence for the Public Sector" series and practical sessions like "AI Prompts Unleashed" - turn messy streams of resident input into usable insights for Oregon agencies, helping staff surface common concerns, spot service gaps, and produce plain‑language summaries that decision‑makers can act on instead of wading through thousands of raw comments. Deborah Stine's workshop specifically stresses the value of sharing evaluation lessons so communities see how their input changed outcomes, and Eric Gordon's "Towards Deep Listening" session reframes engagement as an information stream that AI can help synthesize rather than replace.

Because InnovateUS already partners with Oregon to deliver no‑cost training to state employees, these prompt patterns - templates for summarizing comments, clustering themes, and flagging equity or accessibility issues - are immediately practical for Portland teams aiming to scale civic listening without losing human judgment, so public servants get a crisp briefing, not a dizzying inbox.
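As one hedged example of such a template, the sketch below batches resident comments into a synthesis prompt that asks for themes, service gaps, and equity or accessibility flags; the wording, batch size, and the instruction to avoid invented details are illustrative choices, and the rendered prompt would go to whatever model an agency has approved.

```python
# Sketch of a comment-synthesis prompt builder for public engagement input.
SYNTHESIS_TEMPLATE = """You are summarizing public input for City of Portland staff.
Given the resident comments below:
1. List the 3-5 most common themes in plain language.
2. Note any service gaps or unmet needs mentioned.
3. Flag comments that raise equity, accessibility, or language-access concerns.
Do not invent details that are not in the comments.

Comments:
{comments}
"""

def build_synthesis_prompts(comments: list[str], batch_size: int = 50) -> list[str]:
    """Split comments into batches and render one synthesis prompt per batch."""
    prompts = []
    for start in range(0, len(comments), batch_size):
        batch = comments[start:start + batch_size]
        numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(batch))
        prompts.append(SYNTHESIS_TEMPLATE.format(comments=numbered))
    return prompts

sample = ["The permit office phone line is never answered.",
          "Please offer Spanish-language appointment instructions."]
print(build_synthesis_prompts(sample)[0])
```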

Literacy & Accessibility Prompts - Portland Digital Services accessibility efforts

Portland Digital Services pairs literacy‑focused prompts with plain‑language and accessibility practices so AI responses actually help residents get things done: prompts steer models to prefer short sentences, active voice, clear labels, and translation‑friendly phrasing so pages and chatbot replies match federal expectations like the Plain Writing Act and Digital.gov's plain language resources. Teams also bake in checks for multilingual clarity and Section 508‑style accessibility so summaries and form instructions work for people with limited English proficiency and users of assistive technologies.

These prompts sit alongside content audits and human review - the same playbook that encourages pruning outdated FAQs - so the payoff is concrete and memorable: a visitor finds the exact permit instructions in one session, not four.
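A small sketch of how a literacy prompt and a readability check can travel together: drafts whose average sentence length runs long get flagged and wrapped in a plain-language rewrite prompt. The 20-words-per-sentence cutoff and the prompt wording are illustrative assumptions, not Digital.gov or Section 508 requirements.

```python
# Sketch pairing a plain-language rewrite prompt with a crude readability heuristic.
import re

PLAIN_LANGUAGE_PROMPT = (
    "Rewrite the permit instructions below in plain language: short sentences, "
    "active voice, common words, and phrasing that translates cleanly. Keep every "
    "requirement; do not add new rules.\n\nText:\n{text}"
)

def average_sentence_length(text: str) -> float:
    """Average words per sentence, as a rough readability signal."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = sum(len(s.split()) for s in sentences)
    return words / len(sentences) if sentences else 0.0

def needs_plain_language_pass(text: str, max_words_per_sentence: float = 20.0) -> bool:
    return average_sentence_length(text) > max_words_per_sentence

draft = ("Applicants seeking to schedule an appointment pertaining to residential "
         "alterations must first ascertain the applicable permit classification prior "
         "to utilizing the online scheduling interface for the selection of a time slot.")
if needs_plain_language_pass(draft):
    print(PLAIN_LANGUAGE_PROMPT.format(text=draft))
```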

Oregon agencies can mirror statewide examples like Maryland's plain language initiative and executive order to scale training, measurement, and plain‑language governance across departments.

“to improve the effectiveness and accountability of federal agencies to the public by promoting clear government communication that the public can understand and use.”

Conclusion - Practical next steps for Portland agencies

Portland agencies ready to move from pilots to production can follow a tight, practical roadmap already emerging from local and national work: formalize an AI governance framework with cross‑bureau representation and public engagement (the City's Smart City PDX ADS work recommends policies, training, and meaningful community input), treat early projects as bounded pilots focused on high‑value, low‑risk use cases, and require human‑in‑the‑loop design, transparent benchmarking, and versioned prompt libraries so every change has an audit trail. Pair those steps with staff capacity building: use free, role‑specific training like the InnovateUS “Artificial Intelligence for the Public Sector” workshop series to teach safe prompt practices and impact assessment, and supplement with hands‑on skills courses (for example, the AI Essentials for Work bootcamp - Nucamp registration and syllabus) so teams can reliably vet outputs.

Start small, measure against clear success criteria, share results publicly, and prioritize plain‑language disclosures so Portland keeps experiments practical, auditable, and trusted by the communities they serve - so a resident finds the exact permit instructions in one session, not four.

Frequently Asked Questions

What were Portland's key AI use cases and outcomes in the permitting pilot?

Portland's Digital Services team, with US Digital Response, piloted a generative AI chatbot (Dialogflow prototype) for permitting help that was trained on over 2,400 real help‑desk interactions plus ~200 SME‑crafted synthetic examples. The pilot combined user research, iterative prompt editing, staff‑only testing with built‑in feedback, and benchmarked evaluation across 177 test scenarios. Outcomes included improved booking accuracy, increased staff confidence, a reusable prompt library and evaluation toolkit, and documented playbooks for responsible reuse across other agencies.

How did Portland use synthetic data and why was it important?

Portland augmented real help‑desk logs with roughly 200 SME‑labeled synthetic examples to expose rare or confusing booking patterns that didn't appear often enough in the real dataset. This mixed approach (partial/full synthetic when appropriate) improved model coverage while protecting privacy. Best practices used included clear generation objectives, privacy‑by‑design safeguards, validation against statistical and task‑specific tests, and combining synthetic with real data to avoid overfitting to synthetic artifacts.

What prompt governance, collaboration, and evaluation practices did Portland adopt?

Portland and USDR treated prompts as living policy documents: centralized prompt libraries with versioning, a low‑friction ‘suggest edits’ workflow so any staffer can propose changes, staff‑only A/B and benchmark testing, and a requirement that prompts clear a predefined benchmark (e.g., pass a 177‑case suite) before promotion. Evaluation criteria included JSON correctness, semantic similarity, and response quality; runs are tracked with pass thresholds, access controls, monitoring, and an audit trail for transparency and reuse.

What human‑centered and compliance safeguards were recommended for government AI?

Recommended safeguards include human‑in‑the‑loop design (route low‑confidence outputs to reviewers), content cleaning and plain‑language edits to improve prompt reliability, accessibility and multilingual checks, disclosure when content is AI‑generated, and never including sensitive/private data in prompts. Agencies should adopt lightweight sandboxes for experimentation, written responsible‑AI policies, training (e.g., InnovateUS workshops), and procurement/legal checks (NASPO guidance) so deployments remain auditable and preserve constituent trust.

Which practical next steps and success criteria should Portland agencies use to scale pilots?

Start with bounded, high‑value/low‑risk pilots; formalize cross‑bureau governance and public engagement; maintain versioned prompt libraries and benchmarked evaluation; require human review for exceptions; run regular content audits and accessibility checks; and invest in staff training (role‑specific workshops and hands‑on courses). Use measurable success criteria such as booking accuracy rates (e.g., pass threshold on test suites) and targets for document processing (e.g., ~80% automated processing without human touch) and publish results to build public trust.

Ludo Fourrage

Founder and CEO

Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft where he led innovation in the learning space. As the Senior Director of Digital Learning at this same company, Ludo led the development of the first of its kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful Corporate MOOC programs in partnership with top business schools and consulting organizations, i.e. INSEAD, Wharton, London Business School, and Accenture, to name a few. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.