This Month's Latest Tech News in Gainesville, FL - Wednesday April 30th 2025 Edition
Last Updated: May 1st 2025

Too Long; Didn't Read:
Gainesville, FL's tech scene is booming: UF launched HiPerGator AI 2.0 (504 NVIDIA B200 GPUs, $24M investment), expanding supercomputing access. Gleim Aviation revealed the first AI-powered Digital Pilot Examiner. Meta launched its Llama 4-powered Meta AI app. Surveys show roughly 40% of Americans have never used AI news tools. National AI policy debates and payment innovations are also reshaping the landscape.
Gainesville's tech sector is drawing national attention as the University of Florida's (UF) Master of Science in Applied Data Science and AI Systems programs kicked off 2025 with a rare guided tour of the HiPerGator supercomputing facility, one of the nation's premier academic computing centers.
This event showcased the university's thriving AI ecosystem, including hands-on demonstrations of NVIDIA A100 GPU servers and conversations on emerging AI ethics, echoing UF's leadership in advanced technology and educational community-building.
At the same time, a robust local debate continues as Florida students and professors respond to global estimates that AI could displace up to 300 million jobs, with many at UF advocating for nuanced oversight and the continued value of human creativity and judgment in an AI-driven world (see The Alligator's coverage of AI's workplace impact).
On the regulatory front, Gainesville's innovations mirror national trends, as landmark antitrust proceedings highlight how giants like Google leverage AI and massive data assets to influence competition - raising questions that will shape the ethical future of AI development, according to WUFT's in-depth analysis of the Google antitrust trial and AI leadership.
“It definitely sparked my interest to learn more about GPUs and their applications.”
Table of Contents
- Gleim Aviation Launches First AI Digital Pilot Examiner in Gainesville
- University of Florida Opens HiPerGator Supercomputer to Students for AI Research
- Meta Unveils Llama 4 AI App, Aiming to Rival ChatGPT
- Visa Teams with AI Giants for Automated Purchases via Credit Cards
- Microsoft Rethinks AI Data Centers Amid Industry Realignment
- Trump Administration Targets ‘Woke AI’, Sparking New Policy Debate
- Americans Remain Skeptical of AI-Generated News, Study Says
- Audience Wants Visible, Simple AI Rules in Reporting
- Poynter Institute Leads ‘Future of Facts’ Forum Against Misinformation
- International Experiments Signal Generative AI's Newsroom Future
- Conclusion: Gainesville's AI Leadership and the Road Ahead for Trustworthy Technology
- Frequently Asked Questions
Check out next:
Hear why Geoffrey Hinton warns on superintelligence risks and what experts say about urgent governance measures.
Gleim Aviation Launches First AI Digital Pilot Examiner in Gainesville
Gainesville-based Gleim Aviation has launched the industry's first AI-powered Digital Pilot Examiner (Gleim DPE™), an innovation set to transform pilot training and oral exam preparation nationwide.
Debuted at the 2025 SUN 'n FUN Aerospace Expo, the Gleim DPE leverages advanced conversational AI through its digital examiner, Otto™, allowing students to simulate the high-stakes FAA “checkride” oral exam anytime and anywhere.
This technology not only quizzes aspiring pilots like a real examiner but also helps them practice articulating knowledge aloud to reduce stress and improve their pass rates.
The product has received strong endorsements from industry veterans, with David St. George, Executive Director of the Society of Aviation and Flight Educators, stating,
“The big problem on flight tests is the inability to vocalize knowledge. The Gleim DPE will test knowledge and make sure that it is comprehensive. Get this tool and you're going to do a lot better.”
Key features and benefits are summarized below:
Feature | Benefit |
---|---|
AI-powered, conversational simulation | Authentic oral exam practice |
Accessible anywhere, any time | Flexible study options |
Proprietary Otto™ examiner | Personalized, interactive feedback |
Reduces test anxiety | Improved exam performance |
Backed by 50 years of expertise, Gleim's new tool is being hailed as a major advancement in aviation education.
Learn more about this ground-breaking launch at the official Gleim Digital Pilot Examiner page, explore in-depth product coverage and endorsements via GlobeNewswire's launch report, and see the innovative technology in action in this Gleim DPE demonstration video.
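For readers curious how a conversational examiner works at a high level, here is a minimal, purely illustrative sketch of an oral-exam practice loop in Python. It is not Gleim's implementation and does not use Otto™; the question bank and the self-check output are invented placeholders, and a real product would evaluate spoken answers with a conversational AI model rather than printing reference notes.

```python
# Purely illustrative sketch of an oral-exam practice loop; not Gleim's
# implementation. The question bank and scoring below are invented placeholders.
import random

QUESTION_BANK = {
    "What documents must be on board for a VFR flight?":
        "airworthiness certificate, registration, operating limitations, weight and balance",
    "What are the fuel requirements for day VFR?":
        "enough to reach the destination plus a 30-minute reserve",
}

def practice_session(rounds: int = 2) -> None:
    for question in random.sample(list(QUESTION_BANK), k=min(rounds, len(QUESTION_BANK))):
        # A real digital examiner would transcribe and evaluate a spoken answer
        # with a conversational model; here the student self-checks against notes.
        input(f"Examiner: {question}\nYour answer (typed here): ")
        print(f"Reference points: {QUESTION_BANK[question]}\n")

if __name__ == "__main__":
    practice_session()
```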
University of Florida Opens HiPerGator Supercomputer to Students for AI Research
This month, the University of Florida (UF) marks a major milestone by expanding access to its HiPerGator supercomputer, unveiling a fourth-generation system powered by NVIDIA's latest Blackwell GPUs for student-led artificial intelligence research and education.
Backed by a $24 million investment and one of the nation's first deployments of the NVIDIA DGX B200 SuperPOD, HiPerGator AI 2.0 is expected to be up to 10 times faster than its predecessor, providing nearly 60,000 CPU cores, 600 NVIDIA L4 GPUs, and 504 NVIDIA B200 GPUs to thousands of users across the Southeast through the unique Library HiPerGator Sponsorship Program (UF invests $24M in world-class supercomputer).
UF's long-standing partnership with NVIDIA - whose co-founder Chris Malachowsky is a UF alumnus - has fueled a rapid expansion of AI curriculum and new faculty, and positioned UF as America's first “AI University” (NVIDIA partnership powers UF research).
Students now leverage HiPerGator for innovative work in language translation, environmental science, and soil health, with tailored librarian support connecting undergraduates and graduate students from across 16 colleges (Student AI research with HiPerGator).
“The University of Florida's commitment to AI and high-performance computing has set a new standard for academic excellence. By being the first university in the country to adopt an NVIDIA DGX SuperPOD powered by Blackwell, UF will have an incredible AI supercomputing infrastructure to tackle the world's most pressing challenges.” - Chris Malachowsky, NVIDIA co-founder
In the past year alone, HiPerGator supported nearly 7,000 users and 33 million research requests, with its latest upgrade ensuring UF remains at the national frontier in AI-driven discovery and advanced workforce training.
Version | CPU Cores | GPUs | Performance Boost (vs. prev. gen) | Investment |
---|---|---|---|---|
HiPerGator 4th Gen | ~60,000 | 504 B200, 600 L4 | 7–10x | $24M |
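As a small illustration of what hands-on access means for students, the sketch below shows how a researcher might confirm which GPUs a cluster scheduler has assigned to a Python session. It is a generic PyTorch snippet, not UF-specific documentation; any HiPerGator scheduler, partition, or software-module details are assumptions deliberately left out.

```python
# Minimal sketch: confirming the GPUs allocated to a job from a Python session.
# Generic PyTorch calls only; HiPerGator-specific environment details are assumed.
import torch

def describe_allocated_gpus() -> None:
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU visible to this session.")
        return
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        # total_memory is reported in bytes; convert to GiB for readability
        print(f"GPU {idx}: {props.name}, {props.total_memory / 2**30:.0f} GiB")

if __name__ == "__main__":
    describe_allocated_gpus()
```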
Meta Unveils Llama 4 AI App, Aiming to Rival ChatGPT
Meta has formally entered the AI assistant race by launching its dedicated Meta AI app, powered by the advanced Llama 4 language model, and setting its sights on ChatGPT's dominance.
Available initially in the US, Canada, Australia, and New Zealand, the app stands out with its voice-first interface, multimodal support, and deep personalization using data from Facebook and Instagram accounts for context-aware, relevant responses.
Personalization features include remembering user interests - such as dietary restrictions or travel preferences - and providing custom recommendations across platforms, while a new “Discover” feed transforms AI interaction into a communal, social experience by showcasing sharable prompts and AI-generated content.
Integration with Ray-Ban Meta smart glasses enables seamless cross-device conversations, letting users start chats on their glasses and continue them on their mobile device or web interface.
Underpinning these innovations, Llama 4's multimodal “Scout” and “Maverick” variants outperform the competition in areas like coding, reasoning, and multilingual interaction; the variants are compared in the table below.
As Meta scales this ecosystem, it balances innovation with privacy, excluding the EU for now due to strict data regulations and prompting industry-wide debates about AI's role in social media and user data ethics.
As Mark Zuckerberg stated at LlamaCon,
“Part of the value around open source is that you can mix and match … as developers, you have the ability to take the best parts of the intelligence from different models and produce exactly what you need. … This is part of how I think open source basically passes in quality all the closed source [models]… it feels like sort of an unstoppable force.”
Learn more about the personalized integration of Meta's new assistant in Meta's official announcement of the Meta AI app, read a detailed breakdown of Llama 4's technical strengths and privacy approach at Meta's Llama 4 multimodal intelligence blog, and explore expert analysis of Meta's competitive strategy against rivals at TechCrunch's LlamaCon coverage and analysis.
Model | Active Parameters | Context Window | Key Capabilities | Benchmark Performance |
---|---|---|---|---|
Llama 4 Scout | 17B | 10M tokens | Multimodal, coding, reasoning, multilingual | Beats Gemma 3, Gemini 2.0 Flash-Lite, Mistral 3.1 |
Llama 4 Maverick | 17B (128 experts) | 1M tokens | Advanced reasoning, image inputs, high efficiency | Beats GPT-4o, Gemini 2.0 |
Llama 4 Behemoth | 288B | TBD | STEM, large-scale reasoning | Outperforms GPT-4.5, Claude Sonnet 3.7 |
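For developers who want to experiment, here is a hedged sketch of querying a Llama 4 variant through the Hugging Face transformers pipeline. The model identifier is an assumption and may differ from Meta's published naming, access generally requires accepting Meta's license on the model page, and a model of this size typically needs multi-GPU or hosted hardware rather than a laptop.

```python
# Illustrative sketch only: querying a Llama 4 variant via Hugging Face transformers.
# The model identifier below is an assumption; access and hardware requirements apply.
from transformers import pipeline

MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed identifier

def ask(prompt: str) -> str:
    generator = pipeline("text-generation", model=MODEL_ID, device_map="auto")
    messages = [{"role": "user", "content": prompt}]
    result = generator(messages, max_new_tokens=128)
    # Chat-format input returns the full transcript; the last message is the reply.
    return result[0]["generated_text"][-1]["content"]

if __name__ == "__main__":
    print(ask("Summarize what a 10M-token context window enables."))
```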
Visa Teams with AI Giants for Automated Purchases via Credit Cards
Visa is taking a bold step into the future of payments by partnering with leading artificial intelligence firms - including Anthropic, Microsoft, OpenAI, Perplexity, Mistral, IBM, Samsung, and Stripe - to launch AI-ready credit cards that enable autonomous shopping agents to make purchases for consumers within user-defined preferences and spending limits.
The initiative, dubbed “Intelligent Commerce,” addresses a critical gap for generative AI assistants, which have excelled at product discovery but struggled to complete payments without human intervention.
Now, pilot projects are empowering AI agents to securely handle tasks from routine grocery shopping to complex travel bookings, with consumers able to establish budgets and purchase parameters for their digital assistants.
As Visa's Chief Product and Strategy Officer Jack Forestell explains,
“We think this could be really important... transformational, on the order of magnitude of the advent of e-commerce itself.”
Collaborators like Perplexity highlight the enhanced personalization possible when AI agents access past transaction histories with user consent.
To provide a clear comparison of key players and features, see the following table:
Company | AI Agent Feature | Key Partners | Status |
---|---|---|---|
Visa | AI agents can shop and make purchases with tokenized, AI-ready credit cards | Anthropic, IBM, Microsoft, OpenAI, Perplexity, Samsung, Stripe, Mistral | Pilot underway, wider rollout 2026 |
Mastercard | Agent Pay integrates AI shopping assistants with payment credentials | Microsoft, IBM, Braintree, Checkout.com | Announced, scaling with partners |
PayPal | Agentic commerce via AI agents for online shopping | Not specified | In development |
Amazon | “Buy for Me” AI shopping assistant | Internal development | Testing with subset of users |
With U.S. credit card debt reaching $1.21 trillion at the end of 2024, Visa is prioritizing user control - ensuring humans set strict spending limits for their AI agents and emphasizing that fully autonomous payments will expand gradually.
For more details on Visa's strategy and industry collaboration, see the TechCrunch report on Visa and Mastercard's AI-powered shopping initiatives, the ZDNet analysis of AI-ready credit card features, and the AP News coverage of Visa's AI agent program.
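Visa has not published developer documentation in the coverage above, so the following Python sketch is purely conceptual: it illustrates the user-control model described here, where an AI shopping agent can only complete purchases inside limits the cardholder sets in advance. All class and function names are hypothetical and are not Visa's Intelligent Commerce API.

```python
# Conceptual sketch of the user-control model described above: an AI shopping
# agent may only spend within limits the cardholder sets in advance. All names
# are hypothetical; this is not Visa's Intelligent Commerce API.
from dataclasses import dataclass, field

@dataclass
class SpendingPolicy:
    monthly_limit: float                      # hard cap set by the cardholder
    allowed_categories: set[str] = field(default_factory=set)
    spent_this_month: float = 0.0

    def can_purchase(self, amount: float, category: str) -> bool:
        within_budget = self.spent_this_month + amount <= self.monthly_limit
        category_ok = not self.allowed_categories or category in self.allowed_categories
        return within_budget and category_ok

    def record(self, amount: float) -> None:
        self.spent_this_month += amount

def agent_checkout(policy: SpendingPolicy, amount: float, category: str) -> str:
    if not policy.can_purchase(amount, category):
        return "DECLINED: outside the cardholder's pre-set limits"
    policy.record(amount)
    return f"APPROVED: {category} purchase of ${amount:.2f} within policy"

if __name__ == "__main__":
    groceries = SpendingPolicy(monthly_limit=400.0, allowed_categories={"grocery"})
    print(agent_checkout(groceries, 62.50, "grocery"))   # approved
    print(agent_checkout(groceries, 120.00, "travel"))   # declined: category not allowed
```

Real deployments would add payment tokenization and merchant verification on the network side, which this sketch deliberately leaves out.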
Microsoft Rethinks AI Data Centers Amid Industry Realignment
Microsoft is strategically “slowing or pausing” the construction of several AI data centers, including a $1 billion project in Licking County, Ohio, as the company recalibrates its infrastructure investments amid surging demand, economic uncertainties, and evolving industry dynamics.
This adjustment comes as Microsoft maintains plans to invest over $80 billion globally in AI infrastructure for the current fiscal year and enters a new phase in its partnership with OpenAI - now allowing OpenAI to build its own computing capacity while leveraging Microsoft's Azure platform exclusively for critical API services.
Factors influencing these strategic pauses include higher construction costs due to tariffs, increased regulatory scrutiny, and the need to align data center growth with actual customer demand.
According to President of Microsoft Cloud Operations Noelle Walsh:
“In recent years, demand for our cloud and AI services grew more than we could have ever anticipated and to meet this opportunity, we began executing the largest and most ambitious infrastructure scaling project in our history. Any significant new endeavor at this size and scale requires agility and refinement as we learn and grow with our customers. What this means is that we are slowing or pausing some early-stage projects.”
Across the globe, Microsoft has doubled its data center capacity over the last three years, now operating over 350 facilities in at least 60 regions, yet is increasingly focusing on strategic enhancements rather than just expansion.
For further details on paused projects and the broader market context, read coverage by The Hill on Microsoft's adjustments to AI data center projects and TechCrunch's analysis of Microsoft's global data center plan pullbacks.
The industry-wide trend mirrors moves by Amazon and others, suggesting a careful balance between innovation and responsible scaling amid economic headwinds heading into the second half of 2025.
Trump Administration Targets ‘Woke AI’, Sparking New Policy Debate
The Trump Administration's recent moves to dismantle “woke AI” initiatives have reignited sharp debate over the future of algorithmic fairness and accountability in American technology policy.
Through new executive orders and policy memoranda, President Trump has revoked previous Biden-era directives that had prioritized reducing AI bias and promoting diversity, equity, and inclusion (DEI) in federal programs, with the Department of Commerce and agencies like the EEOC withdrawing guidance related to AI fairness and discrimination in hiring practices.
According to the Associated Press's coverage of this political transition in AI policy, the administration now instructs agencies and contractors to focus on “reducing ideological bias” in AI, reframing the issue as one of political neutrality rather than addressing well-documented cases of algorithmic discrimination - such as self-driving cars' difficulty detecting darker-skinned pedestrians and hiring algorithms that reinforce gender or racial stereotypes.
This split in policy has led many experts to raise concerns that halting work on algorithmic bias could result in technologies with a narrower, less inclusive perspective, particularly as state governments step in to mandate bias audits and impact assessments.
As summarized by Holland & Knight, federal policy now emphasizes the advancement of American-made, supposedly ideology-free AI, while state-level regulations enact annual bias audits and stricter oversight, as outlined in the table below:
Jurisdiction | Status | Key AI Inclusion Requirement |
---|---|---|
New York City | Enacted | Annual independent bias audits for hiring tools |
Colorado | Enacted (effective 2026) | High-risk AI standards, bias audit, risk management in employment |
California | Proposed | Impact/risk assessments before and through AI deployment |
As sociologist Ellis Monk, whose work led Google to overhaul its AI image recognition standards, cautioned,
“Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune … Could future funding for those projects be lowered? Absolutely.”
For deeper analysis on the policy shifts and new federal acquisition rules for AI, see Holland & Knight's insights into Trump Administration's AI Executive Order and memoranda, and explore legal developments in state-level AI regulation as detailed in Holland & Knight's briefing on AI in hiring.
The coming months will test whether a national approach centered on political neutrality can address persistent algorithmic bias - or if responsibility for AI fairness will now rest primarily with local and state governments.
Americans Remain Skeptical of AI-Generated News, Study Says
Recent nationwide studies reveal that nearly half of Americans are not interested in receiving news from generative artificial intelligence, while 20% say publishers should avoid using AI altogether, according to data gathered by the Poynter Institute and University of Minnesota.
The broad skepticism is rooted in anxieties over trust, job impacts, and the perceived loss of a “human element” in reporting. As shown in the survey below, most Americans have never used AI tools for news, and strong majorities demand clear disclosure and the development of ethical policies before AI integration into newsrooms.
High-news-literacy audiences - those most likely to pay for news - are particularly clear: more than 90% want explicit disclosures for AI-generated content. Despite these concerns, successful international use cases, such as Aftonbladet's EU chatbot and Ringier Axel Springer's travel assistant, demonstrate AI's potential to add value when paired with transparency and oversight.
As media expert Benjamin Toff summarized,
“The data suggests if you build it, do not expect overwhelming demand for it.”
News organizations are advised to approach innovation with transparency and caution, prioritizing ethical standards and maintaining readers' trust through clear labeling and human editorial review.
For a detailed breakdown, see the table below. For further insights, read Poynter's analysis on AI's uneasy fit in the American news diet, the Minnesota Journalism Center's full report on public skepticism toward newsroom AI, and Futurism's summary of persistent distrust in AI-generated news coverage.
Metric | Finding |
---|---|
Survey Dates | March 6-10, 2025 |
Sample Size | 1,128 U.S. adults |
Never Used AI Tools | ~40% |
Oppose Any AI in Newsrooms | 20% |
Low or No Confidence in Newsroom AI Use | ~60% |
Demand for Ethical Guidelines | 58% |
Disclosures “Very Important” | ~50%; 90% among high-literacy audiences |
Audience Wants Visible, Simple AI Rules in Reporting
As AI technology rapidly reshapes journalism, new research reveals a clear message from news audiences: they want visible, simple, and consistent rules when it comes to AI in reporting.
According to a nationally representative survey, 58% of Americans believe news organizations should establish clear ethical guidelines before experimenting with AI, and half consider disclosure of AI involvement in news “very important.” Calls for transparency aren't just theoretical - audiences want labeling practices like Meta's updated “AI info” labels for manipulated media, and prefer clear, on-content notices or even universal symbols making it obvious when content is AI-generated or AI-edited.
A Poynter Institute analysis underscores that nearly all audiences desire easy-to-understand AI policies, expecting human oversight and disclosure of the mix between automated and human-authored news.
Yet, skepticism remains high - roughly 60% express low or no confidence in newsroom AI use, and only 20% use such tools regularly. These concerns have led to evolving industry standards, such as those shaped by the Associated Press and other leading outlets, which limit AI-generated content to non-publishable roles unless its creation is central to the news story itself, reinforcing human accountability and accuracy (AP's AI guidelines and newsroom standards).
As summarized in recent research:
Statistic | Key Finding |
---|---|
Daily/weekly AI use | 20% |
Never used AI | ~40% |
Low/no confidence in AI newsroom use | ~60% |
Importance of AI disclosure | ~50% “very important” |
Support for ethical AI guidelines | 58% |
“Audiences desire simple, clear, and transparent policies (e.g., bullet points vs. complex texts) ... Disclosure of the percentage of human-generated vs. AI-generated/edited content is expected.”
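To make the survey findings concrete, here is a small, hypothetical sketch of the kind of simple, on-content disclosure audiences say they want: a per-story record of whether and how AI was used, rendered as a one-line label. The field names are illustrative only and do not reflect an industry standard such as the AP's guidelines.

```python
# Conceptual sketch of a simple, on-content AI-use disclosure for a news story.
# Field names are hypothetical, not an industry standard.
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    ai_assisted: bool
    uses: tuple[str, ...]          # e.g. ("translation", "headline suggestions")
    human_reviewed: bool

    def label(self) -> str:
        if not self.ai_assisted:
            return "No AI tools were used in producing this story."
        review = "reviewed by an editor" if self.human_reviewed else "not independently reviewed"
        return f"AI assisted with {', '.join(self.uses)}; the result was {review}."

if __name__ == "__main__":
    note = AIDisclosure(ai_assisted=True, uses=("translation", "summarization"), human_reviewed=True)
    print(note.label())
```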
Poynter Institute Leads ‘Future of Facts' Forum Against Misinformation
The Poynter Institute is taking a national lead in combating misinformation with its “Future of Facts Online” forum, set for May 6, 2025, at its St. Petersburg headquarters.
This timely event will delve into the growing threats to online trust, such as AI-driven content, the declining role of fact-checking on tech platforms, and how financial incentives shape what news the public sees.
Attendees will hear firsthand from top voices in journalism, including Drew Harwell of The Washington Post, MediaWise director Alex Mahadevan, and PolitiFact Editor-in-Chief Katie Sanders, all dedicated to equipping consumers with the tools to critically assess digital information.
According to Poynter's Brittani Kollar,
“Our interactive exhibit demonstrates that present-day challenges have parallels in the past that we can learn from and apply to our lives today.”
The Institute's ongoing initiatives, from the MediaWise traveling exhibit to the recently announced OnPoynt - Values Rising report, highlight how journalism is adapting with new digital tools and AI, prioritizing community relevance and trustworthy reporting.
For details on the event and how to participate, see the official event announcement from Poynter and view their calendar of upcoming initiatives on Poynter's events page.
The Poynter Institute's steadfast mission - educating journalists, leading digital media literacy, and upholding ethical standards - continues to empower both newsrooms and consumers in the fight for facts in an AI-driven world.
International Experiments Signal Generative AI's Newsroom Future
International media experiments are signaling a pivotal shift in how generative AI shapes the newsroom of the future. Insights from the 2025 EBU News Report highlight the strategic integration of AI, balancing newsroom efficiencies with the imperative to maintain public trust and creativity.
As Dr. Alexandra Borchardt notes,
“As the technology races ahead, there's a mismatch with media organizations embracing some AI solutions while being wary of implications for accuracy, integrity, public trust, and legitimacy in a flood of AI-generated content. Newsrooms are becoming more strategic about how to bring staff along, audience reactions, effects on creativity, and how AI might help journalism flourish.”
At events like the Nordic AI in Media Summit, publishers are moving from pilot projects to fully embedded AI workflows, with tools like Djinn and Watchdog slashing document research time and AI agents automating routine content creation and curation (Nordic AI in Media Summit 2025 insights).
However, adoption remains measured, as a study by Tietoevry Create found that only 7% of Nordic companies report widespread AI use among employees, and just 17% perceive a significant business impact so far (Tietoevry Create AI business impact study).
The evolving landscape requires newsrooms to foster AI literacy and responsible use while ensuring that human judgment and community connections remain at the core of journalism.
AI Adoption Metric | % of Respondents (Nordics) |
---|---|
Early phase of implementation | 52% |
Widespread adoption by employees | 7% |
Significant business impact | 17% |
Conclusion: Gainesville's AI Leadership and the Road Ahead for Trustworthy Technology
As Gainesville cements its place at the forefront of artificial intelligence innovation, the University of Florida's investment in HiPerGator AI 2.0 marks a pivotal moment not just for the city but for the national landscape of trustworthy technology.
Powered by 504 of NVIDIA's new Blackwell GPUs across 63 DGX B200 systems, HiPerGator is now among the fastest university supercomputers in the nation - serving thousands of UF students and faculty across more than 230 data science and AI courses (HiPerGator AI 2.0 launch details).
This significant leap builds on a decade-long evolution in Gainesville, ensuring robust support for interdisciplinary research and initiatives like the GatorTronGPT medical AI and cutting-edge workshops such as the AI4SC for superconductivity (HiPerGator supercomputing impact).
Yet as local research capacity explodes and global AI ventures attract unprecedented funding - over $60 billion in Q1 2025 alone, with US startups dominating the late-stage market - the broader AI industry faces mounting scrutiny over governance and ethical guardrails.
The debate over OpenAI's for-profit restructuring exemplifies this high-stakes tension: as one expert cautions,
“They're proposing to disable that off-switch,”
highlighting the need for clear oversight and public accountability (OpenAI governance controversy).
Gainesville's trajectory demonstrates that real leadership means pairing world-class AI infrastructure with unwavering commitments to transparency, ethics, and education, setting a benchmark for trustworthy technology development in the years ahead.
Frequently Asked Questions
What are the latest AI advancements at the University of Florida in Gainesville, FL?
The University of Florida (UF) has expanded access to its HiPerGator supercomputer with the unveiling of the fourth-generation system powered by NVIDIA Blackwell GPUs. The new HiPerGator AI 2.0 is expected to be up to 10 times faster than its predecessor and supports thousands of students and faculty in AI research and education. UF has also integrated AI deeply into its curriculum, reinforcing its positioning as the country's first 'AI University.'
What innovation did Gleim Aviation launch in Gainesville this month?
Gleim Aviation introduced the first AI-powered Digital Pilot Examiner, known as Gleim DPE™. This tool uses a conversational AI named Otto™ to simulate FAA oral exams for pilot training. It is designed to enhance knowledge articulation, reduce test anxiety, and improve pass rates by providing authentic practice and personalized feedback accessible anytime, anywhere.
What is Meta's new Llama 4 AI app and how does it compare to ChatGPT?
Meta has launched a new AI assistant app powered by the Llama 4 language model, aiming to compete with ChatGPT. Features include a voice-first interface, multimodal capabilities, deep personalization using Facebook and Instagram data, and integration with Ray-Ban Meta smart glasses. Early benchmarking shows Llama 4 variants outperforming several peer models like GPT-4o and Gemini 2.0 in coding, reasoning, and multilingual tasks.
How is Visa collaborating with AI companies to change the future of payments?
Visa is partnering with major AI companies - including Anthropic, Microsoft, OpenAI, IBM, and others - to launch AI-ready credit cards that enable autonomous AI shopping agents to make purchases within user-defined preferences and spending limits. This initiative, called 'Intelligent Commerce,' is in pilot stages and is designed to securely automate shopping and payments while giving consumers full control over setting budgets and permissions.
How do Americans feel about AI-generated news and what standards are they demanding?
Surveys indicate that nearly half of Americans are uninterested or skeptical about receiving news generated by AI, with 20% opposing its use in newsrooms entirely. A significant majority - especially high-literacy audiences - demand clear disclosures and ethical guidelines for AI-generated content, emphasizing transparency, simple labeling, and ongoing human oversight to maintain trust in journalism.
You may be interested in the following topics as well:
Step onto the set of a next-generation film and uncover AI-powered local history with Crane Creek.
Learn why Bonnet Springs Park's national recognition signals a new era of tech-driven urban spaces in Lakeland.
Be the first to learn how AI-powered smart traffic lights in St. Petersburg promise safer and more efficient commutes for everyone.
Uncover how Florida schools' AI language pilot programs are transforming classrooms and breaking down language barriers across the state.
Read how Orlando hospitals accelerate AI adoption with cutting-edge clinical integration spurred by new AMA policies.
Uncover how the city's partnerships fueling a tech ecosystem are creating more jobs and investments for the future.
Learn about the gap in local tech education programs and what it means for aspiring developers in the region.
See how Barry University launches Ethics & Anti-Fraud AI Center to protect Miamians from the latest AI-driven scams.
Ludo Fourrage
Founder and CEO
Ludovic (Ludo) Fourrage is an education industry veteran, named in 2017 as a Learning Technology Leader by Training Magazine. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space. As Senior Director of Digital Learning at Microsoft, he led the development of the first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.