The Complete Guide to Using AI in the Government Industry in Colombia in 2025

By Ludo Fourrage

Last Updated: September 6th 2025

Illustration of AI in Colombia government 2025: CONPES 4144, SIC guidance and public-sector AI innovation in Colombia

Too Long; Didn't Read:

Colombia's 2025 AI strategy centers on CONPES 4144 - 106 actions and ≈ COP 479 billion (~$479M) to build ethics, data infrastructure, talent and regional hubs - paired with a risk‑based Bill, SIC rules requiring privacy impact studies and heavy sanctions (fines, suspension up to 24 months).

Why AI matters for Colombia's government in 2025 is simple: policy and momentum are converging, and public servants must be ready. The CONPES 4144 national AI policy lays out 106 actions and a COP 479 billion investment to build ethics, data infrastructure, talent and risk mitigation across the state (CONPES 4144 national AI policy), while a Draft Bill submitted on 28 July 2025 proposes a risk-based legal framework and a Ministry of Science-led governance model to classify systems from prohibited to low risk (Draft Bill to Regulate AI in Colombia - introduced 28 July 2025).

With SIC guidance on privacy impact studies and mounting sanctions for noncompliance, practical skills - mapping AI by risk, running impact assessments, and writing safe prompts - are now core public‑sector duties; targeted training like Nucamp's AI Essentials for Work bootcamp (15 weeks) helps teams turn policy into responsible, deployable practice.

Description: Gain practical AI skills for any workplace; learn AI tools, effective prompts, and apply AI across business functions
Length: 15 Weeks
Cost (early bird): $3,582
Registration: Register for AI Essentials for Work (Nucamp)

Table of Contents

  • What is the national AI strategy in Colombia? CONPES 4144 explained
  • Colombia's regulatory landscape in 2025: laws, bills and authorities
  • Proposed risk-based AI model in Colombia: prohibited to low-risk systems
  • SIC External Directive 002 (2024) and data protection for AI in Colombia
  • How to use AI in government: practical steps for Colombian public teams
  • Building Colombia's AI ecosystem: talent, infrastructure and startups
  • Operational risks, gaps and compliance challenges for Colombia
  • Global context: which country aims to lead by 2030 and US AI regulation in 2025 - lessons for Colombia
  • Conclusion & checklist: next steps for beginners using AI in Colombia's government
  • Frequently Asked Questions

Check out next:

  • Colombia residents: jumpstart your AI journey and workplace relevance with Nucamp's bootcamp.

What is the national AI strategy in Colombia? CONPES 4144 explained


CONPES 4144 is Colombia's national AI roadmap: approved in February 2025, it frames AI as a productivity and inclusion engine through six strategic axes - Ethics & Governance; Data & Infrastructure; R+D+i; Capacity Development & Digital Talent; Risk Mitigation; and Use & Adoption - and commits a coordinated push of public and private action to 2030.

The policy defines 106 concrete actions and channels roughly COP 479 billion (about $479 million) to build regional AI hubs, strengthen data infrastructure, and expand skills so that SMEs and rural territories share the gains; it also uses incentives and co‑financing to attract investment and scale solutions across sectors.

For government teams, the plan is both a compliance checklist and an operational playbook: expect stronger governance rules, more cross‑ministerial projects (MinICT, MinCiencia, Education, Labor, Commerce and the DNP), and practical public uses - from landslide early‑warning systems in mountain towns to AI tools that speed up routine citizen services.

Read the official policy breakdown and independent analysis for implementation detail and timelines.

Approval date: February 2025
Budget: ≈ COP 479,273 million (≈ US$479 million)
Actions: 106 actions through 2030
Strategic axes: 6 pillars (ethics, data, R+D+i, talent, risk, adoption)

“The approval of CONPES 4144 reflects Colombia's commitment to the responsible adoption of emerging technologies, positioning the country at the forefront of innovation and digital transformation in the region.” - CONPES 4144 coverage


Colombia's regulatory landscape in 2025: laws, bills and authorities


Colombia's 2025 regulatory landscape for AI sits squarely on a mature personal‑data framework: Law 1581 (the Data Protection Law) and its implementing decrees (notably Decree 1377 and NRDB rules) set a consent‑based regime, special rules for sensitive data and children, mandatory privacy notices, and strict cross‑border transfer limits under Article 26 - so any government AI project that ingests health, biometric or political data must treat those inputs as high‑risk from day one (Colombia Law 1581 (Data Protection) overview).

Financial and credit data remain governed by Law 1266, adding a parallel compliance track for systems that touch citizen credit records. The Superintendence of Industry and Commerce (SIC) is the enforcement hub: public bodies are subject to the same habeas‑data rights as private actors, must report breaches to SIC (within 15 business days), register databases in the NRDB when required, and face graduated sanctions - including fines in the order of hundreds of thousands of dollars and operational suspensions - powerful enough to halt a pilot overnight (see SIC enforcement and country guidance).

The practical takeaway for public teams is clear: map data flows, document legal bases, register relevant databases, bake breach‑response into procurement, and assume SIC scrutiny will follow any AI deployment that processes personal data.

Core law: Law 1581 of 2012 - general data protection, consent, sensitive data, NRDB (Colombia Law 1581 (Data Protection) overview)
Supporting rules: Decree 1377 (implementing); Decrees 886/090 on NRDB registration (Colombia data protection laws and Decree 1377 overview)
Special regime: Law 1266 (financial/credit data) - separate rules and supervisors
Authority & enforcement: Superintendence of Industry and Commerce (SIC) - NRDB, breach notifications, fines and sanctions
Breach notification: Notify SIC within 15 business days of detection
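The 15-business-day notification window is easy to miscount by hand. As a minimal sketch only - it counts Monday to Friday and deliberately omits Colombian public holidays, which a real compliance calendar would have to add - a deadline calculator could look like this:

```python
from datetime import date, timedelta

def sic_notification_deadline(detected: date, business_days: int = 15) -> date:
    """Last day to notify SIC, counting Mon-Fri only (public holidays omitted)."""
    deadline = detected
    remaining = business_days
    while remaining > 0:
        deadline += timedelta(days=1)
        if deadline.weekday() < 5:  # Mon=0 .. Fri=4 are business days
            remaining -= 1
    return deadline

# A breach detected on Monday 1 September 2025 must be notified by 22 September 2025
print(sic_notification_deadline(date(2025, 9, 1)))  # 2025-09-22
```

Weekend detections roll forward naturally: a breach found on Saturday starts counting from the following Monday.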

Proposed risk-based AI model in Colombia: prohibited to low-risk systems


The Proposed Bill brought to Congress in 2025 builds a clear, risk‑based ladder for AI that matters for every public team: it sorts systems into four buckets - prohibited (critical risk), high‑risk, systems with specific transparency obligations, and low/minimal risk - so that the legal treatment follows the harm profile of the tool rather than the technology itself (see the Baker McKenzie draft bill analysis for AI regulation in Colombia).

Prohibited systems include those that undermine human dignity - think subliminal manipulation, social‑scoring or real‑time biometric ID by authorities - while high‑risk tools (education, justice, health, public services) must satisfy strict data‑quality rules, human oversight, mandatory impact assessments, registration and detailed documentation; limited‑risk systems (chatbots, recommendation engines, unlabeled deepfakes) carry disclosure and deactivation duties; and low‑risk tools are steered by ethics and good‑practice guidance (summary from the White & Case AI Watch summary on AI governance).

Governance-wise the Ministry of Science is named lead, with the SIC retaining data‑protection oversight, and noncompliance can trigger heavy sanctions - from fines up to 3,000 monthly minimum wages to suspension or closure of AI activities for up to 24 months - so classifying systems correctly and keeping impact assessments and traceable records isn't paperwork, it's a compliance lifeline.

Prohibited / Critical: Unacceptable risks to rights or security - ban on use (except strict exceptions)
High risk: Significant impact on health, safety or rights - impact assessments, data quality, human oversight, registration
Transparency obligations (limited risk): Interacts with people or generates realistic content - inform users, disclose AI nature, allow deactivation
Low / Minimal risk: Minimal societal or individual harm - follow ethical guidelines and good practices
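The ladder above lends itself to a simple intake triage step. The sketch below is emphatically not a legal test - the keyword and domain lists are illustrative assumptions, and real classification needs legal review - but it shows how a team might record a first-pass bucket for each system in its inventory:

```python
# Hypothetical triage helper mapping a described system to the draft bill's
# four risk buckets. Keywords and domains are illustrative, not legal criteria.
PROHIBITED_USES = {"social scoring", "subliminal manipulation", "real-time biometric id"}
HIGH_RISK_DOMAINS = {"education", "justice", "health", "public services"}

def classify_ai_system(use_case: str, domain: str, interacts_with_people: bool) -> str:
    use = use_case.lower()
    if any(banned in use for banned in PROHIBITED_USES):
        return "prohibited"    # banned except strict exceptions
    if domain.lower() in HIGH_RISK_DOMAINS:
        return "high"          # PIA, data quality, human oversight, registration
    if interacts_with_people:
        return "transparency"  # disclose AI nature, allow deactivation
    return "low"               # ethics and good-practice guidance

print(classify_ai_system("chatbot for permit queries", "administration", True))  # transparency
```

Keeping the classification outcome with the system's documentation supports the traceability the Bill expects.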


SIC External Directive 002 (2024) and data protection for AI in Colombia


SIC's External Directive 002 of 2024 puts data protection squarely at the center of any AI project in Colombia, giving public teams a crisp ten‑point checklist that reads like a compliance-first operating manual: processing must satisfy suitability, necessity, reasonableness and proportionality; uncertain harm scenarios should be avoided and preventive controls put in place; risks must be identified, measured, controlled and monitored under the accountability principle; and when a high risk to data subjects is detected before design, a privacy impact study is mandatory (see the White & Case summary of Colombia AI guidance).

The Directive also demands truthful, complete and up‑to‑date personal data, strong security measures for confidentiality/integrity/availability, transparency so data subjects can obtain information about processing, and recommends privacy‑enhancing techniques - notably differential privacy - to meet privacy‑by‑design and privacy‑by‑default expectations.

Paired with SIC's related soft‑law (Circular 003/2024) that raises board‑level accountability, the Directive turns privacy into an early‑stage stoplight: run your impact study and mitigation plan before you build, or risk having a pilot paused under regulatory scrutiny (IAPP coverage of SIC circulars on Colombia AI regulation).

Crucially, “publicly accessible” information is not automatically fair game, so teams cannot assume consent when harvesting datasets.

Suitability, necessity, proportionality: Collect only what's needed and justify it
Preventive measures for uncertain harms: Avoid deployment where risks cannot be mitigated
Accountability & risk monitoring: Document risk assessments, controls and audits
Privacy Impact Study (PIA): Mandatory before design if high risk
Data quality & security: Keep data accurate, secure and subject-accessible
Publicly accessible ≠ public nature: Do not assume consent for scraped or open data
Privacy-enhancing technologies: Differential privacy recommended for analytics
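The Directive's recommendation of differential privacy can be made concrete with the classic Laplace mechanism for published counts. This is a minimal sketch under stated assumptions - the query is a simple count with sensitivity 1, and the `epsilon` value is illustrative - not a complete privacy engineering solution:

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise scaled to sensitivity 1 / epsilon.

    A Laplace(1/epsilon) sample is the difference of two exponential samples.
    """
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# e.g. publish how many citizens used a service this month, with epsilon = 0.5
print(dp_count(1284, epsilon=0.5))
```

Smaller `epsilon` means more noise and stronger privacy; choosing it, and accounting for repeated releases, is where real deployments need specialist review.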

How to use AI in government: practical steps for Colombian public teams


Practical use of AI in Colombian public agencies starts with clear classification and a step‑wise checklist: use the White & Case AI Watch breakdown to classify each system by risk (prohibited, high, limited or low) and then map data flows and legal bases so privacy duties are visible from day one (White & Case AI Watch regulatory tracker for Colombia).

Before design, follow SIC‑style advice - run a privacy impact study, apply suitability/necessity/proportionality tests, and adopt privacy‑enhancing techniques such as differential privacy where analytics require personal data (guidance summarized by Law Gratis) (Colombian AI and data protection guidance from Law Gratis).

Bake human oversight, documentation and traceability into procurement for any high‑risk tool; register and keep impact assessments current to avoid sanctions.

Start with low‑risk pilots - administrative automation, chatbots, or local use cases like a landslide early‑warning that pairs rainfall forecasts with terrain models to trigger targeted alerts - and scale only after controls prove effective (landslide early‑warning AI use case in Colombia).
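The landslide pilot pattern above - pairing a rainfall forecast with a terrain susceptibility score - can be sketched as a two-threshold rule. Zone names, thresholds and the 0-1 susceptibility scale here are illustrative assumptions; a real system would use calibrated hydrological and terrain models:

```python
# Toy sketch of a two-signal early-warning rule; thresholds are illustrative.
RAIN_THRESHOLD_MM = 80.0        # 24h forecast rainfall
SUSCEPTIBILITY_THRESHOLD = 0.7  # terrain/slope model score, 0..1

def zones_to_alert(forecast_mm: dict, susceptibility: dict) -> list:
    """Return zones where both rainfall and susceptibility cross thresholds."""
    return [
        zone for zone, rain in forecast_mm.items()
        if rain >= RAIN_THRESHOLD_MM
        and susceptibility.get(zone, 0.0) >= SUSCEPTIBILITY_THRESHOLD
    ]

alerts = zones_to_alert(
    {"vereda_a": 95.0, "vereda_b": 40.0},
    {"vereda_a": 0.82, "vereda_b": 0.91},
)
print(alerts)  # ['vereda_a']
```

Because the rule is transparent and reviewable, a pilot like this sits comfortably in the low-risk tier while controls are proven.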

Finally, pair technical controls with training and workforce transition plans so teams can operate systems safely and retain institutional memory as Colombia's AI policy and bills evolve.

“Artificial intelligence is presented as a fundamental tool that can positively shape the future of our nation. But its development must be guided by solid ethical principles and a strategic vision that guarantees the well‑being of all Colombians.” - Yesenia Olaya, Ministry of Science


Building Colombia's AI ecosystem: talent, infrastructure and startups


Building Colombia's AI ecosystem means more than rules on paper - it's about seeding talent, compute and entrepreneurs so public needs become local products: CONPES 4144 allocates COP 479 billion and explicitly promises a national AI research centre and regional AI hubs to catalyze applied R+D+i, skills pipelines and public‑private partnerships (CONPES 4144 national AI policy); industry press notes the policy aims to boost ethical investment and infrastructure across the country, signalling opportunities for data centres, cloud services and startup acceleration (BNamericas: Colombia releases AI policy to boost ethical use and investments).

Practical demand - from landslide early‑warning pilots to predictive maintenance for public assets - creates runway for small AI firms and vocational programs to train people for applied roles, while targeted research centres can bridge university science with commercial products and government pilots (see local use cases and training pathways at Nucamp AI Essentials for Work bootcamp syllabus - landslide early‑warning use case), so the ecosystem grows around real problems, not abstract labs, and keeps value in Colombia.

“AI is being integrated into every industry and every discipline.” - Hod Lipson

Operational risks, gaps and compliance challenges for Colombia


Operational risks for Colombian public teams are less about futuristic failures and more about gaps that can stop a project cold: uneven connectivity and regional digital divides mean pilots in mountain towns may never scale, entrenching inequality instead of fixing it (regional digital inequalities in Colombia); a shifting, still‑uncertain legal map (there are currently no settled, AI‑specific statutes) creates legal ambiguity for procurement and cross‑border data flows so agencies can't reliably plan compliance or investment (Colombia AI regulatory tracker - White & Case); and patchy data governance - poor lineage, unclear ownership and stale datasets - drives biased or brittle outputs that invite SIC scrutiny, mandatory privacy impact studies and steep sanctions.

The Bill now before Congress raises the stakes further: it pairs a risk‑based classification with fines and powers to suspend or block systems (up to 24 months) and makes the Ministry of Science the lead authority, so misclassification, missing impact assessments or weak IP/consent processes can shut down an initiative overnight (Colombia AI bill - penalties, roles and risk categories).

The practical fix is governance first - map data flows, assign clear owners (RACI/CoE), run PIAs early, and start with well‑governed, low‑risk pilots - so technology benefits don't sit like a ship anchored offshore with no dock: visible, but unusable.

Regional digital divide: Pilots fail to scale; unequal access to AI benefits; investment wasted
Unclear AI legal framework: Regulatory ambiguity for procurement, cross‑border data and long‑term projects
Poor data governance & consent/IP gaps: Biased outputs, mandatory PIAs, SIC sanctions, and possible suspension/closure of systems

Global context: which country aims to lead by 2030 and US AI regulation in 2025 - lessons for Colombia


Global AI moves matter for Colombia because the two dominant playbooks taking shape today point to trade‑offs Bogotá must manage: China's explicit, state‑backed sprint to “lead by 2030” - backed by massive public funds, regional AI hubs, talent pipelines and even energy and compute planning that aims to power data centres with new capacity - shows how fast coordinated investment and local incentives can seed real industry (see Morgan Stanley analysis of China's 2030 AI leadership ambition); by contrast, the United States in 2025 is pushing a pro‑growth, infrastructure‑first agenda through “America's AI Action Plan” that pairs deregulation and data‑centre buildout with a fragmented overlay of state laws and procurement signals that already shape markets and vendor behaviour (see America's AI Action Plan (White House, 2025)).

The practical lesson for Colombian public teams is concrete and immediate: channel public funds into talent, regional hubs and reliable compute/energy while keeping a clear, risk‑based governance regime (impact assessments, procurement clauses, data quality and privacy) so pilots aren't stalled by later compliance scrutiny; and watch the U.S. example of divergent state rules - a patchwork that can complicate suppliers and interoperability - when designing national incentives and harmonized standards (see the NCSL 2025 state AI legislation tracker).

That combination - targeted investment plus enforceable safeguards - is the most defensible route to turn policy into usable, scalable public services without giving up oversight or sovereignty.

“China has been methodically executing a long-term strategy to establish its domestic AI capabilities,” says Shawn Kim, Morgan Stanley's Head of Technology Research in Asia.

Conclusion & checklist: next steps for beginners using AI in Colombia's government


Conclusion & checklist for beginners: start small, stay legal, and learn fast - Colombia's CONPES 4144 funds and principles set the “why” (COP 479 billion to 2030) so begin by mapping your project to that roadmap (Colombia CONPES 4144 national AI policy); next, classify your system against the risk ladder in the government's Proposed Bill so you know whether a tool is prohibited, high‑risk or low‑risk and what obligations follow (Colombia draft bill to regulate artificial intelligence - Baker McKenzie).

On the operational side: 1) map data flows and legal bases; 2) run a privacy impact study before design if there's any chance of harm; 3) bake human oversight, documentation and breach response into procurement; and 4) prefer well‑scoped, low‑risk pilots (administrative automation, chatbots, or monitored prediction pilots) before scaling.

Treat the PIA like a boarding pass - no PIA, no takeoff - and pair controls with practical training so teams can manage tools safely; for skilling, consider Nucamp's targeted offering to build workplace AI skills in 15 weeks (AI Essentials for Work bootcamp registration).
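"No PIA, no takeoff" can even be encoded as a gate in a project checklist script. The risk labels follow the Bill's ladder as summarized earlier; the field names and the exact set of high-risk prerequisites are hypothetical simplifications for illustration:

```python
def ready_to_deploy(risk: str, pia_done: bool, registered: bool,
                    human_oversight: bool) -> bool:
    """Gate deployment on the obligations sketched in this checklist."""
    if risk == "prohibited":
        return False                 # banned outright, except strict exceptions
    if risk == "high":
        # high-risk tools need a PIA, registration and human oversight
        return pia_done and registered and human_oversight
    return True                      # transparency/low risk: lighter duties apply

print(ready_to_deploy("high", pia_done=False, registered=True,
                      human_oversight=True))  # False
```

A gate like this doesn't replace legal review, but it keeps the checklist enforceable rather than aspirational.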

These steps convert policy into usable, compliant services without surprises.

Description: Gain practical AI skills for any workplace; learn AI tools, write effective prompts, and apply AI across business functions
Length: 15 Weeks
Courses included: AI at Work: Foundations; Writing AI Prompts; Job Based Practical AI Skills
Cost (early bird): $3,582 ($3,942 after early bird)
Payment: Paid in 18 monthly payments; first payment due at registration
Syllabus: AI Essentials for Work syllabus
Registration: AI Essentials for Work registration

Frequently Asked Questions


What is CONPES 4144 and what does Colombia commit to under the national AI strategy?

CONPES 4144 is Colombia's national AI roadmap approved in February 2025. It defines 106 concrete actions through 2030 and allocates approximately COP 479,273 million (roughly USD 479 million) across six strategic axes: Ethics & Governance; Data & Infrastructure; R+D+i; Capacity Development & Digital Talent; Risk Mitigation; and Use & Adoption. The plan funds regional AI hubs, a national AI research centre, skills pipelines and public‑private projects, and serves as both a compliance checklist and an operational playbook for government deployments.

Which laws and agencies regulate AI and personal data in Colombia in 2025?

Colombia's AI regulatory landscape builds on its mature personal‑data framework: Law 1581 of 2012 (data protection) and implementing rules such as Decree 1377 and NRDB registration requirements. Financial and credit data are covered by Law 1266. The Superintendence of Industry and Commerce (SIC) enforces data protection duties for both public and private actors, requires breach notifications (notify SIC within 15 business days of detection when applicable), manages NRDB registrations and can impose significant fines and operational sanctions.

What is the proposed risk‑based AI model in the 2025 Bill and what compliance obligations and penalties does it create?

The 2025 Draft Bill sorts AI systems into four risk buckets - prohibited (critical risk), high‑risk, systems with transparency obligations (limited risk), and low/minimal risk - so legal duties scale with harm. Prohibited systems (e.g., social scoring, subliminal manipulation, real‑time biometric ID by authorities) are banned except in tightly defined exceptions. High‑risk systems (education, justice, health, public services) require mandatory impact assessments, data‑quality controls, human oversight, registration and detailed documentation. Limited‑risk systems must disclose AI use and allow deactivation; low‑risk tools follow ethics and good practices. The Bill names the Ministry of Science as lead authority (with SIC retaining data oversight) and attaches heavy sanctions for noncompliance, including fines up to 3,000 monthly minimum wages and suspension or closure of AI activities for up to 24 months.

What does SIC's External Directive 002 require for government AI projects and what practical controls should teams implement?

SIC's External Directive 002 centers data protection in AI projects and sets a ten‑point compliance checklist: processing must satisfy suitability, necessity and proportionality; risks must be identified, measured, controlled and monitored under accountability; and when a high risk is detected before design, a privacy impact study (PIA) is mandatory. The Directive also mandates truthful, up‑to‑date personal data, strong confidentiality/integrity/availability controls, transparency to data subjects, and recommends privacy‑enhancing techniques (e.g., differential privacy). Practically, teams should run a PIA before design if any high‑risk processing is possible, map data flows, document legal bases, apply privacy‑by‑design controls, and avoid assuming publicly accessible data equals consent.

How should Colombian public teams start using AI safely and what training options are available?

Practical steps: 1) classify each system using the Bill's risk ladder (prohibited/high/transparency/low); 2) map data flows and legal bases and register databases when required; 3) run a PIA before design for high‑risk scenarios and apply suitability/necessity/proportionality tests; 4) bake human oversight, documentation, traceability and breach‑response into procurement; and 5) begin with well‑scoped, low‑risk pilots (administrative automation, monitored chatbots, local early‑warning systems) before scaling. For targeted upskilling, consider structured programs such as Nucamp's 15‑week AI work course (modules include AI at Work: Foundations; Writing AI Prompts; Job‑Based Practical AI Skills). Example pricing: early‑bird US$3,582 (standard US$3,942), payable in up to 18 monthly payments with the first payment due at registration.


Ludo Fourrage

Founder and CEO

Ludovic (Ludo) Fourrage is an education industry veteran, named a Learning Technology Leader by Training Magazine in 2017. Before founding Nucamp, Ludo spent 18 years at Microsoft, where he led innovation in the learning space; as Senior Director of Digital Learning, he led the development of a first-of-its-kind 'YouTube for the Enterprise'. More recently, he delivered one of the most successful corporate MOOC programs in partnership with top business schools and consulting organizations, including INSEAD, Wharton, London Business School, and Accenture. With the belief that the right education for everyone is an achievable goal, Ludo leads the Nucamp team in the quest to make quality education accessible.