Mexico’s Government Loves AI, but Rules Lag in Practice
Mexico is quickly adopting AI in tax enforcement, security, and citizen services. However, a review of 45 transparency requests reveals that the rollout is moving faster than the rules. Sensitive data is protected inconsistently, and the new Chapultepec principles have little real power within the government.
A WhatsApp Bot Asks for Your CURP
At first, the chat seems normal, like any bureaucratic task moved to a phone. You get a greeting, a menu, and options that narrow down. Then things change quietly. Tested on WhatsApp, the PTAT chatbot from Mexico’s foreign ministry asks for personal details, including your CURP and passport number, to book a passport renewal appointment.
People usually comply with this kind of request because the alternative means more waiting, more paperwork, and lost time. The small but telling details are the glow of the screen in your hand, the brief pause before you type, and the quick thought about what you’re sharing and with whom.
Mexico’s government says it’s using artificial intelligence to make institutions more efficient. The problem is that this push for efficiency is happening without clear laws. The documents show a quiet but clear approval for AI across the federal government, but no specific laws or consistent rules for how AI should be governed, audited, or made transparent.
That mismatch matters because the systems are not being used for trivia. Documents obtained through transparency requests show AI in areas that are sensitive by nature: oversight and detection of irregularities, security and surveillance, citizen services and procedures, and administrative management. These are the places where the state touches lives and holds power. The wager here is that faster processes will mean better government. The risk is that faster processes can also mean faster mistakes, faster leaks, and faster erosion of public trust when no one can clearly explain what a system is doing.
By November 2025, at least 14 federal agencies were using AI or working on internal AI projects, according to a review of 45 transparency requests. While 14 doesn’t cover the whole government, it shows a clear pattern: AI is no longer just an experiment. It’s in active use.

AI Spreads Across Sensitive Offices Without One Rulebook
In tax enforcement, the Servicio de Administración Tributaria uses statistical learning models to identify factureras and their users, and to detect irregular behavior among importing companies. The stated aim is to monitor and ensure compliance with tax obligations. That is a powerful promise, and it is also a reminder that AI in government often starts where the state already has the deepest files.
Administrative uses seem harmless until you think about what governments actually store. The documents show AI helping with internal tasks, handling large data sets, translating texts, and supporting research. Mexico’s culture ministry uses four tools in its Cultural Information System, such as an image classifier and a record linker. The Secretaría Anticorrupción y Buen Gobierno has tested Google’s Gemini to standardize text in training certificates using optical character recognition.
Citizen services are the most visible part, often delivered through chatbots and virtual assistants that answer questions, speed up processes, and offer more services. PTAT, the foreign ministry’s chatbot developed with the National Autonomous University of Mexico, provides general guidance on procedures through an options menu. But the WhatsApp test in the documents challenges the ministry’s claim that PTAT doesn’t collect, store, or process personal data and only shares standard information. When a chatbot asks for your CURP to book an appointment, people don’t see it as just guidance—they see it as collecting their data.
Security is where risks are highest. The documents say AI helps with surveillance and event management, including at the Institute of Security and Social Services for State Workers. FOVISSSTE uses AI to collect and manage security events. PENSIONISSSTE applies machine learning in several security roles. This shows AI isn’t just automating tasks—it’s deeply involved in protecting critical infrastructure and workers’ personal data.
Other agencies are more guarded. The National Guard acknowledges AI integration but does not specify its purpose, while the Financial Intelligence Unit and the Attorney General’s Office classified information about their use. The Financial Intelligence Unit argues that disclosure would threaten national security and endanger activities aimed at preventing crime and terrorist financing. The Attorney General’s Office says making its information public could expose it to attacks aimed at manipulating AI models.
These concerns are understandable. But they also highlight a key problem. If the government uses AI more in policing, intelligence, and prosecution but keeps details secret, the public is asked to trust automated decisions without proof. Trust alone is not enough for good governance.
The documents also point to another, less obvious risk: public employees using generative AI like ChatGPT or Gemini on their own. While this can improve productivity, it can also put confidential information at risk when people aren’t aware or don’t have clear rules about what data is public, confidential, or restricted before entering it into AI tools.
“I know some armed forces members who have used AI to process information faster, and often that information is sensitive,” Juan Manuel Aguilar Antonio, a researcher at UNAM’s Center for Research on North America, told Wired.
Aguilar’s warning is not only about intent. It is about how large language models operate as systems that process whatever is uploaded to them. In that context, the documents describe the possibility that sensitive content could spread: if someone uploads official documents, a later user asking for similar information might surface them. Measuring and systematizing informal use is difficult, and that difficulty becomes another kind of vulnerability.
Jorge Ordelin, a researcher at the Center for Research and Teaching in Economics, argues that informal use cannot be systematized, making it impossible to know how often it occurs, which processes it shapes, and when personal data is introduced. Ordelin is not against AI in government, the documents suggest. His argument is narrower and more demanding: transparency is not enough; algorithmic transparency is needed to understand how systems work and how data protection is guaranteed.
Principles, Guides, and the Gap That Keeps Growing

Mexico’s regulatory gap is not a secret inside the government. It is simply unresolved. Aguilar says the solution must go beyond general cybersecurity awareness. He argues for stronger professionalization of government staff, and for specific AI legislation, norms, manuals, and protocols.
“I don’t see a law or legal framework for how these AI systems should be used, and I think there also needs to be internal rules, likely managed by control bodies within institutions,” Aguilar told Wired.
Ordelin, who co-created the Reporte de Algoritmos 2024 with César Rentería, criticizes vague regulations. “The more general AI regulation is, the worse it is,” he told Wired. He says broad rules make it harder for people to grasp specific requirements, such as the need for open code, parameters, and models.
Attempts to legislate have not gone far. The documents cite a proposal presented in May 2023 to issue an Ethical Regulation Law for Artificial Intelligence and Robotics, which stalled in Congress. Partial guidance exists, but it is uneven. Only Banco de México reported having a guide for the safe use of public generative AI tools. The central bank uses seven AI systems developed internally or by external providers, and reported spending 8.3 million pesos, according to the documents. It stands out precisely because others do not.
By contrast, the Secretaría Anticorrupción y Buen Gobierno, while testing Gemini, says it has no internal guidelines, manuals, guides, or policies on the ethical and responsible use of AI. The tax authority says it does not identify documentation, despite deploying statistical learning models. The foreign ministry argues that because its chatbot does not store data, there is no need to generate documentation on ethical use, even as the WhatsApp interaction described in the documents requests personal details for appointments.
The latest attempt at a shared framework is the Chapultepec Principles, presented on January 29, 2026, by the Agency for Digital Transformation and Telecommunications and the Secretaría de Ciencia, Humanidades, Tecnología e Innovación. They outline ten ethical principles to guide AI development and use. But the documents emphasize the limitation: as a declaration, it is not a binding law, leaving actual implementation and auditing in an ambiguous state.
This uncertainty is at the heart of the story. AI is now a daily tool in Mexico’s government, handling personal, sensitive, or classified information. But oversight is falling behind, and the gap is structural, not temporary. In a country where public trust is fragile, the risk isn’t just technical failure. It’s a failure of democracy. When the government becomes more advanced than the rules that limit it, people are left facing a black box that keeps asking for more data.