Facing Artificial Intelligence

The launch of Chinese AI company DeepSeek’s latest model in January broke out of the usually niche sphere of technical research to make international headlines. The surprise: an upstart Chinese AI company had produced a model that was on par with reasoning models released by frontier US labs, yet seemingly trained at one thirtieth of the cost. In doing so, it reset fundamental assumptions about where AI innovation could come from in the future. For Mexico and other emerging markets, this has re-opened a window of opportunity in what was seen as a two-horse AI race.

DeepSeek’s success suggests that competitive, homegrown AI platforms are possible, even in countries that do not enjoy the US’s unrestricted access to advanced chips or its vibrant technology ecosystem. Raw computational power undoubtedly remains important, but DeepSeek’s high-efficiency, low-cost model suggests countries like Mexico have an opportunity to reap the benefits of AI even as the US and China dominate the AI frontier.

There is more nuance to DeepSeek’s innovations, as well as lingering questions about their real significance. There are misgivings about the startup’s reliance on open-source US models and about its methodology for calculating unprecedentedly low training costs. DeepSeek said it spent $5.6 Mn and used around 2 000 NVIDIA chips to train its model, a fraction of what OpenAI and Google spend to train comparably sized models. However, analysts have suggested the true figure may be closer to $500 Mn once other necessary costs, such as earlier training runs and R&D, are considered. Hardware, engineering talent, and access to capital remain important building blocks for AI innovation. Nevertheless, DeepSeek’s success has expanded the Overton window for many countries beyond the US and China as they consider where their AI ambitions fall relative to their resources.

Illustration: Víctor Solís

Sovereign AI efforts are, once again, on the rise in the DeepSeek aftermath. The French government, for instance, has since announced a French answer to the US’s $500 Bn Stargate project for AI data centers in America: President Macron unveiled €109 Bn for new AI data centers, financed by domestic and international investors, including from the US, Canada, and the UAE. The UK announced its own plans to build a new supercomputer, along with an additional $17 Bn worth of data center projects via its AI Opportunities Action Plan, in a bid to increase available computing power.

Meanwhile, the UAE continues to emerge as an international player in AI infrastructure, forging AI cooperation agreements with France and the US, and investing billions of dollars in AI development and data centers both domestically and abroad. Sovereign AI efforts are also underway in the emerging world. India’s Reliance Industries has set out plans to build the world’s largest data center by capacity, a 3-gigawatt facility in Gujarat. States are increasingly serious about investing in, and controlling part of, the infrastructure, data, workforce, and technology stack that underpin AI development and deployment.

The other paradigm that DeepSeek has resurfaced—and arguably a more impactful, longer-term one—is that cheaper, more efficient AI will enable widespread adoption of the technology globally. Microsoft CEO Satya Nadella responded to DeepSeek’s innovation by pointing to the “Jevons paradox,” whereby improvements in AI efficiency and accessibility increase overall resource consumption rather than reduce it. This is a critical piece of the puzzle for countries such as Mexico that do not have the computing power or infrastructure to compete at the cutting edge. Instead, their best bet for reaping the economic benefits of AI’s projected 3.3 % annual productivity boost to the global economy is to organize their institutions to maximize the diffusion and adoption of AI across as many sectors of the economy as possible.

The Sheinbaum administration now has an opportunity to reverse recent inertia on AI policy and development. Mexico led the region in 2018, when it began drafting a national AI strategy, but has since ceded its regional leadership to others. Chile, Brazil, and Uruguay now top the Latin American Artificial Intelligence Index on the strength of high investment in technological infrastructure, training programs, and enabling policy. Mexico, meanwhile, has fallen to 7th place out of the 19 countries assessed in the 2024 Index.

As a starting point, the government should build on the work of the multi-stakeholder National Alliance for AI and craft a national AI strategy that takes initial action on AI autonomy and adoption on three fronts.

First, developing sufficient computing power to drive research and meet demand across the economy. One option would be to lead the establishment of a regionally distributed Central and South American advanced computing cluster that could pool the region’s resources, from both a GPU cost and an R&D perspective, while re-establishing Mexico’s regional leadership credentials in AI as the project’s initiator. This would likely also require negotiation with the US, which tightly controls access to the hardware needed for substantial computing power, as well as resolving some of the domestic energy sector challenges that may hamper the power procurement necessary to run AI data centers.

Second, Mexico can make strategic choices about where to carve out a niche in AI innovation and application. This may involve doubling down on a specific sector or sub-technology where Mexico already holds strong advantages, such as industrial applications or manufacturing automation.

Finally—and critically—Mexico should continue to lay the foundation for economy-wide AI adoption through strong incentives for AI talent, continued upgrades to connectivity and internet access, and encouragement of private sector adherence to common AI standards.

***

While it isn’t necessary—nor advisable—to attempt to compete with the large GPU clusters that frontier labs in the US and China are building, Mexico should ensure it has sufficient AI chips, data centers, and engineering talent to grow a domestic AI ecosystem focused on applications for the Mexican linguistic, economic, and cultural context.

To begin, the government could restart efforts to develop a national AI strategy that evaluates which investments and what regulatory architecture would enable Mexico to establish a basic level of AI autonomy—this could include regional cooperation to establish shared computing resources, such as a Central and South American advanced computing cluster. A regional compute cluster, enabled by “distributed low communication” methods for training models, would both defray the cost of GPUs—joint chip procurement could be explored—and establish a collaborative effort that simultaneously builds sovereign digital infrastructure.

This would serve both economic resilience and national security by ensuring some portion of the Mexican AI stack is secured in-country, creating regional redundancy in the case of broader failures in the global AI ecosystem and a way to deploy sensitive national data in a secure manner. In practice, AI autonomy could also involve building sufficient data centers to power some domestic AI demand and securely store strategic or sensitive national datasets, catalyzing the creation of local AI research labs through the regional compute cluster, and continuing efforts to participate in the global semiconductor value chain.
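What makes such a geographically distributed cluster plausible are the “distributed low communication” training methods mentioned above, in which each site trains on its own data and only occasionally exchanges parameters. The sketch below is a toy illustration of that idea, not any lab’s actual method: the three “sites,” the small regression task, and the sync_every interval are all invented for the example, and only NumPy is assumed.

```python
# Toy sketch of low-communication distributed training: each site takes many
# purely local gradient steps on its own data shard, and the sites only
# synchronize (average) their parameters once every `sync_every` steps,
# so very little inter-site bandwidth is needed.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 regional sites, each holding its own data shard
# drawn from the same underlying task (a small linear regression).
true_w = np.array([2.0, -1.0, 0.5])
shards = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    shards.append((X, y))

def local_step(w, X, y, lr=0.05):
    """One gradient step of least-squares regression on a site's local shard."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

weights = [np.zeros(3) for _ in shards]   # every site starts from the same point
sync_every = 20                           # communicate once per 20 local steps
total_steps = 200

for step in range(1, total_steps + 1):
    # Local computation only: no cross-site traffic here.
    weights = [local_step(w, X, y) for w, (X, y) in zip(weights, shards)]
    # Infrequent synchronization: average parameters across sites.
    if step % sync_every == 0:
        avg = np.mean(weights, axis=0)
        weights = [avg.copy() for _ in weights]

print("recovered weights:", np.round(weights[0], 3))  # close to [2, -1, 0.5]
```

Real systems apply the same pattern to large neural networks with more sophisticated local optimizers and merging rules, but the communication-saving principle is the one shown here.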

Efforts to build and operate AI infrastructure will require access to advanced AI chips and therefore place Mexico—and any countries participating in joint chip procurement—squarely in the crosshairs of US-China technology competition. Washington has long been concerned that, as US chip makers and hyperscalers deploy overseas, AI chips and IP may end up diverted to China.

In the final days before President Trump’s inauguration, the Biden administration released an ambitious policy document attempting to address these concerns by dictating which countries can import advanced semiconductors and at what volumes. Mexico falls into the second, more restricted tier of access, despite its deeply intertwined relationship with the US.

Despite recent turbulence in the US-Mexico relationship, President Sheinbaum has proven herself a highly skilled diplomat, obtaining multiple suspensions of the Trump tariffs without drawing public ire or hostility from the US administration. In next year’s USMCA negotiations, the Sheinbaum administration should consider how to negotiate the removal of, or flexibility around, the fixed cap on the number of US chips it can import. A compromise may involve distancing Mexico from Chinese technology players, whether through additional tariffs on Chinese goods or by declining initiatives such as Huawei’s Seeds for the Future program. Integration into the US AI supply chain would likely be well worth the tradeoff.

***

With sufficient computing power secured, the next step is to leverage AI infrastructure in service of sectors or applications where Mexico can develop a leading position. One report on AI adoption policies suggests that targeting specific sectors can help speed up AI adoption rates. It highlights Singapore, India, and the UAE, countries with AI adoption rates 50 % above the G7 average, all of which identified priority sectors for applied AI within their national AI strategies.

Identifying priority sectors for AI deployment can help kickstart adoption in flagship industries and more quickly unlock AI productivity benefits. Mexico’s position as a nearshoring hub for manufacturing and industrial production opens broad possibilities for AI to solve real-world problems. AI applications in industrial manufacturing—where Mexico already holds comparative advantages—could focus on improving efficiency through real-time data processing in factories, better supply chain management, and task automation. Successfully carving out a niche in applied AI would not only bring productivity benefits to Mexico but also begin to establish its standing as a competitive AI player on the international stage.

The federal government can support sector-specific efforts to focus Mexican AI research and commercialization on real-world applications by signaling their priority status. This would help marshal public and private stakeholders, and their resources, to solve targeted, sector-specific challenges. A National Center for Applied Industrial AI could bring together government resources, such as national data assets, with private talent and funding to solve real-world operational challenges for Mexican businesses. The selection of priority sectors would benefit from consultation with the private sector and can leverage existing multi-stakeholder vehicles, such as the National Alliance for AI.

***

Last, but not least, strategic decisions about AI autonomy, infrastructure, and leadership should be accompanied by economy-wide efforts to improve Mexico’s AI readiness. Mexico may not be at the cutting edge of AI breakthroughs, but it can still maximize the opportunity by creating the right conditions for companies and workers to use the technology.

In his recent book,1 Professor Jeffrey Ding argues not only that there is a lag between cutting-edge breakthroughs and widespread adoption during a technological transition, but also that widespread adoption is ultimately the more apt metric for determining which nation states benefited most from past technological transitions and translated economic gains into greater power and international influence. As such, creating an AI-ready society may in fact be just as critical a pillar for countries like Mexico that do not necessarily have the resources to compete at the cutting edge, but can organize their institutions and society strategically to benefit economically and politically.

Mexico’s AI readiness would benefit from greater government attention. According to UNESCO’s Government AI Readiness Index, Mexico fell to 8th place among its peer group in Latin America and the Caribbean in 2023, having previously been a regional leader. The country’s investment in R&D has also consistently lagged that of its peers, amounting to just 0.3 % of GDP in recent years—a figure dwarfed by the 2.27 % spent by the United States and a staggering 5 % invested by South Korea.

However, Mexico’s talent base and general interest in AI provide a strong backbone to build on. Mexico registered the highest number of graduates from computer science and related master’s programs in the Latin America and Caribbean region in 2022, and a third of companies across the country indicated they were actively implementing AI, according to a 2022 IBM survey. An infusion of government focus and resources could create dramatic benefits: more substantial R&D investment, expanded education and upskilling programs, and standard-setting processes that enable interoperability would help establish an AI-ready ecosystem fit for the long term. Investing in AI-related skills is an area where the administration should not wait to act. Of all the building blocks of AI, human talent—and the time it takes to develop—is a critical limiting factor in scaling AI advancement.

***

DeepSeek’s long-term implications may end up being less a verdict on the state of US-China technology competition than a catalyst that renews sovereign AI efforts worldwide. Indeed, DeepSeek may be better described as a placebo effect: one that reignited the imagination of what is possible with AI for middle powers such as Mexico, and reopened a window of opportunity for a broad set of countries to set smart—but ambitious—AI goals aligned with their respective resources and structural advantages.

Navigating US-China competition will remain a constant element of AI development and deployment efforts but ultimately need not eclipse the possible benefits that Mexico can reap by taking strategic action. A national AI strategy that pairs establishing a degree of AI autonomy with applied AI leadership and economy-wide readiness will ultimately put Mexico on a path to transforming its latent potential into tangible AI leadership. This would be a transformative feat in a global race currently characterized by a “winner takes all” model for AI success.

 

SIENNA TOMPKINS

Is a geopolitical analyst for Lazard. Her research focuses on the intersection of West-China competition, technology, and economic security.

 

1 Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition, Princeton University Press, 2024.

Comments

  1. Antonio
    June 4, 2025

    This article can easily be translated into Spanish using the AI-based translators built into web browsers such as Mozilla or Chrome.

    AI systems are still not as good as the hype claims, and in fact they have hit a wall in their improvement. Second, Nvidia chips are not indispensable for deploying AI systems based on deep learning. An AI investment bubble may even be forming, since the business models under which these systems will operate are still unclear, and technologies should not be adopted without a clear idea of how to use them in companies.

    If we take into account that the human brain has a mass of 1,400 grams and consumes 20 watts of power, it is obvious that there is still plenty of room for improvement over current AI systems, and if a better design is achieved, all of today’s investments may fail to generate returns. How much of the investment will go to research and development, and how much to building large facilities with current technology? The problems of energy consumption have not been solved, and remember that US fracking production is already predicted to fall by 1.1% next year. Not to mention water consumption, or the fact that large data centers do not generate local jobs.

    Too rapid an adoption of AI systems will generate unemployment (if they really work as advertised), because of competition between companies to cut labor costs and maximize profits.

    I welcome the article’s mention of the protection of personal data; before implementing an AI plan we must have an adequate legal framework. Trump is using the company Palantir to apply AI and data mining to hunt migrants and political opponents, drawing on data from every federal government agency.

    Another dangerous use of AI is data mining to locate psychologically vulnerable individuals on social networks and send them personalized messages that radicalize them little by little, even when they are not in opinion echo chambers.

    Deep learning AIs are based on artificial neural networks. A neuron is created by multiplying two data vectors (arithmetic additions, subtractions, and multiplications), and the result is passed through a non-linear function, which may be based on exponentials, divisions, or trigonometric functions.

    Neurons are grouped into layers; each neuron receives the same input data vector, which is multiplied by a vector of “weights” particular to that neuron. In the end, what we have is the multiplication of an input data vector by a matrix of “weights”, which yields an output vector. A non-linear function is applied to each entry of that vector, giving us a vector of output data.

    We can use as many layers of neurons as we want, stacked into an input layer, hidden layers, and an output layer. The output vector of one layer serves as the input data for the next. Not all layers need to have the same number of neurons.

    The neural network, then, consists of the repeated application of additions, subtractions, multiplications, divisions, and exponential functions. This is high school mathematics; it can be implemented on desktop computers or on phones, like the early filters that put dog ears on photos. No special chip is needed. Nvidia’s chips perform matrix multiplications quickly and apply the non-linear functions, and they were used to generate computer graphics and mine cryptocurrencies before being used for AI. Their advantage is that they accelerate the processing of large amounts of data in the training phase of the neural network, which is carried out with the backpropagation algorithm. But once the weight matrices for each layer have been obtained, the neural network can be run on smaller computers, depending on how much memory they require for storage.
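    A minimal sketch in Python of the layer arithmetic described above (NumPy is assumed here purely for illustration; the weight matrices are random rather than trained, so this shows only the forward computation, not learning):

```python
# Forward pass of a small 3-layer network on an ordinary CPU:
# each layer is just a matrix multiplication followed by a non-linear function.
import numpy as np

rng = np.random.default_rng(1)

def relu(v):
    # One common non-linear function; the sigmoid below uses an exponential.
    return np.maximum(0.0, v)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Hypothetical layer sizes: 4 inputs -> 8 hidden -> 8 hidden -> 1 output.
# In a real system these weights would come from training (backpropagation).
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = rng.normal(size=4)          # one input data vector

h1 = relu(x @ W1 + b1)          # layer 1: matrix multiply + non-linearity
h2 = relu(h1 @ W2 + b2)         # layer 2: the same operation again
out = sigmoid(h2 @ W3 + b3)     # output layer

print("network output:", out)
```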
