Introduction
The relationship between AI-enabled weapons systems and environmental harm has grown in significance in contemporary society. The two topics sit at the intersection of fields that have attracted increasing global attention: on the one hand, the protection of the environment; on the other, the rapid development of artificial intelligence (AI), the technology that enables machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.
AI-enabled weapons systems are typically defined as weapons that rely on machine learning algorithms, which may include deep learning techniques, to perform critical functions. Examples include a weapon that uses AI for object recognition to inform the targeting process, a system that identifies incoming missiles and potentially engages them autonomously, or a decision-support system that advises a unit leader on possible tactical options. At the same time, autonomous weapons systems are generally described as weapons capable of selecting and applying force to targets without human intervention.
Some AI-enabled weapons systems also fall within the category of autonomous weapons systems, particularly when they are capable of independently selecting and engaging targets, as in the case of lethal autonomous weapons systems (LAWS). It is important to note that while many LAWS are AI-enabled, not all AI-enabled weapons systems qualify as LAWS: many operate with human oversight or limited autonomy, performing complex functions without fully independent targeting. Nor does every autonomous system rely on AI: some operate using pre-programmed, rule-based logic. Older systems such as the Phalanx Close-In Weapon System, for instance, function on reactive feedback within strict, predefined parameters, without the complex decision-making capabilities of AI.
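The distinction can be made concrete in schematic form. The following minimal Python sketch is purely illustrative: the names, the confidence threshold, and the behaviour are hypothetical and do not describe any real system. It shows how the same AI-driven recognition output can feed either a human-supervised recommendation or a fully autonomous engagement loop, the latter being the configuration that pushes a system into the LAWS category.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "military_vehicle" (hypothetical)
    confidence: float  # model confidence score in [0, 1]

def request_human_authorization(detection: Detection) -> bool:
    """Stand-in for review by a human operator (hypothetical)."""
    print(f"Operator review requested: {detection.label} ({detection.confidence:.2f})")
    return False  # this sketch defaults to refusal

def engagement_decision(detection: Detection, autonomous: bool) -> bool:
    """Contrasts supervised and autonomous configurations; not a real pipeline."""
    if detection.confidence < 0.9:
        return False  # below the confidence threshold, never engage
    if autonomous:
        return True   # LAWS-style behaviour: the system closes the loop itself
    return request_human_authorization(detection)  # a human retains the final say

d = Detection(label="military_vehicle", confidence=0.95)
print("autonomous:", engagement_decision(d, autonomous=True))
print("supervised:", engagement_decision(d, autonomous=False))
```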
This article, however, focuses on AI-enabled weapons systems: those incorporating machine learning or other algorithms that require extensive data processing and energy-intensive computational training. These systems raise environmental concerns that go beyond those associated with traditional weapon technologies. Indeed, AI-enabled weapons systems can harm the environment well before deployment, especially during the extraction of minerals and during the AI training phase, meaning the process by which AI algorithms are refined through simulations and data analysis prior to deployment. Once trained, some AI-enabled weapons systems may be used – directly or indirectly – to cause serious environmental harm, whether intentionally or unintentionally. Under Article 36 of Additional Protocol I to the Geneva Conventions, States are under an “obligation to determine whether [the] employment [of AI-enabled weapons systems] would, in some or all circumstances, be prohibited” by rules of international law.
Given the expanding reliance on such technologies by armed forces worldwide, it is crucial to consider the potential environmental implications arising from mineral extraction, AI training, and subsequent deployment of these systems.
The Extractive Dimension of the AI Arms Race: A Further Concern for the Environment
A first critical aspect of the global race to develop AI-enabled weapons systems lies in the extraction of minerals. The accelerating push by States to develop and deploy such technologies has driven growing demand for critical minerals, often sourced from the African continent, that serve as the building blocks of AI hardware. The African Commission on Human and Peoples’ Rights has underscored the need to respect and protect fundamental human rights in the extraction of resources destined for the development of AI military technology.
Yet, from an international law perspective, there remains a significant regulatory gap: existing legal frameworks offer limited protection and enforcement mechanisms to prevent the exploitation and expropriation of natural resources, particularly cobalt, lithium, and rare earth elements. In practice, the extraction and processing of these critical minerals often entail large-scale mining operations, leading to deforestation, water contamination, and long-term ecological degradation.
As noted in a recent expert policy note,
the concept of responsible AI in the military domain must extend beyond mere compliance with IHL standards at the point of deployment. It must also encompass the ethical and legal considerations at the developmental stage.
This developmental stage should be understood as encompassing both the extractive processes that make the construction of AI-enabled weapons systems possible and the training of those systems. In this regard, the establishment in 2024 of the UN Secretary-General’s Panel on Critical Energy Transition Minerals represents a positive development. Among its guiding principles, the Panel emphasized that “the integrity of the planet, its environment and biodiversity must be safeguarded”.
However, one of the central concerns surrounding AI-enabled weapons systems lies in the powerful interests of States and private corporations driving mineral extraction. Imposing stricter limitations on these activities, ideally through an international treaty, is essential to safeguard both the environment and human health.
The Environmental Impact of AI-Enabled Weapons Systems Training
The training of AI-enabled weapons systems raises serious environmental concerns because of its significant carbon footprint. This derives primarily from the constant energy consumption required to keep these systems operationally responsive, as well as from the extensive use of water (sometimes even potable water) for cooling purposes.
In the civilian sector, studies have already demonstrated the environmental and health impacts of data centers, particularly on vulnerable or disadvantaged communities. A similar risk arises in the military context: as an increasing number of States – and possibly, in some cases, paramilitary groups – gain access to such technologies and invoke deterrence rationales comparable to those used for nuclear weapons, the continuous training of these systems is likely to become a permanent necessity.
This ongoing cycle of energy-intensive training could result in environmental and health consequences comparable to, if not greater than, those already observed in the civilian sphere with the proliferation of data centers and associated pollution risks. This presents a paradox in a world where the reduction of carbon emissions is (or, given current resistance, should be) a global priority.
An initial practical step toward addressing this challenge could involve limiting the environmental impact of AI-enabled weapons systems training through both preventive and repressive measures. The first point concerns the prohibition of those AI-enabled weapons systems that also fall within the LAWS category and operate with a high degree of autonomy, as suggested by the so-called two-tier approach. Such a measure would help reduce the overall number of systems available and, consequently, the amount of training required, while also decreasing the risk of unintended or unauthorized environmental harm.
The second point involves establishing specific limits on emissions, energy use, and water consumption associated with AI-enabled weapons systems training. Such measures would not only help regulate the use of scarce resources, such as water in certain regions, but also reduce air pollution, which poses risks to the environment and to human health. Ideally, these provisions should be incorporated into a future international treaty on Lethal Autonomous Weapons Systems, to be concluded by the end of 2026, as called for by António Guterres, Secretary-General of the United Nations, and Mirjana Spoljaric, President of the International Committee of the Red Cross. In doing so, it is important to recognize that these limits would specifically target AI-enabled systems, rather than those LAWS that operate without AI, for which alternative regulatory measures would need to be developed.
Finally, an international sanctioning mechanism should be established to penalize actors who exceed these defined thresholds. Without effective punitive consequences, enforcement would be extremely difficult in practice. This is evident, for instance, in the non-binding nature of the UN Guiding Principles on Business and Human Rights (UNGPs): under such soft-law regimes, companies are often insufficiently deterred, which contributes (among other factors) to continued environmental pollution. Consequently, a future treaty should not only set emission limits for AI-enabled weapons systems training but also provide clear accountability measures in cases of non-compliance, rather than relying on formal thresholds alone. It should further ensure meaningful consultation with communities directly affected by AI-enabled weapons systems training and guarantee that, whenever pollution or environmental harm occurs, appropriate remediation processes are promptly undertaken.
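By way of illustration only, the Python sketch below shows how such thresholds and compliance checks might be operationalized. Every figure is a hypothetical placeholder: the thresholds reflect no actual treaty proposal, and the grid carbon intensity value is merely indicative of commonly cited averages, which vary widely between electricity grids.

```python
from dataclasses import dataclass

# Hypothetical per-training-cycle treaty thresholds; placeholder values only.
MAX_ENERGY_KWH = 1_000_000
MAX_WATER_LITRES = 5_000_000
MAX_EMISSIONS_KG_CO2E = 400_000

GRID_INTENSITY = 0.4  # kg CO2e per kWh; indicative only, varies widely by grid

@dataclass
class TrainingRun:
    energy_kwh: float
    water_litres: float

def estimated_emissions(run: TrainingRun) -> float:
    """Rough location-based estimate: energy consumed times grid carbon intensity."""
    return run.energy_kwh * GRID_INTENSITY

def breaches(run: TrainingRun) -> list[str]:
    """Returns the hypothetical thresholds that a given training run exceeds."""
    exceeded = []
    if run.energy_kwh > MAX_ENERGY_KWH:
        exceeded.append("energy")
    if run.water_litres > MAX_WATER_LITRES:
        exceeded.append("water")
    if estimated_emissions(run) > MAX_EMISSIONS_KG_CO2E:
        exceeded.append("emissions")
    return exceeded

run = TrainingRun(energy_kwh=1_200_000, water_litres=3_000_000)
print(f"Estimated emissions: {estimated_emissions(run):,.0f} kg CO2e")
print("Thresholds exceeded:", breaches(run) or "none")
```

A mechanism of this kind would give the accountability measures discussed above something concrete to attach to: a run that exceeds any threshold is flagged, rather than the thresholds remaining purely formal.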
The Deployment of AI-Enabled Weapons Systems and the Environmental Impact Linked to Their Training
A second area of concern, both in the present and especially for the future, relates to the use of AI-enabled weapons systems once they have been trained and are ready for real-world deployment. Within this context, two distinct scenarios can be identified in which environmental harm may occur.
The first involves situations where the primary targets are combatants, infrastructure, or military assets, whether on land, at sea, or in the air. In such cases, the environment may suffer damage as a secondary or collateral consequence, rather than as the intended object of attack. The second scenario, however, represents what may be regarded—particularly from the standpoint of environmental law—as an even more serious threat: the deliberate use of AI-enabled weapons systems to destroy the environment, or parts thereof, within a given territory. In such instances, the motivations may vary widely and may even include economic considerations, such as the intentional devastation of a particular area to enable its subsequent exploitation for activities yielding substantial financial gain.
Regardless of which of the two scenarios occurs, an important common factor is who conducts the training of AI-enabled weapons systems. If the training process is outsourced to private actors – even in collaboration with the State – these entities could conduct it in line with their own interests, potentially biasing AI-enabled weapons systems toward specific outcomes. Among such interests might be the deliberate degradation or destruction of certain environments, with the intention of repurposing those areas for future economic or industrial activities capable of generating substantial profits. This scenario is not purely speculative: AI training in other sectors has already produced ethically questionable outcomes. In such a context, AI-enabled weapons systems could be trained with parameters that implicitly or explicitly permit environmental destruction once deployed, particularly where human oversight is limited or entirely absent, as the sketch below illustrates.
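A deliberately simplified sketch, under entirely hypothetical assumptions, of how this could happen at the level of the training objective: if the actor writing the objective sets the weight attached to environmental harm to zero, or omits the term altogether, an optimizer maximizing that objective is never penalized for such harm and will simply disregard it. The function name, terms, and weights are invented for illustration.

```python
def mission_reward(target_destroyed: bool,
                   env_damage: float,
                   w_env: float = 0.0) -> float:
    """Hypothetical training objective, for illustration only.

    env_damage stands for some scored measure of collateral environmental
    harm. With w_env left at 0.0, that harm carries no penalty at all, so
    an optimizer maximizing this reward simply ignores it.
    """
    return (1.0 if target_destroyed else 0.0) - w_env * env_damage

# Same outcome, radically different incentives depending on the chosen weight:
print(mission_reward(target_destroyed=True, env_damage=0.8))             # 1.0
print(mission_reward(target_destroyed=True, env_damage=0.8, w_env=2.0))  # -0.6
```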
This raises two key considerations. The first concerns the UNGPs, which, although non-binding, should serve as a framework for responsible private business conduct. The challenge, however, lies in reconciling these non-binding standards with powerful economic – and, at times, military – interests that are often mistakenly prioritized over environmental protection. The second concerns the need, not only in relation to environmental harm but to all forms of deployment, to establish clear restrictions in any future international treaty on the use of AI-enabled weapons systems (especially certain types of LAWS) that operate with minimal or no human oversight.
In this regard, the Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems, as reflected in the Rolling Text of 12 May 2025 following the March 2025 session, and in subsequent discussions during the September 2025 session, emphasized that States should ensure context-appropriate human judgment and control over the use of LAWS. A revised version of the Rolling Text, expected soon, will integrate the proposals and areas of convergence identified in September. In March 2025, the GGE underscored that
LAWS are operated under a responsible chain of command and control. This includes ensuring assessment of legal obligations and ethical considerations by a human, in particular, with regard to the effects of the selection and engagement functions,
a standard of conduct that can and should encompass consideration of potential environmental damage.
Conclusions
What emerges from this analysis is that, as of 2025, the extraction of minerals for, and the training of, AI-enabled weapons systems are factors that must be taken seriously in relation to both environmental well-being and human health. Ignoring these issues, or failing to incorporate into an international treaty limits regarding mineral extraction, energy consumption, water use, and resulting emissions, risks contributing to a global increase in pollution, directly contradicting ongoing efforts to reduce environmental harm.
History has shown that environmental concerns are often addressed only after damage has occurred, frequently resulting in significant consequences for both populations and ecosystems. For this reason, it is essential in the coming years to develop regulations capable of achieving broad consensus on threshold limits to be included in such a treaty. The risk of omitting such limits solely due to the inability to reach agreement is unacceptably high.
Moreover, the treaty should take into account the risks associated with outsourcing the training of AI-enabled weapons systems to private actors. Establishing clear rules, again with broad consensus, would help prevent opaque or improper practices that are difficult to detect after the fact and, most importantly, extremely challenging to remediate once environmental and human health harm has occurred.

