The debate at the United Nations over bans on military uses of artificial intelligence is no longer an abstract ethics seminar. It has become a diplomatic battleground where humanitarian imperatives collide with geopolitical calculation and cold military interest. Last year the General Assembly took a visible step when it adopted a text on lethal autonomous weapons systems that underscored the international community’s alarm about machines making life-and-death decisions. That resolution passed overwhelmingly, with 161 votes in favor, 3 against and a handful of abstentions, signaling broad concern even as the details of any prohibition remain fiercely contested.

But votes and moral rhetoric are not the same thing as enforceable law. At the heart of the contest is process. For more than a decade member states have debated autonomous weapons within the Convention on Certain Conventional Weapons, an expert forum whose consensus rules let a very small group of powerful or determined countries slow-walk or bottle up decisive outcomes. The result is that civil society and many smaller states have shifted part of their pressure to the General Assembly, which is less vulnerable to single-state vetoes but still produces only nonbinding resolutions unless states choose to negotiate a treaty.

António Guterres has not been coy about where he stands. The secretary-general has repeatedly called lethal autonomous weapons systems that operate without human control “politically unacceptable” and “morally repugnant,” and he has urged states to conclude a legally binding instrument by 2026 that would prohibit the most dangerous systems and regulate the rest. That deadline may sound ambitious, but it also captures the anxiety driving the diplomatic push: technology can outrun treaties faster than bureaucrats can write them.

What is on the table is not a binary ban on all AI in the military. The dominant negotiating concept is the so-called two-tier approach, which aims to prohibit systems that cannot comply with international humanitarian law while regulating those that can be used with meaningful human control. That nuance matters in legal terms, but it also opens a huge political seam. Defining “meaningful human control” is technically and ethically fraught. Does a human signing off on a mission plan count? What about supervisory control over a swarm that can retask itself within mission parameters? These are not academic distinctions. They determine whether tomorrow’s weapons fall into a prohibited category or into a heavily regulated, but still deployable, one.

The calendar matters too. The CCW has scheduled expert work and preparatory meetings in 2025 that are intended to feed any future negotiations, and the Implementation Support Unit circulated an aide-mémoire in late January laying out logistics for the next GGE sessions, slated for March and September. Those sessions are where technical definitions, operational constraints and legal obligations will be hashed out in exhausting detail. If states want a treaty, they must show up with negotiating text, not just soundbites.

Civil society and human rights organizations have been relentless in applying pressure. Campaigns like Stop Killer Robots and Human Rights Watch have kept the moral and rights-based argument in front of delegates, framing fully autonomous lethal systems as a threat not only to civilians in conflict zones but to the very notion of human dignity and accountability. Their activism also throws into relief the gap between the sympathetic rhetoric heard in many capitals and the continued reluctance of states developing these systems to accept hard red lines.

Put bluntly, the fight over an AI ban is a fight over advantage. States investing in autonomy see battlefield efficiencies, reduced troop risk and a potential asymmetric edge. Other states fear proliferation to nonstate actors and the lowering of thresholds for violence. The geopolitical overlay is thick. As analysts have pointed out, awaiting universal consensus guarantees delay; insisting on a treaty that excludes major military powers risks creating rules that lack enforcement power. That is why some diplomats argue for a pragmatic middle course: lock in prohibitions where moral and legal consensus exists, and build robust verification and transparency measures around regulated systems. Others say that anything less than a clear prohibition on autonomous targeting of humans will be insufficient.

There is another uncomfortable truth. International law already applies to weapons. International humanitarian law and human rights obligations do not evaporate when software is added to a munition. Yet relying solely on existing legal frameworks is inadequate when the technology changes the mechanics of decision-making and accountability. Who is responsible when an algorithm misclassifies a civilian? Who is culpable when a chain of machine-assisted decisions leads to unlawful harm? These are precisely the gaps that advocates want the UN to fill with clear prohibitions and obligations.

So where does that leave us on January 30, 2025? The UN debate has moved the needle. It has turned a previously niche technology concern into a question that now sits on General Assembly agendas, in expert reports and in public campaigns. It has also exposed the limits of current diplomacy. If the goal is a binding instrument that prevents machines from deciding to kill without human judgment, negotiators must convert broad moral consensus into precise legal language and practical verification regimes. That takes political will, compromise and, above all, clarity on what must never be allowed. The danger now is that talk of a “ban” becomes a rhetorical device rather than a firm commitment to a treaty that can be implemented and enforced.

My sense from the margins of these debates is that energy and moral clarity are plentiful. What is scarce is the willingness among certain states to accept constraints that bite into perceived military advantage. Civil society can keep the pressure on, and technical experts can help translate ethics into enforceable clauses. But ultimately the choice rests with states: will they choose a future where human judgment is preserved at the center of decisions to use force, or one in which legality and morality are treated as optional filters behind opaque code? The UN can be the venue for that choice. The question is whether member states will take the difficult step of turning indignation into law.