US has ‘moral imperative’ to pursue AI weapons, panel says

A US government-appointed panel has concluded that the nation should not agree to an outright ban on the development or use of autonomous weapons, following a public discussion on the issue.

The National Security Commission on Artificial Intelligence, which is led by former Google CEO Eric Schmidt, held a two-day public discussion on how the US should handle AI in the context of national security and technological development, including its application to autonomous weapons systems.

For almost a decade, a coalition of NGOs and prominent figures in science and technology has been campaigning for a treaty banning “killer robots”, analogous to treaties banning other inhumane weaponry. The Campaign to Stop Killer Robots demands that humans always be given the final say over lethal attacks.

Although fully autonomous lethal weapons have not been confirmed to have been deployed, the technology already exists. For instance, the Samsung SGR-A1 system – which can track and follow targets – incorporates a sentry gun, a thermal camera, three optical cameras, a laser rangefinder, and a video recorder. Such a system could require human intervention before acting on its target, or it could act unless a human intervenes; the latter raises difficult questions about responsibility for war crimes.

At least 30 countries want these types of weapons systems banned, according to the campaign. The UN holds meetings to discuss a ban every year, but the world’s largest military powers have avoided or rejected signing a potential treaty.

During the meetings, members raised the risks of autonomous weapons. For instance, a Microsoft representative warned of the pressure to develop autonomous systems that react quickly and could cause escalations of violence; Microsoft president Brad Smith has spoken publicly about the potential dangers of autonomous weapons.

US Army Futures Command General John Murray said that rules intended to maintain human control over autonomous systems may not always be possible to apply, for example with drone systems that move too quickly for humans to track and target.

The panel agreed that nuclear warheads should never be launched without human consent.

However, the committee concluded that anti-proliferation measures are preferable to a treaty banning the weapons. It said that such a treaty would be hard to enforce and would not be in the interests of the US.

“It is a moral imperative to at least pursue this hypothesis,” said vice-chair Robert Work, referring to the hypothesis that autonomous weapons may be less likely than humans to make mistakes, such as misidentifying a target and firing on it.

Campaign to Stop Killer Robots coordinator Mary Wareham criticised the commission for its focus on staying abreast of rivals’ technological advances, which “only serves to encourage arms races”.

The report considered other AI technologies in national security and defence. Among other recommendations, it backed the use of AI in intelligence gathering and analysis and the establishment of a “digital corps” analogous to the Medical Corps. The final report will be submitted to Congress in March.