Defense AI and Arms Control Network
Defense AI and Arms Control Network is a platform for monitoring, aggregating, and analyzing worldwide advances and policies on defense AI and arms control. We search for and collect reports and analyses related to military AI ethics and governance and AI arms control, and update our services daily. Inclusion of a resource does not imply that we agree with any of the opinions expressed in it. Recommendations of sources and reports for the Defense AI and Arms Control Network are welcome and appreciated.

252 Publications

policy

Position Paper of the People's Republic of China on Strengthening Ethical Governance of Artificial Intelligence (AI)

Drawing on its own policies and practices, and with reference to useful international experience, China published this position paper covering regulation, research and development, utilization, and international cooperation.

policy

Principles on Military Artificial Intelligence [Draft for Comments]

The military applications of artificial intelligence (AI) have already introduced great risks and challenges to the world. We should therefore be vigilant about the development of military AI lowering the threshold of war, and actively work to prevent avoidable disasters. The Defense Artificial Intelligence and Arms Control Network published these principles, with which the design, research, development, use, and deployment of military AI should comply throughout the whole life cycle.

speech

Responsible AI to promote World Peace and Sustainable Development

The United Nations Office for Disarmament Affairs (UNODA) and the European Commission co-hosted a workshop on "Ethics and Emerging Technologies in Weapons Systems" in April 2022. The director of the Center for Long-term AI, Prof. Yi Zeng, was invited as a speaker. The following is a recording of his speech.

report

Artificial Intelligence and Nuclear Command, Control, & Communications: The Risks of Integration

The increasing autonomy of nuclear command and control systems stemming from their integration with artificial intelligence (AI) stands to have a strategic level of impact that could either increase nuclear stability or escalate the risk of nuclear use.

interview

How Artificial Intelligence Affects International Security: an interview with Fatima Roumate

World Geostrategic Insights interview with Fatima Roumate on the main opportunities, challenges, and concerns related to the application of Artificial Intelligence (AI) in international relations and global governance, as well as the malicious uses of AI and the impact of AI in the Russia-Ukraine war. Fatima Roumate Ph.D. is a Full Professor of International Law …

press release

CSIS Launches AI Council

The Center for Strategic and International Studies (CSIS) is pleased to announce the formation of the CSIS AI Council.

article

A Manifesto on Enforcing Law in the Age of "Artificial Intelligence"

"A Manifesto on Enforcing Law in the Age of 'Artificial Intelligence'" was recently presented at a gathering in Rome, with a focus on the design of ...

commentary

The US Navy wants swarms of thousands of small drones

Budget documents reveal plans for the Super Swarm project, a way to overwhelm defenses with vast numbers of drones attacking simultaneously.

report

Artificial Intelligence and Arms Control

Advances in artificial intelligence (AI) pose immense opportunity for militaries around the world. With this rising potential for AI-enabled military systems, some activists a...

commentary

Who’s going to save us from bad AI?

About damn time. That was the response from AI policy and ethics wonks to news last week that the Office of Science and Technology Policy, the White House’s science and technology advisory agency, had unveiled an AI Bill of Rights.

policy

Blueprint for an AI Bill of Rights

To advance President Biden’s vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats—and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice—a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public’s rights, opportunities, or access to critical needs.

report

Retaining Human Responsibility in the Development and Use of Autonomous Weapon Systems: On Accountability for Violations of International Humanitarian Law Involving AWS

It is undisputed that humans must retain responsibility for the development and use of autonomous weapon systems (AWS) because machines cannot be held accountable for violations of international humanitarian law (IHL). However, the critical question of how, in practice, humans would be held responsible for IHL violations involving AWS has not featured strongly in the policy debate on AWS. This report aims to offer a comprehensive analysis of that very question.

research article

“Autonomous weapons” as a geopolitical signifier in a national power play: analysing AI imaginaries in Chinese and US military policies

“Autonomous weapon systems” (AWS) have been subject to intense discussions for years. Numerous political, academic and legal actors are debating their consequences, with many calling for strict regulation or e...

report

Managing the risks of US-China war: Implementing a strategy of integrated deterrence

If the United States is to maintain a constructive role in preventing the outbreak of a cross-Strait war, it will need to implement a strategy to deter Chinese aggression against Taiwan that is consistent with U.S. interests and capabilities, and that provides clarity around the existentially important matter of preventing nuclear escalation, in the event a conflict does occur.

original paper

Artificial intelligence and responsibility gaps: what is the problem?

Recent decades have witnessed tremendous progress in artificial intelligence and in the development of autonomous systems that rely on artificial intelligence. Critics, however, have pointed to the difficulty ...

original paper

The Challenge of Ethical Interoperability

Defense organizations are increasingly developing ethical principles to ... the design, development, and use of responsible AI, most notably for defense, security, and intelligence uses. While these ... lead to m...

statement

Autonomous weapons: The ICRC calls on states to take steps towards treaty negotiations

ICRC statement following the final 2022 session of the Group of Governmental Experts on lethal autonomous weapons systems under the UN Convention on Certain Conventional Weapons (CCW), held from 25 to 29 July.

commentary

Can we Bridge AI’s responsibility gap at Will?

Artificial intelligence (AI) increasingly executes tasks that previously only humans ... medical operation. However, as the very best AI systems tend to be the least controllable ... longer be morally responsible...

article

What you need to know about autonomous weapons

Autonomous weapons are an immediate cause of humanitarian concern. Senior scientific and policy adviser at the ICRC, Neil Davison, explains.

review

“Ethically contentious aspects of artificial intelligence surveillance: a social science perspective”

Artificial intelligence and its societal and ethical implications are complicated and conflictingly interpreted. Surveillance is one of the most ethically challenging concepts in AI. Within the domain of artifici...

commentary

Why business is booming for military AI startups

The invasion of Ukraine has prompted militaries to update their arsenals—and Silicon Valley stands to capitalize.

research article

Imaginaries of omniscience: Automating intelligence in the US Department of Defense

The current reanimation of artificial intelligence includes a resurgence of investment in automating military intelligence on the part of the US Department of Defense. A series of programs set forth a technopolitical imaginary of fully integrated, ...

commentary

Is It Too Late to Stop the Spread of Autonomous Weapons?

If autonomous weapons are the future of warfare, then the United States has no choice but to grapple with their complexities.

commentary

Focus on the Human Element to Win the AI Arms Race

The United States must refine its investments to incorporate a deliberate and sustained campaign of mission engineering to accelerate and improve the delivery of trustworthy AI.

commentary

DOD Is Updating Its Decade-Old Autonomous Weapons Policy, but Confusion Remains Widespread

In November 2012, the Department of Defense (DOD) released its policy on autonomy in weapons systems: DOD Directive 3000.09 (DODD 3000.09). Despite being nearly 10 years old, the policy remains frequently misunderstood, including by leaders in the U.S. military. For example, in February 2021, Colonel Marc E. Pelini, who at the time was the division chief for capabilities and requirements within the DOD’s Joint Counter-Unmanned Aircraft Systems Office, said, “Right now we don't have the authority to have a human out of the loop. Based on the existing Department of Defense policy, you have to have a human within the decision cycle at some point to authorize the engagement.”

statement

High Representative’s statement to the Human Rights Council on the topic of lethal autonomous robotics

Below is the statement of the High Representative for Disarmament Affairs to the 23rd session of the Human Rights Council, on the topic of lethal autonomous robotics. It was delivered on behalf of the High Representative by Mr. Jarmo Sareva, Director of the Geneva Branch of UNODA.

commentary

‘Collaborative, Portable Autonomy’ Is the Future of AI for Special Operations

Creating autonomous teams in contested environments will be a challenge of technology—and policy.

original research

Meaningful human control of drones: exploring human–machine teaming, informed by four different ethical perspectives

A human-centric approach to the design and deployment of AI systems aims to support and augment human ... But what could this look like in a military context? We explored a human-centric approach...

commentary

Shared Responsibility: Enacting Military AI Ethics in U.S. Coalitions

America needs to enlist its oldest allies and new partners to build a safer and freer world for the AI era.

commentary

In Defence of Principlism in AI Ethics and Governance

It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to a...

commentary

Sitting Out of the Artificial Intelligence Arms Race Is Not an Option

The race to build autonomous weapons will have as much impact on military affairs in the twenty-first century as aircraft did on land and naval warfare in the twentieth century.

statement

Autonomous weapons: The ICRC remains confident that states will adopt new rules

The International Committee of the Red Cross (ICRC) welcomes the continued work of the Group of Governmental Experts (GGE) and urges the High Contracting Parties to the CCW to take their important work forward in line with one of the main purposes of this Convention, namely "the need to continue the codification and progressive development of the rules of international law".

original research

Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D

Artificial Intelligence (AI) seems to be impacting all industry sectors ... a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attent...

original paper

The Dawn of the AI Robots: Towards a New Framework of AI Robot Accountability

Business, management, and business ethics literature pay little attention to the topic of AI robots. The broad spectrum of potential ethical issues pertains to using driverless cars, AI robots in care homes, a...

commentary

Fully autonomous weapon systems

Presentation by Kathleen Lawand, head of the arms unit, ICRC. Seminar on fully autonomous weapon systems, Mission permanente de France, Geneva, Switzerland.

commentary

The challenges raised by increasingly autonomous weapons

On June 24, 2014, the ICRC Vice-President, Ms Christine Beerli, opened a panel discussion on...

commentary

Autonomous weapons: What role for humans?

Geneva (ICRC) – Addressing a meeting of experts at the United Nations in Geneva this week, the International Committee of the Red Cross (ICRC) will urge governments to focus on the issue of human control over the use of force in their deliberations on autonomous weapons.

commentary

Autonomous weapons: ICRC addresses meeting of experts

The ICRC spoke at the meeting of experts on lethal autonomous weapons systems held in the framework of the Convention on Certain Conventional Weapons in Geneva from 13 to 16 May 2014.

article

The Techno-Military-Industrial-Academic Complex

The Harvard Strike in the spring of 1969 emerged out of what we students perceived as the university’s complicity in the Vietnam War. After Harvard ...

essay

A necessary step back?

A few years back, the rapid progress of international efforts to ban lethal autonomous weapon systems (LAWS) left arms controllers amazed: only five years after the founding of the International Committee for ...

analysis

A new Solferino moment for humanitarians

This year marks the 160th anniversary of the publication of Henri Dunant’s classic text, ‘A Memory of Solferino’, in 1862. Dunant’s powerful book ...

report

Innovation-Proof Governance for Military AI? How I Learned to Stop Worrying and Love the Bot

Amidst fears over artificial intelligence ‘arms races’, much of the international debate on governing military uses of AI is still focused on preventing the use of lethal autonomous weapons systems...

original research

Responsibility assignment won’t solve the moral issues of artificial intelligence

Who is responsible for the events and consequences caused by using artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial in...

analysis

Shifting the narrative: not weapons, but technologies of warfare

Debates concerning the regulation of choices made by States in conducting hostilities are often limited ...

open forum

Optimising peace through a Universal Global Peace Treaty to constrain the risk of war from a militarised artificial superintelligence

This article argues that an artificial superintelligence (ASI) emerging in a world where war is still normalised constitutes a catastrophic existential risk, either because the ASI might be employed by a nation–s...

commentary

How Does China Aim to Use AI in Warfare?

AI in particular is seen as a “game-changing” critical strategic technology.

statement

The ICRC urges States to achieve tangible results next year towards adopting new legally binding rules on autonomous weapons

ICRC Head of the Arms and Conduct of Hostilities Unit Laurent Gisel on humanitarian concerns raised by the use of certain conventional weapons at the 6th Review Conference of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons

commentary

Australia Could Be Arming Its Unmanned Aircraft

With Boeing's ATS, the operating air force would increase the risk to an enemy entering airspace within the ATS's radius. Either the enemy's fighter force would be burdened with more escort work, or vulnerable aircraft might simply have to be kept out of the area.

policy

Position Paper of the People’s Republic of China on Regulating Military Applications of Artificial Intelligence (AI)

The rapid development and wide applications of AI technology have profoundly changed the way people work and live, bringing great opportunities as well as unforeseeable security challenges to the world. One particular concern is the long-term impacts and potential risks of military applications of AI technology in such aspects as strategic security, rules on governance, and ethics.

statement

Peter Maurer: "Autonomous weapon systems raise ethical concerns for society"

Responsible choices about the future of warfare are needed, including clear and legally binding boundaries to prohibit autonomous weapons systems that are unpredictable or designed to target humans, and strict regulation of the design and use of all others.

research article

Military autonomous drones (UAVs) - from fantasy to reality. Legal and Ethical implications

Autonomous drones raise important judicial and ethical issues about responsibility for unintentional harm which will be discussed in this paper.

perspective

Innovation and opportunity: review of the UK’s national AI strategy

The publication of the UK’s National Artificial Intelligence (AI) Strategy represents a step-change in the...signalling’ document. Indeed, we read the National AI Strategy as a vision for innovation and... We pro...

commentary

Artificial Intelligence Is the F-16's New Secret Weapon

The F-16 may soon operate within a complex digital ecosystem. 

commentary

The Department of Defense is issuing AI ethics guidelines for tech contractors

The controversy over Project Maven shows the department has a serious trust problem. This is an attempt to fix that.

original research

Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US

Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential har...

analysis

Autonomous weapon systems: what the law says – and does not say – about the human role in the use of force

Intergovernmental discussions on the regulation of emerging technologies in the area of (lethal) autonomous weapon ...

article

Views and recommendations of the ICRC for the Sixth Review Conference of the Convention on Certain Conventional Weapons

The Sixth Review Conference of the Convention on Certain Conventional Weapons (CCW), in December 2021 in Geneva, is a key moment for High Contracting Parties to take stock of, and build on, the important role the CCW has played in minimizing suffering in armed conflict.

research article

Truth, Lies and New Weapons Technologies: Prospects for Jus in Silico?

This article tests the proposition that new weapons technology requires Christian ethics to dispense with the just war tradition (JWT) and argues for its development rather than dissolution. Those working in the JWT should be under no illusions, however, ...

commentary

Russia Looks to Combat Drones with Marker Robots

The Marker is expected to become the foundation for testing the interaction between ground robots, unmanned aerial vehicles and special operations forces.

research article

Ethical Principles for Artificial Intelligence in National Defence

Defence agencies across the globe identify artificial intelligence (AI) as a key technology to maintain an ... a result, efforts to develop or acquire AI capabilities for defence are growing on a global scale. Un...

analysis

Engaging with the industry: integrating IHL into new technologies in urban warfare

Alongside the urbanization of armed conflict lies a second trend: the increase in the use ...

commentary

An Autonomous Robot May Have Already Killed Humans

Here is how the weapons could be more destabilizing than nukes. 

analysis

Autonomy in weapons systems: playing catch up with technology

For almost eight years now, the international community at the United Nations (UN) has been ...

policy

The Ethical Norms for the New Generation Artificial Intelligence, China

The National Governance Committee for the New Generation Artificial Intelligence published the “Ethical Norms for the New Generation Artificial Intelligence”. It aims to integrate ethics into the entire lifecycle of AI, to provide ethical guidelines for natural persons, legal persons, and other related organizations engaged in AI-related activities.

report

Code of conduct on artificial intelligence in military systems

This draft Code of Conduct for AI-enabled military systems is the product of a two-year consultation process among Chinese, American,…

original research

Mapping global AI governance: a nascent regime in a fragmented landscape

The rapid advances in the development and rollout of artificial intelligence (AI) technologies over the past years have triggered a frenzy of regulatory initiatives at various levels of government and the priv...

open forum

Professional ethics and social responsibility: military work and peacebuilding

This paper investigates four questions related to ethical issues associated with the involvement of engineers and scientists in 'military work', including the influence of ethical ... )-centred systems perspectiv...

analysis

The value (and danger) of ‘shock’ in regulating new technology during armed conflict

The rules and standards of war are not self-correcting. Contradictions, gaps, and ambiguities often endure until an external pressure makes them salient. This ...

statement

Autonomous weapons: The ICRC recommends adopting new rules

The ICRC recommends that states adopt new, legally binding rules to regulate autonomous weapon systems to ensure that sufficient human control and judgement is retained in the use of force. It is the ICRC's view that this will require prohibiting certain types of autonomous weapon systems and strictly regulating all others.

analysis

Responsible and Ethical Military AI

Allies of the United States have begun to develop their own policy approaches to responsible military use of artificial intelligence. This issue brief looks at key allies with articulated, emerging, and nascent views on how to manage ethical risk in adopting military AI. The report compares their convergences and divergences, offering pathways for the United States, its allies, and multilateral institutions to develop common approaches to responsible AI implementation.

analysis

Military AI Cooperation Toolbox

The Department of Defense can already begin applying its existing international science and technology agreements, global scientific networks, and role in multilateral institutions to stimulate digital defense cooperation. This issue brief frames this collection of options as a military AI cooperation toolbox, finding that the available tools offer valuable pathways to align policies, advance research, development, and testing, and to connect personnel, albeit in more structured ways in the Euro-Atlantic than in the Indo-Pacific.

analysis

Future developments in military cyber operations and their impact on the risk of civilian harm

Over the past decade, several States have begun to develop military cyber elements capable of ...

commentary

US Needs to Defend Its Artificial Intelligence Better, Says Pentagon No. 2

AI safety is often overlooked in the private sector, but Deputy Secretary Kathleen Hicks wants the Defense Department to lead a cultural change.

analysis

Stepping into the breach: military responses to global cyber insecurity

As the global geo-political landscape continues to experience increasing fragmentation, cyberspace grows in importance as ...

analysis

Avoiding civilian harm during military cyber operations: six key takeaways

In today’s armed conflicts, cyber operations are increasingly used in support of and alongside kinetic ...

policy

Norway’s Policy on Emerging Military Technologies: Widening the Debate on AI and Lethal Autonomous Weapon Systems

Stai, Nora Kristine & Bruno Oliveira Martins (2021) Norway’s Policy on Emerging Military Technologies: Widening the Debate on AI and Lethal Autonomous Weapon Systems, PRIO Policy Brief, 11. Oslo: PRIO.

report

Autonomous Weapon Systems and International Humanitarian Law: Identifying Limits and the Required Type and Degree of Human–Machine Interaction

Compliance with international humanitarian law (IHL) is recognized as a critical benchmark for assessing the acceptability of autonomous weapon systems (AWS). However, in certain key respects, how and to what extent existing IHL rules provide limits on the development and use of AWS remains either subject to debate or underexplored.

research article

Presidential use of diversionary drone force and public support

During times of domestic turmoil, the use of force abroad becomes an appealing strategy to US presidents in hopes of diverting attention away from internal conditions and toward a foreign policy success. Weaponized drone technology presents a low cost and ...

position paper

ICRC Position on Autonomous Weapon Systems [position and background paper]

The International Committee of the Red Cross (ICRC) has, since 2015, urged States to establish internationally agreed limits on autonomous weapon systems to ensure civilian protection, compliance with international humanitarian law, and ethical acceptability. With a view to supporting current efforts to establish international limits on autonomous weapon systems that address

commentary

Red Cross Calls for More Limits on Autonomous Weapons

Experts said the group’s unique stature might get governments to the negotiating table at last.

position paper

ICRC position on autonomous weapon systems [position on autonomous weapon systems paper]

In this position and background paper, the ICRC recommends that States adopt new, legally binding rules, with a view to supporting current efforts to establish international limits on autonomous weapon systems that address the risks they raise.

statement

Peter Maurer: “We must decide what role we want human beings to play in life-and-death decisions during armed conflicts”

Speech given by Mr Peter Maurer, President of the International Committee of the Red Cross (ICRC), during a virtual briefing on the new ICRC position on autonomous weapon systems.

report

Principles for the Combat Employment of Weapon Systems with Autonomous Functionalities

These seven new principles concentrate on the responsible use of autonomous functionalities in armed conflict in ways that preserve human judgment and responsibility over the ...

analysis

Ethics and Artificial Intelligence

The law plays a vital role in how artificial intelligence can be developed and used in ethical ways. But the law is not enough when it contains gaps due to lack of a federal nexus, interest, or the political will to legislate. And law may be too much if it imposes regulatory rigidity and burdens when flexibility and innovation are required. Sound ethical codes and principles concerning AI can help fill legal gaps. In this paper, CSET Distinguished Fellow James E. Baker offers a primer on the limits and promise of three mechanisms to help shape a regulatory regime that maximizes the benefits of AI and minimizes its potential harms.

commentary

China Is ‘Danger Close’ to US in AI Race, DOD AI Chief Says

JAIC leader stresses that AI ethics guidelines don’t slow down the United States. In fact, they are essential.

report

Challenges in Regulating Lethal Autonomous Weapons Under International Law

Since 2017, the United Nations (UN) has regularly convened a Group of Governmental Experts (GGE) to explore the technical, legal, and ethical issues surrounding the deployment of lethal autonomous...

report

Explaining the Nuclear Challenges Posed by Emerging and Disruptive Technology: A Primer for European Policymakers and Professionals

This paper is a primer for those seeking to engage with current debates on nuclear risk in Europe. It demystifies and contextualizes the challenges posed by emerging and disruptive technologies in the nuclear realm. It looks in detail at five significant and potentially disruptive technological developments—hypersonic weapons, missile defence, artificial intelligence and automation, counterspace capabilities, and computer network operations (cyber)—to highlight often-overlooked nuances and explain how some of the challenges presented by these developments are more marginal, established and manageable than is sometimes portrayed. By emphasizing the primacy of politics over technology when it comes to meeting nuclear challenges, this paper also seeks to provide a basis for targeted risk reduction and arms control, as well as normative recommendations for policymakers and professionals working across Europe.

position paper

ICRC Position Paper: Artificial intelligence and machine learning in armed conflict: A human-centred approach

At a time of increasing conflict and rapid technological change, the International Committee of the Red Cross (ICRC) needs both to understand the impact of new technologies on people affected by armed conflict and to design humanitarian solutions that address the needs of the most vulnerable.

book chapter

Applying AI on the Battlefield: The Ethical Debates

Reichberg, Gregory M. & Henrik Syse (2021) Applying AI on the Battlefield: The Ethical Debates, in von Braun, Joachim; Margaret S. Archer; Gregory M. Reichberg; & Marcelo Sánchez Sorondo, eds, Robotics, AI, and Humanity: Science, Ethics, and Poli...

commentary

Illiteracy, Not Morality, Is Holding Back Military Integration of Artificial Intelligence

A data-illiterate culture in the military is widening the gap between the United States and its competitors. Success will require deeper and more direct congressional action.

commentary

Morality Poses the Biggest Risk to Military Integration of Artificial Intelligence

Waiting to act on AI integration into our weapons systems puts us behind the technological curve required to effectively compete with our foes.

analysis

AI Verification

The rapid integration of artificial intelligence into military systems raises critical questions of ethics, design and safety. While many states and organizations have called for some form of “AI arms control,” few have discussed the technical details of verifying countries’ compliance with these regulations. This brief offers a starting point, defining the goals of “AI verification” and proposing several mechanisms to support arms inspections and continuous verification.

commentary

Meet the U.S. Navy’s Unmanned Ships of the Future

The service will need newer, better high-tech drones to help fight future conflicts.

book review

AI ethics – a review of three recent publications

In recent years, AI has become a hotly debated topic across different disciplines and fields of society. Rapidly advancing technological innovations, especially in areas such as machine learning (as well as increasingly widespread uses of AI-based systems), have brought about a growing awareness of the need for AI ethics, whether in politics, industry, science, or in society at large.

report

Responsible Artificial Intelligence Research and Innovation for International Peace and Security

In 2018 the United Nations Secretary-General identified responsible research and innovation (RRI) in science and technology as an approach for academia, the private sector and governments to work on the mitigation of risks that are posed by new technologies.

report

Responsible Military Use of Artificial Intelligence: Can the European Union Lead the Way in Developing Best Practice?

The military use of artificial intelligence (AI) has become the focus of great power competition. In 2019, several European Union (EU) member states called for greater collaboration between EU member states on the topic of AI in defence. This report explores why the EU and its member states would benefit politically, strategically and economically from developing principles and standards for the responsible military use of AI. It maps what has already been done on the topic and how further expert discussions within the EU on legal compliance, ethics and technical safety could be conducted. The report offers concrete ways for the EU and its member states to work towards common principles and best practices for the responsible military use of AI.

research article

Artificial intelligence and rationalized unaccountability: Ideology of the elites?

In this Connexions essay, we focus on intelligent agent programs that are cutting-edge solutions of contemporary artificial intelligence (AI). We explore how these programs become objects of desire that contain a radical promise to change organizing and ...

commentary

Pentagon Hosts Meeting on Ethical Use of Military AI With Allies and Partners

This comes against the backdrop of growing interest in global technology cooperation.

research article

Dreaming with drones: Palestine under the shadow of unseen war

This article discusses how the first-person genre, especially a Gazan wartime diary, allows both writer and reader to imagine new possibilities for understanding contemporary colonial drone warfare, which is instrumental in the strategic silencing and ...

original paper

Operations of power in autonomous weapon systems: ethical conditions and socio-political prospects

The purpose of this article is to provide a multi-perspective examination of one of the most important contemporary security issues: weaponized, and especially lethal, artificial intelligence. This technology ...

commentary

Should Drones and AI Be Allowed to Kill by Themselves?

It’s a simple question: should robots kill by themselves? The technology is here. Unmanned systems, both ground and air robots, can autonomously seek, find, track, target and destroy enemies without human intervention.

original paper

The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation

In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence (AI), entitled ‘New Generation Artificial Intelligence Development Plan’ (新一代人工智能发展规划). This strategy ...

publication

Autonomous Weapon Systems: Implications of Increasing Autonomy in the Critical Functions of Weapons

The development of autonomous weapon systems raises the prospect of the loss of human control over weapons and the use of force.

publication

Expert Meeting: Autonomous Weapon Systems, Technical, Military, Legal and Humanitarian Aspects

The ICRC convened an international expert meeting on autonomous weapon systems from 26 to 28 March 2014. It brought together government experts from 21 States and 13 individual experts, including roboticists, jurists, ethicists, and representatives from the United Nations and non-governmental organizations.

article

Limits on Autonomy in Weapon Systems

Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control

report

Artificial Intelligence, Strategic Stability and Nuclear Risk

This report aims to offer the reader a concrete understanding of how the adoption of artificial intelligence (AI) by nuclear-armed states could have an impact on strategic stability and nuclear risk and how related challenges could be addressed at the policy level. The analysis builds on extensive data collection on the AI-related technical and strategic developments of nuclear-armed states. It also builds on the authors’ conclusions from a series of regional workshops that SIPRI organized in Sweden (on Euro-Atlantic dynamics), China (on East Asian dynamics) and Sri Lanka (on South Asian dynamics), as well as a transregional workshop in New York. At these workshops, AI experts, scholars and practitioners who work on arms control, nuclear strategy and regional security had the opportunity to discuss why and how the adoption of AI capabilities by nuclear-armed states could have an impact on strategic stability and nuclear risk within or among regions.

report

Limits on Autonomy in Weapon Systems: Identifying Practical Elements of Human Control

There is wide recognition that the need to preserve human control over weapon systems and the use of force in armed conflict will require limits on autonomous weapon systems (AWS).

research article

Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance

Achieving the global benefits of artificial intelligence (AI) will require international cooperation on many areas of governance and ethical standards, while allowing for diverse cultural perspectives and prio...

report

Military Applications of AI Raise Ethical Concerns

Artificial intelligence offers great promise for national defense. For example, a growing number of robotic vehicles and autonomous weapons can operate in areas too hazardous for soldiers. But what are the ethical implications of using AI in war or even to enhance security in peacetime?

research article

How to translate artificial intelligence? Myths and justifications in public discourse

Automated technologies populating today’s online world rely on social expectations about how “smart” they appear to be. Algorithmic processing, as well as bias and missteps in the course of their development, all come to shape a cultural realm that in turn determines what they come to be about. It is our contention that a robust analytical frame could be derived from culturally driven Science and Technology Studies while focusing on Callon’s concept of translation. Excitement and...

report

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume III, South Asian Perspectives

This edited volume is the third in a series of three. The series forms part of a SIPRI project that explores regional perspectives and trends related to the impact that recent advances in artificial intelligence could have on nuclear weapons and doctrines, as well as on strategic stability and nuclear risk. This volume assembles the perspectives of eight experts on South Asia on why and how machine learning and autonomy may become the focus of an arms race among nuclear-armed states. It further explores how the adoption of these technologies may have an impact on their calculation of strategic stability and nuclear risk at the regional and transregional levels.

original paper

Knowledge in the grey zone: AI and cybersecurity

Cybersecurity protects citizens and society from harm perpetrated through computer networks. Its task is made ever more complex by the diversity of actors—criminals, spies, militaries, hacktivists, firms—opera...

research article

Ethics of autonomous weapons systems and its applicability to any AI systems

Most artificial intelligence technologies are dual-use. They are incorporated into both peaceful civilian applications and military weapons systems. Most of the existing codes of conduct and ethical principles on artificial intelligence address the former while largely ignoring the latter.

commentary

How Far Are We From Developing AI-Powered Tanks?

A remote control tank could be in the near future, but a killer robot tank is likely still many years away.

report

Killer Robots: Fact or Fiction? Autonomous Weapon Systems within the Framework of International Humanitarian Law

Autonomous weapons systems have presented an accelerated development in recent years. The use of this type of weapon in scenarios of armed conflict is not expressly regulated...

commentary

US Department of Defense Adopts Artificial Intelligence Ethical Principles

The Pentagon adopted a set of ethical guidelines on the use of AI.

commentary

Pentagon to Adopt Detailed Principles for Using AI

Sources say the list will closely follow an October report from a defense advisory board.

analysis

A New Year’s resolution: bringing IHL home

As the old year bids farewell and the new year takes shape, we tend to ...

commentary

Elsa B. Kania on Artificial Intelligence and Great Power Competition

On AI’s potential, military uses, and the fallacy of an AI “arms race.”

commentary

AI for Peace

The United States should apply lessons from the 70-year history of governing nuclear technology by building a framework for governing AI military technology. An AI for Peace program should articulate the dangers of this new technology, principles to manage the dangers, and a structure to shape the incentives for other states.

analysis

‘Act today, shape tomorrow’: the 33rd International Conference

Today we launch the 33rd International Conference of the Red Cross and Red Crescent, a ...

statement

States must address concerns raised by autonomous weapons

Convention on prohibitions or restrictions on use of certain conventional weapons which may be deemed to be excessively injurious.

commentary

DeepMind’s AI has now outcompeted nearly all human players at StarCraft II

AlphaStar cooperated with itself to learn new strategies for conquering the popular galactic warfare game.

original research

Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability

This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already r...

commentary

Military artificial intelligence can be easily and dangerously fooled

AI warfare is beginning to dominate military strategy in the US and China, but is the technology ready?

statement

Military needs can never justify using inhumane or indiscriminate weapons

Statement to UN General Assembly First Committee: General debate on all disarmament and international security agenda items

commentary

Ethics of AI and Cybersecurity When Sovereignty is at Stake

Sovereignty and strategic autonomy are felt to be at risk today, being threatened by the forces of rising international tensions, disruptive digital transformations and explosive growth of cybersecurity incide...

report

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume II, East Asian Perspectives

This edited volume is the second of a series of three. They form part of a SIPRI project that explores regional perspectives and trends related to the impact that recent advances in artificial intelligence could have on nuclear weapons and doctrines, as well as on strategic stability and nuclear risk. This volume assembles the perspectives of 13 experts from East Asia, Russia and the United States on why and how machine learning and autonomy may become the focus of an arms race among nuclear-armed states. It further explores how the adoption of these technologies may have an impact on their calculation of strategic stability and nuclear risk at the regional and transregional levels.

analysis

Autonomous Weapons Systems: When is the right time to regulate?

Those wishing to control the spread and use of autonomous weapons systems generally favour pre-emptive ...

report

The Militarization of Artificial Intelligence: A Wake-Up Call for the Global South

The militarization of artificial intelligence (AI) is well under way and leading military powers have been investing large resources in emerging technologies. Calls for AI governance at...

commentary

The Role of the United Nations in Addressing Emerging Technologies in the Area of Lethal Autonomous Weapons Systems

It is only natural that advances in the intelligent autonomy of digital systems attract the attention of Governments, scientists and civil society concerned about the possible deployment and use of lethal autonomous weapons. What is needed is a forum to discuss these concerns and construct common understandings regarding possible solutions. ...

commentary

Responsible Innovation for a New Era in Science and Technology

Today we are at the dawn of an age of unprecedented technological change. In areas from robotics and artificial intelligence (AI) to the material and life sciences, the coming decades promise innovations that can help us promote peace, protect our planet and address the root causes of suffering in our world. ...

position paper

Responsible AI: requirements and challenges

This position paper discusses the requirements and challenges for responsible AI with respect to two interdependent objectives: (i) how to foster research and development efforts toward socially beneficial app...

analysis

Black magic, zombies and dragons: a tale of IHL in the 21st Century

As we marked the 70th anniversary of the Geneva Conventions last month, I want to ...

article

IHL session in Viet Nam: Experts tackle tough questions on cyber warfare and autonomous weapons

As harsh as it may sound, what do you think is "better"? Being killed by a human being or by a robot? If international humanitarian law (IHL) applies to humans and they are obliged to respect it, what body of law prohibits armed drones or robots from killing people? In the context of cyber warfare and autonomous weapons, is IHL still relevant? Or, is it too old to adapt to the

q&a

Intel, Ethics, and Emerging Tech: Q&A with Cortney Weinbaum

Cortney Weinbaum studies topics related to intelligence and cyber policy as a senior management scientist at RAND. In this interview, she discusses challenges facing the intelligence community, the risks of using AI as a solution, and ethics in scientific research.

commentary

The Navy Will Soon Have a New Weapon to Kill 'Battleships' or Submarines

The future is now? 

policy

Governance Principles for the New Generation Artificial Intelligence--Developing Responsible Artificial Intelligence

In order to promote the healthy development of the new generation of AI, strike a better balance between development and governance, ensure the safety, reliability and controllability of AI, support the economic, social and environmental pillars of the UN Sustainable Development Goals, and jointly build a human community with a shared future, all stakeholders concerned with AI development should observe the following principles.

commentary

The US Air Force is enlisting MIT to help sharpen its AI skills

The Air Force Artificial Intelligence Incubator aims to develop technologies that serve the “public good,” not weapons development.

analysis

Legal regulation of AI weapons under international humanitarian law: A Chinese perspective

Arguably, international humanitarian law (IHL) evolves with the development of emerging technologies. The history of ...

report

The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume I, Euro-Atlantic perspectives

This edited volume focuses on the impact of artificial intelligence (AI) on nuclear strategy. It is the first instalment of a trilogy that explores regional perspectives and trends related to the impact that recent advances in AI could have on nuclear weapons and doctrines, strategic stability and nuclear risk. It assembles the views of 14 experts from the Euro-Atlantic community on why and how machine learning and autonomy might become the focus of an arms race among nuclear-armed states; and how the adoption of these technologies might impact their calculation of strategic stability and nuclear risk at the regional and transregional levels.

analysis

Safety net or tangled web: Legal reviews of AI in weapons and war-fighting

Editor’s note: For those interested in the topic of legal reviews of weapons, it is ...

analysis

The viability of data-reliant predictive systems in armed conflict detention

Editor’s note: In this post, Tess Bridgeman continues the discussion on detention and the potential use of ...

analysis

Enhanced distinction: The need for a more focused autonomous weapons targeting discussion at the LAWS GGE

The meeting of the Lethal Autonomous Weapon Systems (LAWS) Group of Governmental Experts (GGE) has been taking place in Geneva this week. This ...

analysis

The need for clear governance frameworks on predictive algorithms in military settings

Editor’s note: In this post, as part of the AI blog series, Lorna McGregor continues the discussion on ...

commentary

Why AI researchers should reconsider protesting involvement in military projects

One Defense Department advisor suggests that “constructive engagement” will be more successful than opting out.

analysis

Detaining by algorithm

Editor’s note: As part of this AI blog series, several posts focus on detention and the ...

analysis

Legal reviews of weapons, means and methods of warfare involving artificial intelligence: 16 elements to consider

What are some of the chief concerns in contemporary debates around legal reviews of weapons, ...

analysis

Expert views on the frontiers of artificial intelligence and conflict

Recent advances in artificial intelligence have the potential to affect many aspects of our lives ...

report

Bio Plus X: Arms Control and the Convergence of Biology and Emerging Technologies

Technological advances in the biological sciences have long presented a challenge to efforts to maintain biosecurity and prevent the proliferation of biological weapons. The convergence of developments in biotechnology with other, emerging technologies such as additive manufacturing, artificial intelligence and robotics has increased the possibilities for the development and use of biological weapons.

commentary

The next ‘Deep Blue’ moment: Self-flying drone racing

In 1997, IBM’s “Deep Blue” computer defeated grandmaster Garry Kasparov in a match of chess. It was an historic moment, marking the end of an era where humans could defeat machines in complex strategy games. Today, artificial intelligence (AI) bots can defeat humans in not only chess, but nearly every digital game that exists.…

commentary

China’s military is rushing to use artificial intelligence

A new report shows that a more literal AI arms race is also under way.

commentary

China's Olive Branch to Save the World from AI Weapons

Is China open to arms control over AI weapons development? The United States should find out.

analysis

Is arms control over emerging technologies just a peacetime luxury? Lessons learned from the First World War

At the turn of the twentieth century, many engineers with fertile imaginations—from France’s Gustave Gabet to America’s Orville Wright—hoped that their inventions would ...

commentary

Does the United States Face an AI Ethics Gap?

Instead of worrying about an artificial intelligence “ethics gap,” U.S. policymakers and the military community could embrace a leadership role in AI ethics. This may help ensure that the AI arms race doesn't become a race to the bottom.

commentary

Never mind killer robots—here are six real AI dangers to watch out for in 2019

Last year a string of controversies revealed a darker (and dumber) side to artificial intelligence.

analysis

Machine autonomy and the constant care obligation

The debate about the way the international community should deal with autonomous weapon systems has ...

commentary

Autonomous Weapons Are Coming, This is How We Get Them Right

Fully autonomous weapons are not only inevitable; they have been in America’s inventory since 1979.

news release

Autonomous weapons: States must agree on what human control means in practice

Should a weapon system be able to make its own “decision” about who to kill?

commentary

AI is not “magic dust” for your company, says Google’s Cloud AI boss

Andrew Moore says getting the technology to work in businesses is a huge challenge.

commentary

Autonomous Weapons: The Ultimate Military Game Changer?

Know this: if autonomous weapons are developed and introduced into the world’s arsenals, then they are unlikely to immediately revolutionize warfare.

discussion

Retaining Meaningful Human Control of Weapons Systems

A panel discussion entitled Retaining Meaningful Human Control of Weapons Systems was held on the side of the First Committee on Disarmament and International Security.

statement

Weapons: Statement of the ICRC to the United Nations, 2018

United Nations General Assembly, 73rd Session, First Committee. Statement delivered by Ms. Kathleen Lawand, Head of Arms Unit, ICRC.

report

The Lawful Use of Autonomous Weapon Systems for Targeted Strikes (Part 2): Targeting Law & Practice

Lethal Autonomous Weapon Systems (LAWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently...

commentary

The Pentagon is putting billions toward military AI research

DARPA, the US Defense Department’s research arm, will spend $2 billion over the next five years on military AI projects.

analysis

The (im)possibility of meaningful human control for lethal autonomous weapon systems

This week, the Group of Governmental Experts (GGE) on lethal autonomous weapon systems (LAWS) is holding their third meeting at the UN Certain ...

analysis

The impact of gender and race bias in AI

Automated decision algorithms are currently propagating gender and race discrimination throughout our global community. The ...

analysis

The human nature of international humanitarian law

International humanitarian law (IHL) regulates the use of force in armed conflict. It inherently provides ...

analysis

Autonomous weapons: Operationalizing meaningful human control

For the second time this year, States will come together in the UN Convention on ...

commentary

Why AI researchers shouldn’t turn their backs on the military

The author of a new book on autonomous weapons says scientists working on artificial intelligence need to do more to prevent the technology from being weaponized.

report

Autonomous Weapon Systems: The Possibility and Probability of Accountability

This paper addresses the challenge of accountability that arises in relation to autonomous weapon systems (AWS), a challenge which focuses on the hypothesis that AWS will make it impossible to...

analysis

Autonomous weapons and human control

Concerns about ensuring sufficient human control over autonomous weapon systems (AWS) have been prominent since ...

analysis

New types of weapons need new forms of governance

The existing national and international tools used to control the emergence and use of weapons that may contravene international humanitarian law (IHL) have ...

commentary

China In Race To Overtake U.S. Military in AI Warfare

AI Weapons: China and America Are Desperate to Dominate This New Technology

commentary

The Army's Next Super Weapon: Robot Tanks?

Yes, this is coming. 

research article

“The Computer Said So”: On the Ethics, Effectiveness, and Cultural Techniques of Predictive Policing

In this paper, I use The New York Times’ debate titled, “Can predictive policing be ethical and effective?” to examine what are seen as the key operations of predictive policing and what impacts they might have in our current culture and society.

report

Preventing Autonomous Weapon Systems from Being Used to Perpetrate Intentional Violations of the Laws of War

Autonomous Weapon Systems (AWS) are essentially weapon systems that, once activated, can select and engage targets without further human intervention. While these are neither currently fielded nor...

analysis

Human judgment and lethal decision-making in war

For the fifth year in a row, government delegates meet at the United Nations in ...

analysis

Autonomous weapon systems: A threat to human dignity?

In the opening scene of Christopher Nolan’s Dunkirk, six British soldiers, looking for food and ...

statement

Towards limits on autonomy in weapon systems

Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts on Lethal Autonomous Weapons Systems, statement of the ICRC. The International Committee of the Red Cross (ICRC) is pleased to contribute its views to this second meeting of the Group of Governmental Experts on “Lethal Autonomous Weapon Systems”.

commentary

Here’s how the US needs to prepare for the age of artificial intelligence

Government indifference toward AI could let the US lose ground to rival countries. But what would a good AI plan actually look like?

commentary

Big organizations may like killer robots, but workers and researchers sure don’t

Tech firms and universities interested in building AI-powered weapons for lucrative military contracts are, predictably, facing some significant pushback.

article

Ethics and autonomous weapon systems: An ethical basis for human control?

As part of continuing reflections on the legal and ethical issues raised by autonomous weapon systems, the ICRC convened a round-table meeting in Geneva from 28 to 29 August 2017 to explore the ethical aspects. This report, “Ethics and autonomous weapon systems: An ethical basis for human control?”, summarizes the discussions and highlights the ICRC’s main conclusions.

analysis

Autonomous weapon systems: An ethical basis for human control?

The requirement for human control: the risks of functionally delegating complex tasks—and associated decisions—to sensors ...

commentary

The Army Wants a New Tank to Take On Russia and China

The Army is massively speeding up its early prototyping of weapons and technology for its Next-Gen Combat Vehicle.

report

The Roboticization of Warfare with Lethal Autonomous Weapon Systems (Laws): Mandate of Humanity or Threat to It?

LAWS are a threat to humanity, and after an objective analysis without a preconceived attachment to a particular outcome they are prohibited by the lex lata. The analysis is not conducted in a...

article

Autonomous weapon systems under international humanitarian law

The United Nations Office for Disarmament Affairs published a collection of articles: "Perspectives on Lethal Autonomous Weapon Systems"

video

India’s Expertise and Influence Essential to Address Challenges of Autonomous Weapons

Dr Hugo Slim, Head of Policy and Humanitarian Diplomacy at the ICRC, visited New Delhi this week to speak at the Raisina Dialogue organised by the Ministry of External Affairs of India and the Observer Research Foundation 16-18 January 2018.

report

Article 36 Reviews: Dealing with the Challenges posed by Emerging Technologies

Article 36 of the 1977 Additional Protocol to the 1949 Geneva Conventions imposes a practical obligation on states to determine whether ‘in the study, development, acquisition or adoption of a new weapon, means or method of warfare’ its use would ‘in some or all circumstances be prohibited by international law’. This mechanism is often colloquially referred to as an ‘Article 36 review’.

perspective

UNODA Occasional Papers – No. 30, November 2017

Perspectives on Lethal Autonomous Weapon Systems

statement

Expert Meeting on Lethal Autonomous Weapons Systems

The ICRC welcomes this first meeting of the Group of Governmental Experts on "Lethal Autonomous Weapons Systems".

analysis

Ethics as a source of law: The Martens clause and autonomous weapons

Ethics evolves, the law changes. In this way, moral progress may occur. Yet the relation ...

report

Mapping the Development of Autonomy in Weapon Systems

This report presents the key findings and recommendations from a one-year mapping study on the development of autonomy in weapon systems.

discussion

Pathways to Banning Fully Autonomous Weapons

On 16 October 2017, the Permanent Mission to the United Nations of Mexico partnered with the International Committee for Robot Arms Control, Human Rights Watch, Seguridad Humana en Latinoamérica y el Caribe and the Campaign to Stop Killer Robots to host a panel discussion entitled “Pathways to Banning Fully Autonomous Weapons” as part of the First Committee side event series for the 72nd Session General Assembly.

discussion

Autonomous Weapon Systems: Understanding Learning Algorithms and Bias

On 5 October 2017, the United Nations Institute for Disarmament Research (UNIDIR) hosted a side event, “Autonomous Weapons Systems: Learning Algorithms and Bias” at the United Nations Headquarters in New York.

analysis

Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

In this blog post, I look at the ethical and legal ramifications of distance in ...

analysis

Introduction to Mini-Series: Autonomous weapon systems and ethics

Autonomous weapon systems & the dictates of public conscience: An ethical basis for human control? On 28–29 August 2017, the ICRC convened a ...

report

Disarmament: A Basic Guide – Fourth Edition (2017)

Conceived as a comprehensive introduction to a field central to the work of the United Nations, Disarmament: A Basic Guide aims to provide a useful overview of the nuanced challenges of building a more peaceful world in the twenty-first century.

commentary

These are the Weapons China Needs to Crush America in a War

Will Beijing build them? 

research article

Robot Wars: US Empire and geopolitics in the robotic age

How will the robot age transform warfare? What geopolitical futures are being imagined by the US military? This article constructs a robotic futurology to examine these crucial questions. Its central concern is how robots – driven by leaps in artificial ...

research article

When AI goes to war: Youth opinion, fictional reality and autonomous weapons

This paper relates the results of deliberation of youth juries about the use of autonomous weapons systems (AWS). The discourse that emerged from the juries centered on several key issues. The jurors expressed the importance of keeping the humans in the decision-making process when it comes to militarizing artificial intelligence, and that only humans are capable of moral agency.

commentary

'Terminator' Robots: The U.S. Military's Ultimate Weapon or Ultimate Nightmare?

“Lethal autonomous weapons threaten to become the third revolution in warfare.”

report

Defending the Boundary: Constraints and Requirements on the Use of Autonomous Weapon Systems Under International Humanitarian and Human Rights Law

The focus of scholarly inquiry into the legality of autonomous weapon systems (AWS) has been on compliance with IHL rules on the conduct of hostilities. Comparably little attention has been given...

commentary

The Dark Secret at the Heart of AI

No one really knows how the most advanced algorithms do what they do. That could be a problem.

commentary

Autonomous weapon systems: Is a space warfare manual required?

The legalities of the use of Autonomous Weapon Systems (AWS) in space warfare are examined. While manuals exist for air and missile warfare, naval warfare and cyber warfare, a clear gap in the literature is the absence of a manual for space warfare.

commentary

How America's Mighty F-15, F-16 or F-35s Could Soon Be Firing Lasers

A big development is almost here.

analysis

The evolution of warfare: Focus on the Law

How has warfare changed over the past 100 years? Is the international community still sufficiently equipped to reasonably minimize its negative effects on ...

commentary

The U.S. Military Might Be on the Verge of the Ultimate Naval Weapon

Thanks to DARPA and BAE Systems. 

report

Legality of Lethal Autonomous Weapons AKA Killer Robots

Automated warfare including aerial drones that are extensively used in ongoing armed conflicts is now an established part of military technology worldwide. It is only logical to assume that the...

report

Autonomous weapon system: Law of armed conflict (LOAC) and other legal challenges

The legality of autonomous weapon systems (AWS) under international law is a swiftly growing issue of importance as technology advances and machines acquire the capacity to operate without human control. This paper argues that the existing laws are ineffective and that a different set of laws are needed. This paper examines several issues that are critical for the development and use of AWS in warfare.

report

Mapping the Development of Autonomy in Weapon Systems: A Primer on Autonomy

Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). The discussion is still at an early stage, with most states parties still in the process of understanding the issues at stake—beginning with the fundamental questions of what constitutes ‘autonomy’ and to what extent it is a matter of concern in the context of weapon systems and the use of force. A number of states parties have stressed that future discussions could usefully benefit from further investigation into the conceptual and technical foundations of the meaning of ‘autonomy’.

report

Mapping the Innovation Ecosystem Driving the Advance of Autonomy in Weapon Systems

Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed internationally under the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). Thus far, the discussion has remained at the informal level. Three informal meetings of experts (held in 2014, 2015 and 2016) have been convened under the auspices of the CCW to discuss questions related to emerging technologies in the area of LAWS. Several delegations have, however, already indicated that they have concerns as to the impact that a new protocol on LAWS could have on innovation, particularly in the civilian sphere, since, arguably, much of the technology on which LAWS might be based could be dual use.

commentary

Killer Robots: Moral Concerns vs. Military Advantages

The U.S. military should balance Americans' ethical concerns over computers making life and death decisions with the need to maintain an edge in the face of rapid advances in artificial intelligence and machine learning across the globe.

commentary

Killer Robots: Moral Concerns vs. Military Advantages

Ethical concerns over computers making life and death decisions are real, and they're important.

article

Impact of new technologies and weapons on international humanitarian law

International humanitarian law, its applicability to new weapons, means and methods of warfare and the influence of remote-controlled and autonomous weapon systems on international humanitarian law were among the topics on the agenda for a recent seminar in Seoul.

analysis

Legal review of new weapons: Scope of the obligation and best practices

Article 36 of the Additional Protocol I to the Geneva Conventions (AP I) states that each State Party is required to determine whether ...

commentary

The Ethics of Artificial Intelligence in Intelligence Agencies

The defense community has already begun a healthy dialogue about the ethics of AI in combat systems.

analysis

War crimes without criminal accountability? The case of Active Protection Systems

As weapon systems take over more and more functions in the targeting cycle that used to be fulfilled by humans, it is increasingly ...

article

Autonomous weapons systems: Profound implications for the future of warfare

In April 2016, the head of the ICRC arms unit, Kathleen Lawand, was invited to London to give the keynote address at the annual meeting of the International Committee for Robot Arms Control (ICRAC) Summit, held at Goldsmiths, University of London. In her presentation, Lawand presented the ICRC's views on autonomous weapon systems, i.e. weapons that can select and fire upon targets on their own.

article

Views of the ICRC on autonomous weapon systems

As a contribution to ongoing discussions in the CCW, this paper highlights some of the key issues on autonomous weapon systems from the perspective of the ICRC, and in the light of discussions at its recent expert meeting.

article

Why autonomous weapon systems matter in Africa

Autonomous weapon systems are one emerging category of weapons of particular concern in Africa. As rapid advances continue to be made in new and emerging technologies of warfare, notably those relying on information technology and robotics, it is important to ensure informed discussion of the many and often complex challenges raised by these developments.

article

Autonomous weapons: Decisions to kill and destroy are a human responsibility

Convention on Certain Conventional Weapons - statement of the ICRC, read at the Meeting of Experts on Lethal Autonomous Weapons Systems.

analysis

e-Briefing: New technologies and the modern battlespace

In recent years, a wide array of new technologies have entered the modern battlefield, giving rise to new means and methods of warfare, ...

report

Autonomous Weapons and Human Control

Nations from around the world met at the United Nations in Geneva, Switzerland to discuss autonomous weapons, potential future weapons that would select and engage targets on ...

report

Accountability Gap, Autonomous Weapon Systems and Modes of Responsibility in International Law

In most circumstances, AWS are incapable of complying with the rules of International Humanitarian Law and International Human Rights Law, leading to violations of important rights like the right to...

report

'Mapping the Debate on LAWS at the CCW: Taking Stock and Moving Forward'

Since 2013 the governance of lethal autonomous weapon systems (LAWS) has been discussed within the framework of the 1980 United Nations Convention on Certain Conventional Weapons (CCW). The discussion is at an early stage, with most states still in the process of understanding the issues at stake. Extended discussions will be necessary to resolve contentious issues and generate a constructive basis for any potential formal negotiation.

report

Autonomous Weapons and Operational Risk

20YY Future of Warfare Initiative Director Paul Scharre examines the risks in future autonomous weapons that would choose their own targets and the potential for catastrophic ...

research article

Public opinion and the politics of the killer robots debate

The possibility that today’s drones could become tomorrow’s killer robots has attracted the attention of people around the world. Scientists and business leaders, from Stephen Hawking to Elon Musk, recently signed a letter urging the world to ban ...

report

Autonomous Weapons: Regulation Tolerant or Regulation Resistant?

This paper applies the author's previously published model for evaluating weapons' susceptibility to attempts to generate international regulations on autonomous weapons. The paper concludes that...

report

Implementing Article 36 Weapon Reviews in the Light of Increasing Autonomy in Weapon Systems

Article 36 of Additional Protocol I of the 1949 Geneva Conventions requires states to conduct legal reviews of all new weapons, means and methods of warfare in order to determine whether their use is prohibited by international law. However, reviewing the legality of weapons with automated and autonomous features presents a number of technical challenges. Such reviews demand complex procedures to test weapon performance and to evaluate the risks associated with unintended loss of control. As such assessments require significant technical and financial resources, there is a strong incentive for deepening cooperation and information sharing between states in the area of weapon reviews. Increased interaction can facilitate the identification of best practices and solutions to reduce costs associated with test and evaluation procedures.

research article

Political accountability and autonomous weapons

Autonomous weapons would have the capacity to select and attack targets without direct human input. One important objection to the introduction of such weapons is that they will make it more difficult to identify and hold accountable those responsible for ...

commentary

Military Robots: Armed, but How Dangerous?

The debate over using artificial intelligence to control lethal weapons in warfare is more complex than it seems.

article

Australia: Q+A with Professor Chris Jenks on autonomous weapons

Technological advances in weaponry mean that decisions about the use of force on the battlefield could increasingly be taken by machines operating without human intervention. A recent event in Canberra, Crossing the Rubicon: the path to offensive autonomous weapons, focused on the range of issues associated with the potential use of these types of systems. Following the event,

commentary

Should 'Killer Robots' Be Banned?

Autonomous weapons could be a military game changer that many want banned. Before considering such a move, we need to refine the debate—and America must demonstrate leadership. 

video

A licence to kill for autonomous weapons?

Autonomous weapons are an emotive subject, with the potential to change the whole nature of warfare. Could machines one day be able to carry out killing without human control, and what should we do about the legal and moral implications of that possibility? A five-day conference in Geneva has been looking at just such issues. Kathleen Lawand, Head of the ICRC's arms unit and

statement

Autonomous weapon systems: Is it morally acceptable for a machine to make life and death decisions?

Convention on Certain Conventional Weapons (CCW), Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 13 - 17 April 2015, Geneva. Statement of the ICRC

report

Autonomous Weapons at the UN: A Primer for Delegates

CNAS experts Paul Scharre, Michael Horowitz, and Kelley Sayler provide a primer for UN delegates on autonomous weapons....

report

An Introduction to Autonomy in Weapon Systems

20YY Warfare Initiative Director Paul Scharre and Adjunct Senior Fellow Michael Horowitz discuss future military systems incorporating greater autonomy....

commentary

The United Nations and Disarmament Treaties

The very first resolution of the General Assembly of the United Nations, in January 1946, addressed the problems raised by the discovery of atomic energy. Despite civil society's efforts, led by scientists and women's peace organizations, leaders of the United States and the Soviet Union rejected measures to curb nuclear ambitions. ...

commentary

Take Note, America: 5 Weapons of War China Should Build Now

China's military is certainly developing some deadly capabilities. Here are five ways it could become even deadlier. 

commentary

Autonomous Weapon Systems: The Military's Smartest Toys?

"We are standing at the cusp of a momentous upheaval in the character of warfare, brought about by the large-scale infusion of robotics into the armed forces."

report

Public Opinion and the Politics of the Killer Robots Debate

The possibility that today’s drones could become tomorrow’s killer robots has attracted the attention of people around the world. Scientists and business leaders from Stephen Hawking to Elon Musk...

article

Autonomous weapon systems - Q & A

A challenge to human control over the use of force. Technological advances in weaponry mean that decisions about the use of force on the battlefield could increasingly be taken by machines operating without human intervention. Here, we examine the potential implications of such a profound change in the way war is waged, and caution against the use of such weapons unless

report

Autonomous Weapon Systems at the United Nations

CNAS experts Michael Horowitz, Paul Scharre and Kelley Sayler examine the issues facing U.N. delegates, along with recommendations for action....

report

Autonomous weapon systems: Technical, military, legal and humanitarian aspects

Expert meeting report. The ICRC convened an international expert meeting on autonomous weapon systems from 26 to 28 March 2014. It brought together government experts from 21 States and 13 individual experts, including roboticists, jurists, ethicists, and representatives from the United Nations and non-governmental organizations. The aim was to gain a better understanding of

article

Lethal Autonomous Weapons: Issues for the International Community

Sandvik, Kristin Bergtora & Nicholas Marsh (2014) Lethal Autonomous Weapons: Issues for the International Community, Security & Defence Agenda, 9 May.

article

Autonomous weaponry and armed conflict

On April 10th 2014, the American Society of International Law and the International Law Association organized a joint Conference in Washington DC on autonomous weaponry and armed conflict. The panel addressed the legal, ethical and political challenges posed by the development of increasingly autonomous weapons systems. Analyzing automated weapons systems through the lenses of

policy

Defining the Scope of Autonomy

Marsh, Nicholas (2014) Defining the Scope of Autonomy, PRIO Policy Brief, 2. Oslo: PRIO.

conference paper

Unmanned Aerial Vehicles and Autonomous Weapons

Sandvik, Kristin Bergtora (2013) Unmanned Aerial Vehicles and Autonomous Weapons, presented at PhD Course: Emerging Military Technologies – New Normative Challenges, Oslo, 13/11/13 – 15/11/13.

report

Autonomous Weapons Systems: Taking the Human Out of International Humanitarian Law

Once confined to science fiction, killer robots will soon be a reality. Both the USA and the UK are currently developing weapons systems that could be capable of autonomously targeting and killing...

report

Arms and Artificial Intelligence: Weapons and Arms Control Applications of Advanced Computing

The impact of information technology in the field of military decision-making is superficially less visible than that of a number of other weapon developments, but its importance has grown steadily since the beginning of the 1980s. It is now the focus of special interest and efforts because of its potential role in modern weapon systems and the prospect of its inclusion as an essential ingredient in many military projects such as the Strategic Defense Initiative.

report

The Uncertain Course: New Weapons, Strategies and Mind-sets

What is the present likelihood of war? Which elements need to be taken into account when making such an assessment? As the world enters the post-nuclear, or 'second' nuclear age, it has to take account of a new revolution in military affairs.