The UN Security Council (UNSC) has turned its attention to advanced technologies such as artificial intelligence (AI) in its efforts to maintain peace and security around the world.
Established by the United Nations Charter in 1945, the Security Council is one of the UN’s six principal organs. As outlined in the UN’s founding document, the Council has four main objectives: (1) to maintain international peace and security, (2) to develop friendly relations among nations, (3) to resolve international conflicts and protect human rights, and (4) to harmonize the actions of member nations. Since its first meeting in 1946, the Security Council has worked to uphold this mission while representing the interests of all member states. The Council has fifteen members at any given time: five permanent and ten rotating. The five permanent members, chosen in the wake of World War II and each holding the power to veto any decision, are the United States, the United Kingdom, France, China, and the Russian Federation. The ten rotating members, elected by the General Assembly to two-year terms, currently are Belgium, Côte d’Ivoire, the Dominican Republic, Equatorial Guinea, Germany, Indonesia, Kuwait, Peru, Poland, and South Africa. In addressing threats to international security, the Council fosters negotiations, imposes sanctions, and authorizes the use of force, including the deployment of peacekeeping missions.
AI has the potential to improve the health and well-being of individuals, communities, and states, and to help meet the UN’s Sustainable Development Goals. However, certain uses of AI could also undermine international peace and security by raising concerns about the safety and security of the technology, accelerating the pace of armed conflict, or loosening human control over the means of war.
In 2019, the United Nations Office for Disarmament Affairs, the Stanley Center, and the Stimson Center partnered on a workshop and a series of papers to facilitate a multistakeholder discussion among experts from Member States, industry, academia, and research institutions, with the aim of building understanding of the peace and security implications of AI. This publication captures that conversation and shares assessments of the topic from US, Chinese, and Russian perspectives. It is intended to provide a starting point for more robust dialogues among diverse communities of stakeholders as they endeavor to maximize the benefits of AI while mitigating the misapplication of this important technology.
In the 21st century, the international community has entered a fifth domain of warfare, moving from the Kalashnikov to the keyboard. It was once asserted that "guns don't kill people; people kill people." Today, weapons themselves may make the decisions. When artificial intelligence (hereinafter AI) and robotics come together, two very different outcomes can follow. On the one hand, society may see immeasurable social, economic, and political improvements. On the other hand, militaries can use these tools to create new weapons of mass destruction, namely lethal autonomous weapon systems (hereinafter LAWS), potentially rendering nuclear weapons obsolete. Recognizing the threat that lethal autonomous weapons pose to international peace and security, 116 founders of robotics and AI companies from 26 countries released an open letter urging the United Nations to ban lethal autonomous weapon systems.
Accordingly, in 2016, under the auspices of the Convention on Certain Conventional Weapons (CCW), a Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems was established. The GGE is mandated to examine emerging technologies in the area of lethal autonomous weapon systems. Yet further measures are needed to restrict and ban the use of LAWS. It therefore falls to the UN Security Council to mandate new international law on fully autonomous weapons.