A Code of Conduct for AI in Defense Should Be an Extension of Other Military Codes
September 11, 2019 | RAND Corporation, Cyber Warfare commentary

       by Cortney Weinbaum

       An AI code of conduct for defense should look a lot like all other defense codes of conduct.

       Since 1948, all members of the United Nations have been expected to uphold the Universal Declaration of Human Rights, which protects individual privacy (article 12), prohibits discrimination (articles 7 and 23), and provides other protections that could broadly be referred to as civil liberties.

       Then, in 1949, the Geneva Conventions created a framework for military activities and operations, later extended by Additional Protocol I. That protocol says that weapons and methods of warfare must not “cause superfluous injury or unnecessary suffering” (Article 35), and that “in the conduct of military operations, constant care shall be taken to spare the civilian population, civilians and civilian objects” (Article 57).

       An AI code of conduct for defense could be a natural extension of these two foundational documents. Like other military programs, AI programs should aim to reduce casualties in warfare and reduce hardships to civilian populations by seeking to minimize effects on humanitarian infrastructure (such as hospitals), critical infrastructure (such as bridges, dams, and power grids), natural resources, and so on.

       Meanwhile, the algorithms themselves should not have been created with training data that discriminates against (or for) any particular race, ethnicity, gender, religious group, or other demographic. Society has already seen how algorithms can become unintentionally biased or can be based on unethical training data, and we can learn from these lessons.

       A global society that would create the Geneva Convention is a society that believes in a moral code for warfare, and this same moral code could extend into its weaponized algorithms.

       Cortney Weinbaum is a management scientist specializing in intelligence topics at the nonprofit, nonpartisan RAND Corporation.

       This commentary originally appeared on Friends of Europe's Debating Security Plus 2019 Programme on September 10, 2019. Commentary gives RAND researchers a platform to convey insights based on their professional expertise and often on their peer-reviewed research and analysis.
