Labelling initiatives, codes of conduct and other self-regulatory mechanisms for artificial intelligence applications
2022-04-27     RAND Corporation

       Research Questions

       In the context of AI applications, what labelling initiatives, codes of conduct and other voluntary, self-regulatory mechanisms are being developed globally?

       What are the main opportunities and challenges associated with the development and implementation of these mechanisms?

       What are the key learnings for the future when discussing voluntary, self-regulatory mechanisms?

       Artificial intelligence (AI) is recognised as a strategically important technology that can contribute to a wide array of societal and economic benefits. However, it is also a technology that may present serious challenges and have unintended consequences. Within this context, trust in AI is recognised as a key prerequisite for the broader uptake of this technology in society. It is therefore vital that AI products, services and systems are developed and implemented responsibly, safely and ethically.

       Through a literature review, a crowdsourcing exercise and interviews with experts, we examined evidence on the use of labelling initiatives and schemes, codes of conduct and other voluntary, self-regulatory mechanisms for the ethical and safe development of AI applications. We draw out a set of common themes, highlight notable divergences between these mechanisms, and outline anticipated opportunities and challenges associated with developing and implementing them. We also offer a series of topics for further consideration to best balance these opportunities and challenges. These topics present a set of key learnings that stakeholders can take forward to understand the potential implications for future action when designing and implementing voluntary, self-regulatory mechanisms. The analysis is intended to stimulate further discussion and debate among stakeholders as applications of AI continue to multiply across the globe, particularly in light of the European Commission's recently published draft proposal for AI regulation.

       Key Findings

       We identified and analysed a range of self-regulatory mechanisms — such as labelling initiatives, certification schemes, seals, trust/quality marks and codes of conduct — across diverse geographical contexts, sectors and AI applications.

       The initiatives span different stages of development, from early-stage (and still conceptual) proposals to operational examples, but many have yet to gain widespread acceptance and use.

       Many of the initiatives assess AI applications against ethical and legal criteria that emphasise safety, human rights and societal values, and are often based on principles that are informed by existing high-level ethical frameworks.

       We found a series of opportunities and challenges associated with the design, development and implementation of these voluntary, self-regulatory tools for AI applications.

       We outlined a set of key considerations that stakeholders can take forward to understand the potential implications for future action when designing, implementing and incentivising the take-up of voluntary, self-regulatory mechanisms, and to help contribute to the creation of a flexible and agile regulatory environment.

       Involving an independent and reputable organisation (for example, to carry out a third-party audit) could strengthen trust in an initiative, ensure effective oversight, and promote credibility and legitimacy.

       Actively engaging multiple interdisciplinary stakeholders to integrate a diversity of views and expertise in the design and development of AI self-regulatory tools could increase buy-in and adoption.

       The use of innovative approaches can help to address the perceived costs and burden associated with implementing self-regulatory mechanisms, and can also provide flexibility and adaptability in assessing AI systems.

       It is important to share learnings, communicate good practice, and evaluate self-regulatory initiatives in order to track their impacts and outcomes over time.

       There is a growing need for coordination and harmonisation of different initiatives to avoid the risk of a fragmented ecosystem and to promote clarity and understanding in the market.

       Rather than a one-size-fits-all approach, it will be important to consider using a combination of different self-regulatory tools for diverse contexts and use cases to incentivise their voluntary adoption.

       Related Products

       Project

       Exploring ways to regulate and build trust in AI (Apr 25, 2022)

       Table of Contents

       Chapter One: Introduction and overview

       Chapter Two: The role of labelling initiatives, codes of conduct and other self-regulatory mechanisms in AI development and use

       Chapter Three: Concluding remarks and reflections on the future

       Annex A: Methodological approach

       Annex B: Longlist of initiatives

       Annex C: Detailed descriptions of some of the initiatives

       Research conducted by RAND Europe

       The research described in this report was prepared for Microsoft and conducted by RAND Europe.

       This report is part of the RAND Corporation Research report series. RAND reports present research findings and objective analysis that address the challenges facing the public and private sectors. All RAND reports undergo rigorous peer review to ensure high standards for research quality and objectivity.

       The RAND Corporation is a nonprofit institution that helps improve policy and decisionmaking through research and analysis. RAND's publications do not necessarily reflect the opinions of its research clients and sponsors.

       

