The EU Approach to Ethics Guidelines for Trustworthy Artificial Intelligence
Published 2019 in Computer Law Review International
ABSTRACT
A continuous journey towards an appropriate governance framework for AI. As part of its European strategy for Artificial Intelligence (AI), and as a response to the increasing ethical questions raised by this technology, the European Commission established an independent High-Level Expert Group on Artificial Intelligence (AI HLEG) in June 2018. The group was tasked to draft two deliverables: AI Ethics Guidelines and Policy and Investment Recommendations. Nine months later, its first deliverable was published, putting forward a comprehensive framework to achieve “Trustworthy AI” by offering ethical guidance to AI practitioners. This paper dives into the work carried out by the group, focusing in particular on its AI Ethics Guidelines. First, it clarifies the context that led to the creation of the AI HLEG and its mandate (I.). Subsequently, it elaborates on the Guidelines’ aim and purpose (II.) and analyses the Guidelines’ drafting process (III.). Particular focus is given to the questions surrounding the respective roles played by ethics and law in the AI governance landscape (IV.), as well as some of the challenges that had to be overcome throughout the process (V.). Finally, the paper places the Guidelines in an international context and sets out the next steps (VI.) ahead on the journey towards an appropriate governance framework for AI (VII.).
PUBLICATION RECORD
- Publication date
2019-08-01
- Venue
Computer Law Review International
- Fields of study
Law, Philosophy, Computer Science, Sociology
- Source metadata
Semantic Scholar