|Language|English, Polish and other|
|Number of pages|41|
The Independent High-Level Expert Group on Artificial Intelligence, set up by the European Commission, has published the Ethics Guidelines for Trustworthy AI.
In its Communications of 25 April 2018 and 7 December 2018, the European Commission set out its vision for artificial intelligence (AI), which supports “ethical, secure and cutting-edge AI made in Europe”. Three pillars underpin the Commission’s vision:
(i) increasing public and private investments in AI to boost its uptake,
(ii) preparing for socio-economic changes, and
(iii) ensuring an appropriate ethical and legal framework to strengthen European values.
To support the implementation of this vision, the Commission established the High-Level Expert Group on Artificial Intelligence (AI HLEG), an independent group mandated with the drafting of two deliverables:
1) AI Ethics Guidelines and
2) Policy and Investment Recommendations.
The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system’s entire life cycle:
1) it should be lawful, complying with all applicable laws and regulations,
2) it should be ethical, ensuring adherence to ethical principles and values, and
3) it should be robust, from both a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm.
Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.
Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission, Ethics Guidelines for Trustworthy AI (08.04.2019)