Orgalim welcomes yesterday’s publication of the ‘Ethics Guidelines for Trustworthy AI’ developed by the High-Level Expert Group on Artificial Intelligence (HLEG on AI), a group of 52 representatives from academia, civil society and industry appointed by the European Commission to advise on implementation of the European strategy on AI.
As a member of the Group, Orgalim believes that these guidelines provide a general framework enabling stakeholders in the European AI ecosystem to apply a set of commonly agreed ethical principles. The guidelines aim to support the development of AI systems that, in addition to fully complying with all applicable laws, are also ethical and technically and socially robust (‘trustworthy AI’).
“This engagement with ethical questions is central to promoting a human-centric approach to AI in Europe,” underlined Malte Lohan, Orgalim Director General. “It will help foster the continued trust of citizens and businesses as these technologies become ever more broadly integrated into our society and economy.”
While the publication of these guidelines is a positive first step, a stronger sectoral approach will be needed in the forthcoming piloting process if they are to deliver on their objectives. The HLEG has proposed that this piloting process be carried out in the second half of 2019, and Orgalim fully supports this initiative. The process will cover both the governance structure to be put in place and the assessment list to be used at company level, as proposed in Chapter III of the Ethics Guidelines. “We at Orgalim stand ready to contribute to the piloting phase,” commented Malte Lohan. “We can provide the consensus view of a very large sector of Europe’s industry in the ‘qualitative’ process, which we believe will be a necessary complement to individual companies’ contributions in order to make the future approach workable. We look forward to the European Commission’s guidance on how these processes will be organised.”
Orgalim represents a branch of industry that has for many years been producing and using AI applications (often referred to as ‘industrial AI’ or ‘embedded AI’) which have proven extremely promising for the global competitiveness of Europe’s economy and the welfare of our society. While discussions around the ethics of AI are necessary – not least to address misunderstandings about the exact nature and functions of AI – not all such discussions are equally relevant for every use case in every sector of industry or services.
In order to create favourable conditions for the development of trustworthy AI, it will also be necessary to shape a pro-innovation policy framework and boost investment: this will be essential to ensure European industry maintains a competitive edge in the global AI race. The second deliverable from the HLEG on AI will be to provide a set of policy and investment recommendations to this end, and Orgalim will continue to actively support this work over the coming weeks and months.
For further information, please consult Orgalim’s detailed Position Paper on Ethics Guidelines for Trustworthy AI. Additional queries should be directed to Eugenia Forcat, Director of Communications.
09 Apr 2019