The European Commission has published seven essentials for ethical, lawful, robust and trustworthy AI.
The Commission has launched a pilot phase to develop ethical AI guidelines that could eventually be enforced and adopted globally.
These guidelines aim to address problems that could arise from AI, particularly as it is integrated into sensitive industries such as education and health care.
The EU convened a group of 52 experts to devise seven requirements for future AI systems and to advise companies and governments on how to apply them ethically.
The seven guidelines are as follows:
Human agency and oversight- AI systems must “empower” humans, allowing them to exercise their fundamental rights and make informed decisions. This can be achieved through “human-in-the-loop, human-on-the-loop, and human-in-command approaches.”
Technical robustness and safety- “AI systems need to be resilient and secure” in order to prevent and minimize unintentional harm. There should always be a fallback plan in case something goes wrong, such as in adversarial situations.
Privacy and data governance- Personal data should be kept safe, secure, and private.
Transparency- Traceability mechanisms could be put in place to ensure that the stakeholders involved understand how the AI system makes decisions.
Diversity, non-discrimination, and fairness- “Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination.”
Environmental and societal wellbeing- AI systems need to be “sustainable and environmentally friendly” and should account for the environment as well as “social and societal impacts”.
Accountability- The systems should be auditable and should ensure “responsibility”, “accountability”, as well as “adequate and accessible redress”.
The Commission also set up a group comprising AI experts from the tech industry, academia, and civil society.
Andrus Ansip, VP for the Digital Single Market, stated: “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.
“Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
The EU plans to take a three-step approach: outline the key requirements for trustworthy AI, launch the pilot it has devised to gather feedback from stakeholders, and work toward an international consensus on human-centric AI.
Organizations and public administrations are encouraged to sign up to the European AI Alliance; upon doing so, they will be notified once the pilot starts.
The Commission also aims to meet with like-minded partners like Singapore, Canada and Japan to continue their discussions on the topic.
In the meantime, Google recently shut down its advisory council, which had been set up to contribute to the responsible development of AI. The move came after one of the key members of the Advanced Technology External Advisory Council (ATEAC) resigned.
The company released a statement, which reads: “It’s become clear that in the current environment, ATEAC can’t function as we wanted. So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”
By autumn, the Commission expects to have its plans ready to launch AI research excellence centers, begin setting up innovation hubs with stakeholders and member states, and develop and implement a common data-sharing model.