The EU has initiated a discussion among member states to settle some of the critical areas of confusion still open in the draft AI Act
The world is rushing towards a universally acceptable legislation for artificial intelligence (AI). This haste in working out a practical framework for AI regulation may remind us of another frantic race to save the world, which we recently witnessed – the race for a COVID vaccine! The urgency in formulating a functional AI regulation is understandable. As a fast-emerging technology at the cutting edge of research, a new AI innovation is hitting the news almost every other day. Especially after the advent of Generative AI, with sensational products like DALL-E and ChatGPT, AI solutions are gaining a "life" of their own at incredible speed.
Just a fortnight ago, in mid-February 2023, the European Union (EU) initiated a discussion to settle some of the most critical questions still open on the draft AI Act – a proposal to regulate artificial intelligence based on its capacity to cause harm. The new deliberations cover the areas of definition, scope, high-risk categorisation, prohibited practices, database registration, general principles, and AI literacy. Clearing the air on these points would be a big step in making the draft legislation more universal as well as exhaustive.
EU takes the lead
The EU has always been a frontrunner in the quest for AI regulations. Starting with the GDPR (General Data Protection Regulation) in 2016, and now with the latest Artificial Intelligence Act, the EU is empowering its oversight bodies with legal weapons to crack down on uses of AI that it deems harmful.
In 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published the “Ethics Guidelines for Trustworthy AI” and the “Policy and investment recommendations for trustworthy Artificial Intelligence”. This second publication covers four principal subjects: humans and society at large, research and academia, the private sector, and the public sector.
The GDPR provides for the data controller to be responsible for – and be able to demonstrate compliance with – the safeguarding principles relating to the processing of personal data. Coupled with the GDPR legislation on data protection and individual privacy, the EU appears to be becoming the global standard bearer on laws governing AI. While the GDPR defines data privacy, the AI Act gives it teeth by laying down what companies can do with that data when developing AI models.
Some of the toughest proposed regulations target AI models developed to extract actionable intelligence from data. They give EU oversight bodies enormous powers to order Big Tech – or any company using AI – to retrain models deemed high risk.
Critical questions still open
The latest discussion agenda for a shadow meeting was announced in February by the European Parliament's rapporteurs on the Artificial Intelligence Act – Brando Benifei and Dragoș Tudorache. A rapporteur is a person appointed by an organisation to report on the proceedings of its meetings. It is an eminent role in the legislative process of the European Parliament. Rapporteurs are responsible for handling a legislative proposal on behalf of the European Commission, the Council of the European Union or the European Parliament. Based on the relevant proposal, the rapporteur is appointed by the relevant committee of the European Parliament charged with drawing up a legislative recommendation for the Parliament to vote on. Thus, the rapporteur has a substantial influence in the process of adopting any EU legislation.
Currently, Benifei from Italy serves as the European Parliament’s lead rapporteur on the Artificial Intelligence Act. Tudorache from Romania is a rapporteur on the Special Committee on Artificial Intelligence in a Digital Age, among others.
Let us cast a quick glance at the open issues being debated:
- Definition amended and frozen: To define artificial intelligence, the EU lawmakers had recommended using the definition set by the US National Institute of Standards and Technology – where AI is defined as "an engineered or machine-based system that can, for a given set of objectives, generate output such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy".
In a key move, the EU draft legislation moved the AI definition from the annexure section to the main body of the law. This implies that the definition is frozen, and no future amendment to it is possible without reopening the full legislative process.
Additionally, alongside the above definition, the draft sets down in its preamble the following three attributes of AI:
- it should be able to act with a minimum level of independence from human control,
- it may possess learning capabilities (machine learning), and
- it does not cover fully traceable and predictable systems.
Also, the text now clarifies that whenever an AI solution is integrated into a more extensive system, all the components interacting with the new solution should be considered part of the system.
- Scope expanded for providers: Regarding scope, the co-rapporteurs raised the question of whether the AI regulation under preparation should prevent EU providers not only from deploying prohibited AI solutions, such as social scoring systems, in the single market but also from exporting them abroad. A partial exemption was proposed for open-source AI systems.
- Stricter high-risk categorisation: As per the Act, some AI systems are defined as having a high risk of causing harm. Such high-risk areas and use cases are listed in Annex III. This category is now being saddled with stricter compliance requirements.
If developers do not consider their AI system to be high-risk, even though it falls under Annex III, they can notify the national authority – or the AI office if more than one EU country is involved. The proposed amendment suggests including a tacit-consent clause whereby the exemption is deemed granted if the authority does not reply within three months of the notification.
- Prohibited practices elaborated: The amendments have added AI systems that use biometric traits to categorise people to the list of prohibited practices. Under the GDPR, such protected information includes race and sexual and religious orientation. Many EU lawmakers also favour banning AI models that populate facial recognition databases by indiscriminately scraping images from social media profile pictures, security cameras and any other public source, a practice currently included in the list of high-risk categories.
- EU database requirements widened: According to the original draft of the Act, high-risk AI system providers were required to register in an EU-wide database. In the new recommendation, this obligation is being extended to also include AI deployers that are public bodies or private companies designated as gatekeepers under the Digital Markets Act.
- General principles made more exhaustive: A new article with general principles applying to all AI systems has been introduced, on a voluntary basis, for all algorithms not falling under the high-risk category. It is proposed that the principles include human oversight, technical robustness, compliance with data protection rules, explainability, non-discrimination, fairness, and social and environmental well-being.
- AI literacy clause added: A new requirement stipulates that the EU and its member states should promote media literacy among the general public. AI providers and deployers will have to ensure AI literacy for their staff, including how to comply with the AI regulation.