Developments on the AI Act: An Overview of the Intersection with Data Protection Regulations
by Maria Carolina Moraes
The advent of artificial intelligence (“AI”) systems, a set of technologies capable of generating content, making predictions or taking decisions in an automated manner, will have extensive applicability in many fields, ranging from health to mobility and from public administration to education.
However, these promised advances do not come without especially relevant risks, particularly because their individual and societal effects remain, to a large extent, untested. Since AI systems rely on massive amounts of data (personal and non-personal) as the key premise for autonomous decisions, the Proposal for a Regulation laying down harmonised rules on artificial intelligence (the Artificial Intelligence Act) has important data protection implications.
This article addresses the adjustments to the AI Act proposed by the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS), as data protection authorities, in their Joint Opinion 5/2021, as well as the amendments made in the Draft Report by the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs regarding alignment with the data protection framework.
From the data protection perspective, the EDPB and the EDPS raised questions regarding the scope of the Proposal, the risk-based approach, prohibited uses of AI, high-risk AI systems, governance and the interaction with the data protection framework.
Amendments to the key points of the Proposal
The EDPB and the EDPS recalled the need for the Proposal to reaffirm the rights to private life and to the protection of personal data as pillar EU values recognized in the Universal Declaration of Human Rights (Article 12), the European Convention on Human Rights (Article 8) and the Charter of Fundamental Rights of the EU (Articles 7 and 8). Following this recommendation, the co-rapporteurs agreed to include amendments 3 and 4 to address AI systems’ reliance on the processing of personal data and to explain the consequent legal basis in Article 16 of the TFEU. At the same time, the co-rapporteurs clarified that the Union’s legislation for the protection of personal data, in particular the GDPR, the EUDPR, the ePrivacy Directive and the LED, shall apply to any processing of personal data falling within the scope of the Proposal.
While pursuing consistency with key definitions laid down in the GDPR, the co-rapporteurs included amendments 9 and 63 regarding the notion of biometric data, amendment 66 regarding special categories of personal data and amendment 71 regarding the concept of personal data. Amendments 30 and 100 cover the principles of data minimisation and data protection by design and by default, and amendment 306 highlights the need for high-risk AI systems to include a statement of compliance with data protection regulations.
When it comes to the exhaustive list of high-risk AI systems, this choice concerns the EDPB and the EDPS, as it could create a black-and-white effect with a weak capacity to capture highly risky situations, undermining the overall risk-based approach underlying the Proposal. In addition, the list of high-risk AI systems detailed in Annexes II and III of the Proposal lacks some types of use cases that involve significant risks, so those annexes would need to be regularly updated to ensure that their scope remains appropriate. For this reason, amendments 78 to 80 sought to extend the scope of delegated acts under Articles 7 and 84, as they allow the AI Act to adapt to unforeseen uses of AI that pose a high risk.
Whilst the EDPB and the EDPS welcome the risk-based approach underpinning the Proposal, the authorities consider that a requirement to ensure compliance with the GDPR and the EUDPR should be included in Chapter 2 of Title III. That recommendation was not introduced in the Draft Report, although amendment 90 comprises the analysis of the known and reasonably foreseeable risks that a high-risk AI system can pose to fundamental rights, which would include the protection of personal data.
The EDPB and EDPS recommendation that societal and group risks posed by AI systems should be equally assessed and mitigated was taken up by amendment 85. Likewise, amendment 144 establishes the obligation for users of high-risk AI systems to conduct a data protection impact assessment, in compliance with Article 35 of the GDPR.
Furthermore, one of the greatest advances in the Draft Report is the delimitation of forbidden practices. Given the great risk of discrimination, the Proposal prohibits “social scoring” when performed ‘over a certain period of time’ or ‘by public authorities or on their behalf’. Amendment 16 created a new recital adding predictive policing to the prohibited practices, as it violates the presumption of innocence as well as human dignity. However, private companies, such as social media and cloud service providers, can also process vast amounts of personal data and conduct social scoring. Consequently, while the EDPB and the EDPS recommend that the future AI Regulation prohibit any type of social scoring, the Draft Report only includes such practices among the high-risk AI systems.
Likewise, automated recognition of human features in publicly accessible spaces and the use of AI to infer the emotions of natural persons are highly undesirable and should be prohibited, according to the EDPB and the EDPS. In the Draft Report, however, these practices remain only on the list of high-risk AI systems.
Because the obligations imposed on actors vis-à-vis the affected persons should emanate more concretely from the protection of the individual and his or her rights, the EDPB and the EDPS urge the legislators to explicitly address in the Proposal the rights and remedies available to individuals subject to AI systems. To this end, amendments 268 to 270 created a new chapter on remedies.
Regarding the scope of the Proposal, the EDPB and the EDPS strongly welcome the fact that it extends to the provision and use of AI systems by EU institutions, bodies and agencies. However, the exclusion of international law enforcement cooperation from the scope of the Proposal raises serious concerns for the EDPB and the EDPS, as it creates a significant risk of circumvention; this concern remains unaddressed in the Draft Report.
Finally, in terms of the authority to be designated and the role of the European Artificial Intelligence Board, amendments 36 to 38 introduced changes to ensure a more harmonised regulatory approach, contribute to the consistent interpretation of data processing provisions and avoid contradictions in enforcement among Member States. However, the Draft Report still does not follow the EDPB and EDPS recommendation that data protection authorities (DPAs) be designated as national supervisory authorities.
Even though the Draft Report published on 20 April was able to address some of the concerns and recommendations of the data protection bodies, the Proposal still has many discussion rounds ahead, and each political group has already raised similar issues in the amendments submitted to the European Parliament on 1 June; see here for a brief summary.
Consistency with the GDPR must be strengthened, given the relevance of the Proposal in setting the bedrock for upcoming regulations on the use of artificial intelligence around the world. Therefore, upholding privacy rights should be crucial for legislators as they set their agendas around AI.