The charter includes five principles:
Principle of respect for fundamental rights: ensuring that the design and implementation of artificial intelligence tools and services are compatible with fundamental rights;
The CEPEJ establishes that preference should be given to AI systems designed following an ethical-by-design or human-rights-by-design approach, so that respect for fundamental rights is embedded in the design of the AI system.
Principle of non-discrimination: specifically preventing the development or intensification of any discrimination between individuals or groups of individuals;
Principle of quality and security: with regard to the processing of judicial decisions and data, using certified sources and intangible data with models conceived in a multi-disciplinary manner, in a secure technological environment.
Principle of transparency, impartiality and fairness: making data processing methods accessible and understandable, authorizing external audits;
Principle “under user control”: precluding a prescriptive approach and ensuring that users are informed actors and in control of their choices.
The toolkit aims to provide judicial professionals with the knowledge to assess the benefits and risks arising from the use of AI technologies in their professional life, while assisting justice professionals in mitigating the harm that AI systems could cause when used by court systems.
The toolkit is structured in four modules:
Module 1 introduces AI and the rule of law, covering the main concepts of algorithmic governance, human rights, algorithms and algorithmic systems, and discussing the relevance of data and cybersecurity.
Module 2 focuses on the main issues arising from the adoption of AI in the judiciary from a procedural perspective, such as discovery, document review, assistance in drafting documents, predictive analysis, dispute resolution, language recognition, or case management.
Module 3 presents the key legal and ethical challenges of AI in the judiciary, including the legal issues raised by biometric identification and facial recognition technology.
Module 4 addresses human rights and AI, focusing on (a) the right of access to a court, fair trial and due process, (b) the right to an effective remedy, (c) the right to protection against discrimination, (d) the freedom of expression and access to information, and (e) the right to privacy and data protection.
In light of the potential impact on democracy, the rule of law, individual freedoms and the rights to an effective remedy and to a fair trial, the AI Act qualifies the use of AI systems by judicial systems as high-risk (Annex III, point 8, considers AI systems used in the administration of justice and democratic processes to be high-risk systems as defined by article 6.2 of the AI Act) and requires a human decision (Recital 61 of the AI Act).
Mandatory obligations imposed on high-risk systems
Article 8 - Compliance with the requirements
Article 9 - Risk management system
Article 10 - Data and data governance
Article 11 - Technical documentation
Article 12 - Record-keeping
Article 13 - Transparency and provision of information to deployers
Article 14 - Human oversight
Article 15 - Accuracy, robustness and cybersecurity
Obligations of Providers and Deployers of High-Risk AI Systems (Chapter III, Section 3 of the AI Act)
Article 16: Obligations of Providers of High-Risk AI Systems
Article 22: Authorised Representatives of Providers of High-Risk AI Systems
Article 26: Obligations of Deployers of High-Risk AI Systems
Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems
When using AI tools, justice systems will typically be deployers; if they also design and develop those systems, they would additionally be considered providers of AI systems.
Some questions arise regarding data protection of court data and its use for training AI systems.
Who are the controllers of personal data held by judicial systems?
Principles for processing judicial data under the GDPR:
1) Processing should be lawful, fair and transparent (article 5 of the GDPR)
2) Lawful ground for processing judicial data (article 6 of the GDPR). The use of AI systems could be justified by consent, by a legal requirement - if any - for the use of technologies in judicial proceedings, or by the performance of a task in the public interest
3) Processing judicial data with sensitive personal data - article 9 of the GDPR contains a general prohibition for the processing of special categories of data such as data relating to racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation.
Article 9 of the GDPR provides for an exclusion of the prohibition of the processing of sensitive data that is “necessary for the establishment, exercise or defense of legal claims or whenever courts are acting in their judicial capacity.”
4) Comply with data subject information rights - When the controller collects personal data from the data subject, the data subject has the right to be informed of the identity and contact details of the controller, the purpose and the legal basis of the processing, the legitimate interests pursued by the controller or a third party, any recipients of the personal data, and whether the controller intends to transfer the personal data to a third country or international organization.
The design and development of the AI system will require considering (1) how these rights will be made effective and how the consent of the data subjects meets the conditions provided by article 7 of the GDPR, and (2) how to ensure that the information rights of the data subject are compatible with the right to an effective judicial remedy under article 47 of the CFREU.
5) Anonymize judicial personal data where appropriate - In some cases it will be reasonable to require that court data be anonymized.
6) Conduct a data protection impact assessment (DPIA) specifying the nature of the system, the scope and context of the data processing, and ultimately the goals of the AI system that could present risks for the fundamental rights of individuals. The DPIA should include the mandatory information provided by article 35 of the GDPR as well as the information required under article 13 of the AI Act.
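As an illustration of point 5 above, the following is a minimal sketch of pseudonymizing a court decision before its text is reused, assuming party names are already known from case metadata. The function name and placeholder format are hypothetical; real judicial anonymization requires named-entity recognition and human review, not a fixed name list.

```python
import re

def pseudonymize(decision_text: str, party_names: list[str]) -> str:
    """Replace known party names with neutral placeholders.

    Hypothetical sketch only: assumes the names to redact are supplied
    from case metadata rather than detected automatically.
    """
    for i, name in enumerate(party_names, start=1):
        # Word boundaries avoid replacing substrings of longer words.
        decision_text = re.sub(rf"\b{re.escape(name)}\b",
                               f"[PARTY_{i}]", decision_text)
    return decision_text

text = "The court finds that Jane Doe shall compensate John Smith."
print(pseudonymize(text, ["Jane Doe", "John Smith"]))
# → The court finds that [PARTY_1] shall compensate [PARTY_2].
```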
The AI Act and the GDPR are two separate legal norms with different goals and different scopes of application, but they are interrelated and overlap.
Data is the raw material of AI models, and the design and development of an AI model conditions its compliance with the privacy rights of data subjects. From that perspective the GDPR has a crucial role in regulating the data processing of AI systems, particularly with respect to how data is collected, used and stored - one of the essential functions of AI systems.
The GDPR, in a way, functions as the limit, restriction, and basis for data protection by design and by default in the design and development of AI systems[2], which have to consider the GDPR provisions to ensure that AI systems comply with the privacy rights of data subjects and are safe.
AI systems need to comply with four major data protection elements in their design and development:
Consent (article 6 GDPR) - Designers, developers, providers and deployers should obtain the explicit consent of data subjects, particularly with respect to sensitive categories of data.
Right to explanation - the data subjects have the right to know whether an AI system is being used to take decisions that will affect them, and to an explanation of how those decisions were adopted.
Data minimization - AI systems processing data need to use the minimum amount of data required to perform the task the system is meant to perform.
Data protection impact assessment - under the GDPR, data controllers (article 24 of the GDPR) must conduct a data protection impact assessment (DPIA) when the processing of personal data is “likely to result in a high risk to the rights and freedoms of natural persons” (article 35 of the GDPR).
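The data minimization element above can be sketched as a whitelist filter over a court record before it reaches an AI component; the field names here are purely illustrative, and which fields a task genuinely requires is a legal and functional judgment, not a coding one.

```python
# Fields assumed (hypothetically) to be required by the AI task,
# e.g. a case-management scheduler.
REQUIRED_FIELDS = {"case_id", "filing_date", "case_type"}

def minimize(record: dict) -> dict:
    # Whitelisting is safer than blacklisting sensitive fields:
    # anything not explicitly needed is excluded by default.
    return {k: v for k, v in record.items() if k in REQUIRED_FIELDS}

record = {
    "case_id": "C-2024-001",
    "filing_date": "2024-03-01",
    "case_type": "civil",
    "party_name": "Jane Doe",   # personal data, not needed for the task
    "health_data": "...",       # special-category data (article 9 GDPR)
}
print(minimize(record))
# → {'case_id': 'C-2024-001', 'filing_date': '2024-03-01', 'case_type': 'civil'}
```

The whitelist approach implements data protection by default: adding a new field to the record does not expose it to the AI system unless it is deliberately added to the required set.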
At the same time, the AI Act provides that deployers - users - of AI systems will be responsible for conducting a DPIA with respect to the use of high-risk AI systems.
It should be noted that deployers of AI systems may be classified as data controllers under the GDPR whenever the deployer decides to use an AI system and at the same time determines the purposes and means of data processing. In that case, overlapping obligations under the GDPR and the AI Act will apply.
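As a sketch, the article 35 GDPR elements of a DPIA, together with provider-supplied information of the kind article 13 of the AI Act covers, can be captured in a structured record. The class and field names below are illustrative, not a legally sufficient template.

```python
from dataclasses import dataclass

@dataclass
class DPIARecord:
    # Article 35(7)(a) GDPR: systematic description of the envisaged
    # processing operations and their purposes
    processing_description: str
    purposes: list
    # Article 35(7)(b): assessment of necessity and proportionality
    necessity_assessment: str
    # Article 35(7)(c): assessment of risks to the rights and freedoms
    # of data subjects
    identified_risks: list
    # Article 35(7)(d): measures envisaged to address the risks
    mitigation_measures: list
    # Provider-supplied information of the kind article 13 of the
    # AI Act requires (capabilities, limitations, human oversight)
    system_capabilities_and_limitations: str = ""
    human_oversight_measures: str = ""
```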
Fundamental Rights Impact Assessment (FRIA) (article 27 of the AI Act) - Deployers of high-risk AI systems should conduct a FRIA and notify the authorities of its results.