Personal Data Regime Compliance for AI Systems: How the Law on Personal Data Protection Impacts Development and Deployment of Artificial Intelligence in the Republic of Moldova

The Law on Personal Data Protection (LPDP) significantly influences the development and deployment of Artificial Intelligence (AI) systems in the Republic of Moldova. As companies increasingly adopt AI technologies, understanding how LPDP requirements apply is crucial to ensuring compliance and protecting the rights of individuals.

The integration of AI and big data innovations into the global data-processing infrastructure offers significant advantages, such as enhanced transparency, worldwide knowledge transfer, progressive consultancy services, effective access to information, new employment opportunities, and increased productivity and financial efficiency.

Although the LPDP does not specifically regulate AI operations, most of its provisions are relevant. Article 3 defines the terms personal data, processing, special categories of personal data and consent of the personal data subject. It also sets the standard for consent, which must be freely given, specific, informed and unambiguous. The unambiguity requirement poses a challenge for AI compliance with the LPDP, since the personal data operations performed by AI systems are unpredictable. The prevention measures currently in force do not ensure that AI operations precisely comply with LPDP requirements. While the Law establishes a strong regulatory framework for the processing of personal data, it is technically challenging to monitor compliance with all of its provisions when personal data is processed by AI systems.

There are, in fact, contradictions between standard data-protection safeguards and the full utilisation of AI and big data, which involves large-scale data collection and processing for undetermined purposes. Nevertheless, to mitigate this challenge, data protection principles can be interpreted and thoroughly analysed so that they align with the beneficial uses of AI and big data.

In this regard, companies must address the implications of AI for data protection by implementing internal policies to prevent LPDP infringements. Data Protection Impact Assessments (DPIAs), defined by Article 23 of the LPDP, are designed to prevent violations and must include an evaluation of the potential risks associated with processing personal data. Notably, because of the unpredictability of AI operations, DPIAs have limited applicability, highlighting the need for a more reliable mechanism.

Correctly implemented, LPDP preventive measures need not restrict AI development. However, the rapid evolution of AI outpaces the regulatory framework for personal data protection, since the LPDP offers no solutions for most AI-related data-protection issues. The penalties for LPDP infringements, combined with the uncertainty of meeting compliance standards, create a risk factor that could deter companies from exploring technological advancements rather than encouraging adequate compliance measures.

The implications of the LPDP for AI development and deployment become more stringent when analysing Article 17 of the Law, which provides that any person has the right to request the annulment, in whole or in part, of any individual decision that produces legal effects on his or her rights and freedoms and that is based solely on automated processing of personal data intended to evaluate certain aspects of his or her personality, such as professional competence, reliability, conduct and other characteristics.

As a result, AI developers are required to ensure the fairness, transparency and human oversight of such decisions. Consequently, companies may need to implement additional safeguards and review internal procedures to guarantee compliance with the Law.

Overall, the LPDP requirements applicable to AI do not seem easily achievable at this stage of the technology's adoption.

LPDP compliance should not be an obstacle to innovation; rather, it should ensure responsible and ethical AI development. Although the LPDP creates limitations, companies must navigate its requirements to build AI systems that respect privacy and protect individuals’ rights. At the same time, the legal framework must keep pace with technological progress and encourage it. The successful application of the LPDP to AI depends on the guidance the National Centre for Data Protection provides to controllers and data subjects.

By Iulian Pasatii, Partner, and Liviana Frunza, Junior Associate, Gladei & Partners