
EU offers guidance on how AI devs can comply with privacy laws


The European Data Protection Board has published an opinion addressing data protection in AI models. It covers assessing AI anonymity, the legal basis for processing data, and measures to mitigate the impact on data subjects, aimed at tech companies operating in the bloc.

It was published in response to a request from Ireland’s Data Protection Commission, the lead supervisory authority under the GDPR for many multinationals.

What were the main points of guidance?

The DPC asked for more information on:

  1. When and how an AI model can be considered “anonymous” – that is, one unlikely to identify the individuals whose data was used in its creation, and therefore exempt from privacy laws.
  2. When companies can say they have a “legitimate interest” in processing individuals’ data for AI models and therefore don’t need to seek their consent.
  3. The consequences of unlawful processing of personal data in the development phase of an AI model.

EDPB Chair Anu Talus said in a press release: “AI technologies can bring many opportunities and benefits to various industries and walks of life. We need to ensure that these innovations are done ethically, safely and in a way that benefits everyone.

“The EDPB wants to support responsible AI innovation by ensuring the protection of personal data and full respect of the General Data Protection Regulation.”

When an AI model can be considered ‘anonymous’

An AI model can be considered anonymous if the chance that personal data used in training can be extracted and traced back to any individual – either directly or indirectly, such as through a prompt – is deemed “insignificant”. Anonymity is assessed by supervisory authorities on a “case-by-case” basis and requires a “thorough assessment of the likelihood of identification”.

However, the opinion provides a list of ways model developers can demonstrate anonymity (a minimal code sketch follows the list), including:

  • Taking steps during source selection to avoid or limit the collection of personal data, such as omitting irrelevant or inappropriate sources.
  • Implementing strong technical measures to prevent re-identification.
  • Ensuring that data is sufficiently anonymised.
  • Applying data minimization techniques to avoid unnecessary personal data.
  • Regularly reviewing re-identification risks through testing and audits.
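
As an illustration only, the sketch below shows what the source-selection and data-minimisation steps might look like in a training-data pipeline. The regex patterns, the blocklist, and the minimise/collect helpers are all hypothetical; a real pipeline would pair such filters with far more robust PII detection, such as trained named-entity recognisers.

```python
import re

# Hypothetical patterns for obvious identifiers; regexes alone
# are not sufficient for GDPR-grade anonymisation.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

# Hypothetical blocklist used during source selection.
EXCLUDED_SOURCES = {"forum-with-personal-profiles.example"}

def minimise(text: str) -> str:
    """Redact residual identifiers so unnecessary personal data
    never enters the training corpus (data minimisation)."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def collect(records: list[tuple[str, str]]) -> list[str]:
    """Drop records from excluded sources, then redact what remains.
    Each record is a (source_domain, text) pair."""
    return [minimise(text)
            for source, text in records
            if source not in EXCLUDED_SOURCES]
```

Running collect on a record such as (“news.example”, “Contact me at jane@mail.eu”) would return “Contact me at [EMAIL]”, while any record from a blocklisted source would be dropped entirely.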

These requirements will make it harder for AI companies to claim anonymity, said Kathryn Wynn, a data protection lawyer at Pinsent Masons.

“The potential harm to the privacy of the person whose data is being used to train the AI model may, depending on the circumstances, be relatively small, and may be further mitigated by security and anonymisation measures,” she said in an article published by the firm.

“However, the way the EDPB is interpreting the law will require organizations to meet burdensome, and in some cases, impractical, compliance obligations, particularly around purpose limitation and transparency.”

When AI companies can process personal data without individuals’ consent

The EDPB’s opinion states that AI companies can process personal data without consent on the basis of “legitimate interest” if they can demonstrate that their interest, such as improving models or services, outweighs the individual’s rights and freedoms.

This is particularly important for tech firms, as it is neither trivial nor economically viable to obtain consent for the vast amounts of data used to train models. But to qualify, companies must pass these three tests:

  1. Lawfulness test: A lawful, legitimate reason for processing the personal data must be identified.
  2. Necessity test: The data processing must be necessary for the purpose. There can be no alternative, less intrusive way to achieve the company’s objective, and the amount of data processed must be proportionate.
  3. Balancing test: The legitimate interest in the data processing must outweigh the impact on individuals’ rights and freedoms. This takes into account whether individuals would reasonably expect their data to be processed in this way, such as if they made it publicly available or have a relationship with the company.

Even if a company fails the balancing test, it may still not be required to obtain data subjects’ consent if it applies measures that mitigate the effects of the processing (a minimal sketch of two such measures follows the list). These include:

  • Technical safeguards: Implementing security measures that reduce privacy risks, such as encryption.
  • Pseudonymisation: Altering or removing identifiable information to prevent data from being linked to an individual.
  • Data Masking: Replacing genuine personal data with fake data when the original content is not necessary.
  • Mechanisms for data subjects to exercise their rights: Making it easier for individuals to exercise their data rights, such as opting out, requesting erasure, or claiming rectification.
  • Transparency: Publicly disclosing data processing practices through media campaigns and transparency labels.
  • Measures specific to web scraping: Implementing restrictions to prevent unauthorized scraping of personal data, such as offering data subjects an opt-out list or excluding sensitive data.
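
Purely as an illustration of the pseudonymisation and data-masking measures above, here is a minimal Python sketch. The SECRET_KEY, field names, and helper functions are hypothetical, and keyed hashing on its own does not amount to anonymisation under the GDPR; the key would need to be stored, and rotated, separately from the data.

```python
import hmac
import hashlib

# Hypothetical key; in practice it would live in a secrets manager,
# separate from the data, and be rotated regularly.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymise(user_id: str) -> str:
    """Pseudonymisation: replace an identifier with a keyed hash,
    keeping records linkable internally without naming a person."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def mask(record: dict) -> dict:
    """Data masking: swap genuine personal fields for fake values
    where the original content is not needed downstream."""
    return {
        **record,
        "name": "Jane Doe",           # fake stand-in value
        "email": "user@example.com",  # fake stand-in value
        "user_id": pseudonymise(record["user_id"]),
    }

print(mask({"user_id": "42", "name": "Real Name", "email": "real@mail.eu", "country": "IE"}))
```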

Malcolm Dowden, a technology lawyer at Pinsent Masons, said in the firm’s article that the definition of “legitimate interest” has been contentious recently, particularly in relation to the UK’s Data (Use and Access) Bill.

“Proponents of AI suggest that data processing in an AI context drives innovation and brings inherent social good and benefits that would constitute a ‘legitimate interest’ for the purposes of data protection law,” he said. “Opponents argue that this view does not address AI-related risks, such as privacy harms, discrimination, or the potential spread of ‘deepfakes’ or misinformation.”

Advocates at the charity Privacy International have raised concerns that AI models such as OpenAI’s GPT series could not be properly assessed under the three tests because they lack specific reasons for processing personal data.

Consequences of illegally processing personal data in the development of AI

If a model is developed by processing data in a way that violates the GDPR, this will affect how the model is allowed to operate. The relevant supervisory authority assesses “the circumstances of each individual case,” but the opinion gives examples of possible scenarios:

  1. If the same company retains and processes the personal data, the lawfulness of both the development and deployment phases must be assessed on a case-by-case basis.
  2. If another firm processes the personal data during deployment, the EDPB will consider whether that firm conducted an appropriate assessment of the model’s lawfulness beforehand.
  3. If the data is anonymised after the unlawful processing, subsequent processing of non-personal data is not subject to the GDPR. However, any subsequent processing of personal data will still be subject to the regulation.

Why AI firms should pay attention to this guidance

The EDPB’s guidance is important for tech firms. Although it has no legal force, it influences how privacy laws are enforced in the EU.

Indeed, companies can be fined up to €20 million or 4% of their annual turnover – whichever is greater – for GDPR violations. They may even need to change how their AI models work or delete them altogether.
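
To put that in perspective: for a company with an annual turnover of €1 billion, the 4% cap works out to €40 million, so the turnover-based figure, not the €20 million floor, would set the maximum fine.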

See: The EU’s AI Act: Europe’s new rules for artificial intelligence

AI companies struggle to comply with the GDPR because of the large amount of personal data required to train models, often obtained from public databases. This creates challenges in ensuring lawful data processing and addressing data subject access requests, rectifications, or erasure issues.

These challenges have manifested in numerous legal battles and fines. For example, in September, the Dutch Data Protection Authority fined Clearview AI €30.5 million for illegally collecting facial images from the internet without users’ consent, in violation of the GDPR. That same month, the Irish DPC requested the EDPB’s opinion shortly after it successfully pressed Elon Musk’s X into stopping its use of European users’ public posts to train its AI chatbot, Grok, without obtaining their consent.


