"The advent of AI has disrupted the traditional silos between different areas of law such as IP and data protection."

In an ever-evolving digital landscape, the intersection of Artificial Intelligence (AI) and Intellectual Property (IP) law has become a focal point for legal experts and stakeholders alike. University of Luxembourg scholar Angelica Fernandez sheds light on the current concerns and debates surrounding AI-driven content management, the potential human rights challenges, and the future of AI-generated content such as deepfakes.

Posted Monday, June 19, 2023
"The advent of AI has disrupted the traditional silos between different areas of law such as IP and data protection."

LEADERS LEAGUE: As an AI and IP law expert, can you share the top concerns in the current debate on AI-driven content management?

Angelica Fernandez: The advent of AI has disrupted the traditional silos between different areas of law, such as IP law and data protection law, making them harder to navigate and necessitating a more integrated approach to legal matters. Companies must now navigate a complex regulatory landscape and strike a balance to avoid overburdening obligations, which can be difficult for smaller companies. The connection between AI and IP is evident, for example, in the use of automated decision-making systems and in text and data mining, which calls for a closer examination of this relationship. Text and data mining raises significant legal issues relating to copyright law and the sharing of personal and non-personal data, and it must be clarified how these obligations relate to other European Union instruments, such as the GDPR, so that companies can implement them in practice. An upcoming landmark case before the Court of Justice of the European Union, Case C-634/21, promises to address many of these questions and to spark academic debate about the nature of Article 22 GDPR on automated decision-making. The relevance of this case goes beyond data protection and will likely affect companies’ activities with regard to automated analytics and various forms of automated scoring.

 

Can you elaborate on how the nuances of copyright regulations might be tailored or adapted to address AI-generated content?

The use of AI in developing algorithmic tools for content regulation and compliance has evolved from its initial focus on copyright infringement to other problematic areas of online content, such as hate speech, harassment, and bullying. However, each application of these tools requires a different regulatory framework and different contextual criteria. The interplay between the Digital Services Act (DSA) and the Copyright Directive, particularly with regard to Article 17 of the Copyright Directive, demonstrates the challenges of applying content moderation systems across various domains and has sparked discussions about the connection between AI and copyright. The broadened application of AI in content regulation has had a significant impact, but there is still much to figure out about how companies can implement these technologies effectively and compliantly.

 

The Digital Services Act aims to provide users with more robust safeguards to publish and upload material, and in particular, it compels platforms to offer swift resolutions to users to avoid infringing on their freedom of speech.

 

What challenges does AI-generated content pose to intellectual property and freedom of speech legislation?

Copyright exceptions and limitations that allow, among other things, for caricature, pastiche, and parody must be preserved in any AI content regulation system, as they are integral to copyright law and users’ rights, and they are context-dependent. However, current AI-based tools struggle to differentiate between infringing and legal content, potentially leading to the infringement of users’ fundamental rights. The power imbalance between individual users and large tech platforms is also of concern, as the process for users to defend their rights against an erroneous takedown of their content can be challenging and time-consuming. The Digital Services Act aims to provide users with more robust safeguards to publish and upload material, and in particular, it compels platforms to offer swift resolutions to users to avoid infringing on their freedom of speech.

 

Are current regulations like the Digital Services Act and Copyright Directive adequate for governing AI and machine learning in content creation and moderation?

The current copyright regulation does not cover the entire landscape of possible AI uses, and significant questions, such as whether to grant copyright protection to works created by generative AI, are being discussed in academia. The DSA is definitely an improvement and a step in the right direction on issues relating to online content moderation, which includes algorithmic content moderation but also recommender systems. Additionally, the DSA is accompanied by a new, revised version of the Code of Practice on Disinformation; the previous one dated from 2018. The signatories of this code are the main online platforms, such as Google and Twitter, and they aim to address, among other things, targeted online advertising, recommender systems, and safe online design practices, for example to counter manipulation practices such as dark patterns. The DSA, along with the Code, provides a framework for reporting to the EU Commission, assessing the impact on fundamental rights, and determining the extent to which AI systems are involved. Regulators will supervise the process and may implement additional measures to mitigate potential risks from companies using AI technologies. Finally, the forthcoming Artificial Intelligence Act will add complexity to the regulatory landscape and will be the legal framework for AI regulation.

 

If adopted this year, the AI Act will be a significant change for the regulation of all AI-based products and services.

 

What developments in regulating the AI and deepfake landscape can we expect this year?

If adopted this year, the AI Act will be a significant change for the regulation of all AI-based products and services. Deepfakes, however, are not considered a high-risk AI application under the AI Act, so the obligations on providers of this technology are not severe. Deepfakes pose a particular threat during election periods and may require action to mitigate risks. The European Democracy Action Plan and the 2022 Code of Practice on Disinformation address risk mitigation measures for election periods, including targeted and automated disinformation, but they may lack clarity and fail to regulate, for example, deepfakes used to create non-consensual fake pornography, which mainly affects women.

 

What do we know about the AI Act’s provisions on intellectual property rights for AI-generated content and inventions?

We must wait for the final version to see what types of provisions may relate to, or potentially impact, AI-generated content and inventions. However, recent academic debates have focused more on ownership of the training data needed for these creations than on attributing ownership to the AI for its work, for instance in the case of AI-generated art.

As it stands, it’s uncertain whether the AI Act will include provisions that could impact intellectual property rights holders directly.

 

The potential extraterritorial reach of European AI regulation can be seen as a positive development.

 

Does the reach of European AI regulations extend beyond Europe?

I would say that European AI regulations do have an extraterritorial reach, primarily because many companies offering AI products and services originate and are based in the US. However, as long as these companies provide products or services within the European Union, they are subject to its regulations. Consequently, they may need to adopt new practices, such as increased transparency in algorithmic processes, which, ideally, would be implemented across all jurisdictions in which these companies operate.

This phenomenon, known as the "Brussels Effect," was also observed with the GDPR, where major corporations set EU standards as their global compliance benchmark and applied them worldwide. In this sense, the potential extraterritorial reach of European AI regulation can be seen as a positive development.

 

Which European countries do you consider to be at the forefront of addressing AI and related regulatory topics?

The Digital Services Act (DSA) is an EU regulation, and all member states are required to implement it. Each member state must appoint a national Digital Services Coordinator to enforce its provisions, and these coordinators must be designated by February 2024. Germany and France are likely to be more advanced, as they already have well-established media authorities and related institutions and can start the implementation phase of the DSA.

 

In terms of compliance with AI regulations and respect for fundamental rights, authoritarian governments pose a significant risk.

 

Beyond Europe, which regions do you believe pose the greatest risk in terms of complying with AI regulations and not using the technology to abuse fundamental rights?

In terms of compliance with AI regulations and respect for fundamental rights, authoritarian governments pose a significant risk. However, the issue of language models is often overlooked. While some systems work well for certain popular languages, others lack sufficient training data, leading to mistakes and even harm. For instance, Facebook’s failure to detect hate speech in Myanmar in 2017, due to a shortage of moderators who understood the language and a lack of training data in the local language, highlights the damage that algorithms relying on language models can do. Online violence encouraged real-life violence that contributed to generalized ethnic violence in the country. Bridging this gap requires new techniques, such as translating content back and forth between languages and obtaining more commercially viable training data, but it also depends on the production of data within specific cultures and societies.

 

I believe it is important to consider the overlapping obligations of small and medium-sized companies when it comes to complying with the legal framework around AI.

 

Setting aside the potential risks and dangers, what opportunities do AI-generated content and deepfakes bring?

I believe it is important to consider the overlapping obligations of small and medium-sized companies when it comes to complying with the legal framework around AI. These companies may not have the resources to hire legal counsel to navigate compliance, making it crucial to clarify their obligations. One area where AI, specifically deepfakes, can have a positive impact is in e-learning videos. The gaming industry also shows promise in using custom voice avatars. The regulatory landscape is complex, as it breaks down the silos between different fields of law, which I find fascinating in my research.

 

Aude Ghespière.