AI on the verge of becoming the greatest source of innovation, with the legal challenges it bears

Annie Elfassi, Partner in the Litigation and Employment Practice Group, and Kyllian Talbourdet, Associate in the same Practice Group, are both part of one of the world's largest international law firms, Baker McKenzie Luxembourg. After holding a conference on AI and its impact on companies in November 2023, Ms. Elfassi and Mr. Talbourdet discuss with Leaders League the implications of AI regulation for companies and, more generally, for our lives.

Posted Friday, January 19, 2024

Annie Elfassi and Kyllian Talbourdet, Baker McKenzie

Leaders League: AI is a trendy yet obscure notion. Could you briefly elaborate on what it entails?

Annie Elfassi and Kyllian Talbourdet: AI should not be seen as a single, mysterious entity that cannot be understood. Different types of AI exist and can be grouped into four branches: reflective AI, predictive AI, creative AI and investigative AI. To give a brief overview, reflective AIs have a learning system and reasoning skills for real-time analysis and improved problem-solving. Predictive AIs, as their name suggests, are able to detect patterns using predictive models based on large datasets. Creative AIs are often spectacular and well known, such as ChatGPT or DALL·E. Such AIs produce content at a user's request, drawing their responses from a vast dataset. Finally, investigative AIs are used to optimize search, notably by enabling the discovery of AI-enhanced knowledge, which includes contextually inferred information from both structured and unstructured data. These categories should be treated with caution, as they are not mutually exclusive. For example, an AI can be both creative and reflective.

AI should not always be seen as a hyper-futuristic technology beyond our reach. In fact, our daily lives are already populated by AIs. Take, for example, the translation software you use daily, which improves as you provide it with content to translate.

Leaders League: Why is AI regulation such a hot topic?

Annie Elfassi and Kyllian Talbourdet: AI is difficult to regulate, as its recent development carries uncertainty about the ever-changing impact it has on our daily lives. As a result, it is difficult to make predictions about the risks it may raise in the near future. Beyond its recent growth in use and applicability, AI is not only evolving rapidly but is also accelerating its own evolution. It is therefore complex for legislators and/or regulators to carve out comprehensive legislation that can withstand the passage of time.

In addition, AI, or at least certain aspects of its functioning, already impacts our daily lives and consequently interacts with legislation already in place. Indeed, the notion of the dataset, which is key when it comes to AI, already intersects with data protection and copyright laws.

AI must not be allowed to become a means of circumventing the law in any way whatsoever, especially given the diversity of sectors affected by this new technology. The involvement of AI in our private and professional lives is a clear indication of the need for effective and prudent control. It is a subject that may seem worrying, even intimidating, but one that calls for major legal constructions: AI is on the verge of becoming the greatest source of innovation, with the legal challenges it bears.

This compliance control of AI is made all the more necessary by the automation and speed of operation of this technology. At the scale of a company, an AI is a remarkable force that does not need to sleep, eat or rest. Without human control, it would become difficult, if not impossible, to catch up with it.

Leaders League: New EU legislation appears to be on the rise. Could you tell us more about it?

Annie Elfassi and Kyllian Talbourdet: For the time being, the contemplated E.U. AI Regulation, the so-called AI Act, is remarkably similar to the GDPR in the making. It foresees a risk-based approach that strives to be pragmatic in order to balance the protection of citizens with the need for freedom in the development of such technologies. It should be noted that the scope of application of the AI Act excludes AI systems intended exclusively for scientific research and development.

A provisional agreement on the EU AI Act was reached on December 9, 2023. This marks a major step on the path to AI legislation, which is now taking a turn that no one can afford to miss.

The AI Act follows a risk-based approach, under which stricter transparency requirements and obligations apply to advanced AI models.

For example, AIs deemed to pose an "unacceptable" danger (e.g. biometric identification and systems using manipulative techniques or social scoring) would be fully prohibited, whereas "high-risk" AIs (e.g. remote facial recognition or AI systems used in education) would need to comply with heavy restrictions whilst still being authorized on the E.U. market. Recognizing the potential threat to citizens' rights and democracy posed by certain applications of AI, the EU legislator also addresses deep fakes and bans the untargeted scraping of facial images, which has been a source of growing concern over the past years.

It is contemplated that an AI Office within the European Commission will oversee the regulation of advanced AI models.

Whilst trying to balance everyone's interests, the AI Act unfortunately remains another piece of technical legislation bearing a certain weight on the actors of this market in the E.U., who will have no choice but to turn to experts to navigate not only the AI regulation itself but also its interaction with other regulations, already in place or to come.

The very strong and dissuasive penalties (fines ranging from €7.5 million or 1.5% of turnover up to €35 million or 7% of global turnover) render the resort to experts all the more necessary for providers and deployers of "high-risk" and "limited-risk" AIs, who will face challenging transparency and safety constraints.

Similarly to the GDPR, the AI Act does not apply outside the EU per se, but it affects all AI systems that enter the EU market. The agreed text will now have to be formally adopted by both the Parliament and the Council to become EU law. It is contemplated that the regulation will come into force in the course of 2025, which will leave little time for the players in the AI market to adjust to a brand-new regulation without proper guidance.

While some fear a slowdown in innovation, EU representatives are in favor of a balanced approach between innovation and responsible technology. Companies are encouraged to conduct risk assessments to comply with EU AI law and to consider broader responsible AI governance, aligning with AI principles and monitoring global AI laws and standards.

This new element of the European digital legislative package raises new concerns for companies, which would be well advised to consult legal professionals to address any concerns or queries they may have on the topic.