Exploring Liability Concerns for AI-enabled Autonomous Vehicle Systems in the EU and Germany, by Heuking
Posted on Jun 16, 2024

In the era of rapid technological advancement, AI-driven autonomous vehicles are becoming increasingly common. As these technologies gain traction, the legal landscape surrounding their use, especially regarding liability for damages, is evolving. Emerging frameworks in Germany and the EU are crucial for tackling the complex questions of liability distribution among stakeholders.
Germany has been a pioneer within the EU in the regulation of autonomous vehicles (AVs), notably with the enactment of the German Autonomous Driving Act (Gesetz zum autonomen Fahren) in 2021. This landmark legislation was integrated by way of amendment into the German Road Traffic Act (Straßenverkehrsgesetz, ‘StVG’), establishing a clear legal framework for the deployment and operation of AVs on public roads.
The StVG differentiates between Level 3 and Level 4 AVs, with distinct rules and liabilities for each. Level 3 vehicle users are referred to as “drivers” under Section 1a(4) StVG, as is the case for conventional vehicles, whereas Level 4 vehicle users are termed “technical supervisors” under Section 1d(3) StVG. The presumed fault liability of Level 3 drivers follows from Section 18(1)(1) StVG, placing the burden of proof on the driver in the event of an accident. For Level 4 vehicles, on the other hand, the liability of the technical supervisor is governed by Section 823(1) of the German Civil Code (Bürgerliches Gesetzbuch, ‘BGB’), shifting the burden of proof to the injured party.
The obligations placed on drivers and technical supervisors are also distinct. Level 3 drivers must take control of the vehicle when prompted by the system or when they recognize situations in which automated functions may fail. In contrast, Level 4 technical supervisors are required to evaluate alternative driving maneuvers, deactivate the vehicle if necessary, and implement traffic safety measures when requested by the autonomous system. This delineation reflects the higher autonomy of Level 4 vehicles and the reduced human intervention they require, and it is essential for allocating liability when accidents or incidents involving AVs occur.
An essential aspect of the StVG is the notion of “obvious situations” requiring intervention. However, the legislation lacks a precise definition, leaving it to case law to provide clarity. Both Level 3 and Level 4 vehicles issue warnings for user intervention, but the requirements differ significantly. Level 3 drivers must take full control upon warning, whereas Level 4 supervisors may evaluate or deactivate the vehicle.
Timely and clear warnings are critical, as delays or ambiguities can impact user response, potentially leading to accidents and liability disputes. The autonomous system’s data recording capability is pivotal in determining incident causality, whether due to user error or system malfunction.
In addition to the StVG, liability for accidents involving AVs may also fall within the scope of the German Product Liability Act (Produkthaftungsgesetz, ‘ProdHaftG’). The ProdHaftG holds manufacturers liable for damages caused by defects in their products, including vehicles. For liability to apply under the ProdHaftG, certain conditions must be met, such as the product being defective and causing harm. This means that if an AV malfunctions due to a defect in its design or manufacturing, resulting in an accident, the manufacturer could be held liable for any resulting damages. The ProdHaftG thus provides an additional avenue for seeking compensation in cases involving AV accidents, alongside the provisions of the StVG.
Emerging AI legislation
On May 21, 2024, the Council of the European Union approved the groundbreaking EU Artificial Intelligence Act (AI Act). This landmark legislation marks the world’s first horizontal and standalone law governing AI. The EU aims for the Act to achieve a 'Brussels effect' akin to the GDPR, significantly influencing global markets and potentially serving as a blueprint for other jurisdictions implementing AI legislation.
Providers, as outlined in Art. 3(2) of the Act, are individuals or entities who develop an AI system or commission its development to place it on the market or put it into service. Deployers, on the other hand, are those who use the AI system under their authority, except when the AI system is used for personal, non-professional activities, as defined in Art. 3(4). Notably, the Act excludes non-professional users from its scope, leaving AV drivers generally outside its purview.
While the Act does not directly govern AVs, its forthcoming delegated regulations will significantly impact them. The Act, a broadly applicable law spanning various industries, includes exemptions to prevent conflicts with existing sector-specific regulations, including those governing the automotive sector and AVs. For instance, the Type-Approval Framework Regulation mandates a comprehensive type-approval process for vehicles and their components, ensuring compliance with administrative and technical standards before market entry. Consequently, specific vehicle-related components, including those related to AI safety as defined by the Act, fall under the purview of the Type-Approval Framework Regulation rather than the Act, even if they qualify as high-risk AI. In light of this, both original equipment manufacturers (OEMs) and traditional automotive suppliers, as well as software companies central to AV development, can expect significant upcoming delegated legislation tailored to the technical and regulatory specifics of the automotive sector and AVs.
Further, on March 12, 2024, alongside the AI Act, the European Parliament formally adopted the revamped Product Liability Directive (‘Directive’). The Directive introduces targeted changes to EU rules for consumers seeking compensation for damage caused by defective products, including new technologies like AI. It establishes that manufacturers of defective products are the first point of redress for consumers. Furthermore, the Directive clarifies that the provider of an ‘AI system’, as defined in the AI Act, is considered a ‘manufacturer’ under the Directive, making them primarily liable for damages caused by AI systems. Significantly, the Directive also eases the burden of proof for consumers dealing with complex or ‘black-box’ AI, where technical or scientific complexity makes it challenging to prove a defect.
Finally, in September 2022 the European Commission introduced the proposal for the AI Liability Directive (‘AILD’) to complement the AI Act. The AILD introduces procedural provisions for victims of damage caused by AI systems under the AI Act. However, it is worth noting in this regard that the AILD and the AI Act do not extend to non-professional AV users, and the AILD does not provide comprehensive tort liability coverage for them.
Consequently, determining the liability of personal AV users currently falls under the jurisdiction of individual Member States' national laws. The effective date for the AILD is yet to be determined.
Outlook
Establishing streamlined rules for the liability of personal users of AVs is paramount to ensure fair distribution of responsibility among all involved parties. This would not only provide personal users with a framework for effective damages liability but also safeguard the rights of victims.
Looking ahead, a key question is whether EU Member States other than Germany will follow suit and introduce separate AV liability regulations, and whether they will differentiate the obligations and liability of Level 3 and Level 4 vehicle users. Currently, the absence of EU-wide legislation means that the obligations and liabilities of non-professional AV users are determined by individual Member States, leading to a patchwork of disparate practices across the EU.
While there is hesitation towards establishing EU-wide liability regulation due to the diverse legal traditions among Member States, efforts are underway to promote AI development and usage while harmonizing AI systems within the EU internal market. Given the prominent role of AVs in AI application, extending this harmonization to liability rules would mitigate inconsistencies in AI implementation and usage across the EU.
ABOUT THE AUTHORS
Dr. Thomas Jansen is a partner in Heuking's Munich office and a member of the IP, Media & Technology Practice Group. He has over 25 years’ experience as a technology transactions lawyer and advises corporate clients across diverse technology-driven industries on data protection and cybersecurity related issues.
Dr. Hans Markus Wulf is a partner in Heuking’s Hamburg office. A 23-year veteran of IT and data protection law, Dr. Wulf advises on IT projects, software, big data, data protection, open source, M2M communication, the Internet of Things, IT security, smart logistics, cloud computing, and mobility.
