CONFIDENCE IN AI: EU REVISES PRODUCT LIABILITY REGIME
A keystone regulation for the development and deployment of artificial intelligence (AI) across the European Union (EU) and beyond, the EU's AI Act aims to ensure that AI systems developed and used in the bloc are safe, transparent, traceable, non-discriminatory and environmentally friendly.
Alongside the AI Act, which entered into force across all 27 EU member states on 1 August 2024, the EU is revising its product liability regime so that, where AI systems cause injury to users, appropriate mechanisms for recompense are in place.
“As with any new area of technology, EU regulators are concerned with ensuring fundamental rights are respected, as well as the cornerstones of transparency and accountability,” says Katie Simmonds, a managing associate at Womble Bond Dickinson. “Companies utilising AI could face a number of risks during implementation and use, including privacy violations, discrimination and misuse of AI for malicious purposes, all of which the EU is aiming to reduce.”
The EU AI Act
The AI Act (Regulation (EU) 2024/1689) is the first-ever comprehensive legal framework on AI worldwide. Its aim is to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety and ethical principles, and by addressing the risks posed by very powerful and impactful AI models.
To achieve this, the Act lays down harmonised rules on AI that provide developers and deployers with clear requirements and obligations regarding specific uses of AI across the EU. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises.