The AI Liability Directive is due to be published on 28 September and is meant to complement the Artificial Intelligence Act, an upcoming regulation that introduces requirements for AI systems based on their level of risk. It would create a liability regime targeted at damage originating from Artificial Intelligence (AI), placing a rebuttable presumption of causality on the defendant.
“This directive provides in a very targeted and proportionate manner alleviations of the burden of proof through the use of disclosure and rebuttable presumptions,” the draft reads.
“These measures will help persons seeking compensation for damage caused by AI systems to handle their burden of proof so that justified liability claims can be successful.”
The proposal follows the European Parliament’s own-initiative resolution adopted in October 2020 that called for facilitating the burden of proof and a strict liability regime for AI-enabled technologies.
Liabilities related to criminal law or the field of transport are excluded from the AI rules, while the provisions would also apply to national authorities.

A potential claimant may request that the provider of a high-risk system disclose the information the provider is required to keep as part of its obligations under the AI Act. The AI regulation mandates the retention of documentation for ten years after an AI system has been placed on the market.
The information requested could include the datasets used to develop the AI system, technical documentation, logs, the quality management system and any corrective actions. If a provider refuses to comply with a disclosure order, the court can presume that the provider was non-compliant with the relevant obligations unless the provider can prove otherwise.