Artificial Intelligence in the Life Sciences: Regulation of AI Technologies and Product Liability Implications

Regulatory developments

AI technologies used in the life sciences have been subject to some form of regulation in the EU and UK for several years. The EU's regulatory initiative for AI began in the area of medical devices with the introduction of the Medical Devices Regulation (“MDR”) and the In Vitro Diagnostics Regulation (“IVDR”) (Regulations (EU) 2017/745 and 2017/746); after a delay, the MDR became applicable on 26 May 2021.

More recently, the EU led the charge in proposing the first comprehensive regulatory framework to govern the risks posed by emerging digital technologies, including AI. Following the publication of the European Commission's (“the Commission”) white paper on AI and a series of consultations and expert group discussions, on 21 April 2021 the Commission published its long-awaited proposal for a regulation establishing harmonised rules on AI, also known as the “Artificial Intelligence Act”. It is designed to complement existing European legislation, such as the General Data Protection Regulation, and also aims to extend the applicability of existing sectoral product safety legislation to certain high-risk AI systems to ensure consistency.

The proposed regulation takes a risk-based approach, imposing strict controls and extensive risk management obligations on the riskiest forms of AI, including requirements to undergo conformity assessments; to draw up and maintain technical documentation; to establish quality management systems; and to affix CE markings indicating compliance with the proposed regulation before products are placed on the market. It has wide applicability and will affect AI providers and users both inside and outside the EU. While this is familiar territory for life sciences companies, it is important that resources are put in place to respond to this additional regulatory burden, if and when it comes into effect.

Due to Brexit, the proposed regulation will not apply in the UK if it comes into force. However, UK companies offering AI technologies in the EU will be directly affected and will need to comply with the regulation when placing their products on the EU market.

The EU’s drive to implement global standards for new technologies has also had a domino effect in the UK:

  • On 16 September 2021, the Medicines and Healthcare products Regulatory Agency (“MHRA”) issued a “Consultation on the future regulation of medical devices in the United Kingdom”, which ran until 25 November 2021. The consultation presented proposed changes to the UK medical device regulatory framework with the aim of developing “a future world-class regime for medical devices that prioritises patient safety while fostering innovation”.
  • Alongside the consultation, the MHRA also published guidance, “Software and AI as a Medical Device Change Programme”, which commits to bold changes to deliver a regulatory framework that offers a high degree of protection for patients and the public, while ensuring that the UK is the home of responsible innovation for medical device software.
  • On 22 September 2021, the UK launched its first National AI Strategy to “help it strengthen its position as the world's scientific superpower and seize the potential of modern technology to improve people's lives and solve global problems such as climate change and public health”. The strategy includes plans for a white paper on AI governance and regulation.

Product liability risks

Although there is a human hand behind AI technologies, the intangible nature of many AI applications raises questions about who, or what, will be responsible for the consequences of their use, especially when the development of such applications involves a myriad of people, including software developers and data analysts.

In the United Kingdom, and depending on the specific circumstances, product liability claims may be brought in negligence, for breach of contract, or under the Consumer Protection Act 1987 (“CPA”), the UK implementing legislation which transposed the European Product Liability Directive 85/374/EEC (“PLD”) into UK law. The CPA imposes liability on the producer for damage caused by a defective product, often referred to as “no-fault” liability.

Section 3 of the CPA provides that a product is defective if the safety of the product “is not such as persons generally are entitled to expect”. When assessing the safety of a product, the court will take into account all circumstances that it considers factually and legally relevant to the safety assessment, on a case-by-case basis. These factors may include safety marks, regulatory compliance and warnings. A claimant bringing a claim under the CPA must prove that a defect existed and that the defect caused the damage.

The unique features and characteristics of AI technologies present challenges to the existing liability framework. For example, questions arise as to whether AI-based software or data constitutes a “product”, as defined by the CPA, or a service. This distinction is particularly relevant for AI technologies that combine physical hardware with cloud-based software, such as a smart medical device, where the software is often subject to automated updates. Similarly, questions arise as to which person(s) should be considered the producer for the purposes of the CPA: the software developer, the engineer, or the user responsible for updating the software?

The EU is examining whether the PLD is fit for purpose and whether, and if so how and to what extent, it needs to be adapted to meet “the challenges posed by emerging digital technologies, thus ensuring a high level of effective consumer protection, as well as legal certainty for consumers and businesses”. Draft legislative amendments could be available by the third quarter of 2022.

The UK is taking similar steps to assess whether its existing product safety and liability regimes meet the challenges of AI technologies. The UK government has opened a consultation via the UK Product Safety Review to explore possible changes to existing product safety laws to ensure the framework is fit for the future, acknowledging that the CPA's provisions do not reflect emerging technologies such as AI. In addition, potential reform of the CPA is being considered by the Law Commission as part of its 14th Programme of Law Reform, after it sought views on whether the CPA should be extended to cover technological developments.

In the final article in our Artificial Intelligence in the Life Sciences – Revolution, Risk and Regulation series, we will look at the steps life sciences companies can take to “future-proof” themselves against some of the key issues and risks.
