
Hot topics – Fit-for-purpose regulation and policy-making in AI

Key themes and emerging challenges: A balancing act for a complex problem

In a white paper focused on artificial intelligence (AI), the European Commission highlights the opportunity and risk that AI presents by describing the need to build an ‘ecosystem of excellence’ to accelerate innovation, and an ‘ecosystem of trust’ to protect the fundamental rights of EU citizens.1 For AI to thrive in European healthcare, it must be supported by dedicated policy and legislation at national and EU levels. However, the balance between opportunity and risk presents a challenge to policymakers and regulators – how can this be managed?

“It’s easy to say, but hard to do… AI regulation must prioritise patient safety, but also stimulate innovation at the same time.”

Tom Dutilh, Compleye BV

Regulatory ambiguity surrounds AI

Expert participants across the seven European Round Table Meetings agreed that fit-for-purpose regulation is vital to creating the right environment for appropriate, safe and effective AI solutions to be adopted, while minimising risk to healthcare professionals (HCPs) and patients.2 However, there is ambiguity as to whether current regulation, for example the existing or new EU regulation for medical devices, is applicable to all AI solutions.

Currently, a CE mark is a basic requirement for a medical device, certifying that it is safe and performs as intended, but further regulation must be developed.2 To address this, the European Medicines Agency (EMA) has outlined the need for dedicated AI test laboratories. These test laboratories, or ‘regulatory sandboxes’, would provide developers with an end-to-end space to build and test their AI systems outside of an authorisation process set out by regulators – a recommendation welcomed by Round Table Meeting participants.2 The EMA has also highlighted the importance of engagement with academia and innovators to keep abreast of new AI developments – helping to identify solutions to regulatory challenges while the new technology itself is still being developed.2

Medical devices are assessed by national bodies in EU Member States. Some Member States are developing processes to evaluate AI solutions, for example, by increasing their digital and AI capabilities and educating their staff and leadership teams about the technology. But the lack of pan-European processes means assessment and regulation of AI is not uniform across Europe and there is a lack of knowledge sharing.2 Together, this can complicate the journey to market for AI developers and investors who often lack familiarity with country-specific regulatory processes.2


Key insights – Spain

The quality of an algorithm must be certified at European level. Three basic characteristics of an algorithm need to be guaranteed: it should have no bias, be predictable and be explainable.


It is essential that the regulatory environment for AI is as simple as possible. Added complexity could present a barrier to implementation. A practical implementation guide, developed by the EU, that outlines ethical, legal and operational models to be followed for AI applications would be beneficial.


Adaptive AI – a further challenge for regulators

Overall, participants across the Round Tables concurred that existing systems used for the approval of medical devices are not suitable in the case of adaptive AI. New policies and regulatory processes must acknowledge the unique features of adaptive AI, for example its ability to learn and change how it behaves with time, and standards should be developed accordingly. Participants questioned how frequently an adaptive AI solution might require re-assessment or re-approval given it will improve and evolve as it consumes more data. Many also suggested that regulators focus on developing approval pathways that monitor and manage risks associated with adaptive AI algorithms across the lifecycle of the technology.3

“If you allow AI algorithms to be optimised for a certain setting using local data, it usually gets better. But how can we efficiently deal with this dynamic learning and adaptation from the regulatory perspective? And how do we ensure that algorithms can still be used responsibly?”

Wiro Niessen, Erasmus University Medical Centre and Quantib BV

Data access and protection

Data protection and privacy were also widely discussed at the Round Table Meetings during the discussion on regulation. While healthcare systems generate vast quantities of data that could be used to feed algorithms, access to patients’ sensitive health data presents another regulatory challenge due to personal data protection and privacy laws. Discussions are needed about data ownership, and about how access can be regulated and monitored, so that AI algorithms can be developed, trained and tested effectively while privacy is safeguarded and data security maintained.2 The EU General Data Protection Regulation (GDPR) was designed to ensure personal data protection while enabling the free flow of personal data across the EU. Nonetheless, regional variation remains in standards on data access, governance and privacy, resulting in uncertainty around the regulation of patient data used for AI.2

There is a need to create a common playing field across Europe, with common standards so that data is interoperable – that is, it can be transferred between, and used by, multiple healthcare systems and the technology they use. Participants across the seven European Round Table Meetings agreed that improving access to publicly funded data will be important in enabling the adoption of AI in healthcare, as it would create, and enable access to, the large datasets needed to train, test and improve AI technology.


Key insights – Germany

Due to national data protection regulations, hospitals that wish to research or implement AI applications need the consent of their patients, or they have to ensure that the data records are anonymised so that they cannot be traced. Hospitals have to ensure data security themselves and are liable for the consequences associated with an AI application.


It was suggested that the introduction of AI applications, for example in the hospital sector, must be accompanied by regular monitoring. Post-approval studies, such as those that are undertaken in the pharmaceutical sector (Phase IV studies) could serve as a model. This would offer the possibility of testing AI under real-world conditions to demonstrate its benefit.


Navigating an enabling regulatory environment for AI

While challenges exist, experts at the Round Table Meetings discussed possible recommendations to support an enabling regulatory environment:

  • Clarification is needed from the EMA as to what AI is from a regulatory point of view, as well as what evidence is required to determine its efficacy and safety – there needs to be a pan-European framework for this.
  • As proposed by the EMA, test laboratories – so-called ‘sandboxes’ – could provide spaces in which developers can build and test systems safely. Discussions at the Round Table Meetings also proposed a framework similar to the ‘Temporary Authorisation for Use’ model for adaptive AI (where the technology is able to be used whilst it undergoes evaluation for marketing authorisation), or ‘fast-track’ implementation of AI, as well as post-authorisation monitoring.2
  • The development of regulatory centres of excellence, whereby national regulatory agencies with specific capabilities are assigned pan-European responsibility for assessing AI technologies. This would enable common regulatory practice (processes and standards for the approval and regulation of AI, as well as data access and protections) across Europe while making the most of regional expertise and avoiding each agency having to ‘do it all’.
  • The creation of safe data-sharing spaces – collaborating with patient groups to gain their trust and support for accessing health data, in order to create the large datasets that AI needs. National independent bodies or institutions to regulate the use of health data may also be required.

What is clear is that AI development is rapid and growing worldwide, with the US and China both placing competitive pressure on Europe to keep up.3 Member States and the EU must work together to navigate these complexities and develop regulation and policies for AI. This will empower Europe to become a leader in AI innovation and harness the technology’s full potential to improve health, while simultaneously protecting European citizens and the HCPs who care for them.

Let us know your thoughts on how to create fit-for-purpose regulation for AI in healthcare by tagging @EITHealth on our Twitter or Facebook channels, using the hashtag #EITHealthAI

References

1 European Commission. White paper on artificial intelligence: a European approach to excellence and trust. 2020. Available from: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (accessed January 2021).

2 EIT Health and McKinsey & Company. Transforming healthcare with AI: The impact on the workforce and organisations. 2020. Available from: https://eithealth.eu/wp-content/uploads/2020/03/EIT-Health-and-McKinsey_Transforming-Healthcare-with-AI.pdf (accessed January 2021).

3 Cohen IG, Evgeniou T, Gerke S, Minssen T. The European artificial intelligence strategy: implications and challenges for digital health. Lancet Digital Health 2020; 2: e376–e379.