
Editorial – Digital healthcare regulation: paving the way for innovation


During the Round Table Series, participants identified some of the challenges facing regulators when it comes to digital healthcare. Here, Jesper Kjær, Director of DKMA Data Analytics Centre, Denmark, provides his expert opinion on these regulatory challenges and the potential way forward for Europe.

When it comes to innovation in the AI space, the speed of change has surprised many people. This has made it more difficult to plan ahead and set out regulatory processes in advance of the technology being developed and implemented. That’s why AI has so far been dealt with under existing regulations. For example, at the Danish Medicines Agency, where I head up the Data Analytics Centre, we currently include AI under medical device regulations, but it does not really belong there.

Over the coming years, it will be imperative that we create appropriate and specific regulatory processes for AI in healthcare to ensure it is both safe and effective.1 Within healthcare, regulatory processes aim to minimise any risks to patients and practitioners, while maximising the potential benefits. As a minimum, this means AI must adhere to regulatory requirements around privacy and data confidentiality, as well as the Hippocratic Oath to “first do no harm.”2 But AI comes with some new risks that current regulations around medical devices and software do not consider. What are these risks and where do we go from here?

 

Limited real-world experience of AI in healthcare settings

Current regulations for medical devices have been developed alongside the technology.2 They are based on shared knowledge gained from real-world experience of how these devices work and the risks they present.2 This experience has enabled us to develop well-defined rules and processes for approving medical devices, with stringent criteria regarding the robust evidence required to prove their safety and efficacy.


Unfortunately, we don’t yet have the luxury of real-world experience of AI in healthcare that is extensive enough to shape regulation. This lack of experience means there is real uncertainty around predicting the potential risks, particularly for data-driven AI, which can change in an automated way as it incorporates new data.2,3


Current regulatory processes are not well suited to such uncertainty, and it may require a cultural shift among all stakeholders before we can accept a certain amount of unpredictability when it comes to AI that learns, adapts and changes.

We may not yet have a well-defined solution to this challenge within healthcare, but there are lessons we can learn from other sectors that traditionally have had a greater focus on technology. For example, the car and aviation industries are ahead of healthcare in terms of implementing AI. They can provide useful lessons for healthcare, particularly regarding how version control can be successfully incorporated into regulation to handle some of the uncertainty of data-driven AI. Their experience can also inform debate around some of the ethical dilemmas associated with AI, for example who is liable when things go wrong. These can form a foundation that we can start to build upon in healthcare.

Moreover, I wonder why, if we have allowed regulation to develop alongside advances in medical device technology, we seem less comfortable with this happening when it comes to AI. This likely relates to how difficult it is to predict the risks, as well as some hype around exactly what AI is capable of. Some confusion around the different types of AI, and the different regulatory challenges they therefore present, may also play a role.

Static rule-based systems versus data-driven AI

AI is a broad term that covers rule-based systems, as well as data-driven tools that have a greater potential for autonomy. Rule-based systems are those that give predictable results from previously validated protocols.2 In these cases, the systems are essentially performing automated pre-programmed tasks, and therefore current medical device standards may be sufficient to regulate their development and use.2
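To make this distinction concrete, here is a minimal, hypothetical sketch (in Python, with invented clinical thresholds and synthetic data) contrasting a rule-based check, whose behaviour is fixed by a pre-programmed protocol, with a data-driven model whose behaviour depends entirely on the data it happened to be trained on. It is an illustration only, not an example from any regulatory framework.

```python
# Hypothetical sketch: rule-based system vs data-driven (machine learning) system.
# Thresholds, features and data are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1) Rule-based system: output follows a fixed, previously validated protocol,
#    so it is predictable and auditable line by line.
def rule_based_flag(heart_rate: float, systolic_bp: float) -> bool:
    """Flag a patient for review using hard-coded thresholds."""
    return heart_rate > 120 or systolic_bp < 90

# 2) Data-driven system: behaviour is learned from data, so retraining on new
#    data can change its outputs even though the code itself is untouched.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 2))                          # synthetic features
y_train = (X_train[:, 0] - X_train[:, 1] > 0).astype(int)    # synthetic labels
model = LogisticRegression().fit(X_train, y_train)

print(rule_based_flag(130, 85))        # always True for these inputs
print(model.predict([[1.2, -0.5]]))    # depends on what the model was trained on
```

The point of the contrast is that the second system’s outputs can shift whenever it is retrained, even though none of its code has changed, and that is precisely the property that existing medical device standards were not designed to handle.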

For AI tools that use machine learning and have the potential for autonomy, things become trickier.4 In these systems, large datasets are used to conduct complex statistical computations, and the use of heuristics means they can learn and make judgements or predictions without outside influence. This presents a challenge for regulation because we must consider what evidence can be used to permit marketing of something that is designed to change, and at what point an update represents a new version requiring re-approval.2,4,5 This creates a tension between the benefits of agile systems that can change and improve, and the checks and approvals needed to ensure safety and effectiveness as algorithms learn and evolve.

This kind of ‘version control’ is a novel challenge for regulators. Medical interventions have traditionally been approved according to a very specific formulation and manufacturing process, with incredibly strict requirements for quality control between and within batches. Any small change to this requires re-approval. In contrast, software development generally doesn’t work in this way – feature enhancements are included regularly without approval from a regulatory body. This risks introducing error, but restricting it may stifle innovation and the potential benefits that could come from updated systems. When it comes to AI solutions in healthcare, understanding how to balance the risks and benefits will be a key challenge for regulators.
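As a purely illustrative sketch of what such version control might look like in practice, the snippet below fingerprints a trained model together with an identifier for the dataset it was trained on, so that any change relative to the approved version is at least detectable and can trigger a review. The hashing scheme, parameter names and decision logic are assumptions for illustration, not an actual regulatory mechanism.

```python
# Hypothetical sketch: detecting when a deployed model no longer matches the
# version that was originally approved. All values are invented for illustration.
import hashlib
import json

def fingerprint(model_params: dict, training_data_id: str) -> str:
    """Return a stable hash of the model parameters and the dataset they were trained on."""
    payload = json.dumps({"params": model_params, "data": training_data_id}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

approved_version = fingerprint({"coef": [0.80, -1.20]}, "training-set-2020-12")

# ... later, after a retraining cycle ...
candidate_version = fingerprint({"coef": [0.83, -1.19]}, "training-set-2021-01")

if candidate_version != approved_version:
    # The system has drifted from the approved version: log the change, re-run
    # the validation suite and decide whether re-approval is required.
    print("Model has changed since approval - review required before deployment.")
```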

Of course, the complexity of some AI solutions can in itself mean it will be difficult to predict the potential risks and benefits. This complexity can render AI solutions to some degree a ‘black box’ for clinicians, patients and regulators alike (that is, they may be unable to see or fully understand their inner workings).2,5–7 But I believe there are parallels in traditional medicine that suggest unforeseen hazards and lack of a known mode of action do not necessarily render an intervention impossible to regulate.

For example, we have drugs that received approval even though their precise mechanism of action was not known at the time. An example of this is the use of methylphenidate in attention deficit hyperactivity disorder (ADHD).8 This drug was developed decades before it was used in ADHD, and was initially used to treat other conditions. However, research showing the effects of stimulants on children with ADHD eventually led to the approval of methylphenidate for the condition. Of course, this approval did not rely on a known mechanism of action; the mechanism is still being elucidated today. Rather, as with all drugs, it was approved based on reproducible results for patients, with evidence of efficacy and safety that supports a positive balance between risks and benefits. A similar logic could be cautiously applied to ‘black box’ AI solutions whose inner workings we do not fully understand.

Representative data and avoiding bias

Regardless of the complexity of an AI solution, one of the clear risks that can be foreseen and planned for is bias. The old maxim “garbage in, garbage out” holds true for all data-based solutions because biased data will lead to biased outputs.1 So strict controls over the data and materials used to train algorithms will be necessary to ensure those algorithms are free of bias. For me, this is central to effective regulation of AI.

For instance, we need to consider whether an algorithm trained on data from one population is appropriate for use in another.5,9 If we develop an AI solution trained on data from Denmark for example, we cannot assume that it can also be applied elsewhere in Europe. We have to really understand the data we have used because the representativeness of the training data influences whether positive results can be reliably reproduced elsewhere. Including certain demographics from one population during training may lead to a system that discriminates against particular individuals in another population where the demographics are different (for example, biasing against an ethnic minority not well represented in the training data).
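As a simple, hypothetical illustration of such a representativeness check, the sketch below compares the demographic make-up of a training set against that of the population where the tool would be deployed. The group names, proportions and threshold are invented for illustration; real checks would of course need far richer descriptions of both populations.

```python
# Hypothetical sketch: flagging demographic groups that are substantially under-
# or over-represented in the training data relative to the target population.
training_demographics = {"group_a": 0.92, "group_b": 0.06, "group_c": 0.02}
target_demographics   = {"group_a": 0.70, "group_b": 0.20, "group_c": 0.10}

MAX_GAP = 0.10  # illustrative tolerance, not a regulatory standard

for group, target_share in target_demographics.items():
    gap = abs(training_demographics.get(group, 0.0) - target_share)
    if gap > MAX_GAP:
        print(f"Warning: {group} differs by {gap:.0%} between training data and "
              f"target population - results may not be reproducible there.")
```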

When AI is capable of learning, any biased signal can be propagated and amplified. This is where version control will again be important: there will need to be updates at regular intervals, with checks for any bias that might have been introduced. Of course, most bias is unconscious and therefore difficult to spot, so we still don’t really know how best to assess the quality of data for AI to ensure the positive results seen during development are reproduced when the solution is implemented in real-world settings.
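One way such a recurring check could work, purely as a hypothetical sketch, is to measure a performance metric separately for each subgroup at every update and hold the release back if the gap between subgroups grows too large. The metric values and tolerance below are invented; this is not a validated fairness test.

```python
# Hypothetical sketch: a bias check run at each model update, comparing a
# performance metric (e.g. sensitivity) across subgroups. Numbers are invented.
def subgroup_gap(metrics_by_group):
    """Largest difference in the chosen metric between any two subgroups."""
    values = list(metrics_by_group.values())
    return max(values) - min(values)

candidate_metrics = {"group_a": 0.91, "group_b": 0.84, "group_c": 0.88}
TOLERANCE = 0.05  # illustrative threshold for an acceptable subgroup gap

if subgroup_gap(candidate_metrics) > TOLERANCE:
    print("Update held back: performance differs across subgroups more than allowed.")
else:
    print("Update passes this bias check; proceed with the rest of the review.")
```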

If we look again at medicines regulation, we know that randomised controlled trials are used to approve a drug by testing it under very controlled conditions to remove any possible bias, and yet we are also beginning to see how real-world evidence can change what we think we know about the effectiveness or safety of an approved drug. The same risk exists for an AI solution: bias may only be revealed once it has been tested in the real world, or it may behave differently from how we expected. I believe that while we are starting to take steps in the right direction, there is still much we need to do to truly understand what is required to ensure training data is unbiased and representative.

The way forward

How do we move forward? I believe we should use the expertise we already have in some EU member states. For example, the Netherlands has some incredibly strong medical device and software development companies, as does Germany, which has also really moved forward into testing AI. We should use the experience and expertise of these regions to form clusters of excellence that can support other regions that lack such expertise. This avoids the need for all member nations to have the same capabilities and instead ensures those with expertise will lead on certain aspects of the regulatory process.

However it is achieved, for Europe to become a world leader in the development and application of AI in healthcare, appropriate regulation that enables innovation while minimising risk is an absolute must. It’s really important that we continue discussions, such as those conducted during the EIT Health Think Tank Round Table Series and the recent European Medicines Agency (EMA) working group, to ensure that scrutiny is applied appropriately, but we must also avoid being overcautious because of fear of the unknown. By using the knowledge and experience we have gained elsewhere in healthcare (and in other sectors) as a foundation, we can see that effective and appropriate regulation of AI may be difficult, but it is certainly not an impossible task.

Join the conversation and stay up to date with the latest thinking in AI on our Twitter and Facebook channels, @EITHealth, using the hashtag #EITHealthAI.

References

1 Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare 2020. DOI: 10.1016/B978-0-12-818438-7.00012-5.

2 Turpin R, Hoefer E, Lewelling J et al. Machine learning AI in medical devices: adapting regulatory frameworks and standards to ensure safety and performance. AAMI and BSI. 2020. Available from: https://www.bsigroup.com/en-GB/Innovation/digital-healthcare/ (accessed January 2021).

3 Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med 2018; 15(11): e1002689.

4 European Commission. White paper on artificial intelligence: a European approach to excellence and trust. 2020. Available from: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (accessed January 2021).

5 Cohen IG, Evgeniou T, Gerke S, Minssen T. The European artificial intelligence strategy: implications and challenges for digital health. Lancet Digital Health 2020; 2: e376–e379.

6 Matheny M, Thadaney Israni S, Ahmed M, Whicher D (editors). Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. 2019. NAM Special Publication. Washington, DC: National Academy of Medicine.

7 Kuan R. Adopting AI in Health Care Will Be Slow and Difficult. 2019. Harvard Business Review. Available from: https://hbr.org/2019/10/adopting-ai-in-health-care-will-be-slow-and-difficult (accessed January 2021).

8 Lakhan SE, Kirchgessner A. Prescription stimulants in individuals with and without attention deficit hyperactivity disorder: misuse, cognitive impact, and adverse effects. Brain Behav 2012; 2(5): 661–677.

9 Panch T, Mattie H, Celi LA. The “inconvenient truth” about AI in healthcare. NPJ Digit Med 2019; 2: 77.