The tech world is buzzing with the realities and possibilities of Artificial Intelligence (AI). AI makes it possible for machines to learn from experience, adjust to new inputs, and perform human-like tasks that we normally think require natural intelligence. There are currently relatively few FDA-cleared products on the market that use AI, but the use of AI in medical device development is accelerating dramatically because it holds great promise for solving health problems.
But AI is a relatively new tool for medical device developers, and interest in applying it to health problems is rising fast. So what is the FDA concerned about?
FDA and Artificial Intelligence
In general, the FDA is seeking to ensure the safety and efficacy of new devices that use AI without hampering innovation. This balancing act is nothing new for the FDA, but how the agency manages safety and efficacy for medical devices incorporating AI is still being refined.
The FDA has its traditional concerns about AI as a software tool: validation, software development lifecycle management and documentation, the need for well-controlled study designs, and so on. But AI also presents some new and interesting challenges:
- AI programs often depend on real-world benchmark data as original inputs. The real world is notorious for its variability, so how do you justify the benchmark data chosen?
- How do you ensure that the benchmark data chosen supports the intended use of the device?
- If you need to re-train your AI algorithm with new real-world data, what does that mean for the status of the device as a cleared medical device?
- There are different kinds of AI, such as machine learning and deep learning. What are the implications for validation when the capabilities of these approaches differ?
- How should AI medical device companies be evaluated from a quality perspective? Should they be expected to support more rigorous (i.e., more complex and expensive) systems and approaches to testing and software development lifecycle management?
In the face of these challenges, the FDA has responded in a variety of ways. For example, the FDA recently issued a draft guidance indicating that it intends to focus oversight on high-risk software.
Software determined to be lower risk may not be regulated in the same way as a traditional medical device, freeing the FDA to focus its energies on the areas with the greatest impact on its mission.
Also, the FDA has been actively exploring ways to adjust its processes and procedures to better match the pace of software development. This is particularly relevant for devices that incorporate AI, since AI development is usually iterative by nature and AI algorithms are constantly learning.
To this end, the FDA has implemented the Digital Health Software Precertification Pilot Program, which seeks to develop a model that will “provide more streamlined and efficient regulatory oversight of software-based medical devices developed by manufacturers who have demonstrated a robust culture of quality and organizational excellence, and who are committed to monitoring real-world performance of their products once they reach the U.S. market.”
The Use (and Regulation) of AI in Medical Devices is Just Beginning
The FDA is still in the early stages of determining reasonable and appropriate clearance criteria for medical devices that incorporate AI. Advances in AI are accelerating both the pace of innovation and the regulatory demands on digital health technologies. The FDA will remain central to ensuring the safety and efficacy of devices while addressing the need for more rapid access to clinical technologies.
Having a knowledgeable and resourceful team guiding your submission process is more critical than ever, as both formal and informal guidance will continue to change regularly.