FDA and Artificial Intelligence

In general, the FDA is seeking to ensure the safety and efficacy of new devices that use AI without hampering innovation. This balancing act is nothing new for the FDA, but how the agency manages safety and efficacy for medical devices incorporating AI is undergoing refinement. The FDA has its traditional concerns about AI as a software tool: validation, software development lifecycle management and documentation, the need for well-controlled study designs, and so on. But AI also presents some new, interesting challenges:
- AI programs often depend on real-world benchmark data as original inputs. The real world is notorious for its variability, so how do you justify the benchmark data chosen?
- How do you ensure that the benchmark data chosen supports the intended use of the device?
- If you need to re-train your AI algorithm with new real-world data, what does that mean for the status of the device as a cleared medical device?
- There are different kinds of AI, for example machine learning and deep learning. Given that their capabilities differ, what are the implications for validation?
- How should AI medical device companies be evaluated from a quality perspective? Should they be expected to support more rigorous (i.e. more complex and expensive) systems and approaches to testing and software development lifecycle management?
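To make the benchmark-data questions above a little more concrete, here is a minimal sketch of one kind of evidence a manufacturer might offer: a two-sample Kolmogorov-Smirnov comparison of a clinically relevant feature (here, patient age) between the benchmark dataset and a sample representing the intended-use population. The data, feature, and threshold are all illustrative assumptions, not drawn from any FDA guidance.

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the two samples' empirical CDFs."""
    a = sorted(sample_a)
    b = sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Fraction of points in the sample that are <= x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    all_points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in all_points)

# Illustrative data: ages in the benchmark vs. the intended-use population.
benchmark_ages    = [34, 45, 51, 62, 58, 47, 66, 70, 39, 55]
intended_use_ages = [36, 44, 50, 61, 59, 49, 64, 72, 41, 54]

drift = ks_statistic(benchmark_ages, intended_use_ages)
print(f"KS statistic: {drift:.2f}")

# A large statistic would flag that the benchmark may not represent the
# population the device is intended for (the 0.3 cutoff is illustrative).
if drift > 0.3:
    print("Benchmark may not represent the intended-use population")
```

A check like this is only one piece of a justification, of course; the harder regulatory questions are which features matter clinically and what degree of divergence is acceptable for the device's intended use.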