Regulating Digital Health Technologies – Continued
November 11, 2020
In our last post, we provided an overview of how the FDA regulates digital health software. In this blog post, we take a look at the specific guidelines and considerations that come into play when artificial intelligence (AI) is applied in healthcare.
The latest challenge in regulating digital solutions is the growing number of AI applications in healthcare, which touch many specialties, including radiology, cardiology, pathology, endocrinology, neurology, and oncology. AI tools often draw on vast amounts of static as well as dynamic clinical and real-world data to build algorithms and self-learning products that inform healthcare providers and drive clinical workflows, diagnostics, and even treatment decisions in a more timely manner.
When regulating an AI tool, data safety, data security, and patient privacy have to be assessed in addition to standard medical safety and risk. AI tools also face extra scrutiny because of the way they alter how physicians interact with their patients. AliveCor's mobile app, the first AI healthcare tool, was approved in 2014; it could analyze the recording from a portable EKG and send it to a healthcare professional for interpretation. In 2018, the FDA issued the first marketing permission for an AI-based device that requires no human input: IDx-DR, a device that detects certain diabetes-related eye problems. Caption Guidance, a second such device that does not require human input, was approved in 2020 to guide an untrained user of an ultrasound system.
The pace of AI approvals has been steadily increasing. Today, AI tools are most likely to be reviewed through the de novo pathway or premarket approval, but as more approved algorithms reach the market, the growing pool of predicate devices will make 510(k) clearance more common.
In 2019, the FDA began discussing regulatory considerations for medical AI software in a concept paper that proposes AI-specific quality system requirements. Most importantly, it included, for the first time, provisions for adaptive AI that learns from non-static, unlocked data sets. These provisions would allow software developers to update their algorithm, or allow it to adapt itself within predetermined parameters, and would require a test protocol to be followed to validate any changes. Official guidance and recommendations based on this paper have yet to be formalized and implemented.
Until early 2020, radiology was the medical specialty driving AI applications forward, and it was largely expected that decisions in radiology would trickle down to other specialties. Since then, the COVID-19 pandemic has spurred the rapid creation and application of new software tools, including AI algorithms that gather and compile disease knowledge, support diagnosis and contact tracing, use predictive analytics to devise screening and triage paradigms where testing resources are scarce, and deliver virtual care, remote monitoring, and education to large portions of the population.
To meet the challenges of the pandemic, the development of these tools cannot be stifled by regulatory hurdles, yet they must still be built responsibly and to a high standard of quality to guarantee patient and healthcare worker safety. The FDA met this challenge by releasing updated public health and digital health policies in March 2020 that allow innovators to react to this imminent public health crisis. In addition, Emergency Use Authorizations (EUAs) have been granted for software used to screen COVID-19 patients, for remote monitoring systems, and for wearable devices.