Balance AI Potential with Unique Challenges, FDA Drug Policy Chief Says
Recent years have shown AI’s potential for advancing drug development is undeniable, with hundreds of drug submissions to the FDA referencing AI solutions. But AI’s integration in trials must be balanced with careful consideration of a number of unique challenges, says M. Khair ElZarrad, director of the Center for Drug Evaluation and Research’s Office of Medical Policy.
Since 2016, the FDA has seen a surge of drug submissions citing the use of AI across the spectrum of drug development, from drug discovery and trials to postmarket safety surveillance, ElZarrad said during an episode of the Q&A with FDA podcast. The agency has received approximately 300 AI-referencing submissions in the past eight years, he added.
The FDA recognizes AI’s possibilities in clinical research, ElZarrad said, including its potential to modernize the design of traditional and decentralized trials, support trials that use real-world data and assess the large volume of drug safety and effectiveness data seen today. AI can also serve as a tool for pulling and making sense of data from electronic health records, medical claims and other sources that may not arrive in a structured format.
AI also holds great potential in bolstering safety signal identification, predicting adverse events and participant dropouts, strengthening enrollment and recruitment efforts (especially those related to diversity) and improving retention and access to trial information. But these possibilities need to be tempered with an understanding of the limitations that can be built into AI solutions, unintentionally or not, including biases, output reliability and other concerns.
“Responsible use of AI demands, truly, that the data used to develop these models are fit-for-purpose and fit-for-use,” he said. “This is our concept we try to highlight and clarify.”
“Fit-for-use really means that the data should be both relevant, i.e. the data could include key elements and sufficient number of representative participants,” he continued, “and also that the data is reliable … [in terms of] factors such as accuracy, completeness, traceability.”
In addition, the intricacies of the methodologies powering AI solutions can make it challenging to understand how these solutions were developed and how they deliver their conclusions. Clearing this hurdle could require the development of new transparency practices, he said.
The performance of AI solutions may also deteriorate, ElZarrad said. This applies especially to learning systems, which can see their outputs differ over time due to “data drift,” a concept in which the statistical properties and characteristics of input data shift.
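The data drift idea described above can be made concrete with a minimal monitoring sketch: compare the statistical properties of the inputs a model sees today against those of the data it was developed on. The function name, sample values and threshold below are illustrative assumptions, not anything from the FDA or the podcast.

```python
# Minimal sketch of "data drift" monitoring: flag when the statistical
# properties of a model's input data shift away from the data it was
# developed on. All names, values and thresholds are illustrative.
import statistics

def drift_score(baseline, current):
    """Absolute shift in mean, scaled by the baseline standard deviation."""
    base_mean = statistics.mean(baseline)
    base_sd = statistics.stdev(baseline)
    return abs(statistics.mean(current) - base_mean) / base_sd

# Baseline: e.g. a lab value in the population the model was trained on.
baseline = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.9]
# Current inputs: the incoming population has shifted upward.
current = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]

score = drift_score(baseline, current)
if score > 2.0:  # illustrative alert threshold
    print(f"Data drift detected (score={score:.1f}); model outputs may degrade")
```

Production monitoring would typically use a proper two-sample test over many features rather than a single mean shift, but the principle is the same: outputs are only as trustworthy as the stability of the inputs.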
To address these challenges, the FDA has been gathering feedback, holding multiple workshops, initiating demonstration projects and running a steering committee on the general use and feasibility of digital health technologies and decentralized trials, in addition to putting out a discussion paper specifically on AI and machine learning. It’s the FDA’s position that shaping the regulations behind AI will take collaboration on a global, multistakeholder scale.
“The mutual learning is really critical for us, and we hope as we move forward collectively, we shape this field in a responsible way,” ElZarrad said. “We do recognize that we need to learn from experts and experiences across sectors here … not just the standard traditional sectors. We really need to go beyond, into the technology, into ethics and beyond.”