WHO Examines Risks, Rewards of Using AI in Clinical Trials
Although AI holds great potential in optimizing and advancing clinical research, the risks and ethical concerns it presents must be considered when using it in clinical trials, the World Health Organization (WHO) cautions in a recent report.
The report, which covers not just clinical trials but the whole drug development process, notes ways in which AI solutions are or could be used to augment different elements of clinical trials:
- Supporting trial design, including decentralized trials that use real-world data, predicting trial outcomes, selecting sites, optimizing dose selection and regimens, choosing clinical endpoints, and improving drug adherence
- Identifying and selecting participants based on data such as medical history, demographics and social media content, as well as identifying biomarkers for use in selecting participants and helping patients find trials
- Gathering and managing data from digital health technologies in trials and assessing endpoints, including safety signals
- Analyzing trial results, automatically feeding data into statistical analysis tools and producing documents, tables, labels and reports
But challenges around bias, safety, interpretability/transparency, responsibility/accountability, privacy and informed consent, as well as governance, must be monitored and addressed, according to the report.
Bias and discrimination, for example, could be built into AI technologies, whether unintentionally or intentionally, depending on who develops and deploys the technology, and could lead to medicines unfit for diverse populations.
Datasets used for drug development algorithms “may contain certain biases, including undersampling of people with irregular or limited access to healthcare. These include ethnic minorities, women and socially disadvantaged groups and can be expressed in electronic health records, genomic databases and biobanks,” the report warns. “Training or validating algorithms with these data can encode the biases in algorithms, making them unrepresentative, such that the models are not sufficiently generalizable, resulting in suboptimal outcomes or harm for disadvantaged groups.”
Using health data in AI-informed drug development also presents a number of unique issues. For instance, in leveraging AI to identify patients and improve adherence, sponsors could end up gathering and using data in ways that weaken patient privacy and informed consent. Because of this, patients recruited using AI tools, such as tools that assess health records and public social media data, must provide meaningful informed consent for these uses of their data, possibly proactively and with additional informed consent measures.
Using public data, or combining healthcare and nonhealthcare datasets, also poses risks to patients that must be mitigated by strong privacy and human rights protections, WHO says.
Similarly, outside parties hired by investigators in AI-supported trials to analyze data or apply algorithms could cause issues. “The participation of third parties raises concern about the handling of sensitive health data, the commitment of any third party to the business and professional standards of healthcare companies, subsequent uses of the data, and to whom access is provided,” the report reads.
AI also presents safety hazards that must be closely watched and regulated, including risks to patient safety when drug development algorithms are not tested for potential errors or for their capacity to produce false-positive and false-negative recommendations. And while AI algorithms hold huge potential for identifying and developing new medically beneficial compounds, they could also be used to identify bioweapons in just hours.
Overall, it falls to the global community and governments to ensure that the use of AI does not tilt pharmaceutical and vaccine development toward profits at the expense of public health and individual patients, WHO says.
“To do so, governments must establish an effective approach to governance, including defining standards, rules, regulations and legal frameworks that prioritize public health and the public interest,” the report concludes. “WHO will continue to examine and monitor how AI is affecting the development and delivery of medicines and vaccines and identify ways in which WHO, member states, pharmaceutical companies, civil society, and global health-oriented product development partnerships and researchers can harness AI to improve pharmaceutical development and access to address unmet health needs.”
The organization says it’s considering drawing up new ethics guidance on the use of AI and looking deeper into governance for data management, AI regulatory considerations and legislation.