Scientists from around the world gathered at the National Institutes of Health in Bethesda April 11-12 to present computer models that predict toxicity. The Predictive Models for Acute Oral Systemic Toxicity workshop brought model developers together with regulatory agency representatives to discuss how these models might reduce animal use for chemical safety testing.
The Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM) launched this modeling project in November 2017. The group provided data sets and invited scientists to develop models that would predict specific acute oral toxicity endpoints needed by regulatory agencies, such as whether a substance is highly toxic or nontoxic.
Goal is reducing animal use
Regulatory agencies use such data to develop requirements for packaging and personal protective equipment, product warning labels, and guidelines for handling environmental releases.
Nicole Kleinstreuer, Ph.D., deputy director of the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM), said the workshop was designed to answer two main questions.
“What can we do [in the near term] to really start using these computational models for end-user applications?” she asked. “Also, can we start thinking more deeply about model interpretation, things like variability, mechanistic insight, and characterizing uncertainty, and thinking about the landscape of chemicals for which we don’t have good prediction models?”
Outputs from the different models are now being combined to generate overall predictions for the acute oral toxicity endpoints of interest. This crowdsourcing approach builds on the strengths of the individual models while compensating for their weaknesses. The predictions will be made available via the Environmental Protection Agency (EPA) Chemistry Dashboard for use by regulators and researchers.
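The idea of combining outputs from several models can be sketched as a simple majority-vote consensus. This is only an illustrative example, not the project's actual aggregation method; the model names and toxicity categories below are hypothetical.

```python
from collections import Counter

def consensus_prediction(model_outputs):
    """Combine categorical predictions from several models by majority vote.

    model_outputs: dict mapping a model name to its predicted toxicity category.
    Returns the most common category and the fraction of models that agree.
    """
    votes = Counter(model_outputs.values())
    category, count = votes.most_common(1)[0]
    return category, count / len(model_outputs)

# Hypothetical predictions for one chemical from three models
predictions = {
    "qsar_model": "nontoxic",
    "structural_alerts_model": "nontoxic",
    "bioactivity_model": "highly_toxic",
}

label, agreement = consensus_prediction(predictions)
print(label, round(agreement, 2))  # prints: nontoxic 0.67
```

In practice, a consensus scheme would also weight models by their validated performance and flag chemicals where the models disagree, rather than relying on a plain vote.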
Organizers will assess the models submitted for the workshop and publish two journal articles on the project’s outcomes. One article will focus on how the acute oral systemic toxicity data for the project were compiled and evaluated. The second will address the computational modeling effort, including the generation of the overall predictions.
Various approaches needed
“Federal agencies have a wide variety of different requirements [for acute oral toxicity data],” said ICCVAM co-chair Emily Reinke, Ph.D., from the U.S. Department of Defense. “Even within the same federal agency, there are variations on what is and is not accepted, and what is needed for submission.”
Furthermore, systemic toxicity involves complex processes. Because no single model can suffice, the models presented at the workshop used a variety of approaches. For example, some models generated predictions based on chemical properties, whereas others considered biological activity.
Considerations for use
Participants in breakout groups discussed what would be needed for these models to be used by regulatory agencies. Opportunities identified for the near term included using models in conjunction with other evidence, such as mechanistic data and exposure predictions, to characterize toxicity potential. Models could also be used to identify highly toxic substances and find additional data sources to bolster predictions.
Participants also raised concerns about the need for adequate documentation of models, protection of confidential business information, and clear definition of a model’s limits.
They agreed that regulators should work closely with the model developers to ensure appropriate implementation. “We can’t have confidence in something we don’t understand,” said Anna Lowit, Ph.D., from EPA. Lowit co-chairs ICCVAM along with Reinke.
(Catherine Sprankle is a communications specialist for ILS, the contractor supporting NICEATM.)