
Second Annual Conference on the State of the Science on Development and Use of New Approach Methods (NAMs) for Chemical Safety Testing

Conference Summary
October 19-20, 2020
U.S. Environmental Protection Agency (EPA)

Webinar



Table of Contents

Day 1 Summary
Welcome and Charge to the Group
Implementation of Animal Testing Reduction at EPA
Russell Thomas (EPA): Overview of EPA NAMs Work Plan
Louis Scarano (EPA): Progress on Implementing the TSCA Alternatives Strategic Plan
State of the Science in Development of NAMs
Andreas Bender (University of Cambridge): Using Chemical, Biological, and In Vivo Data for NAMs: Which Data Do We Have, and What Can We Do with It?
Kamin Johnson (Corteva): Transcriptome-Based Derivation of an In Vivo POD: Current and Future Utility
Avi Ma'ayan (Mount Sinai Health System): Drugmonizome and Drugmonizome-ML: Integration and Abstraction of Small Molecule Attributes for Drug Set Enrichment Analysis and Machine Learning
Ivan Rusyn (Texas A&M University): "Fit for Purpose" for Organotypic Models in Environmental Health Protection
Day 1 Closing Remarks
David Fischer (EPA)
Day 2 Summary
Welcome
Addressing Current Limitations in NAMs
Chad Deisenroth (EPA): Retrofitting an Estrogen Receptor Transactivation Assay with Metabolic Competence
David Crizer (NTP): In Vitro Disposition of Tox21 Chemicals: Initial Results and Next Steps
Developing Scientific Confidence in NAMs
Clemens Wittwehr (Joint Research Centre of the European Commission): An OECD Harmonized Template to Report NAM Results in Regulatory Environments: Principles and Practical Use
Monique Perron (EPA): Case Study #1: Integration of NAM Data for Evaluating Potential Developmental Neurotoxicity
Andrew White (Unilever): Case Study #2: Integration of NAM Data in a Next-Generation Risk Assessment for Cosmetic Ingredients
Todd Stedeford (EPA): Case Study #3: Incorporating the Threshold of Toxicological Concern into Regulatory Decisions under the Amended Toxic Substances Control Act
Day 2 Closing Remarks
David Dunlap (EPA)
Appendix A: Acronyms
Appendix B: Hyperlinks to Presentations and Other Materials
Appendix C: Questions and Answers


Day 1 Summary

Welcome and Charge to the Group

Maureen Gwinn (Director of EPA's Biomolecular and Computational Toxicology Division [EPA-BCTD] within the Office
of Research and Development [EPA-ORD]) opened the webinar and introduced EPA Administrator Andrew Wheeler.
In pre-recorded opening remarks, Administrator Wheeler welcomed everyone to the Second Annual Conference on
the State of the Science on Development and Use of New Approach Methods (NAMs) for Chemical Safety Testing. In
the first conference held in December 2019, over 500 people with a common interest gathered to implement
Administrator Wheeler's vision to eliminate animal testing.

Administrator Wheeler gave examples of how EPA has since made progress to reduce, replace, and refine animal
testing requirements. EPA established a website dedicated to NAMs information. In February 2020, EPA issued
guidance deeming pesticide testing on birds unnecessary and waiving those requirements. EPA engaged
stakeholders by hosting a public webinar and by convening a meeting of the Scientific Advisory Board (SAB). In June
2020, EPA released the NAMs Work Plan, which outlines objectives, strategies, and deliverables to reach the goal of
eliminating mammal testing by 2035. In July 2020, EPA released guidance reducing unnecessary testing on fish. In
October 2020, EPA released guidance expanding waivers for dermal toxicity tests for pesticides. By the end of 2020,
EPA will publish for public comment a draft method for making additions to the list of NAMs that EPA maintains
under Section 4 of the Toxic Substances Control Act (TSCA). This conference will chart the path forward for reducing
animal testing while upholding chemical safety.

M. Gwinn thanked Administrator Wheeler for his opening remarks and introduced Jennifer Orme-Zavaleta (Principal
Deputy Assistant Administrator for Science of EPA-ORD) to provide the charge to the group. J. Orme-Zavaleta
emphasized that Administrator Wheeler is very passionate about making progress on this issue. The goals of the
Administrator's directive are threefold:

1)	reduce requests for and funding of mammalian studies by 30% by 2025;

2)	eliminate mammal testing by 2035; and

3)	require case-by-case approval from the EPA Administrator for any mammalian studies requested or funded after January 2035.

J. Orme-Zavaleta recounted last year's inaugural conference during which stakeholders convened to discuss how
EPA would achieve these ambitious goals and create a work plan. The work plan product released in June 2020
outlines the following objectives:

1)	evaluate regulatory flexibility for accommodating NAMs;

2)	develop baselines and metrics for assessing progress;

3)	establish scientific confidence and demonstrate application;

4)	develop NAMs that fill critical information gaps; and

5)	engage and communicate with stakeholders.

J. Orme-Zavaleta stated that EPA is aiming to release a progress report by the fourth quarter (Q4) of 2020. She
encouraged attendees to check the EPA NAMs website for updates and to email NAM@epa.gov with any questions.

Implementation of Animal Testing Reduction at EPA

Russell Thomas (EPA): Overview of EPA NAMs Work Plan

Russell (Rusty) Thomas (Director of the Center for Computational Toxicology and Exposure within EPA-ORD) provided
an overview of the June 2020 NAMs Work Plan and the strategies that have been developed by EPA to meet the
goals of the Administrator.

Summary

After the first EPA NAMs conference, held in December 2019, EPA's Office of Chemical Safety and Pollution Prevention (EPA-OCSPP) and ORD were tasked with developing the NAMs Work Plan to guide EPA toward meeting the ambitious goals laid out in Administrator Wheeler's directive to reduce animal testing. Thirty-four experts from across EPA convened for an internal workshop in January 2020 to begin developing the work plan.

The experts were divided into five subgroups reflecting the five objectives of the work plan. These objectives are
detailed below.

The subgroups reviewed the discussions from the 2019 NAMs Conference's breakout groups to help guide the
development of each section.

Each objective of the work plan has short- and long-term goals with specific deliverables and timelines.

This work plan was developed using current information and understanding of the state of the science. As more information is reviewed, the work plan may need to evolve and adapt.

Objective 1: Evaluating Regulatory Flexibility and Existing Statutes in Accommodating NAMs

o Review the existing regulations, policies, guidance, and statutes that may not allow for flexibility in
applying NAMs. EPA will report on this review's findings in 2021.

Objective 2: Developing Baselines and Metrics for Assessing Progress

o Build upon previously established baselines and metrics for animal use in OCSPP and ORD, progressively
extending to each of the other offices. EPA will develop the summary metrics and report them through
the NAMs website in Q4 of each year, starting in 2021, aligning with the annual NAMs conferences.
Objective 3: Establishing Scientific Confidence and Demonstration of the Application of NAMs

o Characterize the scientific quality of the NAMs because they will need to be "as good as or better than"
traditional methods. EPA will also work to develop scientific confidence in the NAMs across all agencies,
develop reporting templates, and demonstrate the application of NAMs to regulatory decisions through
case studies. EPA will engage the National Academy of Sciences (NAS) to develop a report on the
uncertainties and utility of existing mammalian toxicity tests by Q4 2022. EPA will release a scientific
confidence framework to evaluate the quality, reliability, and relevance of NAMs in the third quarter (Q3)
of 2022. Case studies to evaluate application of NAMs to decision-making will be released biennially
starting in 2022.

Objective 4: NAM Development to Fill Scientific Gaps

o EPA will facilitate the joint planning of NAM development within EPA across offices and will encourage
the development of NAMs by external parties, building upon current practices such as the Science to
Achieve Results (STAR) Grant Program. Strategic Research Action Plans reflecting internal EPA research
activities will be developed on a four-year planning cycle.

Objective 5: Communication and Outreach with Stakeholders

o EPA has created a centralized portal for the release of all EPA NAMs-related information:
https://www.epa.gov/nam

o EPA will actively solicit feedback associated with all deliverables and objectives in real time through the website and through the new email address, NAM@epa.gov. The feedback will be reviewed quarterly.
o EPA will also hold public webinars associated with each deliverable release.

Additional points made:

The December 2019 NAMs Conference breakout groups were designed to match the objectives outlined in the
work plan.

It is critical that all stakeholders use the email address (NAM@epa.gov) and the NAMs website to submit
feedback, ideas, and more to EPA for this effort to be successful and beneficial for all groups involved.

Louis Scarano (EPA): Progress on Implementing the TSCA Alternatives Strategic Plan

Louis (Gino) Scarano (EPA Office of Pollution Prevention and Toxics [EPA-OPPT]) provided an update on the eight

objectives in the EPA Strategic Plan to Develop and Implement NAMs in TSCA, with a focus on the first four:

1)	implement NAMs;

2)	maintain and update Section 4(h)(2)(c) list of TSCA-acceptable NAMs;

3)	identify retrospective TSCA information;

4)	identify available confidential business information received under TSCA;

5)	use NAMs for prioritization;

6)	develop an information technology (IT) platform;

7)	collaborate with partners; and

8)	launch a TSCA NAM website.

Summary

The 2016 Frank R. Lautenberg Chemical Safety for the 21st Century Act began a drive toward reducing animal
testing and promoting alternative testing methods. One of the requirements was to develop a Strategic Plan
which was published in June of 2018.

Objective 1 has seen progress through the lung effects project, which had a manuscript submitted in July 2020,
and the skin sensitization project, which stems from a draft policy stating EPA will accept NAMs in place of the
local lymph node assay (LLNA) test. In the lung effects project, one proposed tiered testing strategy involves
insoluble polymer lung overload. These tiers start with measuring particle size distribution, then in chemico
measurements such as biosolubility, followed by computational modeling, and then finally to strategic in vivo
testing if the prior tiers indicate lung overload. A recent example where this tiered system was used identified a
chemical substance as not being a concern for lung overload, leading to revocation of the Significant New Use
Rule under TSCA for that substance. The skin sensitization project uses NAMs to study adverse outcome
pathways (AOPs) by identifying key events stemming from interactions between the substance of interest and
molecules that are associated with the allergic contact dermatitis adverse outcome.

Progress on Objective 2 has moved steadily through the release of the list's first iteration in June 2018 with 39
NAMs, its first update in December 2019 with the addition of Appendix B and new NAMs (bringing the total to
85), and the beginning of the TSCA NAMs nomination form and process. Public comments led to the
development of Appendix B, which contains "other useful information," such as tools and approaches to
enhance the use of NAMs. EPA plans to release a draft proposal on this list's NAM selection process for public
comment in 2020.

Progress on Objectives 3 and 4 is occurring through Analysis of TSCA Available, Expected, and Potentially Useful
Information (ATAEPI). Under ATAEPI, all study requests since the first one in 1979 are being reviewed. The
preliminary results highlight the most commonly requested studies, such as the 28-day oral and 90-day
inhalation tests for human health and the algae and fish acute tests for environmental organisms.

The newly established Data Gathering and Analysis Division (DGAD) at EPA will focus on Objective 5 (use of
NAMs for prioritization of TSCA chemicals).

Objective 6 progress includes the deployment of new IT platforms, such as the International Uniform Chemical
Information Database (IUCLID) 6.3, the Organisation for Economic Co-operation and Development (OECD)'s
Quantitative Structure-Activity Relationship (QSAR) Toolbox 4.3, and EPA's development of a "sandbox" system
for confidential business information via a local area network (LAN). EPA has also collaborated with the
European Chemicals Agency (ECHA) and Canada to exchange public chemical data and methods using IUCLID
and OECD Harmonization Templates.

Objective 7 has progressed through a variety of partner and stakeholder collaborations, including a series of
webinars with People for the Ethical Treatment of Animals (PETA) and the Physicians Committee for Responsible
Medicine (PCRM), workgroups with the Interagency Coordinating Committee on the Validation of Alternative
Methods (ICCVAM) and OECD, and exchanges with other stakeholders.

Objective 8 is complete with the launching of a TSCA NAM website.

Additional points made:

Between October 2015 and April 2020, OPPT received 49 chemical registration submissions from 24
companies, and 27 of those submissions relied exclusively on NAMs evidence for skin sensitization. This
information will be used along with an evaluation of 51 TSCA chemicals using skin sensitization NAMs by the
National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods
(NICEATM) to refine the draft policy on skin sensitization.

EPA is required to publish a list of TSCA-acceptable NAMs under Section 4. In developing this list, EPA will soon
be releasing documentation describing how NAMs may be nominated to be placed on the list. NAM nominations
should cover nominal information, developmental history, method description, relevance to TSCA chemical
decisions, and reliability.

State of the Science in Development of NAMs

Andreas Bender (University of Cambridge): Using Chemical, Biological, and In Vivo Data for NAMs: Which Data Do We Have, and What Can We Do with It?

Andreas Bender (University of Cambridge) discussed the various in vivo data types that exist, what can be done with
them, potential considerations for their use, and the potential path forward.

Summary

A. Bender discussed key characteristics of chemical, biological, and phenotypic data available from in vivo
animal data.

There are essentially three types of data: chemistry, phenotypes, and targets or modes of action (MOAs) that are connected through other data, such as bioactivity, phenotypic response data, and AOPs.

A. Bender reviewed three case studies.

Case Study 1: Linking chemistry or an assay to an endpoint.

o Aimed at understanding the mechanisms behind structural cardiotoxicity and demonstrated a novel
workflow to use data from different sources (ChEMBL, ToxCast, FAERS) to derive AOPs.

Case Study 2: Linking gene expression or cell morphology data to an endpoint.

o The QSTAR Project is evaluating gene expression data in lead optimization. The main takeaway is that
data require a clear rationale for interpretation. One cannot just generate data; the data need to be put
into context.

o Gene expression data could be useful, but there are experimental factors that need to be accounted for,
including cell lines, doses, time points, and other setup considerations, to ensure the relevance of the
outcomes to an in vivo setting. Data analysis factors to be accounted for include whether the focus is on
high coverage or specificity of the data, data standardization, and confirmation bias.

Case Study 3: Using historical animal data for analysis.

o These data can improve estimates of treatment-related effects, demonstrate interspecies concordance, and predict useful time points.
o There are some limitations to animal data, such as lack of annotations and standardization and under-analysis of databases such as eToxSys.

As long as it is difficult to understand existing animal data, it will be difficult to compare any new methods to
them and even more difficult to replace them.

Computational models are only as good as the underlying data. Data need to be useful for the problem at hand
and developed within context.

There will always be issues of data coverage with any shortcuts.

Partnerships and consortia will be critical to help with identifying which readouts matter, sharing data,
generating data, and developing agreements on the best practices for data analysis.

Additional points made:

No method can save an unsuitable representation or model or remedy data irrelevant to a given question.

Kamin Johnson (Corteva): Transcriptome-Based Derivation of an In Vivo POD: Current and Future Utility
Kamin Johnson (Corteva) discussed a project his company has been working on for the past four years to study
transcriptome-based points of departure (PODs) and their utility for predicting apical PODs. The presentation
covered background on the use of transcriptome-based PODs in the pesticide regulatory process and compared
traditional (apical) to transcriptome-based PODs. This work contributes to the EPA NAM work plan by establishing
scientific confidence in NAMs and developing NAMs that fill critical information gaps.

Summary

Transcriptome-based PODs present an opportunity to reduce the time and number of animals needed for risk
assessment decisions. Agrochemicals typically have the most data requirements. The rodent carcinogenicity
studies take two to three years to complete and are the rate-limiting step in the toxicity assessment process.
General mammalian toxicology studies can use approximately 1,500 animals over four years to generate various
apical POD values that are used to determine the final toxicity POD value used for risk assessment.

The rationale for using molecular PODs to predict apical PODs stems from the hypothesis that a POD based on
comprehensive molecular data will be protective of any downstream apical-effect POD. This hypothesis is compatible with that of generic AOPs in which the precursors to an apical effect are chemical exposure and then
a molecular change.

Findings from rat liver data collected by the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation
System (TG-GATES) consortium showed that the rat liver transcriptome became stable after approximately 24 hours. The four-day transcriptome-based POD and the 29-day apical POD were nearly identical, suggesting that shorter-duration transcriptome-based PODs can predict apical PODs that otherwise would have
required longer in vivo studies. Other studies of varying lengths have found similar results across five Corteva
chemicals, but some data also suggest that the liver is not always a useful surrogate for effects in other organs.
Overall, there is a growing consensus that a transcriptome-based POD can estimate an apical POD.
Transcriptome-derivation of the POD shows potential to waive the cancer bioassay study in the future. After more
studies are conducted to determine concordance across MOAs and tissue types and upon acceptance from
stakeholders, this method could reduce animal use by an order of magnitude (from 1,500 animals used over
four years to just 150 used over six months).

Additional points made:

The transcriptome POD derivation method was based on the workflow in the Johnson et al. 2020 paper in Toxicological Sciences. The method requires generation of whole-transcriptome data and then use of BMDExpress software to determine a single POD value (an illustrative sketch of this aggregation step appears below).

The Health and Environmental Sciences Institute (HESI) Emerging Systems Toxicology for Assessment of Risk
(eSTAR) committee has pulled together a molecular POD team to develop a framework to derive an in vivo
transcriptome POD for use in chemical risk assessment that will produce a human health-protective POD based
on concerted molecular change. This committee includes members from industry, regulatory and governmental
groups, academia, and consulting. K. Johnson encouraged attendees to join this committee and to register for
its October 28-30, 2020 annual meeting.
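The single-number aggregation step described above can be illustrated with a minimal sketch in Python. This is not the Johnson et al. (2020) or BMDExpress workflow itself; the gene-level benchmark dose values, the minimum-gene filter, and the percentile rule below are hypothetical placeholders chosen only to show the shape of the calculation.

import numpy as np

def transcriptomic_pod(gene_bmds, min_genes=20, percentile=5.0):
    """Collapse gene-level benchmark doses (BMDs, mg/kg/day) into a single
    transcriptomic point of departure (POD).

    Illustrative only: the percentile rule and the minimum-gene filter are
    hypothetical placeholders, not the published workflow."""
    bmds = np.asarray([b for b in gene_bmds if np.isfinite(b) and b > 0])
    if bmds.size < min_genes:
        raise ValueError("too few responsive genes to derive a POD")
    # Use a low percentile of the gene-level BMD distribution so the POD
    # tracks the most sensitive concerted transcriptional change.
    return float(np.percentile(bmds, percentile))

# Example with made-up gene-level BMDs for a hypothetical chemical:
rng = np.random.default_rng(0)
example_bmds = rng.lognormal(mean=2.0, sigma=0.5, size=200)  # mg/kg/day
print(round(transcriptomic_pod(example_bmds), 2))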

Avi Ma'ayan (Mount Sinai Health System): Drugmonizome and Drugmonizome-ML: Integration and Abstraction of Small Molecule Attributes for Drug Set Enrichment Analysis and Machine Learning
Avi Ma'ayan (Mount Sinai Health System) discussed the programs Drugmonizome and Drugmonizome-ML, which
have largely been developed for use in the pharmacology world but can be applied to systems toxicology. These app-
based programs allow bioinformatics data to be used more broadly in toxicological decision making.

Summary

A. Ma'ayan discussed the use of modelling and computational programs to study vast amounts of data covering
thousands of chemicals and dozens of human cell lines.

The genes that have been researched tend to be skewed toward certain "popular" genes, leaving many with no
data at all. Similarly, in reviewing Tox21 chemicals, there are many chemicals with no data at all, whereas others
have a wealth of data.

Genes with less available data can be compared to similar genes with more data to estimate their gene
expression. A. Ma'ayan presented a case study that reviewed which genes have been analyzed, using SARS-CoV-2 as an example. A. Ma'ayan used Geneshot, a program that searches published literature to return which genes
are mentioned and in what percentage of that literature. It also scores the similarity of less studied genes to
those most studied, allowing researchers to identify genes that may function or be expressed similarly.

Enrichr is a program that allows a researcher to find genes functionally similar to a gene of choice. Twenty-eight million gene sets have been submitted to Enrichr.

Although RNA-seq co-expression and Enrichr co-expression may not be quite as precise as literature-based
associations, they may nevertheless show novel associations.

There are many methods that can be used to combine data sources and known information to strengthen
understanding of compound effects. A. Ma'ayan's team has developed scripts to help gather these data from
publicly available sources through machine learning.

Drugmonizome is one of over 50 "Appyters" that have been developed; Appyters convert Jupyter notebooks into web-based bioinformatics applications. Drugmonizome provides drug set enrichment analysis and can potentially predict drug targets, side effects, and more (a minimal sketch of the underlying enrichment statistic appears below).

The data processed in Drugmonizome can be used to make predictions about drugs, including (potentially) their
side effects.


A. Ma'ayan presented a case study using machine learning to predict novel drugs that could potentially be used
in the treatment of COVID-19.
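The drug set enrichment analysis mentioned above can be sketched, in its simplest form, as an overlap test between a user-supplied drug set and one annotated drug set from a library. The drug names, library contents, and universe size below are hypothetical; this is a minimal illustration of the underlying statistic, not Drugmonizome's actual implementation.

from scipy.stats import fisher_exact

def drug_set_enrichment(query_drugs, library_drugs, universe_size):
    """One-sided Fisher's exact test for overlap between a query drug set
    and one annotated set (e.g., drugs sharing a target or side effect)."""
    query, library = set(query_drugs), set(library_drugs)
    overlap = len(query & library)
    # 2x2 contingency table: membership in the query set x membership in the annotated set.
    table = [[overlap, len(query) - overlap],
             [len(library) - overlap,
              universe_size - len(query) - len(library) + overlap]]
    odds_ratio, p_value = fisher_exact(table, alternative="greater")
    return overlap, odds_ratio, p_value

# Hypothetical example: 4 of 6 query drugs fall in a 50-drug annotated set
# drawn from a universe of 2,000 small molecules.
query = ["drugA", "drugB", "drugC", "drugD", "drugE", "drugF"]
annotated = ["drugA", "drugB", "drugC", "drugD"] + [f"other{i}" for i in range(46)]
print(drug_set_enrichment(query, annotated, universe_size=2000))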

Additional points made:

A. Ma'ayan demonstrated the L1000 Fireworks Display, a data map showing drug-induced transcriptomic
signatures. In this visualization, points represent one drug on one cell type; those that do not have an
established MOA are shown in gray.

A. Ma'ayan discussed many programs and platforms that can be used to analyze gene expression data,
including Geneshot, Kallisto, Enrichr, Drugmonizome, Drugmonizome-ML, Harmonizome, and more Appyters, as
well as multiple data sources such as ARCHS4 and LINCS L1000.

ARCHS4 RNA-seq data has been visualized by A. Ma'ayan's group to show gene-gene co-expression.

Ivan Rusyn (Texas A&M University): "Fit for Purpose" for Organotypic Models in Environmental Health Protection

Ivan Rusyn (Texas A&M University) discussed the use of tissue chips in tandem with modeling for regulatory decision
making and presented three case studies on their application. Tissue chips are already used for internal decision
making in the pharmaceutical industry. The science itself is fairly new, but increased funding over the past 10 years
has increased research on tissue chips. Texas A&M University receives funding from the National Center for
Advancing Translational Sciences (NCATS) to test and develop tissue chips. Overall, partnerships and investments in
this area are growing.

Summary

For each new tissue chip, over a four- to eight-month timespan, the Texas A&M University Tissue Chip Validation
Center gathers documents and training for new technologies, identifies critical elements to replicate and
conducts reproducibility testing, and defines the technology's area of use. There are a wide range of
technologies under development, such as gravity flow cultures from the University of California, Berkeley and
forced flow cultures from the University of Washington. The University of Pittsburgh's Microphysiology Systems
Database houses all the information gathered and related publications.

Going forward, tissue chip research should be "fit for purpose," meaning end users should describe the platform
types they are most interested in prior to conducting experiments.

Case #1 shows how tissue chips can be used to study reabsorption toxicokinetics in the kidney. Specifically,
tissue chip data can be used to develop models using compounds with well-understood absorption compared to
human urine data. The tissue chips are expensive and only replicate one element of a complex physiological
system, so modeling is important.

Case #2 shows how to derive a "safe dose" using NAMs. Ion channel blockage causes toxicity, and this blockage
can be observed with in vitro methods. By testing 13 drugs in vitro, the "safe dose" was predicted within a factor
of 10.

Case #3 shows the importance of considering population variability during risk characterization by using
modeling.

Tissue chip science is not ready to eliminate the use of animals in the near future. There are promising
organotypic models, but EPA needs to define the purpose. Instead of trying to completely replicate the human
physiological system on a chip, the path forward is more likely a combination of physiologically based
pharmacokinetic (PBPK) modelling, organotypic model-derived hazards, and data on mechanisms and
toxicokinetics.

Additional points made:

It is important to remember that the "best available science" can be or include non-animal tests. Many animal
tests used by EPA are not actually required. Tissue chips themselves will not be used in isolation but will serve
as another piece of evidence.

NIH is making large investments in human-based models, while animal data are the comparator for NAMs at
EPA. Animal cell-based micro-physiological systems are needed because this is where comparative data exists
for building scientific confidence in new methods.

NAS hosted a workshop in 2014 on the potential of the tissue chip for environmental health, which highlighted
an information gap: the most effective use and purpose of the method.

Day 1 Closing Remarks
David Fischer (EPA)

David Fischer is the Deputy Assistant Administrator for EPA's OCSPP. He thanked the speakers and attendees from
around the world. D. Fischer expressed appreciation for the effort that everyone made to participate in this
important virtual conference. He noted that the presentations spoke to EPA's dedication to replacing animal testing
with better science in NAMs. This research brings EPA closer to meeting Administrator Wheeler's goal of eliminating
animal testing by 2035.

Day 2 Summary

Welcome

Anna Lowit (Senior Science Advisor for EPA's Office of Pesticide Programs [EPA-OPP]) welcomed everyone to the
second day of the 2020 NAMs Conference. She noted that participants should use the Q&A box and that any
questions not answered live would be answered by email after the conference. A. Lowit noted that the first day was
very successful and productive. She noted that the presentations and recordings would be available on the
conference website following the conclusion. She introduced Alexandra Dunn, the Assistant Administrator for
OCSPP, to give her opening remarks. A. Lowit noted that Assistant Administrator Dunn is a proponent of the 3Rs:
reducing, refining, and replacing animal testing. The 3Rs reduce animal use but also make scientific assessments
more supportable and efficient.

Assistant Administrator Dunn (EPA-OCSPP) noted that a virtual conference was not the preferred option, but it
nevertheless allowed for progress during a challenging health emergency. She thanked everyone for embracing this
platform. Assistant Administrator Dunn confirmed that she is passionate about this agenda and noted that
Administrator Wheeler's September 2019 memo set very ambitious goals for the Agency. The memo built upon
progress that the Agency had been making toward the reduction of animal testing, as well as the improvement of the
science that is relied upon for regulatory decision making. Assistant Administrator Dunn noted that EPA is working
collaboratively with scientists from around the world and from across government, industry, and academia, as
exemplified by the presenters. She again thanked everyone for their participation and reiterated the importance of
submitted comments and ideas to NAM@epa.gov.

Addressing Current Limitations in NAMs

Chad Deisenroth (EPA): Retrofitting an Estrogen Receptor Transactivation Assay with Metabolic
Competence

Chad Deisenroth (EPA-ORD) presented on the development of NAMs to fill information gaps, such as inadequate
coverage of biological targets, limited capability to address tissue- and organ-level effects, lack of robust integrated
approaches to testing and assessment, and minimal capability for addressing xenobiotic metabolism in in vitro test
systems. Research on these gaps is housed under the four-year Chemical Safety for Sustainability Strategic
Research Action Plan, which includes an objective to develop and apply methods to incorporate endogenous and
exogenous xenobiotic metabolism into high-throughput in vitro assays.

Summary

There is international recognition that in vitro metabolism systems should be considered, particularly for
endocrine disruption. In response to this need, the NTP, NCATS, and EPA launched the Transform Toxicity
Testing Challenge to identify innovative solutions to retrofit high-throughput assays for metabolism. There are
parallel efforts in ORD to develop these systems, including intracellular and extracellular approaches, which can
be integrated into a strategy to model in vivo metabolic bioactivation and detoxification.

The intracellular approach to examine xenobiotic metabolism uses cell-based assays. It more closely models
effects of target tissue metabolism and allows users to bypass the need for a DNA template by enabling user-
defined composition and ratios of multiple input mRNAs. ORD used this approach to compare two activities
across the three strongest in vitro systems. The intracellular approach suggests that the liver cell is not
necessarily the gold standard and that other cells have unique metabolic processes.


The extracellular approach for examining xenobiotic metabolism uses the media or buffer of cell-based or cell-free assays. This approach more closely mimics hepatic metabolism and the circulation of metabolites. An example of this approach is the alginate immobilization of metabolic enzymes (AIME) method, which uses microplate lids fabricated with pillars that extend into the wells and carry hepatic S9 fraction together with cofactors such as NADPH to capture liver metabolism. To help improve cell viability, the S9 fractions are suspended in alginate hydrogel.

The purpose of the AIME method is to evaluate cytochrome P450 metabolism. These experiments showed the AIME method is optimized for Phase I metabolism. Metabolic activity was validated in multiple ways, and a two-hour incubation period was found suitable for parent compound depletion.

The AIME method was used for a case study on 63 chemical compounds in an effort to retrofit it to estrogen
receptor transactivation. The study reprioritized hazards based on metabolism-dependent bioactivity,
demonstrated the AIME method's utility for identifying false positives and false negative effects, and enhanced
in vivo concordance with the rodent uterotrophic bioassay. Chemicals with demonstrated estrogen receptor
activity were used to test the method and develop an estrogen receptor QSAR method.

o To test the method, the false-positive test set used chemicals with metabolites known to increase bioactivity; the results were compelling in distinguishing between parent compound and metabolite and in demonstrating bioactivation.
o The negative test set examined chemicals with metabolites that were not expected to be more bioactive than the parent compounds. Most compounds showed the expected results, and the method was also able to detect both bioactivation and bioinactivation.
o Collecting a full dataset across the 63 chosen substances will allow for extensive comparison of results
to develop this method.

Additional points made:

EPA is adapting the method to use with a bioprinter that can pattern the hydrogel into a microplate to achieve a
more scalable application to high-throughput screening.

David Crizer (NTP): In Vitro Disposition of Tox21 Chemicals: Initial Results and Next Steps
David Crizer is a chemist in the Predictive Toxicology and Screening Group at NTP. He presented on the difference
between the nominal (stated) concentration of a chemical applied to an in vitro system and the actual concentration
absorbed by the cell. D. Crizer explained how to test and predict this difference.

Summary

Current in vitro to in vivo extrapolation (IVIVE) predictions rely on the nominal concentration. If that nominal
concentration fails to accurately represent the cellular concentration of that assay, the prediction accuracy may
be affected. Uncertainty in the amount of chemical absorbed into the cell makes it difficult to say whether or
how the administered chemical concentration or dose level relates to the observed results. Current steady-state
IVIVE assumes that blood-to-tissue partitioning is equivalent to the cell-to-medium partitioning in an assay.
Currently, there is a lack of empirical research into in vitro chemical partitioning. It is unknown how many
chemicals in chemical databases, such as Tox21, have differential partitioning.

Uncertainty in dose response poses a problem for making regulatory decisions.

The hypothesis of this work is that the physicochemical properties of chemicals can be used to predict the
difference between the nominal applied concentration versus the actual observed concentration within the
cellular compartment.

This project aims to analyze about 200 chemicals representative of the broad chemical space, about 92.5% of
which are also present in ToxCast. A pilot test of 10 representative chemicals was conducted to refine the
protocols. The pilot also consisted of a "cassette" test in which five chemicals were combined and tested in a
single liquid chromatography-mass spectrometry injection to reduce testing time by roughly one third. It was
shown that these cassette samples produced similar results to individual samples. The pilot showed that the
nominal concentration did not equal the cellular concentration unless the surface area of the bottom of the dish was considered unavailable for binding.

The future of this project consists of testing minor changes to the protocol to continue to explore this subject.

Additional points made:

When a chemical is introduced to a medium, several pathways are possible; it could sit in the medium, it could
evaporate, it could be absorbed into the cell, or it could attach to the container walls.

Armitage et al. 2014 suggested that in vitro partitioning related strongly to the octanol-water partition coefficient
(log Kow) and concentration of serum in the medium. Plastic binding was not as important in this case but is still
a consideration. This model was used as a comparison to the work conducted here.
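A toy mass balance helps make the partitioning issue concrete. The sketch below is not the Armitage et al. (2014) model; the well volumes and the Kow-based partition coefficients are hypothetical placeholders meant only to show why a lipophilic chemical's cellular concentration can differ sharply from its nominal concentration.

def in_vitro_distribution(log_kow, v_medium_uL=200.0, v_cells_uL=2.0, f_serum_lipid=0.001):
    """Toy equilibrium mass balance for a chemical dosed into a culture well.

    Hypothetical simplifications: cell/water and serum-lipid/water partition
    coefficients are both approximated by Kow, and plastic binding is ignored."""
    kow = 10.0 ** log_kow
    # "Capacity" of each compartment = volume x partition coefficient vs. medium water.
    cap_water = v_medium_uL * (1.0 - f_serum_lipid)
    cap_serum = v_medium_uL * f_serum_lipid * kow
    cap_cells = v_cells_uL * kow
    total = cap_water + cap_serum + cap_cells
    return {"fraction_in_cells": cap_cells / total,
            "fraction_bound_to_serum": cap_serum / total,
            "fraction_free_in_medium": cap_water / total}

# A hydrophilic (log Kow = 0) versus a lipophilic (log Kow = 4) chemical:
print(in_vitro_distribution(0.0))
print(in_vitro_distribution(4.0))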

Developing Scientific Confidence in NAMs

Clemens Wittwehr (Joint Research Centre of the European Commission): An OECD Harmonized Template
to Report NAM Results in Regulatory Environments: Principles and Practical Use

Clemens Wittwehr (Joint Research Centre of the European Commission) presented on the use of OECD Harmonized
Templates (OHTs) to improve data reporting and exchange and to subsequently accelerate data acceptance. In
2012, when numerous templates for apical animal data had already been made available, the OECD identified a
need to develop templates for mechanistic data and, as such, published a first version of OHT 201 in 2017. OHT
201 facilitates the reporting of NAM study results within an internationally agreed-upon format. An updated version
of OHT 201 was officially released by the OECD on November 6, 2020.

Summary

A unique template for mechanistic data was necessary because those data are not intrinsically linked to apical
endpoints from in vivo animal tests. Mechanistic data investigate a sequence of events linking exposure to a
biological response and frequently come from non-animal tests.

IUCLID, which is available free of charge, is the most popular software for OHT implementation, and OHT 201 is
available there.

OHT 201 is designed to link chemicals to an intermediate mechanistic effect identified by a process-object-
action ontology. The template is compatible with a variety of technologies including all classes of NAMs. OHT
201 will continue to be refined to best support the most popular technologies.

OHT 201 can be used regardless of whether a NAM has an OECD test guideline supporting it. If there is a
supporting OECD guideline, many fields are pre-filled.

OHT 201 was made with the triangle of chemical safety in mind. The first point of the triangle represents a
stressor. The second point represents a study method (e.g., in vivo or in vitro tests), and the third point
represents a key event in an AOP. Once all three elements of this triangle use the same ontologies, stressor data
(in OHT 201), key event data (in the AOP Knowledge Base), and method descriptions (in method databases) can
be cross-referenced. Over time, OHT 201 may help researchers learn which methods are best for identifying
different key events.

The European Commission's Endocrine Active Substances Information System (EASIS) is expected to be
published in 2020 and will then include 550 studies across 100 substances focused on intermediate effects
using OHT 201, with more data added over time.

Monique Perron (EPA): Case Study #1: Integration of NAM Data for Evaluating Potential Developmental
Neurotoxicity

Monique Perron (Senior Toxicologist at EPA-OPP) reviewed the work that EPA's OPP is conducting with ORD to test a
weight of evidence (WoE) approach to incorporating NAM data in evaluating the developmental neurotoxicity of
pesticides.

Summary

Due to challenges and limitations in the developmental neurotoxicity (DNT) guideline study, there has been an
international effort to develop in vitro assays that assess processes critical to development of the nervous
system.

These efforts have resulted in a battery of in vitro tests developed by ORD and investigators funded by the
European Food Safety Authority (EFSA).

Data from the ORD assays (microelectrode array network formation assay [MEA-NFA] and high content imaging
[HCI]) were recently presented to the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) Scientific
Advisory Panel in September 2020 using organophosphates as a case study.


ORD assays can replicate key cellular events and processes relevant to DNT as demonstrated through the use of
appropriate assay performance controls.

ORD assays demonstrate reproducibility in both positive responses and the potency of those responses.
Twenty-seven organophosphate chemicals in this set are differentially active in the MEA-NFA and HCI assay
suites.

IVIVE approaches applied to the in vitro bioactivity observed in ORD assays result in estimated external dose values that are greater than, or in some cases approximately equal to, doses that inhibit acetylcholinesterase in vivo (a minimal sketch of this type of reverse dosimetry calculation appears below).

Organophosphate data from these assays will be considered in combination with the results of assays
sponsored by EFSA as part of an overall weight of evidence evaluation of the DNT potential of individual
organophosphates.
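The IVIVE step noted above is commonly implemented as steady-state reverse dosimetry: an in vitro bioactive concentration is divided by the steady-state plasma concentration predicted for a unit oral dose. The sketch below uses that general form with hypothetical numbers; it is not the specific toxicokinetic model EPA applied to the organophosphate data.

def administered_equivalent_dose(pod_in_vitro_uM, css_per_unit_dose_uM):
    """Steady-state reverse dosimetry, general form:
    AED (mg/kg/day) = in vitro POD (uM) / Css at 1 mg/kg/day (uM).

    css_per_unit_dose_uM would normally come from a toxicokinetic model;
    the value used below is a hypothetical placeholder."""
    return pod_in_vitro_uM / css_per_unit_dose_uM

# Hypothetical chemical: an in vitro POD of 3 uM and a Css of 1.5 uM per
# 1 mg/kg/day give an estimated external dose of 2 mg/kg/day.
print(administered_equivalent_dose(3.0, 1.5), "mg/kg/day")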

Andrew White (Unilever): Case Study #2: Integration of NAM Data in a Next-Generation Risk Assessment
for Cosmetic Ingredients

Andrew White (Unilever) presented on how NAMs can be used for consumer product risk assessments, which
answer the generic question: Can we safely use x% of ingredient y in product z? A. White used a hypothetical case
study for a face cream containing 0.1% coumarin to demonstrate how this would be done with the limitation of using
no animal study data. This limitation was applied to "History of Safe Use" data, clinical data, animal read-across
data, and in silico data based on animal use.

Summary

In the face cream scenario, a reasonable estimation of daily likely exposure can be used in a PBPK model to
inform the key parameters. The model shows the simulated plasma concentration of coumarin after dermal
exposure, as well as uncertainty and population variability.

After conducting exposure estimation, the next step is collating existing information from sources like ToxTree,
the Molecular Initiating Event (MIE) Atlas, the OECD toolbox, and the scientific literature. In this hypothetical, the
ToxTracker results were negative. There were some findings that reactive coumarin metabolite(s) could induce
DNA lesions secondary to oxidative stress, and all binding and enzymatic assay results from a journal article
were negative at 10 µM.

An immunomodulatory screening assay found no effects at relevant concentrations.

The results of newly developed cellular stress response assays to characterize non-specific biological activity
suggested some dose response. However, in comparison to other known "high-risk compounds," coumarin is not
very active.

In high-throughput transcriptomics (HTTr) for in vitro biological activity screening, the MCF7 POD data were not sufficiently robust to determine the margin of safety. The lowest POD values for the other cell models, HepG2 and HepaRG 2D, were used for the margin-of-safety calculation (a minimal sketch of this calculation appears below).

Tier 2 refinement focused on examining metabolic pathways. The findings were within range of those from the
HTTr test, showing low bioactivity.

In conclusion, the risk assessment showed that the expected exposure to face cream was lower than all the
PODs determined through the various tests and that coumarin is not genotoxic, does not bind to any targets of
interest, and does not show immunomodulatory effects. This WoE suggested that 0.1% inclusion of coumarin in
face cream is safe for consumer use.

Risk assessments require many lines of WoE to make safe decisions rather than a couple of assays alone. The
goal is to protect against hazards, not to define mechanisms. Diverse expertise, unbiased tools with biological
coverage, uncertainty estimates, and more case studies are necessary to build confidence in applying NAMs to
consumer product risk assessments.
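The margin-of-safety calculation described above reduces, at its core, to a ratio of an internal POD to the predicted internal exposure. The values below are hypothetical placeholders, not numbers from the coumarin case study.

def margin_of_safety(pod_internal_uM, predicted_cmax_uM):
    """Bioactivity-to-exposure ratio: an internal (in vitro) POD divided by
    the PBPK-predicted plasma Cmax for the product-use scenario."""
    return pod_internal_uM / predicted_cmax_uM

# Hypothetical: lowest cell-model POD of 10 uM versus a predicted Cmax of 0.02 uM.
print(f"margin of safety = {margin_of_safety(10.0, 0.02):.0f}")  # 500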

Todd Stedeford (EPA): Case Study #3: Incorporating the Threshold of Toxicological Concern into
Regulatory Decisions under the Amended Toxic Substances Control Act

Todd Stedeford (Toxicologist at EPA-OCSPP) presented the third case study, which covered the incorporation of the
Threshold of Toxicological Concern (TTC) into regulatory decision making as dictated by TSCA.

Summary

T. Stedeford gave an overview of TSCA and specifically covered regulation of new substances under Section 5
and existing substances under Section 6. He also discussed Section 4(h), which deals with the reduction of
testing on vertebrates.


EPA assesses a wide variety of human health and environmental effects associated with the known, intended, and reasonably foreseen conditions of use for new and existing chemical substances.

For the types of data used for new and existing chemical substances, EPA requires producers to provide all data
in their possession or control related to health and environmental effects. EPA may require the development of
new information relating to the chemical substances.

The amendment to TSCA in 2016 included a statutory mandate regarding testing on vertebrates. Such testing
must consider all reasonably available information and encourage and facilitate scientifically valid test methods
and strategies to reduce or replace use of vertebrate animals while providing information of equivalent or better
scientific quality and relevance to support regulatory decisions.

The TTC approach can fulfill many of these requirements under TSCA. EFSA and the World Health Organization
have endorsed the TTC approach as a pragmatic, scientifically valid methodology to assess the safety of
substances of unknown toxicity that are found in food.

The TTC is a level of exposure determined to present no appreciable risk to human health, applied in the absence of chemical-specific toxicity data. TTC is regarded as an extension of generalized read-across and chemical
category approaches that are already used by EPA under TSCA and by other regulatory agencies, such as the
Food and Drug Administration.

TTC is a possible approach to screening some structural classes of new and existing chemical substances to make preliminary decisions (an illustrative screening sketch appears at the end of this summary). For example, TTC can be used:

o for evaluating exposure/release controls for new chemical substances and identifying the need for possible refinements in the absence of chemical-specific or analogue data;
o for risk-based prioritization of existing chemical substances; or
o as a non-animal approach for making initial decisions regarding the need for further data gathering.

Several points warrant further evaluation. Oral TTC values are well established. However, TSCA risk assessments may include evaluations via other routes (e.g., dermal and inhalation), other endpoints (e.g., portal-of-entry effects), less-than-lifetime assessments, and ecological assessments.

EPA is developing a collaboration with other organizations to determine opportunities for the use of TTC in
regulatory decision making.
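Applied as a screen, the TTC approach reduces to comparing an estimated exposure against a class-specific threshold. The thresholds below are the commonly cited Cramer class values in micrograms per person per day; they are included as assumptions for illustration and are not values endorsed in the presentation.

# Commonly cited Cramer class TTC thresholds (ug/person/day); assumption for
# illustration -- confirm against the framework actually adopted for TSCA use.
TTC_THRESHOLDS_UG_PER_DAY = {"Cramer I": 1800.0, "Cramer II": 540.0, "Cramer III": 90.0}

def ttc_screen(estimated_exposure_ug_per_day, cramer_class):
    """Return True if the estimated exposure falls below the TTC threshold,
    i.e., the substance screens out as low priority for further data gathering."""
    return estimated_exposure_ug_per_day < TTC_THRESHOLDS_UG_PER_DAY[cramer_class]

# Hypothetical new chemical substance: 25 ug/person/day estimated exposure, Cramer Class III.
print(ttc_screen(25.0, "Cramer III"))  # True -> below the 90 ug/day threshold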

Additional points made:

In the TSCA chemical hazard approach, the hierarchy of data preference for risk assessments is chemical-
specific data followed by analogue data and then model data.

Day 2 Closing Remarks

David Dunlap (EPA)

David Dunlap (Deputy Assistant Administrator for Science Policy, EPA-ORD) thanked everyone for their participation.
He noted NAMs research and development will be critical for reducing resource use and for rapidly gathering
information on chemical safety to protect the public.

D. Dunlap highlighted current NAMs research and applications from earlier presentations. ORD is integrating
metabolic competency into NAMs, an area of previous weakness. M. Perron (EPA-OPP) explained how NAMs can be
used for developmental neurotoxicity testing. A. White (Unilever) demonstrated how NAMs can be used for cosmetic
product risk assessments. T. Stedeford (EPA-OPPT) shared how OCSPP is using NAMs to determine TTCs in risk
assessments.

D. Dunlap stated this larger NAMs effort will require continued collaboration across government agencies, industry,
and research institutions. He encouraged attendees to submit comments on the work plan and feedback for the
conference by emailing NAM@epa.gov and expressed anticipation for another year of progress by the third annual
conference in 2021.

Appendix A: Acronyms

Acronym	Full Name
AIME	alginate immobilization of metabolic enzymes
AOP	adverse outcome pathway
ATAEPI	Analysis of TSCA Available, Expected, and Potentially Useful Information
BCTD	Biomolecular and Computational Toxicology Division
DGAD	Data Gathering and Analysis Division
DNT	developmental neurotoxicity
EASIS	Endocrine Active Substances Information System
ECHA	European Chemicals Agency
EFSA	European Food Safety Authority
eSTAR	Emerging Systems Toxicology for Assessment of Risk
FIFRA	Federal Insecticide, Fungicide, and Rodenticide Act
HCI	high content imaging
HESI	Health and Environmental Sciences Institute
HTTr	high-throughput transcriptomics
ICCVAM	Interagency Coordinating Committee on the Validation of Alternative Methods
IT	information technology
IUCLID	International Uniform Chemical Information Database
IVIVE	in vitro to in vivo extrapolation
Kow	octanol-water partition coefficient
LAN	local area network
LLNA	local lymph node assay
MEA-NFA	microelectrode array network formation assay
MIE	molecular initiating event
MOA	mode of action
NAMs	new approach methods
NAS	National Academy of Sciences
NCATS	National Center for Advancing Translational Sciences
NICEATM	NTP Interagency Center for the Evaluation of Alternative Toxicological Methods
NTP	National Toxicology Program
OCSPP	Office of Chemical Safety and Pollution Prevention
OECD	Organisation for Economic Co-operation and Development
OHT	OECD Harmonized Template
OPP	Office of Pesticide Programs
OPPT	Office of Pollution Prevention and Toxics
ORD	Office of Research and Development
PBPK	physiologically based pharmacokinetic
PCRM	Physicians Committee for Responsible Medicine
PETA	People for the Ethical Treatment of Animals
POD	point of departure
QSAR	quantitative structure-activity relationship
Q3	third quarter
Q4	fourth quarter
SAB	Scientific Advisory Board
STAR	Science to Achieve Results
TG-GATES	Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System
TSCA	Toxic Substances Control Act
TTC	Threshold of Toxicological Concern
WoE	weight of evidence

Appendix B: Hyperlinks to Presentations and Other Materials

Hyperlink	Presentation Name
https://www.epa.gov/research/epa-new-approach-methods-efforts-reduce-use-animals-chemical-testing	Welcome and Charge to the Group
https://www.epa.gov/newsreleases/epa-takes-important-step-reduce-unnecessary-animal-testing	Welcome and Charge to the Group
https://www.epa.gov/chemical-research/epa-new-approach-methods-work-plan-reducing-use-animals-chemical-testing	Welcome and Charge to the Group
https://www.epa.gov/newsreleases/epa-continues-efforts-reduce-	Welcome and Charge to the Group
https://www.epa.gov/newsreleases/epa-announces-guidance-	Welcome and Charge to the Group
https://www.epa.gov/sites/production/files/2019-09/documents/image2019-09-09-231249.pdf	Welcome and Charge to the Group
https://www.epa.gov/research/administrator-memo-prioritizing-efforts-reduce-animal-testing-september-10-2019	Overview of EPA NAMs Work Plan
https://www.epa.gov/research/epa-new-approach-methods-efforts-reduce-use-animals-chemical-testing	Overview of EPA NAMs Work Plan
https://www.epa.gov/assessing-and-managing-chemicals-under-tsca/strategic-plan-reduce-use-vertebrate-animals-chemical	Progress on Implementing the TSCA Alternatives Strategic Plan
https://www.regulations.gov/document?D=EPA-HQ-OPP-2016-0093-0090	Progress on Implementing the TSCA Alternatives Strategic Plan
https://www.epa.gov/sites/production/files/2019-...update final.pdf	Progress on Implementing the TSCA Alternatives Strategic Plan
https://www.epa.gov/assessing-and-managing-chemicals-under-tsca/alternative-test-methods-and-strategies-reduce	Progress on Implementing the TSCA Alternatives Strategic Plan
https://doi.org/10.1093/toxsci/kfaa062	Transcriptome-Based Derivation of an In Vivo POD: Current and Future Utility
https://appyters.maayanlab.cloud/#/	Drugmonizome and Drugmonizome-ML: Integration and Abstraction of Small Molecule Attributes for Drug Set Enrichment Analysis and Machine Learning
https://upddi.pitt.edu/microphysiology-systems-database/	"Fit for Purpose" for Organotypic Models in Environmental Health Protection
http://nas-sites.org/emergingscience/meetings/bioplatform/	"Fit for Purpose" for Organotypic Models in Environmental Health Protection




Appendix C: Questions and Answers

Conference attendees submitted questions for the speakers at the end of each presentation. All questions were answered either during the presentation time slot or following the presentation via the Q&A box or email.

Implementation of Animal Testing Reduction at EPA

Russell (Rusty) Thomas (EPA): Overview of EPA NAMs Work Plan

Questions following the presentation included:

Rao Shaila (Gowan Company): How would the NAMs help with regulatory submissions in the rest of the world
where testing is a requirement?

o Russell Thomas (EPA): Most of the recent international guidelines allow for the inclusion of NAMs in regulatory submissions, if not as a one-for-one replacement of the current animal-based tests, then as supporting evidence that may be used to waive testing requirements.

Jennifer McPartland (Environmental Defense Fund [EDF]): Are there other metrics for success aside from
reduced animal testing, for example metrics related to improved public health decision-making reached in
part through the use of NAMs?

o Russell Thomas (EPA): Right now, the goals and metrics are tracking the use of mammals in both
research and testing within the Agency. I think you are referring to goals and objectives that are
associated with the deliverables that will be outlined in the NAMs Work Plan.

Anonymous: Has the National Academies committee been formed? If so, has it started working?

o Russell Thomas (EPA): Not yet. It is still in the formative stages.

Jennifer McPartland (EDF): Is there a weblink to the National Academy of Sciences (NAS) committee on
uncertainties around mammalian testing?

o Russell Thomas (EPA): Not yet, but once available it will be posted on www.epa.gov/nam and we
intend to have workshops prior to the development of the committee so that we are all on the same
page regarding the charge of that committee.

Katya Tsaioun (Johns Hopkins Bloomberg School of Public Health): Are you planning to use evidence-based
methods such as systematic literature maps and reviews in identifying scientific gaps? These methods also
provide a guide for certainty assessment and one of your sister agencies at EPA's Integrated Risk
Information System (IRIS) is pioneering these approaches which could add transparency and objectivity into
the process.

o Russell Thomas (EPA): I think some of these evidence-based methods and systematic literature
reviews will be used in identifying the scientific application and development of these new approach
methods, as well as in comparing them to the traditional methods. There is not necessarily a
component of the NAMs Work Plan that covers systematic literature review, but the Agency plans to
continue using these methods, not only with our traditional approaches but also in the way that we
use NAMs for Agency decision-making.

Anonymous: Could you elaborate on steps EPA is taking to figure out how to communicate cellular, in silico,
and other NAMs results to ordinary people for whom the tests may sound bizarre?

o Russell Thomas (EPA): That is a good question, and it is part of the last objective, which is the
communication and engagement of stakeholders. Obviously, stakeholder groups range from more
technically oriented stakeholders to less. We plan to engage both scientific community members in
how we are using some of these NAMs as well as laypeople. I think it is going to be a continual
process to develop communication approaches that match both of those stakeholder groups and
everyone between. Not everyone has the same level of understanding of NAMs at this point even if
they were well versed in the traditional methods, so there is going to have to be a range of training
materials and educational workshops to be sure that everyone is brought on this journey with us.
Laura Gutierrez: Is part of the Work Plan to validate existing methods for studying acute toxicities (such as
ocular toxicity) with Organization for Economic Co-operation and Development (OECD) guidelines for the use
of pesticides?

o Anna Lowit (EPA): EPA's Office of Pesticide Programs (OPP) has a substantial effort to evaluate NAMs
for the acute "6 pack studies" made up of the acute lethality studies for oral, dermal, and inhalation
and the topical studies on eye irritation, skin irritation, and skin sensitization. The status of these

efforts can be found at: https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/strategic-vision-adopting-new-approach-methodologies.

Rao Shaila (Gowan Company): Agreed, but while some authorities accept waivers, others still require testing.
Eventually we end up conducting studies despite the waiver request. My question is how to approach this type
of situation.
o Russell Thomas (EPA): I believe the case studies that are being developed by regulators, academics,
and trade associations will be the path to achieve "evolution" in this space.

Rob DeWoskin (etiologic): Is EPA collaborating with other regulatory agencies nationally and worldwide to
advance NAMs?

o Russell Thomas (EPA): We have a series of collaborations through other organizations, such as
OECD, working to apply NAMs where we can.

Anonymous: Would NAMs be acceptable to EPA for consent orders and testing required under significant
new use rules (SNURs)?

o Gino Scarano (EPA): The short answer is yes, and there will be an example in my presentation of that.
Anonymous: Will the statistics on mammal use be normalized to research studies and regulatory dossiers
submitted? Would NAMs usage/test waivers also be documented?

o Russell Thomas (EPA): It really depends on how you define a "study." I'm not sure we can normalize it
in an effective way based on studies and dossiers. Right now, the current metric is tracking the
mammalian use in research studies within EPA.
o Anna Lowit (EPA): In the pesticides program, under 40 Code of Federal Regulations (CFR) part 158,
like most other countries in the world there are defined regulatory testing requirements for pesticide
chemicals. Our metrics and baselines for the reduction in animal use for pesticides will be based on
the standard animal numbers used in the OECD studies; the standard number of animals used is
expected to be our starting point. Links to OPP's waiver guidance and animal reduction metrics can be found at:
https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/strategic-vision-adopting-new-approach-methodologies.
NAMs test waivers will be documented. The pesticides program has guidelines publicly accessible, and
we are in the process of developing more.

Anonymous: Would you elaborate on how you plan to evaluate both mammal studies and NAMs? Would this
include epidemiological studies?

o Russell Thomas (EPA): Yes, and I can elaborate. There are some clues already in the Work Plan
through these different case studies that are being formulated with the EPA Office of Research and
Development (ORD) and the regulatory partners. These are really the vehicle for how we plan to
evaluate the NAMs and use them in terms of replacing some of the traditional animal tests. These
could include epidemiological studies. The current case studies do not necessarily include them, but
they are not ruled out in the future.

Anonymous: Will there be a process to validate the NAMs, and what would that look like?

o Russell Thomas (EPA): We are moving towards that. It would be established under the scientific
confidence framework and not necessarily use the more traditional trial validation process. In order
to meet the goals that we outline, the traditional validation process is not really appropriate or useful
for validating all of the new approach methods and replacing all the traditional animal tests on a
one-to-one basis. The scientific confidence framework really is how the Agency intends to establish
confidence in these NAMs for the purpose of decision making.

Meg Whittaker (ToxServices LLC): What consideration of mixtures toxicity will go into developing NAMs
specifically for mixtures?

o Russell Thomas (EPA): Currently the work plan does not specifically address mixtures, but it certainly
is part of the larger research program that ORD is developing. Trying to apply these NAMs to mixtures
is not something specifically called out in the Work Plan.

Fatma Okuş: Where do in silico toxicology tools fit within NAMs? Can we trust them, and how can we
understand their reliability, especially for publicly available ones?

o Russell Thomas (EPA): Hopefully in developing the scientific confidence framework, EPA is going to
lay out how we would establish scientific confidence in them. Also, in the Work Plan, we intend to
develop a report on the current uncertainty surrounding the traditional animal studies and use that
to establish the expectations for the NAMs. It is through both objectives that we intend to develop
scientific confidence in our new approach methods together with setting expectations based on our

traditional animal studies.

Jennifer McPartland (EDF): Relatedly, is EPA coordinating with federal research institutes in addition to
regulatory agencies (e.g., National Institute of Health [NIH], National Institute of Environmental Health
Sciences [NIEHS]), particularly regarding gaps and adverse outcome pathway (AOP) development?

o Russell Thomas (EPA): The short answer to this is yes. We partner with NIEHS through the Tox21
consortium. Both in terms of trying to identify what those gaps are and how we approach them from a
research and science perspective, we are coordinating and working with our federal research
partners on that.

Jason Fritz (EPA): Outside of EPA's Office of Chemical Safety and Pollution Prevention (OCSPP), is there a
current Agency activity (or case study) currently underway, where human health risk measures are based
solely on NAMs endpoints?

o Russell Thomas (EPA): There is a case study outlined in the work plan in terms of using NAMs for
biosolids with our partners in the EPA Office of Water.

Anonymous: Even if EPA phases out animal testing, the same chemicals would still be required to be tested in
other countries. It is super important that EPA encourages other countries to come up with a similar vision.

o Russell Thomas (EPA): It is important for EPA to encourage other countries to come up with a similar
approach as we are part of a global economy. Certainly, what we do is impacted by and impacts the
requirements of other countries.

Katy Wolton (Syngenta): Will there be guidance for NAMs developers in the meantime on how to assess the
performance of their new method? Bearing in mind we do not necessarily want to compare performance to
animal data.

o Russell Thomas (EPA): This is going to be covered by Gino Scarano in the next talk in terms of how
NAMs developers will be asked to assess the performance associated with these methods.

Ravi Menon (Afton Chemical): As part of the NAMs program, is there a plan to update traditional EPA
quantitative structure-activity relationship (QSAR) tools such as Estimation Program Interface (EPI) Suite and
Ecological Structure Activity Relationships (ECOSAR)? These tools, while helpful for preliminary screening, do
not have robust domain information, which limits their use. These tools do not pass the OECD validation
principles requirements. It would be good to improve/update these tools.

o Gino Scarano (EPA): This is not a part of my presentation, but it is a good point and something to
consider.

Anonymous: To continue the line of questioning that Fatma Okuş raised five questions ago on in silico tools,
where do in vitro or in vivo toxicology tools fit within NAMs? Can we trust them, and how can we understand
the reliability of the species selection? The assay amenability?

o Russell Thomas (EPA): I hope that this will be addressed in the NAS report that is planned as a
deliverable, and in the scientific confidence framework on our new approach methods whether they
are in vitro or reduced in vivo models.

Bill Eckel (EPA): OPP's Environmental Fate and Effects Division (EFED) is currently working on QSAR for fish
acute toxicity studies.

Anonymous: For OPP and OCSPP, could you describe how staff is being trained to understand and use NAMs
given that the science is changing so quickly? For example, how often is their training updated?

o Anna Lowit (EPA): That is a good question because it can be hard to keep up as the science changes
so rapidly. Several times, we have had our staff attend training specific to some of the areas such as
skin irritation studies and in vitro dermal absorption. We have had Clive Roper give a training on
dermal absorption and inhalation 3D models. I would anticipate getting staff trained on
developmental neurotoxicity assays. You make a good point that it is difficult to keep up, but we are
focusing our training on those areas we believe to be close to regulatory use.

Rob DeWoskin (etiologic, LLC): Are there any plans to validate improvement in risk decisions and regulatory
actions with more field testing of body burdens and effects, understanding how difficult this is especially for
chronic exposures?

o Russell Thomas (EPA): That is a good question about validating the improvements in risk decisions and
regulatory actions. I think that involves more monitoring as well as epidemiological studies. Right
now, that is not a part of the Work Plan, but we also have research activities related to some of those
suggested activities.

Clive Roper (Charles River Labs): I am happy to give folks training on in vitro dermal absorption and in vitro

respiratory inhalation.

Louis (Gino) Scarano (EPA): Progress on Implementing the Toxic Substances Control Act (TSCA)

Alternatives Strategic Plan

Questions following the presentation included:

Jennifer McPartland (EDF): Beyond the public comment period for the NAMs nomination and acceptance
process, will there be opportunities for public comment on individual NAMs for potential listing?

o Gino Scarano (EPA): There is not currently a separate opportunity for submitting public comments
specifically on individual NAMs. Those comments can be submitted along with the public comments
for the NAMs nomination and acceptance process.

Ravi Menon (Afton Chemical Corporation): The pre-manufacture notice (PMN) process does not require any
data currently. EPA assesses the hazard and risk of a submitted new chemical using structure-activity
relationships and other resources (confidential business information data for example). How will NAMs
development be integrated with the PMN review process?

o Gino Scarano (EPA): NAMs information will be integrated into that process as it is collected.

Xianglu Han (Lanxess): Is there any scientific basis for setting the cutoff for insoluble polymers at a water
solubility of 100 mg/L? If the water solubility is larger, is it true that there would be no testing requirement?
For nanoparticles, there is a definition of dissolution in water. What is the definition of water dissolution for a
polymer?

o Gino Scarano (EPA): The proposal does not apply to nanomaterials. The 100 mg/L water
extractability cutoff was based on EPA's published general water solubility classifications (i.e.,
moderate solubility: >100 mg/L - 1,000 mg/L). Though these values were not established for
evaluating the solubility of particles for lung overload, we used them as a conservative cutoff for
extractability, per OECD TG120. If the high molecular weight polymer has a value greater than 100
mg/L, then the substance would not be considered within the category.
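
As a minimal illustration of the cutoff logic described above (a sketch only; the function name is hypothetical and this is not EPA's implementation), a high molecular weight polymer whose water extractability exceeds the 100 mg/L limit would fall outside the proposed category:

import numpy as np  # not strictly needed; shown for consistency with later sketches

def within_insoluble_polymer_category(water_extractability_mg_per_L, cutoff_mg_per_L=100.0):
    # Extractability above the 100 mg/L cutoff (the lower bound of EPA's
    # "moderate solubility" band) means the substance is not considered
    # within the proposed category.
    return water_extractability_mg_per_L <= cutoff_mg_per_L

# Example: within_insoluble_polymer_category(250.0) returns False (above the 100 mg/L cutoff).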

Rovida Costanza (CAAT Europe): Tier III should be in vitro testing with 3D reconstruction of human airway
tissue, before in vivo.

o Gino Scarano (EPA): Thank you for your suggestion.

Anonymous: Were there mostly Good Laboratory Practice (GLP) laboratories?

o Gino Scarano (EPA): Most were GLP laboratories and some were internal company laboratories.
Donna Macmillan: How were the 51 substances chosen? Was a balanced dataset considered?

o Gino Scarano (EPA): Many more chemicals were nominated than the 51 eventually selected. In some
cases, it was difficult to procure the chemical, so it could not proceed in the study. Additional
chemicals were dropped for reasons not recalled.

Joseph Manuppello (Physicians Committee for Responsible Medicine): TSCA directs anyone submitting
voluntary information to first attempt to develop that information by an alternative method identified by EPA.
We have noted numerous examples of PMN submitters including the results of recent animal tests, including
tests for which alternatives have been identified. How is EPA communicating this TSCA provision to chemical
sponsors and reviewing compliance?

o Gino Scarano (EPA): We have communicated this in our outreach and presentations made to the
public and stakeholders.

Anonymous: Does OPP provide similar NAMs use information (how many are registered with or without NAMs
each year) about pesticides under Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA)?

o Gino Scarano (EPA): You can find the number of in vitro studies submitted to OPP in the last 2 years
at this site (https://www.epa.gov/pesticide-science-and-assessing-pesticide-risks/adopting-21st-century-science-methodologies-metrics).

Ishita Virmani (RECETOX): Can anyone submit a NAMs?

o Gino Scarano (EPA): Anyone with sufficient familiarity with TSCA, such as EPA, other United States
federal agencies, international agencies, groups advancing alternatives to animal tests, TSCA
submitters (regulated industry), consulting firms specializing in TSCA support, and companies with
assays, models, and other tools seeking to repurpose and commercialize for TSCA compliance.

Rocky Goldsmith (EPA): What does "Number > 650" in slide 26 refer to?

o Gino Scarano (EPA): That represents the approximate total studies as preliminary findings are
gathered. For slide 26, over 650 studies have been requested for any health endpoint and the top

five requested studies are shown in the pie chart.

Jim Baldassari (FMC Corporation): Does EPA recommend running OECD 442C and 442D before 442E as
part of the two out of three method, or can any order be done?

o Gino Scarano (EPA): For the two out of three method, the assays can be run in any order. See the
draft policy (https://www.regulations.gov/document?D=EPA-HQ-OPP-2016-0093-0090) for
additional information.

Anonymous: Related to the EPA Office of Pollution Prevention and Toxics (OPPT) receiving dermal acute
studies, OPP just released a draft policy that allows for waiving these studies since they did not contribute to
risk decision-making. Could this apply to TSCA chemicals? TSCA does not have specific requirements, but
companies are doing these studies. Can OPPT communicate that it does not require these studies?
o Gino Scarano (EPA): Correct, there are no data requirements and EPA does not have control over
what companies submit. If it is something required for a different country and it was performed, it
has to be submitted under TSCA. Communication is important, but most companies know EPA does
not require any studies.

State of the Science in Development of NAMs

Andreas Bender (Cambridge): Using Chemical, Biological, and In Vivo Data for NAMs: Which data do we

have, and what can we do with it?

Questions following the presentation included:

Anonymous: How have you concluded that the NovaScreen (NVS) assay was related to mitral valve
incompetence? What was the endpoint measured in this assay?

o Andreas Bender (Cambridge): I would need to check that with the student.

Fatma Okuş: Is it appropriate to understand the reliability of a tool from its database (e.g., an in silico
toxicology tool)?

o Andreas Bender (Cambridge): Yes, absolutely; the underlying data is key to any prediction or analysis.
I think some Applicability Domain concept is key here, i.e., understanding what recall or positive
predictive value (PPV), say, a certain similarity to existing datapoints gives us.
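
One minimal way to make this idea concrete (a sketch under assumed inputs, not anything presented at the conference) is to bin test compounds by their similarity to the nearest training-set compound and report PPV and recall per bin; the bin edges and input names below are illustrative.

import numpy as np

def performance_by_similarity(nn_similarity, y_true, y_pred, bins=(0.0, 0.4, 0.6, 0.8, 1.01)):
    # nn_similarity: similarity of each test compound to its nearest training neighbor
    # y_true, y_pred: observed and predicted binary activity calls for each test compound
    nn_similarity = np.asarray(nn_similarity, dtype=float)
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    results = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (nn_similarity >= lo) & (nn_similarity < hi)
        tp = int(np.sum(in_bin & y_pred & y_true))
        fp = int(np.sum(in_bin & y_pred & ~y_true))
        fn = int(np.sum(in_bin & ~y_pred & y_true))
        ppv = tp / (tp + fp) if (tp + fp) else float("nan")
        recall = tp / (tp + fn) if (tp + fn) else float("nan")
        results.append({"similarity_bin": (lo, hi), "n": int(in_bin.sum()), "ppv": ppv, "recall": recall})
    return results

Reading the output bin by bin shows how predictive performance degrades as compounds move away from the training data, which is one practical expression of an applicability domain.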

Anonymous: Many of the Tox21 and ToxCast assays have activity due to the cytotoxicity-associated activity
burst. What precautions must be taken to account for cytotoxicity?

o The speaker was unable to answer this question.

Anonymous: How quick is it to process and interpret the omics data?

o Andreas Bender (Cambridge): Today's computers can be sufficiently quick, and there are hardly any
computational issues. The question is how to analyze the data to see something meaningful. The
intellectual question is more of a consideration than the computational one.

Anonymous: How did you determine the association of adverse events to the various assays of
ToxCast/Tox21 (e.g., the assay NVS_NR_hER measuring ERalpha)? What was the logic behind associating it
with mitral valve prolapse?

o The speaker was unable to answer this question.

Karma Fussell (Nestle): With large datasets like omics, several groups have demonstrated the enormous
effect that decisions on digital data management and analysis processes have on the final results, such that
ten experts get ten different differentially expressed gene lists, or ten different interpretations of pathways
affected. Knowing this and in the context of regulatory decision making where there can only be one version
of "true results," how useful are gene expression NAMs?

o Andreas Bender (Cambridge): It is an important question. We need to bring people together to have
an agreement among companies or other consortia to have a standard protocol. I think there can
very well be signals for most endpoints we are interested in. There are times where it will be very
helpful, but we need to establish those standardized protocols.

Dave Number (Honeywell): Using a frequentist approach to analyzing single nucleotide polymorphism (SNP)
data sometimes leads to non-replicable results. I was wondering if you can elaborate on your Bayesian
approach to gene expression data. What are you using for your prior odds?

o Andreas Bender (Cambridge): We did not do this ourselves using Bayesian methods, but you can, for
example, estimate points of departure using gene expression data this way:
https://academic.oup.com/toxsci/article/176/1/236/5818890. I think this will be a later

contribution by Unilever as well.
Rocky Goldsmith (EPA): Can you imagine a federated model approach such as Machine Learning Ledger
Orchestration for Drug Discovery (MELLODDY), for anonymizing data between partners, being useful beyond
pharma players (e.g., agriculture, petroleum)?

o Andreas Bender (Cambridge): What you can anticipate, and what we also generally see, is better
chemical space (or generally input data) coverage, so better recall for models if you use them in this
way. I think this is consistently shown, also with what Lhasa has shown from student-teacher models
and the like. There are a few problems including assay conditions, which might not be compatible,
and trust between partners (i.e., how do I trust other people's data quality?). My preference would be
joint precompetitive data generation, otherwise, you end up with really clumped data, whether in
chemistry or readout space. Of course, "many clumps are at least better than fewer clumps;" that is
also true.

Kamin Johnson (Corteva): Transcriptome-Based Derivation of an In Vivo Point of Departure (POD): Current
and Future Utility

Questions following the presentation included:

Chris Vulpe (University of Florida): What are your thoughts about using a similar approach for ecotoxicity
endpoints?

o Kamin Johnson (Corteva): The AOP concept of molecular change leading to apical effects applies to
all species. In my opinion, the transcriptome approach to POD derivation can be used in any species.
If interested, there are studies in fish published by others that have examined this question.
Anonymous: How do you know that a gene has a treatment related change? Are there specific signatures
that particular agents leave on a gene? How do we know that the cellular mechanisms for repair do not limit
the expression of the gene?

o Kamin Johnson (Corteva): Statistical tools, such as the Williams trend test and the 1.5-fold change
test, help identify treatment-related changes. Cellular mechanisms may limit expression, but we
focus on changes in bioactivity, and any gene can drive that type of departure.
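
A minimal sketch of this kind of gene-level filter is given below, assuming positive, linear-scale normalized expression values and one gene at a time; the Williams trend test is not available in SciPy, so Spearman correlation with dose is used here only as a stand-in for a monotonic trend test, and the thresholds are illustrative rather than Corteva's actual pipeline.

import numpy as np
from scipy.stats import spearmanr

def treatment_related(expr, doses, fc_cutoff=1.5, trend_alpha=0.05):
    # expr: normalized expression values for one gene, one value per sample
    # doses: dose level for each sample (the lowest dose group, typically 0, is the control)
    expr = np.asarray(expr, dtype=float)
    doses = np.asarray(doses, dtype=float)
    control_mean = expr[doses == doses.min()].mean()
    top_mean = expr[doses == doses.max()].mean()
    fold_change = top_mean / control_mean
    passes_fc = max(fold_change, 1.0 / fold_change) >= fc_cutoff   # 1.5-fold change criterion
    rho, p = spearmanr(doses, expr)                                # stand-in for a dose trend test
    return bool(passes_fc and p < trend_alpha)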

Atish Patel (FMC Corporation): Were the same doses used to compare 24-hour transcriptomic versus 29-day
apical PODs? The fold difference can also be due to dose selection.

o Kamin Johnson (Corteva): In the data I showed, the same dose levels were used for the 24-hour
microarray and 29-day apical studies. Appropriate study design is important to accurately derive a
POD. Data that I did not have time to show suggest that a study design with more dose groups (but
smaller group sizes) is ideal.

Anonymous: Did I hear that in one of the presented cases, the 90-day showed no transcriptional changes in
the liver but liver tumors in the chronic study?

o Kamin Johnson (Corteva): Liver gene expression in the 90-day study was null, but tumors were seen
in the kidney in a 2-year study.

Xianglu Han (Lanxess): I missed some [of the] presentation, so [I am] not sure if this has been addressed. If
transcriptome POD is recommended as POD for risk assessment, do you recommend transcriptome data be
used as the basis for Globally Harmonized System (GHS) classifications? I feel the hazards used in risk
assessment and in GHS classifications should be identified in a consistent manner.

o Kamin Johnson (Corteva): In my opinion, if the regulatory paradigm requires identification of hazards
for protection of human health, using transcriptome data to identify the hazard is difficult for most
hazards.

Atish Patel (FMC Corporation): If you show that the liver is not always a useful surrogate and that the
transcriptomic POD from the target organ is also sometimes >10x the apical POD, do you anticipate that a 10x UF
will then be applied to the transcriptomic POD, thus impacting how the molecule is in turn used? What work
is being done to correlate apical to transcriptomic PODs to gain further confidence in this method to reduce
the uncertainty? If a 10x UF is applied to a transcriptomic POD, how different is this to using duration-based
UFs (6x versus 3x) to extrapolate from short-term (28-90 days) studies where we can still get some target
organ or apical toxicity information?

o Anna Lowit (EPA): As risk assessors, we acknowledge the need for considering uncertainty factors,
and as such this issue will be important. The first thing we must do is the science. When we think about
policy considerations before we establish the science, it ties us in knots. We do not know how

applying uncertainty factors or extrapolation factors will look. We want to work with stakeholders on
the science, create case studies, and consider options.

Anonymous: What do you think are the next steps to encouraging this type of data to be generated and
accepted as a surrogate to conducting long-term rodent bioassays?

o Kamin Johnson (Corteva): I think there are a few major questions that need to be addressed: 1) what
is the appropriate study design in terms of dose level selection, group sizes, group spacing, and
derivation of the transcriptome POD; 2) is linking the gene expression data to a mechanism required
or is a "bioactivity"-based POD sufficient; and 3) how flexible is the current regulatory system to
change that may not specifically identify a hazard (e.g., in a risk-based system, does labeling a
molecule as a carcinogen or a reproductive toxicant actually protect human health?).

Anonymous: How often are you not able to accurately predict (no more than a 3x fold difference) the animal
POD using the transcriptome POD (i.e., what is the rate of false negative data)?

o Kamin Johnson (Corteva): For the molecules we have examined to date, about 75% of these have a
transcriptome POD within 3x of the apical POD. My hypothesis is that the lack of concordance may
be more due to inherent error (lack of exact reproducibility) in estimating the "true" apical or
transcriptome POD.
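
For clarity, the concordance statistic being described can be computed as the fraction of molecules whose transcriptome POD falls within a symmetric 3-fold window of the apical POD; the sketch and the example values below are illustrative only.

import numpy as np

def fraction_within_fold(transcriptome_pods, apical_pods, fold=3.0):
    t = np.asarray(transcriptome_pods, dtype=float)
    a = np.asarray(apical_pods, dtype=float)
    ratio = np.maximum(t / a, a / t)          # symmetric fold difference
    return float(np.mean(ratio <= fold))

# Made-up PODs (mg/kg/day): fraction_within_fold([3, 10, 40, 100], [5, 8, 200, 90]) returns 0.75.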

Anonymous: Do you think this approach could be applied to reproductive and developmental toxicity? What
are the first steps we could take in this field?

o Kamin Johnson (Corteva): Yes, I think it can be applied to any toxicity. We have data, which I did not
show, indicating that this approach works for a developmental toxicant acting via aromatase inhibition.

James Jacobus (Minnesota Department of Health): Have you analyzed the testis data in more detail to
understand why this cancer study POD was overshot by 20-fold using the transcriptome POD approach?
o Kamin Johnson (Corteva): We have not in detail. It may be that the initiating organ for the toxicity is
not the testis but another organ, which we did not examine. It may be that the higher dose levels used
in the transcriptome study compared to the cancer bioassay were not ideal to generate the most
accurate transcriptome POD. These are hypotheses that require additional studies to answer.

Karma Fussell (Nestle): Has the Health and Environmental Sciences Institute (HESI) team established a
harmonized workflow for data analysis to determine the POD by each member?

o Kamin Johnson (Corteva): This is an important question we are working on; it is a long-term goal.
There are methods in the literature for a harmonized workflow. For regulatory use, we need a consensus
on what methods to use.

Mark Jankowski (EPA): Can you comment on when/if proteomics should be used in concert with
transcriptomics? Do you see proteomics as promising as well?

o Kamin Johnson (Corteva): We have only examined transcriptomic data in our studies. Other types of
omics (proteomics, lipidomics, metabolomics) may also be informative. We have not examined any of
these other data types. Proteomics may be more technically challenging than transcriptomics.
Proteomics are noisier and more expensive, but they add to the quality of the predictions if combined
with transcriptomics. Alone, predictions with transcriptomics are more robust than predictions with
proteomics.

Chris Vulpe (University of Florida): Have you tried using these approaches on CRISPR screens? There are
large databases of CRISPR data for example at Broad or Sanger.

o Kamin Johnson (Corteva): Yes, these are great datasets to integrate. We created gene set libraries
from this data for Enrichr. We also use these data to validate predictions, and to prioritize drugs.

Avi Ma'ayan (Mount Sinai): Drugmonizome and Drugmonizome-ML: Integration and Abstraction of Small

Molecule Attributes for Drug Set Enrichment Analysis and Machine Learning

Questions following the presentation included:

Logan Everett (EPA): For All RNA-seq and ChIP-seq Sample Search Space (ARCHS4), has there been any
effort to clean up the annotations for these data sets that came from Gene Expression Omnibus (GEO)? In
my experience, GEO has many miRNA-seq and ChIP-seq studies misannotated as traditional RNA-seq, and I
have seen these pulled into past versions of ARCHS4. These seem likely to inflate gene-gene global
correlations in the repository.

o Avi Ma'ayan (Mount Sinai): The best effort that I am aware of is MetaSRA

https://metasra.biostat.wisc.edu/. We do some filtering, so not all studies are included in the gene-

gene correlation matrices.

Patricia Ruiz (Centers for Disease Control and Prevention [CDC]): How do these tools compare to commercially
available systems biology tools (e.g., Ingenuity, MetaCore)?

o Avi Ma'ayan (Mount Sinai): These tools are becoming more mature and have been widely used in
pharma, such as at Novartis and Johnson & Johnson, showing they are competitive with those
commercial options. They are open and free as well.

William Irwin: Is there an association with cholesterol lowering drugs and poorer outcomes for Coronavirus
Disease 2019 (COVID-19) cases?

o Avi Ma'ayan (Mount Sinai): There are mixed reports, but there is strong evidence that this pathway is
central for the virus life cycle.

Patricia Ruiz (CDC): Could you expand on the current chemical space available in the tools presented (e.g.,
environmental chemicals)?

o Avi Ma'ayan (Mount Sinai): We just looked at the overlap between the Library of Integrated Network-
Based Cellular Signatures (LINCS) L1000 compounds and the Tox21 compounds; about 400 of them
overlap. We look forward to future data being analyzed in this way.

Annie Jarabek (EPA): How is curation documented?

o Avi Ma'ayan (Mount Sinai): The curation involves Extract, Transform, Load (ETL) processes of taking
data from a public database and transforming it into an abstract representation. These are documented
as Jupyter notebooks on GitHub and now also as ETL Appyters:
https://appyters.maayanlab.cloud/#/?a=ETL.
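
As a generic illustration of the kind of ETL step being described (the file name, column names, and example term below are hypothetical and are not the Ma'ayan Lab's actual notebooks), an association table can be extracted from a public source and transformed into a set library keyed by attribute term:

import pandas as pd

def build_set_library(path="drug_attribute_associations.csv", entity_col="drug", term_col="attribute"):
    df = pd.read_csv(path)                                         # extract
    df = df[[entity_col, term_col]].dropna().drop_duplicates()     # transform
    return (df.groupby(term_col)[entity_col]                       # load into an abstract representation
              .apply(lambda s: sorted(set(s)))
              .to_dict())

# library = build_set_library(); library["CYP3A4 inhibitor"] would map to a sorted list of drug names.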

Anonymous: You have demonstrated workflows using a variety of technologies. Do you think the regulatory
use and transformation of this data should be adapted into agile workflows on the fly? Or should
we be building large infrastructures as SaaS? In research mode, one discovers, applies, and publishes, but
in regulatory mode, we try to build elaborate frameworks. How should we focus our efforts? [Should we]
make modular workflow tools and draw assumptions with one-off questions when they arise, or build
complex models that predict endpoints that never end?

o Avi Ma'ayan (Mount Sinai): I think you still want to have an elaborate framework that is considered
the state-of-the-art pipeline and is continually improving, but I am not familiar enough with the
domain to determine with certainty.

Ivan Rusyn (Texas A&M): "Fit for Purpose" for Organotypic Models in Environmental Health Protection

Questions following the presentation included:

Karma Fussell (Nestle): Have toxicological mechanistic data collected using organotypic models become
reproducible enough that they can be used as part of standardized regulatory safety testing?

o Ivan Rusyn (Texas A&M): That is a good question. Currently they are not very reproducible to use for
regulatory safety, but if standards are applied, they can transform the field.

Rocky Goldsmith (EPA): When we create organotypic organ-on-a-chip solutions and leave the "whole" system
behind, how will we "apply" these tools to mixtures that do not reach the target tissues being assayed at the
same relative levels as they would have through biology? This also specifically hits hard on racemic mixtures.
Take a technical grade pesticide with specific stereoisomeric ratios; their relative abundance on a tissue-on-a-
chip will not be "biologically relevant." How do you anticipate this could be addressed sooner rather than later
(i.e., estimates at target tissues relevant for emulating the mixture) so that we do not continue to generate
cloudy data fraught with misrepresentation of isomeric ratios? How do we make organotypically relevant
concentrations for mixtures (modeling?)?

o Ivan Rusyn (Texas A&M): The mixtures challenge runs first into the throughput challenge. I am
personally less concerned about "like human" than throughput.

Rocky Goldsmith (EPA): Just because these assays, primarily designed for pharma, could be used does not
mean that in unique chemical space areas (i.e., heavily halogenated) chemicals will still be amenable to
that assay. How have partners addressed that problem early in the context of non-pharma chemical space,
for each of the organotypic "end-points?"

o Ivan Rusyn (Texas A&M): The challenge is not drug versus non-drug, but how can one use it with the
diversity of chemicals and concentrations. I do not believe that current tissue chips are only for
pharma. Most I have seen are not "disease model" type, so they can be used. I have yet to see a
model that is reasonable throughput/cost and availability to be applicable to EPA.

Jennifer McPartland (EDF): What has been gleaned thus far in evaluating "tissue on a
chip/microphysiological" systems alongside other more typical in vitro cell-based reporter assays,
transcriptomics, etc.? If this type of evaluation is not yet underway, are there plans to do some comparative
analyses along these lines? (I appreciate this is a complicated question.)

o Ivan Rusyn (Texas A&M): Most comparisons are needed at the first step. If someone publishes a
model, the question is whether that model will then work in another lab, not necessarily including gene
expression. Publications in that space provide a direct comparison. There is not convincing
evidence that we need more complicated models. We must be ready to test thousands of
compounds. The throughput can become a showstopper for elegant models.

Rob DeWoskin (etiologic): Are you seeing increasing regulatory acceptance of NAMs that are validated
against human data rather than animal data? Are these comparisons with human data becoming more of a
trend?

o Ivan Rusyn (Texas A&M): There is little "regulatory acceptance" to begin with at the moment, in my
opinion. However, both regulators and the industry repeatedly point out that a comparison to animal
data is most straightforward and desired to understand the "value proposition" of NAMs for decision-
making.

Satinder Sarang (Shell): How does metabolism compare to traditional cell-lines, do you see a change in
metabolism over time? How well does the effective concentration(s) in vivo compare to actual effective dose
in organotypic cultures for pharmaceutical-like chemistries and petrochemicals?

o Ivan Rusyn (Texas A&M): It depends on the cell type, and there are several liver tissue chip models
that have demonstrated metabolism over 14-28 days in culture. I have no data on the latter
question.

Annie Jarabek (EPA): When you say the Agency needs to articulate what it needs, do you mean specific
endpoints in target systems?

o Ivan Rusyn (Texas A&M): There are two points: (1) the Agency would be best served articulating what
"-icities" are a priority so that NAMs developers understand what needs to be "replaced", and (2) for
chemicals where there are data gaps, it may be good to articulate what the most impactful gaps are.

Addressing Current Limitations in NAMs

Chad Deisenroth (EPA): Retrofitting an Estrogen Receptor Transactivation Assay with Metabolic

Competence

Questions following the presentation included:

Jay Petrick (Bayer): Why were Phase II cofactors excluded?

o Chad Deisenroth (EPA): There's certainly an interest to include them. It was a hurdle to optimize
Phase I cofactors without negatively affecting cell viability. We considered this an acceptable first
step, but [I] agree that Phase II cofactors eventually need to be included.

Atish Patel (FMC Corporation): How do you account for solubility differences in +/-S9 conditions and how
does it impact the interpretation of the results?

o Chad Deisenroth (EPA): The expectation is that the S9 is already solubilized when put in the
alginate. Then, the alginate gets cross-linked inside a matrix. Our real concern is diffusion of compounds
across the barrier. Our analysis shows some passive diffusion of chemicals to feed the metabolites
produced. No issues with solubility have been observed.

Anonymous: As Dr. Chad Deisenroth explained well, the in vitro assay sometimes may produce the best
results because of the complications of S9 and bioactivation in these cell assay systems.

Anonymous: Have these studies been conducted using human S9?

o Chad Deisenroth (EPA): The initial development was with human S9. Unfortunately, when applying it
to guideline estrogen receptor (ER) assays, we did find some estrogenic effects in the S9, so we
could not apply it to this specific assay.

David Saltmiras (Bayer): For the activated metabolites, are you aware of any data for in vivo half-lives?
o Chad Deisenroth (EPA): To some degree, we are making hazard identification with only bioactivity.
We do not know what mixture of metabolites is produced from the parent compound. The identity of
the metabolites is somewhat unknown, but the literature helps us have a decent idea.

Anonymous: Does the S9 system include sulfation and glucuronidation?

o The speaker was unable to answer this question.

Paul Carmichael (Unilever): Could you comment further regarding the utility of this very neat tool beyond the
ER transactivation assay?

o Chad Deisenroth (EPA): The applications should work on any assay within this microplate well format.
His lab focuses on endocrine-related assays, but it is still relevant to tier I assays.

Susan Borghoff (ToxStrategies): Is it true that glucuronidation and sulfation enzymes and cofactors are not in
the S9 system but are major metabolic systems involved with genistein and BPA that would decrease their
bioactivity? Are you discussing how to ensure Phase 2 enzymes are incorporated?

o Chad Deisenroth (EPA): Yes, the lack of phase II activity is a noted limitation of the current Alginate
Immobilization of Metabolic Enzymes (AIME) method and likely impacts the bioactivity of some
chemicals. We are actively working on approaches to include phase II activity.
Martin Phillips (EPA): Any plan to apply the same metabolic system approach to the Attagene assays so we
can see the effects on the transcriptome?

o The speaker was unable to answer this question.

Paul Carmichael (Unilever): Are there any plans to ensure broader adoption (e.g., in
bioactivation/detoxification in POD setting with high-throughput transcriptomics)?

o Chad Deisenroth (EPA): We are working on promoting adoption of the method by contract research
organizations (CROs), but [we] have not addressed application specifically to the Attagene
assays.

Sayak Mukherjee (Battelle): How are you relating the in vitro concentration of primary and secondary
metabolites to the in vivo concentrations that one might see?

o Chad Deisenroth (EPA): They are not related at this time, but [it] is an important point for risk
characterization.

Helen Tinwell (Bayer): Have you applied this method to other in vitro assays (e.g., H295R steroidogenesis
assay)?

o Chad Deisenroth (EPA): Not yet, but we have our sights set on other endocrine assays, particularly
steroidogenesis.

Anonymous: Could you elaborate more on the challenges to transfer the AIME method to the bioprinting?
o Chad Deisenroth (EPA): We are in the early stages of the project, but are actively evaluating hydrogel
and crosslinking options, speed, and pressure of dispensing, and technical variability in standard
microplates.

William Irwin: What is the maximum S9 exposure time that can be used (i.e., is 2 hours long enough)?

o Chad Deisenroth (EPA): We have evaluated out to 8 hours, but activity of S9 is quickly lost post-thaw.
Metabolic profiling experiments are classically run for 30-60 minutes, so 2 hours was a reasonable
time point for efficiency. This likely is not applicable to all chemicals, particularly those with slower
reaction kinetics.

Christopher Choi (Takasago International Corporation): I understand the use of a cell-line to artificially
express the receptors to evaluate the binding potential, however, there is a disconnect as these receptors
are not endogenously expressed. How can we go about filling in the gaps in our understanding the
usefulness of these assays?

o Chad Deisenroth (EPA): The VM7Luc4E2 cell line uses endogenous ER to transactivate artificial
response elements. There is a case to be made for biological relevance, as far as that argument
stretches for engineered, transformed cell lines, but the high throughput is the primary point of
utility. The application of orthogonal assays with the cellular and structural complexity of the native
tissue and organ would perhaps be the better way to address biological relevance and
toxicodynamics.

Anonymous: How does the mRNA direct method compare to the liver on the stick method for predicting
bioactivation?

o Chad Deisenroth (EPA): Good question. The formal analysis comparing the two
methods has not been conducted. The effects are more mixed in the mRNA liver mix transfection,
where the kinetics of parent and metabolite binding are likely overlapping to a greater extent than in
the AIME method.

o Christopher Choi (Takasago International Corporation): i.e., just because it binds, does not always
mean that it will result in downstream activities (e.g., G-coupled protein activities).

Nicole Kleinstreuer (NIH): Regarding the reliability of the Tox21 assays, as integrated into the ER pathway
model, this model has been validated against the regulatory reference standard of the guideline uterotrophic
assay and found to be robust, predictive, and reliable.

Anonymous: I am wondering if there were controls run to determine potential background activity associated
with S9. In other projects examining sequential or co-incubations of S9 and ER CALUX, the S9 was reported
to have some innate activity.

o Chad Deisenroth (EPA): Indeed, this was the case with human S9 fractions where there was

considerable intrinsic estrogenic activity. The background ER activity for rat S9 was captured in the
negative solvent controls run on each plate.

Shoba Iyer (CalEPA): What are the advantages of retrofitting immortalized cells with metabolic capacity as
compared to testing chemicals in primary cell cultures instead?

o Chad Deisenroth (EPA): Primarily the speed and throughput, but legacy data and acceptance of
specific guideline in vitro assays also plays a role. Primary cells are also a good option, but [they]
present their own challenges and domains of applicability for screening.

Barbara Losey: For all speakers: Can you provide a list of citations on the NAMs website of publications
related to the work presented here today?

o David Crizer (National Toxicology Program [NTP]): Chad Deisenroth, Danica E DeGroot, Todd
Zurlinden, Andrew Eicher, James McCord, Mi-Young Lee, Paul Carmichael, Russell S Thomas, The
AIME Platform Retrofits an Estrogen Receptor Transactivation Assay with Metabolic Competence,
Toxicological Sciences, kfaa147, https://doi.org/10.1093/toxsci/kfaa147

David Crizer (NTP): In Vitro Disposition of Tox21 Chemicals: Initial Results and Next Steps

Questions following the presentation included:

Anonymous: Are the NTP methods validated and GLP quality?

o David Crizer (NTP): They are not to that extent.

Martin Phillips (EPA): Have you considered using nuclear magnetic resonance (NMR) to quantitate using
stable-labeled compounds versus radiolabeled ones? Could increase the number of compounds you can
"ground-truth."

o David Crizer (NTP): That is not something we have considered at this point, but that is a good point.
Mark Jankowski (EPA): Are you counting (flow cytometry?) the number of cells to calculate the total volume
of cells for determining chemical concentration in cells?

o David Crizer (NTP): I am not sure how Josh Harold did that. We would have to reach out to him to see
how he exactly tackled that.

Rocky Goldsmith (EPA): For the chemistries you have looked at, are there specific physicochemical
parameters or molecular descriptors, or fingerprints that suggest one could identify chemicals in "well-
behaved" or "poorly behaved" chemistries in the in vitro context? Molecular features or properties that could
suggest assay amenability and flags?

o David Crizer (NTP): We have not gotten to that point yet. We are focusing on really nailing down the
analytical work first before we dig into those trends into what properties matter and how chemicals
partition. This is something that we plan to dig further into in the next pilot and hope to publish that
larger study.

William Irwin: What is the effect of xenobiotic efflux pumps, which can be upregulated?

o David Crizer (NTP): We do not have enough data at this point to comment on that, which is why we
are going to a larger pilot with more concentrations.

Anonymous: Why wouldn't there be binding at the bottom of the wells?

o David Crizer (NTP): These MCF7 Cells are confluent at the bottom of the wells, and it is also 100%
covered, or at least close to 100% covered. The chemical is not going to move through the cells and
then into the plastic, so we feel good with that assumption.

Xianglu Han (Lanxess): Has the EPA or NTP considered developing an in vitro system to account for
metabolism of chemicals by the human gut microbiome? For oral exposure of chemicals, this is upstream of
other steps, such as tissue absorption of chemicals. Ideally, in many cases, the gut microbiome metabolites
should be used for other in vitro systems to study chemical toxicity.

o David Crizer (NTP): I am currently not aware of ongoing work on this at this time. I do however know
that there is certainly interest in this area.

Sayak Mukherjee (Battelle): Does your work validate the Armitage model? Do you expect to see departures
from the Armitage model, and why?

o David Crizer (NTP): So far this seems like there is good agreement, but we do think there are other
things out in the literature that are important factors that are not included in the Armitage model.
That is why we were trying to see if it is possible to develop a more involved model that considers
physicochemical properties.

Costanza Rovida (Center for Alternatives to Animal Testing Europe [CAAT-EU]): Have you considered intra-
cellular distribution of chemicals?

o David Crizer (NTP): Currently we are not looking at intra-cellular distribution of chemicals.

Natalia Ryan (Syngenta): What are the existing quality control (QC) grades for the 200 chemicals? Will your
work have any impact on explaining QC grades, or improving the grades in the future?

o David Crizer (NTP): All of the 200 chemicals that I have checked passed analytical QC based
on the scoring in the Tox21 database; most have a QC grade of A. At this time, it is not expected
that this work will have any impact on QC grades.

Developing Scientific Confidence in NAMs

Clemens Wittwehr (Joint Research Centre [JRC]): An OECD Harmonized Template (OHT) to Report NAM

Results in Regulatory Environments: Principles and Practical Use

Questions following the presentation included:

Patrick Phibbs: For OPP and OPPT speakers: Assuming 201 is approved in November 2020, how long would
it take for the EPA offices to decide whether they will accept data presented in that format? How do they let
regulated parties know?

o The speaker was unable to answer this question.

Anonymous: This approach feels very human-centric. How do you envision this applying to ecological
applications for AOPs? Should a species domain be included in the OHT?

o Clemens Wittwehr (JRC): It is neither human- nor ecologically-centric. It is just focused on NAM
methods. Mechanistic data should not be siloed in one area or another.

Karma Fussell (Nestle): Is there any plan to either amend the OECD TG guidance with the standardized OHT
201 templates or have them co-located with the TG on the OECD website?

o Clemens Wittwehr (JRC): OECD needs to develop guidance on when to follow which test guideline.
Anonymous: How are dose-response dependency in the induction of mechanistic effects, thresholds, and
toxicokinetics taken into account? Is there simply an inference that any response, irrespective of dose or
toxicokinetics, is linked to an adverse outcome?

o Clemens Wittwehr (JRC): For regulatory (validated) methods, dose-response information from a test
item/chemical will be compared with a reference item that gives a known response in the method.
Only when the measured response is above a certain limit would a test item be considered 'positive'
for a key event. In the case of a non-validated method, a test item could be considered positive (e.g.,
when the response is significantly higher than the background). Depending on the metabolic competence
of the test system used in the method, toxicokinetics can be taken into account as well. Most in vitro test
systems do not have metabolic competence, in which case toxicokinetics information must be
obtained elsewhere. The template asks for information on how the result is obtained (data
calculation and statistics and evaluation/data interpretation criteria) and for the metabolic
competence of the test system. This should enable the recipient of the data to judge how far the
results confirm the mode of action of the test item and its contribution to an adverse outcome.
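
A minimal sketch of that decision logic (illustrative thresholds and names only, not an OECD-prescribed rule) might look like the following, calling a test item positive either against a limit derived from the reference item or, for a non-validated method, against the background:

import numpy as np
from scipy.stats import ttest_ind

def call_positive(test_responses, background_responses, reference_limit=None, alpha=0.05):
    test = np.asarray(test_responses, dtype=float)
    background = np.asarray(background_responses, dtype=float)
    if reference_limit is not None:
        # Validated-method style: compare the mean response to a limit derived
        # from the reference item's known response in the method.
        return bool(test.mean() >= reference_limit)
    # Non-validated-method style: significantly higher than the background.
    stat, p = ttest_ind(test, background, equal_var=False)
    return bool(p < alpha and test.mean() > background.mean())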

Carol Marchant (Lhasa Limited): Is the ontology linking to OHT 201 already in place or is this work to be
done?

o Clemens Wittwehr (JRC): This is already complete, and the ontology terms are selected.

Barbara Birk (BASF): How is Good In Vitro Method Practices (GIVIMP) included in the OHT Process?

o Clemens Wittwehr (JRC): GIVIMP and OHT 201 are inseparable. We were working closely together so
that if you want to report data under the paradigm, you could use OHT 201.

Yadvinder Bhuller (Health Canada): Is the OHT 201 also designed for in silico approaches (e.g., read across,
QSAR)?

o Clemens Wittwehr (JRC): Yes, it is technology independent. However, since we know in vitro is so

popular, we have added elements to help with that technology specifically.

Ravi Menon (Afton Chemical): Have you looked at using OHT 201 for reporting QSAR results, for predicting
bioconcentration factors (BCF) in fish for example?

o Clemens Wittwehr (JRC): OHT 201 is technology independent. Because most NAMs are in vitro, we
have added elements to the template to help with in vitro methods specifically. However, you can
report QSAR results with OHT 201 by using free texts and attachments. Do not forget that OHTs are
updated every year - so if the need to accommodate QSARs in a more comfortable way increases,
these needs will be taken into account.

Anne Gourmelon (OECD): How does OHT 201 match with the OECD Guidance Doc 211 on the description of
non-standard guidelines?

o Clemens Wittwehr (JRC): OECD 211 is about the method description (e.g., what you must do when
executing the method). This relates to the triangle of chemical safety. The OECD 211 document
relates to how you should describe the method in the upper right corner of the triangle, while OHT
201 is represented in the upper left.

Kellie Fay (EPA): How do you resolve user-selected ontology terms (process, action, or object) that are the
same but stem from different ontologies? I believe this is an issue with the AOP wiki.

o Clemens Wittwehr (JRC): That is exactly what we are trying to address right now. [It is] challenging,
but crucial.

o Kellie Fay (EPA): I would love to hear your ideas on how to resolve this issue at some point. I believe
there are some tools which map across ontologies, but [I am] unsure how good they are.

Monique Perron (EPA): Case Study #1: Integration of NAM Data for Evaluating Potential Developmental

Neurotoxicity

Questions following the presentation included:

Anonymous: Where can we find the number of pesticides registered with the EPA per year?
o Monique Perron (EPA): Information about pesticide registration can be found at:
https://www.epa.gov/pesticide-registration/about-pesticide-registration.

Francesca Pistollato: What about times of exposure in these in vitro assays?

o Monique Perron (EPA): The microelectrode array network formation assay (MEA NFA) is exposed to
chemicals from day 0-12 with measurements taken days 5-12 to recapitulate neuronal network
formation. For the HCI suite, exposure times vary by specific assays ranging from approximately 24
hours to 5 days.

Katy Wolton (Syngenta): Did you learn anything about the mode of action (MoA) of the organophosphates
(OPs) using the NAMs batteries? Could you expand on this?

o Monique Perron (EPA): Currently we still do not know what the MoA is for any potential

developmental neurotoxicity (DNT) effects. The results observed in these assays were chemical
specific, so you did not see a trend necessarily for the disruption of certain critical processes over
others. As such, the results appear to be chemical dependent.

Anonymous: Why do some of the most toxic OPs in humans (phorate, etc.) not appear to have a high-content
imaging (HCI) signal?

o Monique Perron (EPA): Phorate was positive in some of the HCI assays with effects on rat
synaptogenesis/maturation assay endpoints. However, I would note that when referring to the
human toxicity of OPs, typically one is referring to the neurotoxic effects related to
acetylcholinesterase inhibition. The MoA(s) for the developmental neurotoxicity outcomes have not
been established.

Rocky Goldsmith (EPA): How have these methods changed the way you establish common mechanism
groups, if at all?

o Monique Perron (EPA): To date, these methods have not changed the way we establish the common
mechanism group.

William Irwin: Were any mitochondrial toxicity assays included? They have good concordance for developmental toxicity at least.
o Monique Perron (EPA): The OPs tested have data in other ToxCast assays (e.g., for mitochondrial
toxicity and developmental toxicity) and some potency comparisons have been performed; however,
additional examination of the responses across other assay targets would need to be performed. For
this case study, we focused only on the DNT NAMs results.

Afolarin Ogungbemi (Helmholtz-Centre for Environmental Research): Based on your data, do you see a

difference between the effect or action of oxon metabolites and parent compound of OPs?

o Monique Perron (EPA): There was no clear global pattern of effect on the basis of oxon/non-oxon
with respect to DNT NAMs bioactivity; however, you can find discussion of the results obtained for
each oxon/parent pair (when available) in EPA's issue paper that was presented to the FIFRA
Scientific Advisory Panel (SAP) in September 2020: https://www.epa.gov/sap/fifra-scientific-advisory-panel-meetings.

Andrew White (Unilever): Case Study #2: Integration of NAM Data in a Next Generation Risk Assessment

for Cosmetic Ingredients

Questions following the presentation included:

Paul Carmichael (Unilever): What do you see as the major (surmountable) shortcomings of the approach?
What work is underway to address these?

o The speaker was unable to answer this question.

Anonymous: Given differences in POD from different cell lines, could you tell me your criteria for selecting the
cell lines you used?

o Andrew White (Unilever): Some cell lines were chosen so that they can be used as a benchmark
against other work being done. Some cell lines were chosen for their ability to indicate metabolism.
Scott Jenkins (EPA): Even though the in vitro PODs are higher than the steady state concentration prediction,
how do you think about the fact that users of the face cream could be exposed for months to years while the
in vitro data is based on several hours of exposure?

o Andrew White (Unilever): Good question; there is still additional work to define how relevant single-dose,
short-timescale data are to longer-term exposures. Two things help provide some support: we
compared to a steady-state repeat-dose scenario in the Physiologically Based Pharmacokinetic
(PBPK) model, and there is now the Accelerating the Pace of Chemical Risk Assessment (APCRA)
publication indicating that, for 90% of compounds assessed, the in vitro data were conservative
compared to the repeat-dose test. This does not mean we are satisfied; we expect to utilize more
organotypic models with repeat dosing in the future where necessary.
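
As a rough illustration of the type of comparison described in this answer, the sketch below computes a bioactivity-exposure ratio from a hypothetical in vitro POD and a hypothetical PBPK-predicted steady-state concentration; the function name and the numbers are illustrative assumptions, not values from the Unilever case study.

# Minimal illustrative sketch (hypothetical values; not from the presentation).
# Compares an in vitro point of departure (POD) against a PBPK-predicted steady-state
# plasma concentration to form a bioactivity-exposure ratio (BER); a BER well above 1
# suggests the bioactive concentration is not reached under the modeled exposure.

def bioactivity_exposure_ratio(pod_invitro_uM: float, css_uM: float) -> float:
    """Ratio of the lowest in vitro POD to the predicted steady-state concentration."""
    if css_uM <= 0:
        raise ValueError("Steady-state concentration must be positive.")
    return pod_invitro_uM / css_uM

# Hypothetical example: lowest in vitro POD of 12 uM vs. a predicted Css of 0.4 uM.
ber = bioactivity_exposure_ratio(pod_invitro_uM=12.0, css_uM=0.4)
print(f"Bioactivity-exposure ratio: {ber:.1f}")  # prints 30.0 for these illustrative inputs
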
Anonymous: In doing risk assessment, would you combine existing/old animal data together with in vitro/in
silico data exclusively?

o The speaker was unable to answer this question.

James Dawick (Innospec inc.): Did you include any in vitro assays to look at reproductive or developmental
toxicity of coumarin as part of this Next-Generation Risk Assessment (NGRA)?

o Andrew White (Unilever): Not as part of this study, [and] we realize this is a gap for developmental
and reproductive toxicology (DART) that needs further development.

Katya Tsaioun (Johns Hopkins Bloomberg School of Public Health): Is there a definition of weight of evidence
that the EPA is using? I am seeing a variety of ways different groups use the term.

o Andrew White (Unilever): There are questions outstanding on extrapolation of short-term assays to
long-term in vivo exposures and on the biological coverage of these. Some of these questions have
ongoing activities to map cell lines and biological space; in addition, some tier 2 lower-throughput
organotypic assays will aid in refining the exposure extrapolation.

Anonymous: How can you be confident that the set of tests you choose provides an adequate biological
coverage?

o Andrew White (Unilever): There is still further work ongoing to help provide more evidence and
confidence for this across a larger range of compounds. The question of biological coverage, and
what is sufficient, is still being addressed not just by us but across different groups. Also, we do not
consider this set absolute; it will be added to either where concerns are identified or where additional
tools provide added value.

Todd Stedeford (EPA): Case Study #3: Incorporating the Threshold of Toxicological Concern into Regulatory Decisions under the Amended Toxic Substances Control Act

Questions following the presentation included:

James Jacobus (Minnesota Department of Health): Since you are using an exposure-based approach, are you
taking into account how additional exposure routes, such as diet, increase daily exposure totals for an
individual?

o Todd Stedeford (EPA): EPA/OPPT has not implemented the Threshold of Toxicological Concern (TTC)
approach for its risk assessments. The presentation was an overview of the TTC approach and its
potential applications under TSCA.
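
For general context on how a TTC comparison is typically structured, the sketch below compares an estimated daily exposure against a Cramer-class threshold. This is a generic illustration, not an EPA/OPPT procedure; the thresholds shown are the commonly cited Munro-based defaults and are included only as example values.

# Generic illustrative sketch of a TTC comparison (not an EPA/OPPT workflow).
# Commonly cited Cramer-class thresholds (ug/kg bw/day) are used here only as example values.
TTC_THRESHOLDS_UG_PER_KG_DAY = {
    "Cramer I": 30.0,   # low structural concern
    "Cramer II": 9.0,   # intermediate concern
    "Cramer III": 1.5,  # high structural concern
}

def below_ttc(exposure_ug_per_kg_day: float, cramer_class: str) -> bool:
    """Return True if the estimated daily exposure falls below the class threshold."""
    return exposure_ug_per_kg_day < TTC_THRESHOLDS_UG_PER_KG_DAY[cramer_class]

# Hypothetical example: estimated exposure of 0.8 ug/kg bw/day for a Cramer Class III substance.
print(below_ttc(0.8, "Cramer III"))  # True: below the 1.5 ug/kg bw/day threshold
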

Anonymous: How do you define "a scientifically validated test method?" Is TTC used in TSCA high priority
substance (HPS) regulation?

o Todd Stedeford (EPA): We have not used it yet; I was providing some ideas about where it could be
used. This is something that we are going to investigate through that collaboration, to see whether
these are really fit-for-purpose NAMs that we can use for new chemicals or existing chemicals.

Anonymous: How do you define "a scientifically validated test method?" Is there any TTC case study for TSCA
HPS regulation?

o Todd Stedeford (EPA): TSCA Section 4(h)(2)(C) requires EPA to develop a list of alternative test
methods or strategies that are "scientifically reliable, relevant, and capable of providing information
of equivalent or better scientific reliability and quality to that which would be obtained from
vertebrate animal testing." EPA is currently developing criteria that will be used for identifying NAMs
for placement on the list. EPA/OPPT has not developed a TTC case study for a chemical substance
identified as a high-priority substance.

Jennifer McPartland (EDF): Where can we get more information about the 2020 collaboration focused on
TTC under TSCA?

o Todd Stedeford (EPA): What I stated is all the information that is available now. It is really
going to be a collaboration with external partners, also relying on our experts in the Office of
Research and Development.

Ravi Menon (Afton Chemical): Do you recommend use of EPA tools such as the Chemical Screening Tool for
Exposures and Environmental Releases (ChemSTEER) and the Exposure and Fate Assessment Screening
Tool (E-FAST) for exposure/release and risk assessment for PMNs? Are there plans to update these tools?
o Todd Stedeford (EPA): EPA/OPPT routinely uses ChemSTEER and E-FAST to estimate exposures and
releases. These tools are updated periodically. Updated tools and models are available at the
following: https://www.epa.gov/tsca-screening-tools/using-predictive-methods-assess-exposure-and-fate-under-tsca#fate.

Helen Goeden (Minnesota Department of Health): The U.S. Food and Drug Administration (FDA) has been
working on a major 'upgrading' of TTC for several years. Is EPA aware of their efforts?

o Todd Stedeford (EPA): Yes, EPA/OPPT is aware of these efforts.

Rocky Goldsmith (EPA): Is there a way to capture TTC in QSAR, such as Quantitative Structure Threshold of
Toxicological Concern Relationships? Can a structural basis be developed?

o Todd Stedeford (EPA): EPA/OPPT has not explored this possibility.

Jennifer McPartland (EDF): Has the collaboration not yet started?

o Todd Stedeford (EPA): It has not started yet, but we anticipate a kickoff later this month.

Ravi Menon (Afton Chemical): Can the TTC approach be integrated with EPA tools such as ChemSTEER?
o Todd Stedeford (EPA): ChemSTEER generates screening-level estimates for environmental releases
of and worker exposures to a chemical manufactured and used in industrial and commercial
operations (i.e., workplaces). These exposure estimates are integrated with hazard information in the
health or ecological risk assessment. EPA/OPPT has not yet explored integrating the TTC approach
into ChemSTEER or into the risk assessments.

Karma Fussell (Nestle): Has EPA considered using in vitro NAMs to navigate the TTC decision tree?

o Todd Stedeford (EPA): EPA/OPPT has not explored this possibility.

Jennifer McPartland (EDF): As a follow up, is there a way we can obtain more information about how folks
might be able to participate in the TTC collaboration in advance of the meeting scheduled for next month?

o Todd Stedeford (EPA): EPA/OPPT provided follow-up information to this request after the meeting.
Gianluca Selvestrel (Mario Negri Institute for Pharmacological Research): On the last presentation slide,
what [was] the reference for ecological TTC and subcategorization into "MoA-like" substances (HESI
project)?

o Todd Stedeford (EPA): Slide note 2 from the last slide was, "HESI, Animal Alternatives in
Environmental Risk Assessment Committee, available at: https://hesiglobal.org/animal-alternatives-in-environmental-risk-assessment/."



-------