Climate Change Indicators in the United States, 2016
Fourth Edition
TECHNICAL DOCUMENTATION: OVERVIEW
August 2016

Overview

The U.S. Environmental Protection Agency (EPA) follows an established framework to identify datasets, select indicators, obtain independent expert review, and publish its indicators in reports and online. This document provides technical supporting information for the 37 indicators and five chapter-specific call-out features that appear in EPA's report, Climate Change Indicators in the United States, 2016, and the accompanying website. EPA prepared this document to ensure that each indicator is fully transparent, so readers can learn where the data come from, how each indicator was calculated, and how accurately each indicator represents the intended environmental condition. EPA uses a standard documentation form, then works with data providers and reviews the relevant literature and available documentation associated with each indicator to address the elements on the form as completely as possible.

EPA's documentation form addresses 13 elements for each indicator:

1. Indicator description
2. Revision history
3. Data sources
4. Data availability
5. Data collection (methods)
6. Indicator derivation (calculation steps)
7. Quality assurance and quality control (QA/QC)
8. Comparability over time and space
9. Data limitations
10. Sources of uncertainty (and quantitative estimates, if available)
11. Sources of variability (and quantitative estimates, if available)
12. Statistical/trend analysis (if any has been conducted)
13. References

In addition to indicator-specific documentation, this appendix to the report summarizes the criteria that EPA uses to screen and select indicators for publication. This documentation also describes the process EPA follows to select and develop those indicators that have been added or substantially revised since the publication of EPA's first version of this report in April 2010.
Indicators that are included for publication must meet all of the criteria. Lastly, this document provides general information on changes that have occurred since the 2014 version of the Climate Change Indicators in the United States report.

The development of the indicators report, including technical documentation, was conducted in accordance with EPA's Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency.1 EPA may update this technical documentation as new and/or additional information about these indicators and their underlying data becomes available. Please contact EPA at climateindicators@epa.gov to provide any comments about this documentation.

1 U.S. EPA. 2002. Guidelines for ensuring and maximizing the quality, objectivity, utility, and integrity of information disseminated by the Environmental Protection Agency. EPA/260R-02-008. http://www.epa.gov/quality/informationguidelines/documents/EPA_InfoQualityGuidelines.pdf.

Technical Documentation: Overview 2

EPA's Indicator Evaluation Criteria

General Assessment Factors

When evaluating the quality, objectivity, and relevance of scientific and technical information, the considerations that EPA takes into account can be characterized by five general assessment factors, as found in A Summary of General Assessment Factors for Evaluating the Quality of Scientific and Technical Information.2 These general assessment factors, and how EPA considers them in the development of climate change indicators, are:

• Soundness (AF1) is defined as the extent to which the scientific and technical procedures, measures, methods, or models employed to generate the information are reasonable for and consistent with the intended application. As described below, EPA follows a process that carefully considers 10 criteria for each proposed indicator.
EPA evaluates the scientific and technical procedures, measures, and methods employed to generate the data that underpin each indicator as part of its consideration of the 10 selection criteria. If a proposed indicator and associated data meet all of the criteria, EPA determines they are reasonable for, and consistent with, use as an indicator for this report.

• Applicability and utility (AF2) is defined as the extent to which the information is relevant for the Agency's intended use. Considerations related to this assessment factor include the relevance of the indicator's purpose, design, outcome measures, results, and conditions to the Agency's intended use. As described below, EPA follows a process that carefully considers 10 criteria for each proposed indicator. Some of these criteria relate to the relevance or usefulness of the indicator.

• Clarity and completeness (AF3) is defined as the degree of clarity and completeness with which the data, assumptions, methods, quality assurance, sponsoring organizations, and analyses employed to generate the information are documented. EPA investigates each indicator's underlying data, assumptions, methods and analyses employed to generate the information, quality assurance, and sponsoring organizations in order to record this information clearly, completely, and transparently in a publicly available technical support document. Because the underlying data and methods for analyses are peer-reviewed and/or published by federal agencies and reputable scientific journals, these publications provide additional documentation of assumptions, methods, and analyses employed to generate the information.

• Uncertainty and variability (AF4) is defined as the extent to which the variability and uncertainty (quantitative and qualitative) in the information or in the procedures, measures, methods, or models are evaluated and characterized.
EPA carefully considers the extent to which the uncertainty and variability of each indicator's underlying data were evaluated and characterized, based on their underlying documentation and source publications. In the technical documentation, EPA also describes known sources of uncertainty and variability, as well as data limitations (see elements #9, #10, and #11, listed above).

• Evaluation and review (AF5) is defined as the extent of independent verification, validation, and peer review of the information or of the procedures, measures, methods, or models. EPA carefully considers the extent to which the data underlying each indicator are independently verified, validated, and peer-reviewed. One of EPA's selection criteria relates to peer review of the data and methods associated with the indicator. EPA also ensures that each edition of the report (including supporting technical documentation) is independently peer-reviewed.

The report and associated technical documentation are consistent with guidance discussed in a newer document, Guidance for Evaluating and Documenting the Quality of Existing Scientific and Technical Information,3 issued in December 2012 as an addendum to the 2003 EPA guidance document.

These general assessment factors form the basis for the 10 criteria EPA uses to evaluate indicators, which are documented in 13 elements as part of the technical documentation. These 13 elements are mapped to EPA's criteria and the assessment factors in the table below.

2 U.S. EPA. 2003. Science Policy Council assessment factors: A summary of general assessment factors for evaluating the quality of scientific and technical information. EPA 100/B-03/001. www.epa.gov/sites/production/files/2015-01/documents/assess2.pdf.
Criteria for Including Indicators in This Report

EPA used a set of 10 criteria to carefully select indicators for inclusion in the Climate Change Indicators in the United States, 2016 report. The following table introduces these criteria and describes how they relate to the five general assessment factors and the 13 elements in EPA's indicator documentation form, both listed above.

3 U.S. EPA. 2012. Guidance for evaluating and documenting the quality of existing scientific and technical information. www.epa.gov/sites/production/files/2015-05/documents/assess3.pdf.

Criterion: Trends over time (assessment factors AF1, AF2, AF4)
Description: Data are available to show trends over time. Ideally, these data will be long-term, covering enough years to support climatically relevant conclusions. Data collection must be comparable across time and space. Indicator trends have appropriate resolution for the data type.
Documentation elements: 4. Data availability; 5. Data collection; 6. Indicator derivation

Criterion: Actual observations (assessment factors AF1, AF2, AF4)
Description: The data consist of actual measurements (observations) or derivations thereof. These measurements are representative of the target population.
Documentation elements: 5. Data collection; 6. Indicator derivation; 8. Comparability over time and space; 12. Statistical/trend analysis

Criterion: Broad geographic coverage (assessment factors AF1, AF2)
Description: Indicator data are national in scale or have national significance. The spatial scale is adequately supported with data that are representative of the region/area.
Documentation elements: 4. Data availability; 5. Data collection; 6. Indicator derivation; 8. Comparability over time and space

Criterion: Peer-reviewed data (peer-review status of indicator and quality of underlying source data) (assessment factors AF1, AF3, AF5)
Description: Indicator and underlying data are sound. The data are credible, reliable, and have been peer-reviewed and published.
Documentation elements: 3. Data sources; 4. Data availability; 5. Data collection; 6. Indicator derivation; 7. QA/QC; 12. Statistical/trend analysis

Criterion: Uncertainty (assessment factor AF4)
Description: Information on sources of uncertainty is available. Variability and limitations of the indicator are understood and have been evaluated.
Documentation elements: 5. Data collection; 6. Indicator derivation; 7. QA/QC; 9. Data limitations; 10. Sources of uncertainty; 11. Sources of variability; 12. Statistical/trend analysis

Criterion: Usefulness (assessment factors AF1, AF2)
Description: Indicator informs issues of national importance and addresses issues important to human or natural systems. Complements existing indicators.
Documentation elements: 6. Indicator derivation

Criterion: Connection to climate change (assessment factors AF1, AF2)
Description: The relationship between the indicator and climate change is supported by published, peer-reviewed science and data. A climate signal is evident among stressors, even if the indicator itself does not yet show a climate signal. The relationship to climate change is easily explained.
Documentation elements: 6. Indicator derivation; 11. Sources of variability

Criterion: Transparent, reproducible, and objective (assessment factors AF1, AF3, AF4, AF5)
Description: The data and analysis are scientifically objective and methods are transparent. Biases, if known, are documented, minimal, or judged to be reasonable.
Documentation elements: 4. Data availability; 5. Data collection; 6. Indicator derivation; 7. QA/QC; 9. Data limitations; 10. Sources of uncertainty; 11. Sources of variability

Criterion: Understandable to the public (assessment factors AF2, AF3)
Description: The data provide a straightforward depiction of observations and are understandable to the average reader.
Documentation elements: 6. Indicator derivation; 9. Data limitations

Criterion: Feasible to construct (assessment factor AF2)
Description: The indicator can be constructed or reproduced within the timeframe for developing the report. Data sources allow routine updates of the indicator for future reports.
Documentation elements: 3. Data sources; 4. Data availability; 5. Data collection; 6. Indicator derivation

Process for Evaluating Indicators

This section describes the process for evaluating and selecting indicators, including the application of EPA's standard set of criteria. EPA published the first edition of Climate Change Indicators in the United States in April 2010, featuring 24 indicators. Three more editions have been published since then, in 2012, 2014, and 2016, using the following approach to identify and develop a robust set of new and revised indicators for the report:

A. Identify and develop a list of candidate indicators.
B. Conduct initial research; screen against a subset of indicator criteria.
C. Conduct detailed research; screen against the full set of indicator criteria.
D. Select indicators for development.
E. Develop draft indicators.
F. Facilitate expert review of draft indicators.
G. Periodically re-evaluate indicators.

EPA's set of indicators is a function of the criteria used to evaluate them as well as the need to transparently document the underlying data and methods. EPA screens and selects each indicator using a standard set of criteria that consider data availability and quality, transparency of the analytical methods, and the indicator's relevance to climate change. This process ensures that all indicators selected for reports are consistently evaluated, are based on credible data, and can be transparently documented.

Building on a core set of indicators published in 2010, EPA has added indicators to subsequent reports based on newly available data and analyses from the scientific assessment literature, other peer-reviewed sources (e.g., published journal articles or new EPA reports), and collaborative partnerships with federal and non-federal agencies.
Key considerations for new indicators include:

1) Filling gaps in the existing indicator set in an attempt to be more comprehensive.
2) Newly available, or in some cases improved, data sources that have been peer-reviewed and are publicly available from government agencies, academic institutions, and other organizations.
3) Analytical development of indicators resulting from existing partnerships and collaborative efforts within and external to EPA (e.g., development of streamflow metrics in partnership with the U.S. Geological Survey for the benefit of the partner agencies as well as key programs within EPA's Office of Water).
4) Indicators that communicate key aspects of climate change and that are understandable to various audiences, including the general public.

Importantly, all of EPA's climate change indicators relate to either the causes or effects of climate change. EPA acknowledges that some indicators are more directly influenced by climate than others, yet they all meet EPA's criteria and have a scientifically based relationship to climate. This report does not attempt to identify the extent to which climate change is causing a trend in an observed indicator. Connections between human activities, climate change, and observed indicators are explored in more detail elsewhere in the scientific literature.

EPA's indicators generally cover broad geographic scales and many years of data, as this is the most appropriate way to view trends relevant to climate change. The Earth is a complex system, and there will always be natural variations from one year to the next (for example, a very warm year followed by a cold year). The Earth's climate also goes through other natural cycles that can play out over a period of several years or even decades. Thus, EPA's indicators present trends for multiple decades, or for as many years as the underlying data allow.
EPA also includes features such as "Community Connection" and "A Closer Look" in certain chapters (e.g., Cherry Blossom Bloom Dates in Washington, D.C.) that focus on a particular region or localized area of interest, augmenting the report and engaging readers in particular areas or topics of interest within the United States. While the features and their underlying data are not national in scale or representative of broad geographic areas, these features are screened, developed, and documented in a manner consistent with the indicators in the report.

In selecting and developing the climate change indicators included in this report, EPA fully complied with the requirements of the Information Quality Act (also referred to as the Data Quality Act) and EPA's Guidelines for Ensuring and Maximizing the Quality, Objectivity, Utility, and Integrity of Information Disseminated by the Environmental Protection Agency.4 As part of this process, existing indicators are re-evaluated as appropriate to ensure that they continue to function as intended and meet EPA's indicator criteria. The process for evaluating indicators is described in more detail below.

A: Identify Candidate Indicators

EPA investigates and vets new candidate indicators through coordinated outreach, stakeholder engagement, and review of the latest scientific literature. New indicators and content can be broadly grouped into two categories:

• Additions: Completely new indicators.
• Revisions: Improving an existing indicator by adding or replacing metrics or underlying data sources. These revisions involve obtaining new data sets and vetting their scientific validity.

Outreach and Stakeholder Engagement

EPA invited suggestions of new indicators from the public following the release of the April 2010 Climate Change Indicators in the United States report, and continues to welcome suggestions at climateindicators@epa.gov.
For example, in March 2011, EPA held an information gathering meeting of experts on climate change and scientific communication to obtain their impressions of the first edition of the report. Meeting participants considered the merits of data in the report and provided input for new and revised content. Participants noted a variety of concepts for new indicators and data sources for EPA to consider. A summary of this workshop is available on the website: www.epa.gov/climate-indicators.

4 U.S. EPA. 2002. Guidelines for ensuring and maximizing the quality, objectivity, utility, and integrity of information disseminated by the Environmental Protection Agency. EPA/260R-02-008. www.epa.gov/quality/informationguidelines/documents/EPA_InfoQualityGuidelines.pdf.

New Science and Data

The process of identifying indicators includes monitoring the scientific literature, assessing the availability of new data, and eliciting expert review. Many federal agencies and other organizations have ongoing efforts to make new data available, which allows for continued investigation into opportunities for compiling or revising indicator content. EPA also engages with current data contributors and partners to help improve existing indicators and identify potential new indicators.

B and C: Research and Screening

Indicator Criteria

EPA screens and selects indicators based on an objective, transparent process that considers the scientific integrity of each candidate indicator, the availability of data, and the value of including the candidate indicator in the report. Each candidate indicator is evaluated against fundamental criteria to assess whether or not it is reasonable to further evaluate and screen the indicator for inclusion in the upcoming report.
These fundamental criteria include: peer-review status of the data, accessibility of the underlying data, relevance and usefulness of the indicator (i.e., the indicator's ability to be understood by the public), and its connection to climate change.

Tier 1 Criteria
• Peer-reviewed data
• Feasible to construct
• Usefulness
• Understandable to the public
• Connection to climate change

Tier 2 Criteria
• Transparent, reproducible, and objective
• Broad geographic coverage
• Actual observations
• Trends over time
• Uncertainty

The distinction between Tier 1 and Tier 2 criteria is not intended to suggest that one group is necessarily more important than the other. Rather, EPA determined that a reasonable approach was to consider which criteria must be met before proceeding further and to narrow the list of indicator candidates before the remaining criteria were applied.

Screening Process

EPA researches and screens candidate indicators by creating and populating a database comprising all suggested additions and revisions, then documents the extent to which each of these candidate indicators meets each of EPA's criteria. EPA conducts the screening process in two stages:

• Tier 1 screening: Indicators are evaluated against the set of Tier 1 criteria. Indicators that reasonably meet these criteria are researched further; indicators that do not meet these criteria are eliminated from consideration. Some of the candidate indicators ruled out at this stage are ideas that could be viable indicators in the future (e.g., indicators that do not yet have published data or need further investigation into methods).

• Tier 2 screening: Indicators deemed appropriate for additional screening are assessed against the Tier 2 criteria, so that each remaining candidate is ultimately evaluated against the complete set of 10 criteria.
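The two-stage screening above amounts to a sequential filter: Tier 1 criteria narrow the candidate list, and only the survivors are assessed against Tier 2. The sketch below is purely illustrative (EPA does not publish screening code, and screening decisions involve expert judgment, not boolean checks); the criterion names come from the Tier 1 and Tier 2 lists, and the candidate records are hypothetical.

```python
# Illustrative sketch only (not EPA's actual tooling): two-tier screening
# of candidate indicators against the 10 selection criteria.

TIER_1 = [
    "Peer-reviewed data",
    "Feasible to construct",
    "Usefulness",
    "Understandable to the public",
    "Connection to climate change",
]
TIER_2 = [
    "Transparent, reproducible, and objective",
    "Broad geographic coverage",
    "Actual observations",
    "Trends over time",
    "Uncertainty",
]

def screen(candidates):
    """Return the candidates that reasonably meet all 10 criteria."""
    # Tier 1 screening: eliminate candidates that fail any Tier 1 criterion.
    tier1_pass = [c for c in candidates
                  if all(c["met"].get(k, False) for k in TIER_1)]
    # Tier 2 screening: assess the remaining candidates against Tier 2.
    return [c for c in tier1_pass
            if all(c["met"].get(k, False) for k in TIER_2)]

# Hypothetical candidates: one meets every criterion; the other lacks a
# well-defined metric, so it fails "Feasible to construct" at Tier 1.
candidates = [
    {"name": "River Flooding",
     "met": {k: True for k in TIER_1 + TIER_2}},
    {"name": "Community Composition",
     "met": {**{k: True for k in TIER_1 + TIER_2},
             "Feasible to construct": False}},
]

print([c["name"] for c in screen(candidates)])  # prints ['River Flooding']
```

Applying the inexpensive go/no-go checks first, as in the `tier1_pass` step, mirrors EPA's rationale for the tiers: it narrows the list before the more research-intensive criteria are applied.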
Information Sources

To assess each candidate indicator against the criteria, EPA reviews the scientific literature using numerous methods (including several online databases and search tools) to identify existing data sources and peer-reviewed publications. In cases where the candidate indicator is not associated with a well-defined metric, EPA conducts a broader survey of the literature to identify the most frequently used metrics. For instance, an indicator related to "community composition" (i.e., biodiversity) was suggested, but it was unclear how this variable might best be measured or represented by a metric. As noted above, to gather additional information, EPA contacts appropriate subject matter experts, including authors of identified source material, existing data contributors, and collaborators.

D: Indicator Selection

Based on the results of the screening process, the most promising indicators for the report are developed into proposed indicator summaries. EPA consults the published literature, subject matter experts, and online databases to obtain data for each of these indicators. Upon acquiring sound data and technical documentation, EPA prepares a set of possible graphics for each indicator, along with a summary table that describes the proposed metric(s), data sources, limitations, and other relevant information. Summary information is reviewed by EPA technical staff, and then the indicator concepts that meet the screening criteria are formally approved for development and inclusion in the report.

E: Indicator Development

Approved new and revised indicators are then developed within the framework of the indicator report. Graphics, summary text, and technical documentation for all of the proposed new or revised indicators are developed in accordance with the format established for the original 24 indicators in the 2010 indicators report.
An additional priority for development is to make sure that each indicator communicates effectively to a non-technical audience without misrepresenting the underlying data and source(s) of information. Regional features are developed in the same manner.

F: Internal and External Reviews

The complete indicator packages (graphics, summary text, and technical documentation) undergo internal review, data provider/collaborator review, and an independent peer review.

Internal Review

Report content is reviewed at various stages of development in accordance with EPA's standard review protocols for publications. This process includes review by EPA technical staff and various levels of management within the Agency.

Data Provider/Collaborator Review

Organizations and individuals who collected and/or compiled the data (e.g., the National Oceanic and Atmospheric Administration and the U.S. Geological Survey) also review the report.

Independent Peer Review

The peer review of EPA's 4th Edition report and technical supporting information followed the procedures in EPA's Peer Review Handbook, 4th Edition (EPA/100/B-15/001)5 for reports that do not provide influential scientific information. The review was managed by a contractor under the direction of a designated EPA peer review leader, who prepared a peer review plan, the scope of work for the review contract, and the charge for the reviewers. The peer review leader played no role in producing the draft report. Under the general approach of the peer review plan, the peer review consisted of 11 experts:

• The entire report was reviewed by three reviewers: one with general expertise in the field of climate change, one with expertise in climate-related health effects, and one general expert in communications and indicator typology.
• Eight subject-matter experts each reviewed selected indicators within their fields of expertise.
These experts had the following expertise: climate and hydrology, including streamflow and river flooding; surface water temperature; polar sea ice; snow cover; coastal flooding; marine phenology; climate and health, particularly heat-related illness and deaths; and climate-related water-, food-, and vector-borne diseases, particularly West Nile virus.

The peer review charge asked reviewers to provide detailed comments and to indicate whether the report (or any specific indicators), including the associated technical documentation (appendices), should be published (a) as is, (b) with changes suggested by the review, (c) only after a substantial revision necessitating a re-review, or (d) not at all. Nine reviewers answered (a) or (b), while two reviewers answered (c) related to the specific indicators they reviewed. Separately, a full report reviewer with expertise in climate and health effects suggested making significant clarifications to the introductory chapter with respect to EPA's approach for selecting indicators and the definition of an indicator for the purposes of this report.

5 U.S. EPA. 2015. EPA's peer review handbook. Fourth edition. EPA 100/B-15/001. www.epa.gov/osa/peer-review-handbook-4th-edition-2015.

One of the reviewers who answered (c) expressed concerns about the interpretation of the River Flooding indicator and its ability to discriminate between climate change and other significant factors that commonly affect flood flows (e.g., control structures, such as reservoirs and diversion structures that are designed to reduce peak flood discharges). This reviewer also noted that changes in watershed land use and land management can significantly increase or decrease the magnitude and frequency of flooding.
The other reviewer who answered (c) expressed concerns about the limitations of the data used in the Heat-Related Deaths and Heat-Related Illnesses indicators, specifically noting how the underlying death records significantly underrepresent the true numbers of deaths and hospitalizations that actually occur. EPA revised the report to address all comments and prepared a spreadsheet to document the response to each of the approximately 400 comments from the peer review. The revised report and EPA's responses were then sent for re-review to three reviewers: the two reviewers who had answered (c) for specific indicators and one full-report reviewer. The full-report reviewer was asked to review revisions that EPA made to clarify the indicator selection and evaluation process, as well as the section entitled "Understanding the Connections Between Climate Change and Human Health," which EPA revised substantially based on comments from several reviewers. The two indicator-specific reviewers concluded that EPA had adequately addressed all of the comments and critiques from the original review of the indicators in question—River Flooding, Heat-Related Deaths, and Heat-Related Illnesses—and that EPA could now publish these indicators as written. The full-report reviewer concluded that the revised sections of the report and technical appendix more robustly describe EPA's process for selecting and evaluating indicators for the report. This reviewer also concluded that the section entitled "Understanding the Connections Between Climate Change and Human Health" was significantly improved. The reviewer provided some additional minor suggestions, which EPA then addressed. EPA's peer review leader conducted a quality control check to ensure that the authors took sufficient action and provided an adequate response for every peer review and re-review comment. 
G: Periodic Re-Evaluation of Indicators

Existing indicators are re-evaluated to ensure they are relevant, comprehensive, and sustainable. The process of re-evaluating indicators includes monitoring the availability of newer data, eliciting expert review, and assessing indicators in light of new science. For example, EPA determined that the underlying methods for developing the Plant Hardiness Zone indicator that appeared in the first edition of Climate Change Indicators in the United States (April 2010) had significantly changed, such that updates to the indicator are no longer possible. Thus, EPA removed this indicator from the 2012 edition.

EPA re-evaluates indicators between editions of the report. EPA updated several existing indicators with additional years of data, new metrics or data series, and analyses based on data or information that have become available since the publication of EPA's 2014 report. For example, EPA was able to update the Heat-Related Deaths indicator with a more focused analysis of deaths due to cardiovascular disease. These and other revisions are described in the technical documentation specific to each indicator.

Summary of Changes to the 2016 Report

The table below highlights major changes made to the indicators during development of the 2016 version of the report, compared with the 2014 report. The 2016 report also differs from previous versions in that it does not show every piece of every indicator in print. Instead, to make the most efficient use of space, the 2016 report shows a condensed version of each indicator, featuring more concise introductory text, more concise "About the Indicator" text, and in some cases, only selected figures. EPA continues to maintain a complete version of every indicator on the Web, featuring more detailed text and a full complement of graphs and maps.
These Web versions also allow readers to explore the data through an increasing array of interactive tools. EPA updated these indicators on the Web in conjunction with the publication of the 2016 report.

Indicator (number of figures) | Change | Years of data added since 2014 report | Most recent data
U.S. Greenhouse Gas Emissions (3) | | 2 | 2014
Global Greenhouse Gas Emissions (3) | | 1 | 2012
Atmospheric Concentrations of Greenhouse Gases (5) | | 2 | 2015
Climate Forcing (2) | | 2 | 2015
U.S. and Global Temperature (3) | | 2 | 2015
High and Low Temperatures (6) | | 2 | 2016
U.S. and Global Precipitation (3) | | 3 | 2015
Heavy Precipitation (2) | | 2 | 2015
Tropical Cyclone Activity (3) | | 2 | 2015
River Flooding (2) | New indicator | | 2015
Drought (2) | | 2 | 2015
Ocean Heat (1) | | 2 | 2015
Sea Surface Temperature (2) | | 2 | 2015
Sea Level (2) | | 2 | 2015
Coastal Flooding (2) | New indicator | | 2015
Ocean Acidity (2) | | 2 | 2015
Arctic Sea Ice (3) | Expanded with new metric (timing of melt season); added March to monthly analysis | 3 | 2016
Antarctic Sea Ice (1) | New indicator | | 2016
Glaciers (2) | | 3 | 2015
Lake Ice (3) | | 3 | 2015
Snowfall (2) | | 2 | 2016
Snow Cover (3) | Expanded with new metric (timing of snow cover season) | 2 | 2015
Snowpack (1) | | 3 | 2016
Heat-Related Deaths (2) | Expanded with new metric (cardiovascular disease deaths) | 4 | 2014
Heat-Related Illnesses (3) | New indicator | | 2010
Heating and Cooling Degree Days (3) | | 2 | 2015
Lyme Disease (2) | | 2 | 2014
West Nile Virus (2) | New indicator | | 2014
Length of Growing Season (6) | Added three trend maps | 2 | 2015
Ragweed Pollen Season (1) | | 2 | 2015
Wildfires (5) | Expanded map figure into two figures | 2 | 2015
Streamflow (4) | | 2 | 2014
Stream Water Temperature (1) | New indicator | | 2014
Great Lakes Water Levels and Temperatures (2) | | 2 | 2015
Bird Wintering Ranges (2) | | | 2013
Marine Species Distribution (3) | New indicator | | 2015
Leaf and Bloom Dates (3) | | 2 | 2015

Discontinued Indicators

Plant Hardiness Zones: Discontinued in April 2012

Reason for Discontinuation: This indicator compared the U.S. Department of Agriculture's (USDA's) 1990 Plant Hardiness Zone Map (PHZM) with a 2006 PHZM that the Arbor Day Foundation compiled using similar methods. USDA developed6 and published a new PHZM in January 2012, reflecting more recent data as well as the use of better analytical methods to delineate zones between weather stations, particularly in areas with complex topography (e.g., many parts of the West). Because of the differences in methods, it is not appropriate to compare the original 1990 PHZM with the new 2012 PHZM to assess change, as many of the apparent zone shifts would reflect improved methods rather than actual temperature change. Further, USDA cautioned users against comparing the 1990 and 2012 PHZMs and attempting to draw any conclusions about climate change from the apparent differences. For these reasons, EPA chose to discontinue the indicator.

EPA will revisit this indicator in the future if USDA releases new editions of the PHZM that allow users to examine changes over time. For more information about USDA's 2012 PHZM, see: http://planthardiness.ars.usda.gov/PHZMWeb/. The original version of this indicator as it appeared in EPA's 2010 report can be found at: www.epa.gov/climate-indicators.

6 Daly, C., M.P. Widrlechner, M.D. Halbleib, J.I. Smith, and W.P. Gibson. 2012. Development of a new USDA plant hardiness zone map for the United States. J. Appl. Meteorol. Clim. 51:242-264.