Progress on LDAR Innovation
Report on Research Under CRADA #914-16
EPA Publication Number EPA/600/R-20/422
Revision Number: 0.8
January 28, 2021
Prepared by:
The LDAR Innovation CRADA Team
Published by:
U.S. Environmental Protection Agency
Office of Research and Development
Center for Environmental Measurement and Modeling
Research Triangle Park, NC 27709

Table of Contents
Figures	3
Tables	4
List of Appendices	5
Acknowledgments and Disclaimer	5
Executive Summary	6
1.0 Introduction	6
1.1	LDAR Innovation CRADA - Background and Status	6
1.2	Report Organization	7
1.3	CRADA Research Objectives	8
1.4	Executive View of LDSN/DRF	9
1.4.1	The LDSN - Current Sensor Used	10
1.4.2	The LDSN/DRF Process	11
2.0 Refinery LDAR Innovation	12
2.1	Current Work Practice - Comprehensive M21	12
2.1.1	Overview of M21	12
2.1.2	Strengths of M21-based CWP	13
2.1.3	Weaknesses of M21-based CWP	14
2.2	Comparing the CWP to Proposed Alternatives	15
2.2.1	Comparison of CWP and LDSN/DRF	15
2.2.2	Alternate Work Practice - OGI	16
2.3	CRADA Scientific Approach	17
3.0 LDSN/DRF Development and Testing	19
3.1	Exploratory Research	20
3.2	LDSN/DRF Development 1	24
3.3	FHR SLOF Pilot Test	31
3.4	LDSN/DRF Development 2	35
3.5	FHR CC Pilot Test	38
4.0 LDSN/DRF to CWP Equivalency	43
4.1	Historical LDAR Monitoring Data - m-Xylene and Mid-Crude	44
4.2	Emissions Equivalency Analysis Approach	47
4.2.1	LeakDAS Data Processing	47
4.2.2	Emissions Calculations Methods	49
4.2.3	Monte Carlo Simulation	53
4.3	Simulation Results	56
4.4	Additional LDSN/DRF Benefits	60
4.5	Conclusions	61
5.0 LDSN/DRF Methods and Quality Assurance	61

5.1	LDSN/DRF Monitoring Plan Design	62
5.2	LDSN Node Pre-deployment and In-field QA Testing	64
5.3	The DRF Procedure	66
5.4	Ongoing QA of LDSN	69
5.5	Independent QA Audits of LDSN/DRF	71
6.0 Summary and Future Work	73
6.1	Major Accomplishments and Findings to Date	73
6.2	Development of an LDSN/DRF OTM	74
6.3	System Design Criteria and Method Standardization	75
6.4	Emission Reporting and Permit Representations	76
6.5	The Power of LDSN/DRF - Forward Research	77
7.0 Acronyms and Acknowledgments	78
8.0 References and Endnotes	81
Figures
Figure 1-1. Overview of the LDSN/DRF infrastructure	9
Figure 1-2. Overview of the LDSN/DRF system	10
Figure 1-3. Overview of LDSN based on fixed-place 10.6 eV PIDs	11
Figure 1-4. Overview of the LDSN/DRF process and DRF equipment	11
Figure 2-1. (a) and (b) M21 gear weighing 39.5 lbs, (c) donned gear with thermal sleeves	14
Figure 2-2. Examples of commercial OGI cameras	17
Figure 2-3. Example of 10.6 eV PID relative response	18
Figure 3-1. Sequence of LDSN/DRF development and testing	19
Figure 3-2. Exploratory trials at EPA ORD test range	20
Figure 3-3. Example of early collocated sensor node test	21
Figure 3-4. Example of a network detection	23
Figure 3-5. Evolution of LDSN prototypes	24
Figure 3-6. Illustration of distance effects of detection	25
Figure 3-7. Controlled release ERs, M21 values and DT band	26
Figure 3-8. DT bands for FHR CC m-Xylene and Mid-Crude	29
Figure 3-9. Leak clusters in historical CWP data	29
Figure 3-10. Overview of the FHR SLOF pilot test	32
Figure 3-11. Fin-fan leak behavior overnight	34
Figure 3-12. An early mSyte™ detection in complex background	34
Figure 3-13. Molex's pre-production sensor - 2019	35
Figure 3-14. Molex's mSyte™ data platform	36
Figure 3-15. Leak detection on mSyte™ (a) sensor signal, and (b) PSL box	36

Figure 3-16. LDAR technician demonstrating the mSyte™ mobile device	37
Figure 3-17. mSyte™ leak investigation dashboard	38
Figure 3-18. View of Mid-Crude and m-Xylene process units	39
Figure 3-19. Examples of DRF leak quantification	39
Figure 3-20. FHR CC pilot test summary	41
Figure 4-1. Historical LDAR leak apportionment by component category, SVs, and emissions .45
Figure 4-2. FHR CC historical leak profiles (a) SV distribution, (b) cumulative emissions	46
Figure 4-3. Historical AVO leak identification: a) Mid-Crude and (b) m-Xylene	47
Figure 4-4. LeakDAS data processing methods	48
Figure 4-5. EPA ORD Monte Carlo emissions model	54
Figure 4-6. Molex Monte Carlo emissions model	54
Figure 4-7. Example EPA ORD Simulation of daily emissions: (a) Mid-Crude, (b) m-Xylene...55
Figure 4-8. Example EPA ORD simulation of cumulative emissions: (a) Mid-Crude, (b) m-
Xylene	56
Figure 4-9. LDSN/DRF simulated net emissions compared to M21 LDAR (a) Mid-Crude and (b)
m-Xylene	58
Figure 4-10. Emissions scenario leak repairs (a) Mid-Crude, (b) m-Xylene	59
Figure 4-11. Pegged leak relation to LDSN/DRF equivalency (a) Mid-Crude, (b) m-Xylene	60
Figure 5-1. Key elements of a fugitive emission management plan	63
Figure 5-2. Example of LDSN node placement in FHR CC Mid-Crude	64
Figure 5-3. LDSN fabrication and implementation procedures	64
Figure 5-4. LDSN node bump test QA check	65
Figure 5-5. Basic DRF procedures and mobile interface	67
Figure 5-6. DRF in action on a leak found under insulation	69
Figure 5-7. Example of LDSN data review: (a) meteorological data, (b) LDSN data	70
Figure 5-8. Assessment of Molex LDSN data with an open-source approach	72
Figure 6-1. Typical sensor layout plan	76
Figure 6-2. Example of a TE detection	78
Tables
Table 2-1. Comparison of CWP and LDSN/DRF attributes	16
Table 2-2. Select 10.6 eV PID RFs	19
Table 3-1. Summary of LDSN/DRF detection sensitivity terms	31
Table 3-2. Comparison of M21 SVs leaks detected by CWP and LDSN/DRF	42
Table 4-1. Description of EPA ORD modeling terms	50
Table 4-2. Description of Molex modeling terms	50
Table 4-3. Equivalency simulation results	57
Table 5-1. Summary of LDSN data review for Figure 5-6	71

Table 7-1. Acronyms and abbreviations	78
Table 7-2. Acknowledgment of contributions	81
List of Appendices
Appendix A: Exploratory Tests	A1-A51
A1: RTP Test 1 Summary	A2-A16
A2: RTP Test 2 Summary	A17-A51
Appendix B: FHR SLOF Tests	B1-B55
B1: SLOF Test 3.1 Summary	B2-B26
B2: SLOF Test 3.2 Summary	B27-B55
Appendix C: FHR CC Tests	C1-C59
C1: LDSN Node Layouts for FHR CC Tests	C2-C13
C2: Summary of LDSN PSLs and DRF SVs	C14-C48
C3: Revised LDSN Node Layouts for FHR CC	C49-C59
Appendix D: Test Methods and Procedures	D1-D68
D1: EPA Method 21 Procedures	D2-D8
D2: FHR TVA 1000-B FID Calibration Procedure	D9-D18
D3: Phx42™ FID Use and Calibration Procedures	D19-D35
D4: MiniRAE 3000 and Cub PID Information and Procedures	D36-D44
D5: Optical Gas Imaging Procedures	D45-D58
D6: Node Field Test Procedures (Sensor Bump Test and Cal.)	D59-D62
D7: Simulated Leak Test Gas Release Procedures	D63-D68
Appendix E: EPA ORD Equivalency Simulations	E1-E41
E1: EPA ORD Simulation Approach	E2-E28
E2: EPA ORD Software Code	E29-E40
E3: Simulation Files	 E41-E41
Appendix F: Molex Equivalency Simulations	F1-F43
F1: Molex Simulation Approach	F2-F32
F2: FHR CC Test Unit Leak Cluster Simulation and Analysis	F33-F43
Acknowledgments and Disclaimer
As detailed in the Acronyms and Acknowledgments section, this work has many contributors.
Funding and in-kind support for this project were provided by the U.S. Environmental Protection
Agency (EPA), Office of Research and Development (ORD), Flint Hills Resources (FHR), and
Molex under cooperative research and development agreement 914-16. This document has been
reviewed in accordance with EPA policy and approved for publication. Approval does not
signify that the contents reflect the views of the Agency, nor does mention of trade names or
commercial products constitute endorsement or recommendation for use. Any mention of trade
names, products, services, or enterprises does not imply an endorsement by EPA, FHR, or
Molex.

Executive Summary
Under a collaborative research agreement, FHR, Molex, and EPA ORD developed and tested a
fugitive leak detection approach that provides environmental, safety, and cost saving advantages
over the current manual EPA Method 21 inspection procedure. The automated and continuous
Leak Detection Sensor Network system, along with optimized facility response protocols,
enables leaks to be detected and repaired faster and more efficiently than with quarterly or
annually executed Method 21. This report reviews the strengths and weaknesses of sensor-based
leak detection approaches and describes the development and testing process that culminated in
long-term trials in working FHR refinery process units. The real-time analytics of Molex's
mSyte™ sensor information system helped the facility repair team discover and assess a range of
leak sizes, many with emission levels well below those routinely detectable with other next-gen
survey approaches, such as optical gas imaging. In addition to leaks associated with routinely
monitored components, unexpected emission sources not detectable by other approaches were
also found, illustrating the value of the 24/7 area-monitoring concept. Multiyear simulated
emission modeling based on real-world leak detection data showed that the new sensor-based
approach can provide emissions control equivalent to or better than Method 21 at cost-realizable
sensor network node densities. In addition to development, testing, and emission modeling
results, this report describes quality assurance advantages of the developed approach and outlines
areas for potential future research.
1.0	Introduction
This introductory section provides an executive view of this research program and this report.
1.1	LDAR Innovation CRADA - Background and Status
In the United States (U.S.), volatile organic compound (VOC) and hazardous air pollutant
(HAP) industrial fugitive emissions (leaks) from process components have traditionally been
managed using scheduled manual inspection of individual components with U.S. Environmental
Protection Agency (EPA) Method 21 (M21) as part of leak detection and repair (LDAR).1-3
Although sensitive and flexible, M21 is resource intensive, requiring equipment-laden operators
to physically inspect and document a vast number of components, searching for the approximately
1-2% that are leaking (FHR data believed typical of refineries). Due to the implementation
burden of M21-based LDAR, it is conducted infrequently (quarterly to annually), creating the
potential for emissions to go undetected for extended periods of time. Additionally, M21 LDAR
programs are not designed to comprehensively monitor all potential fugitive emission points in a
facility. This increases the risk that unintended emissions go undiscovered indefinitely, or until
they worsen to the point of detection by audio-visual-olfactory (AVO) procedures or, in more
serious cases, by safety monitors.
Emerging leak detection assessment approaches, such as optical gas imaging (OGI)4-8 and
remote sensor systems,9-14 may form the basis for more efficacious fugitive emission
management strategies in a range of industrial applications. As a driving motivation for facilities
such as refineries and chemical plants, faster and more cost-effective detection and repair of

fugitive leaks and other unintended emissions directly reduces air pollution and associated
impacts.15-19 Better LDAR can save companies money through improved operational
performance, reduced and better-targeted maintenance, and promotion of safer working environments.
Regulators benefit from development of new LDAR systems that further understanding,
identification, and documentation of overall emissions and emissions sources, enabling optimal
control strategies. Future congruent stakeholder benefits, such as enhanced transparency and
community relations, can also be envisioned.
Under its next generation emission measurement (NGEM) program, EPA ORD's Center for
Environmental Measurement and Modeling (CEMM) works collaboratively with industry,
regulators, academic groups, and innovative companies to explore new air pollutant source
management strategies made possible by advancing technologies. In 2015, representatives from
FHR, a refinery operator, approached EPA ORD regarding potential collaboration on cutting-
edge fugitive emission detection research. In 2017, EPA ORD CEMM,20 FHR, and Molex (a
sensor/connector company) initiated work under a cooperative research and development
agreement (CRADA)21 to explore new NGEM-based emission detection techniques for
petrochemical and other facilities.
As of December 2019, the "LDAR Innovation CRADA" (EPA #914-16) has successfully
developed and tested a first-of-its-kind refinery leak detection sensor network (LDSN) that
operates with specialized facility procedures defined in a detection response framework (DRF) to
produce an integrated emission monitoring and documentation system. The LDSN/DRF
prototype system has demonstrated the potential for improved fugitive emission detection and
mitigation capability for select applications in pilot deployments in working petrochemical
facilities. This report summarizes work under the CRADA and presents the scientific and
business cases for LDSN/DRF as an alternative to M21-based LDAR. The report outlines what is
known and what remains to be understood regarding this novel fugitive emissions detection
approach.
1.2 Report Organization
The primary purpose of this report is to describe research and development activities and results
since project inception, covering an approximate 2.5-year period that represents the midway
point for a typical five-year CRADA. This report is organized into the following sections that
sequentially discuss progress against the CRADA objectives outlined in Section 1.3.
Sections 1 and 2 of this report outline technical milestones and provide an overview of the
innovative LDAR technology developed under the CRADA. The advantages and disadvantages
of M21 and other forms of leak detection are described to provide context for the developed
approach. These sections briefly discuss the necessary components of the LDSN/DRF system,
including sensor hardware, informatics, data management, and procedures for assessing and
documenting emissions and quality assured operation.
Section 3 of this report summarizes technology development and testing performed to date. With
support from Appendices A and B, Section 3 discusses test methods and LDSN/DRF
development steps in the context of key findings from field trials. The concept of a detection
threshold (DT) performance band for the LDSN/DRF approach in the context of refinery process
unit applications is presented and discussed.

Section 4 and Appendices E-G explore the environmental performance of the LDSN/DRF
system compared to the current work practice (CWP) for the FHR process units studied. Section
5 outlines the method quality assurance/quality control (QA/QC) procedures needed to assure
positive environmental and business benefits for the process units studied. Section 6 briefly
summarizes major accomplishments to date and discusses additional attributes and challenges of
the concept and the work needed to further develop a transferable performance-based method.
Project acknowledgments and a list of acronyms are provided in Section 7 with cited references
and endnotes contained in Section 8.
1.3 CRADA Research Objectives
This CRADA has both near-term objectives (NTOs) and longer-term objectives (LTOs), listed
below. The former focuses on determination of the basic feasibility of an innovative LDAR
approach, whereas the latter evaluates standardization, transferability, and the ultimate use cases
of the same. This report focuses on progress against key NTOs and outlines potential for
continued work against LTOs.
•	NTO 1: Investigate candidate approaches for the LDAR Innovation CRADA and select
one for development and testing. This objective was met with the selection of a fixed-place
multi-node sensor network approach, called LDSN/DRF [progress ●●●●●].
•	NTO 2: Through progressive design iterations and testing, develop one embodiment (one
sensor type) of the LDSN/DRF system and evaluate implementation feasibility in multi-
month pilot testing in working FHR process units [progress
•	NTO 3: Establish LDSN/DRF emission control performance in comparison to M21 [the
CWP] in the FHR pilot test process units [progress ●●●●○].
•	NTO 4: Establish the ongoing method, QA, and documentation requirements necessary to
ensure ongoing validated performance of the specific embodiment of the LDSN/DRF in
the FHR pilot test process units [progress ●●●○○].
•	LTO 1: Understand the method requirements and regulatory hurdles for creation of a
standardized, performance-based LDSN/DRF approach for possible consideration as an
alternative work practice (AWP) to the M21-based CWP [progress ●●●○○].
•	LTO 2: If feasible, deliver a standardized performance-based method for LDSN/DRF22
[progress ●●○○○].
•	LTO 3: Understand the potential range of applications and additional benefits (e.g.
improved emission inventory knowledge) that may be realized from this NGEM
approach [progress ●●○○○].
•	LTO 4: Disseminate non-proprietary technical learning established in this CRADA by
publishing aspects of this research as part of scientific conferences and in peer reviewed
journal articles and reports [progress ●●○○○].

1.4 Executive View of LDSN/DRF
Figures 1-1 and 1-2 provide conceptual overviews of the LDSN/DRF infrastructure and systems,
the primary product of this CRADA. In brief, sensitive leak detection sensor nodes are deployed
at strategically selected positions inside a facility process unit to form the LDSN system. A web-
based analytics platform (called mSyte™) automatically acquires and analyzes real-time data
from the network of wireless sensors, along with wind information and facility metadata, to
identify, locate, and categorize emissions of interest (Figure 1-1). Together, this hardware and
cloud-based software is called the LDSN.
An equally important part of this innovative LDAR approach is the DRF (lower right of Figure
1-2). The DRF represents the procedures facility personnel use both to respond to information
delivered by the LDSN and to provide important metadata and QA inputs to the LDSN to ensure
ongoing performance. The interaction of the machine (the LDSN) and the human (the DRF) is
facilitated by mobile device technologies (cell phones and tablet interfaces) that are viewable by
personnel while working in the unit. The DRF and the LDSN work together to locate, assess, and
produce auditable records of discovered leaks and associated repairs under the innovative LDAR
program. Tools like M21 and OGI are part of the DRF and assist in identifying and assessing
emissions. The LDSN/DRF focuses on timely detection of larger emissions sources, facilitating
more rapid mitigation and leading to emissions reductions.
Figure 1-1. Overview of the LDSN/DRF infrastructure

Figure 1-2. Overview of the LDSN/DRF system [sensor nodes and analytics (leak detects, records, QA) forming the LDSN, the LDSN/facility interface, and the Detection Response Framework (DRF)]
1.4.1 The LDSN - Current Sensor Used
Leveraging advancements in information technology and innovative manufacturing, the
performance of gas sensors is improving over time, while the cost for integration into business
operations is steadily decreasing. Current and future LDSNs can utilize different types of sensors
to optimally monitor for leaks and malfunctions in specific applications (e.g. a methane sensor
for natural gas). The mode of sensor deployment may include fixed networks or mobile systems
(e.g. airborne drone, work truck, or next-gen personal safety monitors), and systems may be
point sensors or utilize extended open-path optical beams.
For this project, an LDSN approach consisting of a fixed-place network of 10.6 eV
photoionization detectors (PIDs) was selected for development and testing. This specific
embodiment of an LDSN (Figure 1-3) was chosen based on an analysis of the current state of
VOC sensor technology (near-term ready), the expected performance of the envisioned system,
and projected development and implementation cost factors. In this project a new high sensitivity
wireless sensor node suitable for deployment in hazardous environments was developed
[Figure 1-3(a)]. The pumpless PID sensor node passively samples air in the process unit,
producing a concentration measurement once every second. The PID ionizes certain air pollutant
molecules [Figure 1-3(b)] that are emitted from leaks and malfunctions, producing a time-
resolved measure with detected excursions above baseline indicating proximate emission events
[Figure 1-3(c)]. The automated LDSN system collaboratively analyzes sensor and wind data,
reporting detection and location estimates through the DRF communication process.

Figure 1-3. Overview of LDSN based on fixed-place 10.6 eV PIDs [(a) the current LDSN sensor node, (b) schematic view of the PID sensor, (c) example of a leak detection signal from a node near the leak and a node upwind of the leak]
1.4.2 The LDSN/DRF Process
An effective LDSN/DRF system must meet technical performance requirements including robust
automated LDSN operation and requisite leak detection capability at cost-realizable node
densities. Additionally, there are other requirements related to the intelligence of the system,
such as the way it communicates with and is utilized by the facility, that are equally important.
As detailed in this report, the LDSN/DRF is part of an overall operational process that brings in
necessary metadata, personnel interactions, record keeping, supporting measurements, and site-
specific factors. In this manner, both business processes and environmental objectives can be
optimized. A schematic representation of the LDSN/DRF is shown in Figure 1-4.
Figure 1-4. Overview of the LDSN/DRF process and DRF equipment (LDSN monitors, DRF investigates and repairs, LDSN documents; DRF equipment includes M21 gear)

The LDSN system automatically detects, categorizes, and approximates the location of emissions
in the monitored process unit based on VOC and meteorological measurements. The LDSN
notifies the facility LDAR team of detected emissions, who then take appropriate action under
the DRF. A detected emission that may represent a potential safety concern is communicated
quickly through an alert and/or alarm and requires immediate attention (an augmentation to
existing safety systems). Other more routine emissions are detected and communicated over time
(hours to days), with repeating detected concentration peaks [Figure 1-3(c)] under varying wind
directions building confidence in the assignment of an emission event. Once a notification
threshold is reached, a potential source location (PSL) window is assigned. With information
from a mobile device, the LDAR personnel respond, per the DRF, to investigate the PSL and
identify (pinpoint) the source(s), first with "fast survey gear" that is easily deployed. The
identified source is then measured with calibrated M21-gear to document the leak's peak
concentration value and initiate repair procedures. Information is efficiently entered into the
mobile device and recorded for regulatory, operational, and QA purposes. Other metadata, such
as maintenance activities, are entered into the system to facilitate intelligent LDSN operation.
2.0	Refinery LDAR Innovation
This section describes the strengths and weaknesses of the CWP (Section 2.1), compares CWP and
LDSN/DRF attributes (Section 2.2), and outlines the scientific approach used in this CRADA to
investigate this innovative LDAR monitoring approach (Section 2.3).
2.1	Current Work Practice - Comprehensive M21
The first step to improve LDAR monitoring is to understand the strengths and weaknesses of the
CWP. Comprehensive manual M21 is used in multiple industrial sectors and is a required work
practice in over 45 U.S. federal and numerous state regulations. Since one of the NTOs of this
CRADA centered on FHR operations, this report focuses on petrochemical refining facilities.
Alternative monitoring to Manual M21 under a CWP for other source categories (e.g. chemical
facilities that process/store high toxicity compounds) may vary in implications and applicability.
2.1.1 Overview of M21
Methane, VOCs, and organic HAPs can be emitted from a variety of sources in refineries.
Known and/or permitted emissions can originate from stacks, tanks, vents, and other sources as
part of normal operations. Unintended emissions from various sources can also occur and remain
unknown until detected and assessed. This source category includes fugitive emissions from
potential sources that are typically part of facility LDAR programs under the CWP, with
emissions originating from potential leak interfaces on process equipment and components.
Valve packing, pump seals, compressor seals, and flange gaskets are examples of common
potential leak interfaces that are periodically monitored with M21. Other unintended sources
include potential emissions from components or equipment that are not routinely monitored as
part of LDAR programs or from process malfunctions. Although this latter category of sources
may be considered fugitive in nature, they are typically not considered to be fugitive emissions
under LDAR program definitions and are accounted for in other ways for regulatory, permitting
and inventory purposes.

A U.S. refinery may be subject to a combination of federal, state, and local regulations that
require LDAR programs for fugitive emissions control. The work practices are designed to
identify leaking equipment so that emissions of VOCs and HAPs can be reduced through
effective repairs. Although site-specific compliance requirements can be complex, each LDAR
regulation/work practice essentially consists of periodic direct (manual) inspection of LDAR
program components in light liquid or gas vapor VOC service [as per 40 CFR Part 60 Appendix
A-7 (M21)1-4] to find fugitive leaks, followed by repair and re-inspection within a specified time
frame. Maintaining records necessary to demonstrate compliance and reporting is a key part of
every LDAR program.
The primary tool used in M21 is an extractive hand-held probe, a portable instrument that
determines the effective concentration of the emitted gas stream at the leak interface. The M21
probe is calibrated daily and must be sufficiently sensitive to the potentially emitted compounds
and meet other method requirements, such as response time, linearity, and flow rate. To execute
M21, an LDAR technician carries the M21 instrument to the inspection point and places the
extractive probe in direct contact with the component under test and slowly traces its
circumference to check the potential leak surface (or leak interface). The technician must dwell
an appropriate amount of time (seconds) to register a reading of leak concentration. The M21
peak leak concentration, also called screening value (SV), is the measured air mixing ratio of
combustible fraction [for a flame ionization detector (FID)-based unit] or ionizable fraction [for
a 10.6 eV PID-based unit] of the emitted stream as determined by the M21 hand-held probe and
can be affected by several factors.15 If the highest M21 concentration reading (its SV) for the
inspected component is found to be above an action control limit set by regulation or permit
[typically 500 to 10,000 parts per million by volume (ppmv) on a specific calibration compound
basis], the component is defined as "leaking" and is tagged for repair. Typically, an FID-based
hand-held probe is calibrated to methane whereas a PID-based unit is calibrated to isobutylene.
The instrument response for a given compound is expressed in relation to the calibration gas
used.
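As a minimal illustration of the leak-definition logic just described, the Python sketch below flags a component for repair when its M21 screening value exceeds the applicable action level. The 10,000 ppmv default and the record fields are assumptions for illustration; actual action levels are set by the governing regulation or permit.

def evaluate_m21_reading(sv_ppmv, action_level_ppmv=10000.0):
    """Return a simple M21 inspection record for one component.

    sv_ppmv: highest concentration reading (screening value) at the leak
        interface, on the calibration-compound basis (methane for FID
        units, isobutylene for PID units).
    action_level_ppmv: leak definition set by regulation or permit
        (typically 500 to 10,000 ppmv); 10,000 ppmv is only a default here.
    """
    leaking = sv_ppmv > action_level_ppmv
    return {
        "screening_value_ppmv": sv_ppmv,
        "action_level_ppmv": action_level_ppmv,
        "leaking": leaking,
        "action": "tag for repair" if leaking else "record reading, no action",
    }

# Example: a component screened at 12,500 ppmv against a 10,000 ppmv leak definition
print(evaluate_m21_reading(12500.0))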
2.1.2 Strengths of M21-based CWP
There are several inherent method strengths to the use of M21 in refineries (listed below). The
final benefit cited is derived from the prescription for use of M21 under the CWP.
•	M21 is a manually implemented physical contact inspection procedure that provides
rapid feedback to the LDAR technician as the probe is moved around the leak surface. In
this manner, the origin (the cause) of the component's emission can usually be
determined, providing valuable information for the repair (if required).
•	Because M21 uses a sensitive and calibrated hand-held probe, the method can deliver
relatively reproducible SV assessments against a specific leak definition (when
executed by well-trained and diligent operators) over a wide range of leak interface
concentration values.
•	The SV at the leak interface recorded by M21 can be used to estimate the mass emission
rate (ER) of the leak through established correlation equations (illustrated in the sketch
following this list) for use in fugitive emissions inventories and other regulatory reporting purposes.2

•	Since M21 is required by many federal and state regulations, the business ecosystem
supporting the vast record keeping associated with its execution is well-developed.
•	M21 is an established technique used by different industries providing common points of
comparison and useful historical performance data to facility operators and regulators.
•	Under the CWP, M21 is comprehensively applied to all the LDAR program components
in a facility on at least an annual basis. The primary purpose for this is to attempt to
ensure that leaking components are found and fixed within (at least) a one-year time
horizon (components such as valves are typically inspected quarterly). As an additional
benefit, this indiscriminate application of M21 to the complete set of LDAR program
components provides population data for emissions inventory purposes.
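The correlation-equation step referenced in the list above can be sketched as follows. The power-law form ER = a * SV^b is the general shape of EPA equipment-leak correlations, but the coefficient values in this Python sketch are placeholders for illustration only; the applicable coefficients for a given component type and service come from the EPA protocol referenced in that bullet.

def correlation_emission_rate(sv_ppmv, a, b):
    """Estimate a leak's mass emission rate (kg/hr) from its M21 screening
    value using the power-law correlation form ER = a * SV**b. The
    coefficients a and b are component- and service-specific and must be
    taken from the applicable EPA correlation tables (not reproduced here)."""
    return a * sv_ppmv ** b

# Illustrative use with placeholder coefficients (not actual EPA values):
A_PLACEHOLDER, B_PLACEHOLDER = 1.0e-6, 0.75
print(round(correlation_emission_rate(12500.0, A_PLACEHOLDER, B_PLACEHOLDER), 5), "kg/hr")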
2.1.3 Weaknesses of M21 -based CWP
The M21-based CWP for refineries has several technical and business-related weaknesses that
should be understood in the context of an improved LDAR monitoring approach.
•	Due in part to the implementation burden of the M21-based CWP, it is executed
periodically (e.g. quarterly or annually), depending on component type. Because of this
high temporal latency, it is possible for large leaks to emerge between M21 inspection
cycles and go undetected for weeks to years.
•	There is potential for unintended emissions to occur which originate from components,
equipment, or malfunctioning processes that are not part of the CWP LDAR monitoring
program and, therefore, can go undiscovered for extended periods of time.
•	The accuracy of fugitive leak mass ER estimates based on M21 concentration
measurements at the leak interface is relatively modest.
•	M21 is physically demanding, requiring equipment laden LDAR technicians (Figure 2-1)
to frequently work in challenging environmental conditions and access hard to reach
locations. This creates safety concerns for LDAR Technicians and additional expenses
for companies.
Figure 2-1. (a) and (b) M21 gear weighing 39.5 lbs, (c) donned gear with thermal sleeves
• The daily M21 inspector's job is difficult, and turnover rates are among the highest in a
facility. Variation in M21 application can be a concern, leading to high oversight and
training costs.

•	The calibration and record keeping aspects of the CWP are significant, creating
thousands of opportunities for defects and associated monetary and reputational impacts.
The M21-based CWP represents an inefficient business practice. For FHR, for example,
overall LDAR program monitoring expenditures for facilities are measured in the
millions of dollars per year with fully accounted costs ranging up to $8 per component
per monitoring event in some cases. A weakness of the CWP is the unnecessary
expenditure of resources on non-leaking components. Since typically over 98% of
individual M21 inspections result in "no leak" findings requiring no action (other than
record keeping), the strategy of progressive inspection through the complete set of LDAR
program components in the uninformed search for the few requiring attention is
inefficient (produces no environmental benefit and yet has significant cost) and produces
personnel safety risk.
•	The M21-based CWP emissions control strategy is not optimized due to its systematic
assessment of all components. The vast majority of emissions from LDAR program
components originate from a small population of high emitters.23 Monitoring techniques
that focus on more timely identification and repair of high emitters should support more
effective control strategies.
The M21-based CWP is difficult to interface with data management and reporting
software, resulting in multiple touchpoints and opportunities for data loss.
2.2 Comparing the CWP to Proposed Alternatives
Any successful AWP must compare favorably with the CWP in two respects. The AWP must be
equivalently protective of the environment. In this case, an alternative NGEM-based LDAR
monitoring approach and associated facility response must control fugitive air pollutant
emissions at least as well as the CWP. Equally important, a successful AWP must also make
sense from a business perspective, for if the proposed alternative is more expensive or
burdensome to execute, facilities will choose the CWP. In Section 2.2.1, these factors are
discussed by comparing select attributes of the CWP previously described to the envisioned
LDSN/DRF approach. Section 2.2.2 provides further context on the comparison by briefly
examining the current OGI-based AWP and why that LDAR alternative has not been widely
adopted by industry.
2.2.1 Comparison of CWP and LDSN/DRF
For the purposes of the following comparison (Table 2-1), it is assumed that the specific
LDSN/DRF implementation has been shown to produce equivalent (or better) mass emissions
reductions compared to the CWP (discussed in Section 4). In addition to fugitive control, several
other technical benefits and cost factor attributes are discussed.

Table 2-1. Comparison of CWP and LDSN/DRF attributes

CWP attribute: A CWP technical strength is the use of well-established M21 procedures to produce actionable SVs against leak definitions, to derive mass emission estimates for leaks, and to inform leak repairs.
LDSN/DRF attribute: This is also a technical strength for LDSN/DRF since M21 procedures are used to characterize the emissions from leaks discovered by the system.

CWP attribute: A CWP technical strength lies in the use of the sensitive M21 physical contact inspection procedure to examine LDAR program components in order to identify and document both actionable leaks and small to moderate emissions that are ubiquitously present.
LDSN/DRF attribute: This is a technical weakness for LDSN/DRF (and other NGEM approaches like OGI). Since LDSN is not a physical contact inspection procedure, it is less sensitive to lower-level emission rate leaks and cannot directly identify the leak source.

CWP attribute: A secondary benefit of the CWP is derived from the comprehensive (100%) manual inspection of all LDAR program components. The large dataset thus produced is useful for fugitive emission inventory estimation.
LDSN/DRF attribute: Because LDSN/DRF does not require 100% manual inspection of all LDAR program components, other emission inventory approaches are required. One such approach is discussed in later sections of this report.

CWP attribute: A technical weakness of the CWP is the delayed discovery and repair of actionable leaks for LDAR program components due to the periodic nature of the inspection procedure (increasing emissions).
LDSN/DRF attribute: A technical strength of LDSN/DRF lies in the continuous discovery and timely mitigation of significant emissions from LDAR program components (reducing emissions).

CWP attribute: A technical weakness of the CWP is that it typically results in identification and control of emissions only from LDAR program components.
LDSN/DRF attribute: As an added benefit to an LDAR program, a technical strength of LDSN/DRF lies in the continuous discovery and timely mitigation of significant emissions from non-LDAR program components (reducing emissions).

CWP attribute: A business weakness of the CWP is the requirement for comprehensive (100%) documentation and inspection of a vast number of LDAR program components using a resource-intensive manual inspection method in an uninformed search for a typically small number (<2%) of leaking components.
LDSN/DRF attribute: A business strength of LDSN/DRF is that resources are focused on timely discovery and mitigation of significant emissions using continuous information gathering and optimized response, providing both operational and safety benefit.
2.2.2 Alternate Work Practice - OGI
For the past decade, OGI has played an important role in LDAR innovation.4-8 In this CRADA,
OGI is used as a supporting tool under the DRF to help facility technicians locate and assess
emissions that are detected by the LDSN. OGI systems work in a sub-region of the infrared (IR)
spectrum where molecules that comprise hydrocarbon emission plumes absorb and emit energy
in the form of IR photons. The most widely used OGI camera is the FLIR® GF320 (FLIR
Systems, Billerica, MA), which operates in the 3.2 to 3.4 μm region (Figure 2-2, far left). The
OGI camera operator looks at a scene containing the LDAR component(s) under inspection and
observes the IR photons emitted from objects in the background, the component(s), and by the
gas itself (if emitted). If a fugitive or vented gas emission is present above detection limit, the
molecules in the plume can net absorb IR radiation (or net emit if the gas is hotter than the
scene), producing a discernable contrast in the image that makes the normally invisible plume
visible. The emitted gas must have IR spectral features detectable by the OGI camera utilized to
be observed.

Figure 2-2. Examples of commercial OGI cameras
In 2008, EPA promulgated an LDAR AWP4 that allows use of OGI on a bimonthly inspection
schedule in lieu of M21 for certain applications. Although facilities that choose to adopt the
current OGI AWP can reduce the amount of M21 executed, full application of the CWP's M21
manual inspection approach is still required under the AWP once per year.
From the regulatory development perspective, there were likely several reasons why annual M21
remained a requirement under the OGI AWP. Some of these reasons may have been related to
the risk tolerance of EPA, showing caution with full adoption of a new technology where
methods and performance assessments were not fully mature. Other reasons may have been
related to perceived weaknesses in the mass emission reduction equivalency calculations and
hesitancy to abandon established procedures with clear secondary benefits associated with
emission inventory determination, discussed in Table 2-1. Regardless of the underlying reasons,
a decade later it is now clear that annual M21 and associated documentation requirements
significantly reduced the business case for OGI-AWP adoption. As a consequence, the OGI-
AWP is little used by the refinery sector. Some lessons learned from the OGI-AWP process are
considered in the discussion and formulation of the proposed LDSN/DRF alternative.
2.3 CRADA Scientific Approach
To achieve CRADA NTOs 1-4 (Section 1.3), significant sensor and system engineering
advancement was required. Early engineering development was complicated by the lack of
established knowledge regarding remote detection of emissions in complex industrial
environments (e.g. refinery process units). For this reason, an observational research approach
was adopted whereby progressively more complex field testing using steadily advancing LDSN
prototypes allowed the parameter space to be efficiently explored, sensor design decisions to be
evaluated, and QA methods to be developed and refined (Section 3).
One of the key scientific and engineering questions for the LDSN/DRF approach centered on the
ability of the developed system to robustly detect leaks of various sizes in process units, with
acceptably low false positive rates. Due to its remote observation nature, factors affecting
detection sensitivity for LDSN/DRF (and other NGEM systems) differ from a physical contact
inspection approach, such as M21. The time-resolved "leak signal" recorded by a sensor node
[Figure 1-3(c)] depends on leak ER, integrated sensor response to emitted compounds, source to
sensor node separation distance, atmospheric plume transport and dispersion, and wind field
obstructions, with the first two factors shared by M21. As detailed in Section 3, initial testing
was performed using rudimentary prototypes and simulated leak scenarios where release of a
known amount of a test gas allowed certain factors to be explored. As the LDSN/DRF system
advanced, testing progressed into full process unit deployments where simulated-leak
information was replaced with real-world emission data.

As sensor science develops over time, new systems that can provide more sensitive and/or
speciated measurement of VOCs and HAPs will emerge. For example, a sensor that mimics the
broad VOC detection capability and sensitivity of a gas-powered FID may one day be available
in a miniaturized package suitable for automated and long-term sensor network deployment.
These new sensors can be used in LDSN/DRF approaches in the future. For this project, the best
available sensor technology for LDSN development and testing was determined to be a 10.6 eV
PID (Figure 1-3). For this reason, the detection capability of this sensor type to the compounds
comprising the refinery emissions of interest was a focus of the research. Figure 2-3 presents the
relative response of this sensor type for some common industrial gases, noting that compounds
with an ionization potential (IP) greater than 10.6 eV are not detectable. The primary gas used
for simulated-leak controlled-release testing was isobutylene, a standard calibration gas choice
for PIDs with a universally defined response factor of 1.0. As described in Sections 3 and 4,
testing in FHR process units was conducted across a spectrum of compounds possessing a range
of PID responses to assist in understanding LDSN/DRF detection capability in a variety of real-
world scenarios.
Figure 2-3 illustrates the relative strength of the response of a 10.6 eV PID to select gas species. For
example, a concentration of 1 ppmv benzene produces approximately 1.89 times the PID signal
compared to 1 ppmv of isobutylene (the calibration standard). To express a PID-measured
benzene reading in terms of isobutylene, the signal is multiplied by a calibration correction
factor, also called the response factor (RF), which for benzene is expressed as:
RFbenzene = 1/1.89 (relative response) = 0.53        (eq. 1)
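To make the response-factor arithmetic concrete, the following Python sketch converts between a compound's true concentration and its isobutylene-referenced PID reading. The function names are illustrative, and the RF values are the approximate tabulated numbers from Table 2-2 rather than instrument-specific calibrations.

# Approximate 10.6 eV PID response factors (RF), isobutylene = 1.0
# (values from Table 2-2; actual RFs are sensor- and calibration-specific).
RESPONSE_FACTOR = {
    "benzene": 0.5,
    "m-xylene": 0.5,
    "toluene": 0.6,
    "isobutylene": 1.0,
    "cyclohexane": 1.3,
    "n-hexane": 3.0,
}

def compound_ppm_from_pid(reading_ppm_isobutylene_eq, compound):
    """Estimated concentration of the named compound from an
    isobutylene-calibrated PID reading: true ppm = reading * RF."""
    return reading_ppm_isobutylene_eq * RESPONSE_FACTOR[compound]

def pid_reading_from_compound(true_ppm, compound):
    """Expected isobutylene-equivalent PID reading produced by a given
    true concentration of the compound: reading = true ppm / RF."""
    return true_ppm / RESPONSE_FACTOR[compound]

# 1 ppmv of benzene reads about 1/RF = 2 ppmv isobutylene-equivalent
# (about 1.89 with the unrounded RF of 0.53 from eq. 1).
print(pid_reading_from_compound(1.0, "benzene"))
# A 100 ppmv isobutylene-equivalent reading of pure n-hexane implies ~300 ppmv n-hexane.
print(compound_ppm_from_pid(100.0, "n-hexane"))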
Figure 2-3. Example of 10.6 eV PID relative response (y-axis: relative PID response referenced to isobutylene = 1.0; strong response for aromatics, weak response for ethylene, no response for compounds with IP > 10.6 eV)

Compounds with RFs >1.0 produce less PID signal than isobutylene and are therefore more
difficult to detect. A sample of PID RFs for compounds relevant to initial CRADA testing is
found in Table 2-2.24 Compounds such as methane and propane possess ionization energies
above 10.6 eV and so are not detectable. Instruments like FIDs and OGI also have
compound-specific RFs and perform well on saturated alkanes not suitable for PIDs. As part of
the scientific approach design, CRADA testing included controlled releases of isobutylene along
with real-world testing in process units with emitted gas streams that spanned the range of PID
RFs. The use of LDSN/DRF in process units possessing multiple gas streams is discussed in
Section 3.
The time-resolved signal produced on an LDSN sensor node from
a leak located some distance away is a function of atmospheric
conditions and local wind flow obstructions. As part of the
scientific approach, tests were conducted at multiple locations that
possessed a variety of wind flow complexity and base
meteorological conditions. These tests are described in Sections
3.1, 3.3, and 3.5 and associated appendices.
Table 2-2. Select 10.6 eV PID RFs

Compound         RF
Benzene          0.5
m-Xylene         0.5
Toluene          0.6
Jet Fuel JP-8    0.7
Diesel           0.7
Gasoline         0.9
Isobutylene      1.0
Cyclohexane      1.3
Propylene        1.4
Heptane          1.7
n-Hexane         3.0
n-Pentane        7.0
n-Butane         40
Propane          non-detect
Methane          non-detect
3.0 LDSN/DRF Development and Testing
This section summarizes research and engineering progress accomplished under the CRADA in
pursuit of the NTOs discussed in Section 1.3. As outlined in Figure 3-1, the incremental design
advancements in hardware, software, and procedures that now form the LDSN/DRF
methodology were developed and tested over time, starting in the fall of 2017. As the
LDSN/DRF approach matured, it was deployed in progressively more complex testing scenarios.
Field trials culminated in a long-term pilot test at FHR's Sour Lake Olefins Facility (FHR SLOF)
in 2018 and a full-scale pilot test in two process units at FHR's Corpus Christi West Refinery
(FHR CC) in 2019. Sections 3.1 through 3.5 provide an overview of the LDSN/DRF
development and testing, with supporting information found in appendices. These sections
introduce important technical concepts that are necessary for system performance assessment in
comparison to the CWP (Section 4), and in the discussion of methods required to maintain and
document the performance of LDSN/DRF as an alternative LDAR approach (Section 5).
Figure 3-1. Sequence of LDSN/DRF development and testing, 2017-2019: Exploratory Research (Section 3.1), LDSN/DRF Development 1 (Section 3.2), FHR SLOF Pilot Tests (Section 3.3), LDSN/DRF Development 2 (Section 3.4), and FHR CC Pilot Tests (Section 3.5)

3.1 Exploratory Research
Work to identify candidate sensor technologies was executed by EPA ORD and Molex in
separate laboratory and field trials during the initial phases of research. EPA ORD performed
field test comparisons of its PID-based SPod fenceline sensor as part of near-source research
studies,17 and Molex conducted environmental chamber tests to carefully examine the effects
of temperature and relative humidity on various PID sensor designs. Collaborative field testing
of early prototypes using controlled releases of simulated-leak test gases (primarily isobutylene)
was executed at the EPA ORD test range in Durham, NC, from 11/27/2017 to 12/07/2017 and
2/26/2018 to 3/07/2018 (Figure 3-2). The objectives of these exploratory tests were to establish
the basic feasibility of a 10.6 eV PID-based LDSN approach and to develop detection sensitivity
and engineering data to facilitate the development of progressively more advanced prototype
LDSN systems. These tests also explored early aspects of DRF development as each of the
controlled-release experiments was characterized using M21, OGI, and other leak assessment
approaches. These trials examined early aspects of source location approaches, building off the
fenceline approaches developed by EPA ORD. Additionally, these tests helped develop sensor
QA procedures, such as node-specific functional tests, described in Section 5.
Figure 3-2. Exploratory trials at EPA ORD test range (MFC-controlled gas releases simulating leaks, dispersed sensor nodes, collocated sensors and an M21 operator on a mezzanine)
Controlled-release testing used the early prototype sensors, described in Section 3.2, in both
collocated and dispersed (network) scenarios, with the simulated leak and sensor positions
systematically varied. The specifics of the exploratory tests are summarized in Appendix A with
details of the mass flow controller (MFC)-based controlled releases and supporting M21 and
OGI methods discussed in Appendix D. In these exploratory tests, a total of 40 controlled-release

trials were conducted using instrument grade isobutylene (>99.5% pure). The isobutylene tests
utilized controlled-release mass ERs ranging from 0.26 g/hr to 14.2 g/hr, with 75% of the tests
(30 of 40) conducted on the low end of the release range at ERs < 1.42 g/hr. The shortest and
longest duration trials for this subset were 11 minutes and 100 minutes, respectively, with an
average duration of 41 minutes. The separation distance between the simulated leak points and
the sensor nodes ranged from 5 ft to 72 ft and is further discussed in Section 3.2 in context of
LDSN detection performance evaluation. Experiments involving vertical offset of the simulated
leak points and sensors were also conducted in preparation for multilevel process unit field trials
subsequently discussed. A total of three pure ethylene and two pure methane (both instrument
grade gas) releases were also conducted for M21 and OGI comparison purposes, and to explore
the atypically large range in RF values (10 to 40) for ethylene found in PID manufacturers' RF
tables. Understanding the ethylene RF was important since this was the main compound of
interest in the FHR SLOF pilot test (Section 3.3) and represents the low range of RF parameter
space potentially observable with 10.6 eV PID (with dense node deployments).
Figure 3-3 shows an example of an early controlled-release test with five collocated prototype
sensors (several PID types), placed 38 ft from a simulated leak that was continuously emitting
isobutylene test gas at 0.71 g/hr (5 sccm). The abscissa (x-axis) indicates the time of the sensor
reading at a data acquisition rate of 1 Hz, while the primary ordinate (y-axis) shows the raw
output voltage from the sensors. The secondary y-axis shows the approximate sensor reading in
units of parts per billion by volume "equivalent" (ppbe). As discussed in Section 2, a leak
detection in a process unit represents a mixture of compounds with different PID response
factors, so the reading is reported in units of ppbe, with reference to isobutylene as a calibration
standard gas.
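The mass ERs quoted above follow directly from the MFC volumetric set points. As a check on the 0.71 g/hr (5 sccm) figure, the Python sketch below converts a standard volumetric flow of isobutylene to a mass rate; the standard molar volume is an assumption that depends on the MFC's reference-condition convention, so the result lands near, rather than exactly on, the reported value.

def sccm_to_g_per_hr(flow_sccm, molar_mass_g_mol, std_molar_volume_l_mol=24.0):
    """Convert a standard volumetric flow (sccm) to a mass emission rate (g/hr).

    std_molar_volume_l_mol is the molar volume at the MFC's reference
    conditions: ~22.4 L/mol at 0 degC, ~24.0 L/mol near 20 degC, so the exact
    result depends on the flow controller's standard-condition convention.
    """
    scc_per_hr = flow_sccm * 60.0                              # scc/min to scc/hr
    mol_per_hr = scc_per_hr / (std_molar_volume_l_mol * 1000.0)
    return mol_per_hr * molar_mass_g_mol

# Isobutylene (C4H8, molar mass ~56.1 g/mol) released at 5 sccm:
print(round(sccm_to_g_per_hr(5.0, 56.1), 2))   # ~0.70 g/hr, consistent with the 0.71 g/hr cited above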
Figure 3-3. Example of early collocated sensor node test (isobutylene leak ER = 0.71 g/hr; leak-to-sensor distance = 38 ft; peak G1 = 6.2x noise, peak P1 = 2.7x noise; 3σ noise floors of 1.1 mV and 2.0 mV; x-axis: time (hr:min); y-axis: raw sensor output, mV)

As is typically observed in collocated studies, the 10.6 eV PID sensors exhibited very similar
time-resolved signals, with offsets in raw voltage baselines that could be corrected (if desired)
through calibration and analytical adjustment in software (called baseline-correction in SPod
fenceline sensor applications). In general, the short-term excursion in a sensor's output voltage
above its respective baseline level (called a peak) represents a potential emission detection
(typically called a leak detection). The peak signal excursion must be above the sensor's noise
floor to be considered a detection. The definition of an LDSN sensor signal peak detection is
typical of other analytical approaches. The LDSN peak must be greater than three times the local
standard deviation (3σ) in the time-resolved data defining the noise floor. Examples of the 3σ noise
calculation are shown in the green (G) and purple (P) traces of Figure 3-3. Here the 3σ noise
floors of the G and P signals inside the indicated windows (dashed-line boxes) are 1.1 mV and
2.0 mV, respectively. Detectable signals (peaks) must exceed these noise levels [i.e. signal to
noise (S/N) ratio >1] to be counted. In these cases, peaks G1 and P1 exhibit S/N ratios of 6.2 and
2.7, respectively, indicating detections. Complexity arises in real-time tracking of a sensor
node's noise floor in cases where low-level leak signal is persistent in the time series data
(convolution of noise and leak signal). This factor can be encountered in process units, and the
3σ noise floor is typically higher than in the example of Figure 3-3. Some aspects of automated
peak detection from LDSN time series data are discussed in the methods section (Section 5).
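A minimal Python sketch of this peak-screening idea, assuming a simple rolling-window baseline, is shown below. It illustrates the 3σ criterion described above and is not the automated peak-detection algorithm used by the LDSN, which must additionally handle persistent low-level leak signal convolved with noise.

import statistics

def detect_peaks(readings_mv, window=120, k=3.0):
    """Flag candidate leak detections in a 1 Hz PID output-voltage series.

    A reading is a candidate peak when its excursion above the local
    baseline exceeds k times the local standard deviation (the k-sigma
    noise floor), both estimated over a trailing window of samples.
    """
    detections = []
    for i in range(window, len(readings_mv)):
        local = readings_mv[i - window:i]
        baseline = statistics.median(local)          # robust baseline estimate
        noise = statistics.stdev(local) or 1e-9      # guard against a flat window
        excursion = readings_mv[i] - baseline
        if excursion > k * noise:
            detections.append((i, excursion / noise))   # (sample index, S/N)
    return detections

# Example: a quiet ~100 mV baseline with a brief plume contact at samples 150-159
series = [100.0 + 0.3 * ((i * 7) % 5 - 2) for i in range(300)]
for i in range(150, 160):
    series[i] += 12.0
print(detect_peaks(series)[:3])   # first few detections near sample 150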
The signal registered by a high time-resolution leak detection sensor is always temporally
intermittent. The potential for a leak detection "event" in the sensor's time series data occurs
when a part of an emitted gas plume (an air parcel carrying some number density of target
molecules) physically overlaps (touches) a sensor node. The air parcels carrying the leaked target
molecules are transported from the emission point to the sensor location by constantly changing
wind directions and eddies, creating temporally intermittent plume to sensor overlap. The
instantaneous signal level and temporal duration during a detection event also depends on the
leak's mass ER, physical properties of the emitted gas, the sensor's RF to the emitted gas, the
sensor's time-resolution, and the leak to sensor node separation distance. However, factors
affecting emission plume transport (wind speed and direction, atmospheric dispersion, and
physical structures in the process unit creating eddies and channeling of flow) determine a
sensor's response to a large extent. With the other variables fixed, the signal in Figure 3-3 is
determined by emission plume transport factors, which can change the observed instantaneous
peak heights and integrated widths by more than an order of magnitude.
Short-term lulls in wind speed and longer-term low wind speed with compressed atmospheric
boundary layer states (e.g., overnight stagnation/calms) can feature prominently in LDSN signal
analysis. These instantaneous or prolonged low-flow atmospheric conditions can first
accumulate, and then puff-transport gas emitted from leak(s), increasing target molecule
concentrations in the air parcels delivered to the sensor. This temporal integration of emitted
mass assists in LDSN detection of emission issues and has no analog in M21 or OGI LDAR
monitoring. In addition to the potential for detection enhancement due to temporal mass ER
integration (under certain conditions), a single LDSN node spatially integrates mass ER
information transported to it from the upwind areas under observation. A cluster of small leaks
that individually may be below LDSN detection can combine to produce a detectable signal, with
the specific components ultimately identified, assessed, and repaired through DRF procedures.

This feature of LDSN/DRF also has no analog in M21 or OGI monitoring. These factors are
further discussed in later sections.
Whereas Figure 3-3 illustrates collocated prototype sensor detections during a controlled-release
experiment, Figure 3-4 provides an example of a release trial conducted with spatially separated
nodes to form a detection network. Here the sensor data has been baseline-corrected through
simple subtraction of an offset value for ease of viewing. The sensor nodes (black, blue, and red
circles and corresponding timeseries) were positioned with respect to the simulated leak point
(yellow triangle) as indicated in the inset of Figure 3-4. The mass ER in this case was 1.42 g/hr
of isobutylene. With multiple sensor locations around the emission point, the probability of
achieving spatial overlap of the plume and a sensor node increases. In this example, under light
and variable winds, the detection moved frequently between the sensor node positions, producing
a signal that could be triangulated by the LDSN algorithm, which factors in general wind
conditions.
The excursions in the red sensor's time series at 58 ft distance are generally weak, except for the
peak around 10:14 AM. This is likely an example of temporal mass integration caused by a lull
in wind speed coupled with eddy/channeling effects, resulting in a higher concentration air parcel
delivered to the location of the red sensor node. This detection enhancement around 10:14 AM is
also evident in the black sensor signal that lies closer to the simulated leak and along the same
plume transport line of action as the red sensor. This type of multi-node corroborated detection is
typical in LDSN sensor network applications and provides opportunities for both detection-level
and source location performance enhancement and for inter-node QA assessment of sensor health.
Figure 3-4. Example of a network detection

Although many scientific questions remained and much engineering development work was yet
to be done, the exploratory tests conducted at EPA ORD convinced the team that a 10.6 eV PID-
based LDSN system could be viable from both business and regulatory perspectives. The
decision was made to pursue this embodiment of the LDSN approach, effectively meeting
CRADA NTO 1 (Section 1.3).
3.2 LDSN/DRF Development 1
Starting with exploratory tests and continuing through the 2018 FHR SLOF pilot, initial
LDSN/DRF design concepts and systems engineering were steadily advanced by the CRADA
team. Early LDSN development included progression of both sensor hardware and leak detection
software. In parallel, the team considered the practical use and QA of LDSN by the facility
operations as part of the DRF. Figure 3-5 provides an overview of wireless sensor engineering
advancement. In initial tests at the EPA ORD test range, the LDSN sensors were "breadboard
prototypes", suitable for basic comparisons between different variations and preparations of
several 10.6 eV PIDs. These tests were further informed by collocated EPA SPod fenceline
sensors in controlled-release trials. By early 2018, the LDSN Prototype 1 wireless sensor node
was developed and used for the early sensor network trials at EPA ORD (Section 3.1). With
learning from early trials, an improved wireless sensor node (LDSN Prototype 2) was developed
and deployed for the long-term testing at FHR SLOF (Section 3.3), producing important real-
world use data in hot and humid conditions. As discussed in Section 3.4, the LDSN sensor node
design was advanced once again to pre-production form, obtaining a certification milestone in
preparation for the long-term, full-scale process unit implementation as part of the FHR CC pilot
tests (Section 3.5). The LDSN control and data management software platform called
"mSyte™", along with its facility interface and relation to the DRF are discussed in Sections 3.4
and 3.5. The methodization of the approach is the subject of Section 5.
Figure 3-5. Evolution of LDSN prototypes (exploratory trials at the EPA ORD test range with test sensors on breadboards and comparisons to the EPA SPod fenceline sensor, followed by Prototype LDSN 1, Prototype LDSN 2, and the pre-production LDSN)
For LDAR monitoring applications, sensor network and other NGEM techniques differ
significantly in operational approach from the M21-based CWP. As part of the LDSN/DRF
development process it was important to understand these differences in a context relevant to
LDAR programs. It was also important to define the expected performance of the LDSN/DRF

approach in relation to the CWP, using both simulated and real-world testing. The controlled-
release simulated leak testing conducted at EPA RTP and FHR SLOF, along with the full-scale
pilot testing at FHR CC, provided an information foundation for the LDSN/DRF design and
development process.
As discussed in Section 2, M21 is a sensitive physical contact leak interface inspection method
that assigns an SV in ppm for the assessed emission point (leak interface). M21 is a point
concentration measurement that is correlated with (a proxy for) the mass ER of the leak. When
considering different leak types (e.g. pressure, leak surface, size, shape), an M21 SV
determination for a given ER can vary significantly. As described in Section 3.1, the LDSN leak
detection signal registered by a sensor node is fundamentally determined by the instantaneous
local concentration of the emitted plume at the sensor position (some distance from the leak).
The local plume concentration at a moment in time is a function of atmospheric conditions,
stochastic eddy flow, separation distance, and other factors but is directly related to the ER of the
leak (total number of molecules emitted by the leak per unit time).
Whereas the direct-contact inspection instruments used in M21 feature easily determined method
detection limits (MDLs), the analog for LDSN/DRF is more complex as it is a remote
observation affected by several external factors. The leak detection capability of a sensor node is
not expressed as a single value but forms a range called the detection threshold (DT) band
(Figures 3-6 through 3-8). As illustrated in Section 3.1, an instantaneous leak detection occurs
when the time-resolved signal produced by the leak on the LDSN node exceeds that sensor's
noise floor. Considering a single emission point, a relatively low-magnitude leak (low ER and
M21 SV) has a higher probability of being detected by the LDSN if it happens to be located
close to a sensor node. In this case, the number of molecules emitted by the leak per unit time is
relatively low, but the emission plume has yet to significantly disperse, so the instantaneous
plume concentration seen by the sensor may be robustly detected. Conversely, if a leak is located
far from a sensor node, it must possess a larger ER to be detected (ignoring special
meteorological conditions that can enhance remote detection capability, as previously discussed).
The concept of a DT band is common to many NGEM techniques; for example, OGI detection
sensitivity is determined in part by environmental conditions outside of the operator's control.26
Figure 3-6. Illustration of distance effects of detection (sensor response increases as separation distance decreases; a sensor's minimum detection capability for one leaking component is not a single fixed value but forms a range called the detection threshold (DT) band; a small mass ER leak can be easily detected if it is located close to a sensor node, while the same leaking component located farther from the sensor may be below the sensor's DT band)
Data from the controlled-release simulated leak trials were used to explore the DT band and
implications for default node spacing as part of LDSN/DRF method development. The most
informative tests were releases of isobutylene at 0.36 g/hr (two trials), 0.71 g/hr (five trials) and
1.42 g/hr (23 trials). For this subset of trials, the average release duration was 44 minutes. In
Figure 3-7, the three known isobutylene mass ERs from these 30 controlled release experiments

are depicted as single points on the x-axis, with individual M21-measured SVs from the
simulated leaking component shown on the y-axis. Typically, the FHR LDAR subject matter
expert (SME) executed more than one M21 measurement per release trial, and the values shown
were corrected for the isobutylene RF of the utilized M21 instrument (Thermo TVA 1000B
FID).27 The average values of the M21 SVs for the three ERs are represented by the diamond
symbols, with red bars indicating ± 1 standard deviation (± 1σ) in the data. With a precise
increase in MFC-measured mass ER (moving right along the x-axis), the associated average M21
SV proxy also increases (upward on y-axis). With a mass ER increase, the signal level detected
on a remote LDSN sensor node also increases (detections become easier), since the concentration
of target molecules in the air parcels transported to the sensor also rises (all other factors held
constant).
Figure 3-7. Controlled release ERs, M21 values and DT band (M21 SV [ppm] vs. mass emission rate [g/hr]). Figure annotations: a single leak ER produces a range of M21 SVs; sensor response increases with ER because more target molecules physically interact with (touch) the downwind sensor node; a sensor's minimum detection capability for one leaking component is not a single fixed value but forms a range called the DT band; a single lower ER leak (e.g., ~3,000 ppm SV) can be easily detected if it is located close to a sensor node, but if the same leaking component is located farther from the sensor, it must have a higher ER (and typically larger SV) to be robustly detected; the DT band for isobutylene (RF = 1) is expressed here on the M21 SV (y) axis and also exists on the ER (x) axis but is not shown (redundant); controlled isobutylene release tests indicate that a nominal 5,000 ppm SV is a reasonable estimate for the average of the DT band (DTA = 5,000 ppm, DTU = 8,000 ppm, DTU(+1σ) indicated) for a default 50-60 ft node spacing for RF = 1 gases.
The controlled-release simulated leak experiments at or below 1.42 g/hr produced LDSN leak detections
on at least one sensor node in 28 out of 30 trials. For the various spatial configurations of
collocated and dispersed sensor network deployments in these trials (Figure 3-2 inset, Appendix
A), a short-duration release provided potentially detectable plumes to only a subset of
sensors, due to wind direction limitations. This makes overall node detection statistics difficult to
express, since a lack of detection on a node may be due to insufficient plume transport to its
location. This temporal sampling limitation is not present in real-world use, as LDSN/DRF
provides 24/7 monitoring of the process unit and changes in wind direction over extended
periods of time naturally occur. Factors such as wind direction and distance also affect the DT
for other NGEM approaches, such as OGI, where gas speciation, observation distance, wind

speed, and the relative temperature of leaked gas and background scene determine detection
capability.28,29 In Figure 3-7, LDSN detections were achieved in each of the 0.36 g/hr trials, with
leak to sensor separation distances of 20 ft and 38 ft (Figure 3-3). Four of the five 0.71 g/hr trials
produced detections on at least one sensor node at distances ranging from 20 to 58 ft. The trial
that failed to produce a detection in this subset was relatively short duration (35 minutes) at a 39
ft separation distance, with all sensors grouped at the same location. Ambient wind speeds were
low for this experiment (<1 m/s), and the lack of detection is ascribed to insufficient plume
transport to the single collocated sensor location. Although LDSN node detections were
observed at both the 0.36 g/hr and 0.71 g/hr mass ERs, sensor signal was more robust at 1.42 g/hr, so
most of the release trials in this subset (23 of 30) were performed at this ER. At 1.42 g/hr,
LDSN detections were achieved in 22 of 23 trials at 26 distinct leak point to sensor distances
ranging from 5 ft to 72 ft (mean of 37 ft). The trial that failed to produce detections at this ER
was at 58 ft separation distance and utilized collocated sensors, so insufficient wind transport to
the single sensor location was likely a factor.
The controlled-release trials support the definition of an LDSN DT band in terms of 0.36 g/hr
(weaker detection potential), 0.71 g/hr (moderate detection potential), and the 1.42 g/hr (more
robust detection potential) ERs. These trials indicate that an isobutylene leak of 1.42 g/hr and
higher mass ERs should be routinely detectable with an LDSN of 50-60 ft default sensor node
spacing (farthest distance between nodes ~100 ft). More specifically, an isobutylene leak of ER
>1.42 g/hr should be detectable at the maximum sensor to leak separation distance in an LDSN
deployment at default node spacing, given sufficient sampling time to ensure plume to sensor
overlap occurs. Lower mass ERs were also detectable, with signal increasing as the leak to
sensor separation distance decreased. All exploratory experiments utilized a single simulated
leak emission point and sampled a relatively narrow range of lower wind speed, daytime
meteorological conditions under the relatively low obstruction complexity scenarios encountered
in EPA ORD trials. As discussed in the methods section, continual information on the LDSN DT
for a specific process unit deployment is provided through M21 measurements of the identified
leaks as part of the DRF. This is a key QA feature that helps ensure LDSN detection capability is
known.
In Figure 3-7, the superimposed DT band range is expressed along the M21 SV axis (y-axis) so
that we may relate LDSN detection capability to M21 historical data for comparisons to the
CWP (Section 4). At ~ 8,000 ppm, corresponding to the approximate mean M21-measured value
for the 1.42 g/hr releases, we define the DT band upper limit (DTU). The DTU provides
guidance on how large a leak must be (on average, as measured by M21) to be detected at the
maximum leak to sensor separation distances in a given LDSN deployment. For isobutylene at a
default 50-60 ft sensor node spacing, the LDSN should detect leaks with M21 SVs above ~ 8,000
ppm anywhere in the monitored unit. Because M21 approximates mass ER, some variability
around the DTU is expected, as indicated by the red-colored ± 1σ y-axis error bars on the 1.42
g/hr ER values. Accounting for this variability, a DTU (+1σ) value of ~ 10,000 ppm is a reasonable
expectation for an RF = 1 gas. Also indicated in Figure 3-7 is an estimate for the center point of the DT
band, called the DT average (DTA). For an RF = 1 gas, the DTA is ~ 5,000 ppm. The
DTA is useful for a variety of simple comparisons, including the basic equivalency calculations
discussed in Section 4. A DT band lower limit (DTL) may also be defined, but this concept is
less useful in practice due to the effects of leak clusters and factors subsequently described.
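The following Python sketch illustrates, under stated assumptions, how the DT band markers discussed above could be tabulated from M21 SVs measured during controlled releases at the DTU-defining ER (1.42 g/hr for isobutylene). The example SV values are placeholders, not the actual trial data, and the default DTL of 2,000 ppm is taken from the isobutylene band of Figure 3-7.

import numpy as np

def dt_band_from_m21(sv_at_dtu_er_ppm, dtl_ppm=2000.0):
    """Estimate DT band markers (M21 SV scale, ppm) for an RF = 1 gas.

    sv_at_dtu_er_ppm: RF-corrected M21 SVs measured during controlled releases
    at the ER that defines the DTU (1.42 g/hr for isobutylene in these trials).
    dtl_ppm: assumed DT band lower limit (2,000 ppm for isobutylene, Figure 3-7).
    """
    sv = np.asarray(sv_at_dtu_er_ppm, dtype=float)
    dtu = sv.mean()                   # DTU ~ mean M21 SV at the DTU-defining ER (~8,000 ppm)
    dtu_plus_1sigma = dtu + sv.std()  # DTU(+1 sigma); the report estimates ~10,000 ppm for RF = 1
    dta = 0.5 * (dtl_ppm + dtu)       # DTA taken as the band midpoint (~5,000 ppm)
    return {"DTL": dtl_ppm, "DTA": dta, "DTU": dtu, "DTU(+1sigma)": dtu_plus_1sigma}

# Placeholder M21 SVs (ppm) for 1.42 g/hr releases, for illustration only.
print(dt_band_from_m21([7200.0, 8400.0, 7900.0, 8500.0]))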

In real-world application, the gas streams monitored by LDSN/DRF will seldom be single
component in nature. As described in Section 2.0, the 10.6 eV PID has a range of response for
different compounds so the characteristics of detection will depend on the mix of gas streams
potentially emitted from the specific components present in the monitored process unit. In the
CRADA research to date, we have performed testing in the following gas-mix scenarios:
•	Isobutylene: High-purity RF = 1 test gas used in controlled-release studies. The approximate DTA, DTL, and DTU for isobutylene are 5,000, 2,000, and 8,000 ppm, respectively (Figure 3-7).
•	Ethylene: High-purity RF ~ 10 to 15 test gas, and in impurity-mixed streams found in the FHR SLOF process unit (Section 3.3). Ethylene is difficult to monitor with the current 10.6 eV PID version of LDSN and requires high node density. The estimated DTA of an ethylene stream with typical impurities is ~ 50,000 ppm to 75,000 ppm at default node spacing; the DTA can be reduced if sensor node density is greatly increased.
•	Meta-Xylene Aromatics: Relatively uniform RF ~ 0.5 to 0.8 mix of compounds found in the FHR CC Meta-Xylene (or m-Xylene) process unit (Section 3.5), assigned a conservative RF = 0.8 value. Easily monitored by the current embodiment of LDSN. The DTA, DTL, and DTU for this unit are determined by multiplying the isobutylene DT band values by 0.8.
•	Mid-Crude Fuel Gas: Methane-rich streams with effective RF > 30 (not monitored by the 10.6 eV PID version of LDSN).
•	Mid-Crude Light Petroleum Gas (LPG) Mix: RF ~ 3 gas streams present in FHR CC Mid-Crude that can be monitored by LDSN but with reduced sensitivity compared to isobutylene. The DTA, DTL, and DTU for this mix are determined by multiplying the isobutylene DT band values by 3.0.
•	Mid-Crude Light Liquid Mix (gasoline, jet fuels, etc.): Approximately RF ~ 1 gas streams present in Mid-Crude that can be monitored by LDSN with sensitivity similar to isobutylene.
•	Mid-Crude Composite Mix: Since the LPG and Light Liquid Mix gas streams of Mid-Crude are simultaneously monitored by the same LDSN deployment, a composite mix response factor of RF = 2.5 is employed (estimated). The overall process unit DTA is 12,500 ppm, with the DTL and DTU typically expressed in relation to individual gas streams. Further information on the calculation of the composite RF is discussed in Section 6, and a simple scaling sketch follows this list.
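As a simple illustration of the RF scaling described in the list above (a sketch, not a compliance calculation), the isobutylene-referenced DT band can be scaled to the process unit gas mixes as follows. The mixed-stream handling for Mid-Crude mirrors the rules stated above: DTL from the lowest-RF streams, DTU from the highest-RF streams, and DTA from the composite RF.

# Isobutylene (RF = 1) reference DT band in M21 SV terms (ppm), from Figure 3-7.
ISOBUTYLENE_BAND = {"DTL": 2000.0, "DTA": 5000.0, "DTU": 8000.0}

def scale_dt_band(rf, reference=ISOBUTYLENE_BAND):
    """Scale the isobutylene-referenced DT band to a gas stream with response factor rf."""
    return {name: rf * value for name, value in reference.items()}

# Single-RF stream: m-Xylene aromatics (conservative RF = 0.8) -> DTA ~ 4,000 ppm.
m_xylene_band = scale_dt_band(0.8)

# Mixed-RF unit (Mid-Crude): DTL set by the lowest-RF streams (~1.0),
# DTU by the highest-RF LPG streams (~3.0), DTA by the composite RF of 2.5.
mid_crude_band = {
    "DTL": scale_dt_band(1.0)["DTL"],   # ~2,000 ppm
    "DTA": scale_dt_band(2.5)["DTA"],   # ~12,500 ppm
    "DTU": scale_dt_band(3.0)["DTU"],   # ~24,000 ppm
}
print(m_xylene_band, mid_crude_band)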
Figure 3-8 provides a simplified view of the DT band concept for isobutylene, and the FHR CC
m-Xylene and Mid-Crude process units without the complexity of the mass ER and M21 SV data
of Figure 3-7. Using the isobutylene (RF =1) DT band as a reference, the DT bands for the m-
Xylene (RF ~ 0.8) and Mid-Crude (composite RF ~ 2.5) for the default 50-60 ft node density can
be easily expressed. In the case of Mid-Crude, the Light Liquid Mix component is used to
estimate the lowest DTL, whereas the LPG gas stream with a high RF value is used to estimate
the highest anticipated DTU for the unit. The DT band concept primarily applies to single-
component leak detection (as represented by the controlled release trials) and can inform
expected LDSN performance in multiple ways. The DTA is a single average performance value
that can be used in simple calculations and comparisons. The width of the DT band represents

-------
A rnA United States	_C n wr. r ,TT, «	/	\ Progress on LDAR Innovation
Environmental Protection	X	FLINT HILLS	January 28, 2021
WU rVAgencvQg^	J-	resourcGa.	mOICX page29of85
USfflOltadltawintuHOiwiepwt
the distance-specific performance factors that can take into account the spatial distribution of the
sensor nodes with respect to component position (discussed in Section 4 and appendices).
Figure 3-8. DT bands for FHR CC m-Xylene and Mid-Crude. Estimated DT bands for LDSN default node density, expressed in terms of M21 SVs: isobutylene (RF = 1.0) DTL = 2,000 ppm, DTA = 5,000 ppm, DTU = 8,000 ppm; m-Xylene (RF = 0.8) is 0.8 x isobutylene, giving DTL = 1,600 ppm, DTA = 4,000 ppm, DTU = 6,400 ppm; Mid-Crude represents a mix of RF values, with the lowest DTL (2,000 ppm) determined by RF = 1.0 gas streams, the highest DTU (24,000 ppm) set by RF ~ 3.0 gas streams (3 x isobutylene DTU), and the DTA used for equivalency modeling estimated at 12,500 ppm = 2.5 x 5,000 ppm (isobutylene), set by the RF = 2.5 composite mix. The DT band width spans 500 ppm SV to the DTU, and the inset photo shows ground level sensors.
In addition to distance-dependent single-leak sensor performance, there is another detection
sensitivity factor, centered on the additive effect of multiple leaking components, that must be
considered in LDSN/DRF method development. Figure 3-9 provides a real-world example of this
effect, presenting a layout of the first level of the m-Xylene process unit in FHR CC, detailed in
Section 3.5. In addition to four LDSN node locations (blue dots), this figure shows the locations of
historical LDAR leaks that were discovered and repaired as part of the CWP from January 2013
through August 2019, with ranges in M21 SV levels for these documented leaks indicated by
colored circles in the legend (e.g., red > 50,000 ppm). The values are "as reported" and have not
been corrected for the M21 FID instrument RF for this process unit, as this correction is typically
not performed in daily LDAR.
Figure 3-9. Leak clusters in historical CWP data (first level of the m-Xylene unit; legend M21 SV ranges: > 50,000 ppm; 5,000 - 50,000 ppm; 1,000 - 5,000 ppm; 500 - 1,000 ppm; x-axis distance in ft)

From the over five years of historical LDAR data, it is evident that leaks occur in clusters (over
time), where component density is elevated around major equipment. As discussed in Section 5,
the locations of component clusters (based on historical data or major equipment siting for new
installations), along with prevailing wind directions, must be considered in the initial LDSN site
deployment planning in order to optimally place sensor nodes. Illustrated here are two
component clusters, labeled "A" and "B" that can be observed by LDSN node 3 under different
wind conditions. From the perspective of node 3, the components located in the blue open circle
labeled "A" represent proximate detection targets, with separation distances less than 20 ft from
the closest sensor. This means that detection capability should be enhanced from this component
cluster and M21 SVs obtained as part of the DRF should routinely include values towards the
lower end of the DT band (Figure 3-7). Components in area "B" are much farther removed from
node 3 (~ 50 ft or more), so a single detected leak from this area (found solely by node 3) would
likely possess a relatively large SV (e.g. > 50,000 ppm by FID).
It is important to consider that multiple simultaneous leaks of lower SVs could exist in cluster
"B", with their combined effects producing a detectable signal on LDSN node 3. This real-world
"cluster" detection potential is an important consideration in the development of the DRF and is
detailed in subsequent sections. As an example-scenario in brief, the facility LDAR tech
responds to a low-level LDSN node 3 detection (a PSL notification). The PSL detection box
(Section 3.4) indicates that the emission is originating from Area B, and the technician is led to
an approximately defined search area by the mSyte™ mobile device. The LDAR tech chooses to
use OGI to screen the component cluster, looking for larger SV leaks (e.g., > 50,000 ppm). In this
case, OGI fails to detect an emission, so the LDAR tech uses fast-screen hand-held probe survey
gear with ppb resolution (Figure 1-4) to check areas of the cluster. The technician
identifies three probable leaking components and enters this information into the LDSN mobile
device (data entry at this step is optional if M21-gear is also carried). The technician then uses
M21 (at that time or at a later time) to pinpoint the three leak issues, quantify the SVs (e.g. 800
ppm, 2,000 ppm, and 5,000 ppm), and enters details into the LDSN mobile device to initiate the
repair sequence under the DRF (as required). In addition to compliance and emission inventory
data, the information provided by the M21 SVs is an important QA check for LDSN and is a
surrogate for controlled-release studies to inform LDSN detection capability over time. In this
case the cluster detection allows leaks that would normally be outside of the DT band (below
DTL) to be found and fixed. The LDSN spatial node network coupled with the presence of
multiple leaking components produces an effective increase in detection sensitivity, extending
the DT band to lower values (in certain cases). This is called the DT band cluster limit
(DTcluster) and is described further in Section 4, along with other terms that are used in the
generation of specific emissions equivalency-modeling scenarios.
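The cluster effect can be illustrated with a toy calculation: several leaks that are each too small to register at a node can sum to a detectable signal. The 1/d² distance falloff, the scaling constant, and the threshold below are placeholder assumptions, not the LDSN transport or detection model.

def node_signal(leaks, detection_threshold_mv=3.0):
    """Toy illustration of cluster detection at a single node.

    leaks: list of (mass_er_g_per_hr, distance_ft) tuples.
    The 1/d**2 falloff and the scale constant are placeholder assumptions.
    Returns (combined signal in mV-equivalent units, detected flag).
    """
    SCALE = 10000.0  # arbitrary mV * ft^2 / (g/hr) scaling constant
    contributions = [SCALE * er / max(d, 1.0) ** 2 for er, d in leaks]
    total = sum(contributions)
    return total, total > detection_threshold_mv

# Three area "B"-style leaks ~50 ft from a node, each too weak on its own,
# whose combined signal crosses the (placeholder) threshold.
cluster = [(0.4, 50.0), (0.5, 52.0), (0.6, 48.0)]
print(node_signal(cluster))                        # combined: detected
print([node_signal([leak]) for leak in cluster])   # individually: not detected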
The DRF itself can help improve detection capability since even low SV leaks (e.g. 500-600
ppm) found during the PSL survey are repaired. These lower-level emissions may not
significantly contribute to the PSL but represent "opportunistic finds", serving to enhance the
emission control performance of LDSN/DRF. Because PSL notifications will persist if the
subject leak(s) are not found, the DRF will investigate potential leak sources that would be time-
prohibitive under the CWP. For example, leaks that occur from components that are exempt from
regular monitoring (e.g. under insulation) may be found by LDSN/DRF, providing both
emissions reduction and safety benefit. Additionally, emissions from AVO-identified leaks from

-------
rnA Uniied States	_C _ __ _ . ,TT, „	/ \	Progress on LDAR Innovation
Environmental Protection X FLINT HILLS mOl^X§	January 28, 2021
j- r03OurceB. moiex	Page3iof85
both regularly monitored and unmonitored LDAR components may be found and repaired faster
using LDSN/DRF. Together these factors form the concept of overall emission control efficacy
(ECE) for an NGEM-based innovative LDAR monitoring approach. In a similar manner,
emissions that are not part of the LDAR program may be detected and repaired by LDSN/DRF.
The LDAR ECE concept and the added benefit of non-LDAR program emissions detection and
repair are discussed further in Sections 4 through 6 of this report. A summary of the basic
detection performance terms for NGEM techniques like LDSN and OGI is contained in Table 3-1,
with specific LDSN/DRF detection-related modeling terms contained in Table 4-1.
Table 3-1. Summary of LDSN/DRF detection sensitivity terms

Detection Threshold (Detection Threshold Band) (DT): A way to define the detection performance of NGEM approaches like LDSN and OGI (an alternative to a single-valued method detection limit). The width of the DT band is caused by effects such as leak to sensor node distance for LDSN, gas to background temperature contrast for OGI, or mixes of gas RFs (for LDSN or OGI).

DT Average (DTA): The midpoint of the DT band, representing the average detection performance of a single LDSN node for a single leaking component.1

DT Upper (DTU): The upper limit of the DT band. A leak with mass ER (or M21 SV) > DTU should be detected at the full node to leak separation distance.2

DT Lower (DTL): The lower limit of the DT band. A single leaking component with mass ER (or M21 SV) < DTL is difficult to detect (low signal).3

DT (± 1 Standard Deviation) (DT ± 1σ): Additional uncertainty in DTU or DTL when a leak is documented by an M21 SV measurement (a proxy for true mass ER).

Emission Control Efficacy (ECE): Conceptual term that accounts for all LDSN/DRF detection factors (distance effect, leak clusters, collaborative node detection, opportunistic DRF detections, etc.).

1 For process units with multiple RF gas streams, DTA is estimated as a weighted average (Section 6).
2 For process units with multiple RF gas streams, the basic DTU is estimated using the highest RF component, with specific RFs used for compliance purposes.
3 For process units with multiple RF gas streams, the basic DTL is typically estimated using the lowest RF component.
3.3 FHR SLOF Pilot Test
The next stage of system development involved long-term testing of prototype versions of
LDSN/DRF at a working facility. The FHR SLOF in Port Arthur, TX, was selected for the first
facility pilot test since it possessed real-world challenges but also was relatively small, with an
open configuration that supported safe use of controlled-release testing for research purposes
(not possible in larger process units). The FHR SLOF30 is a bulk storage terminal permitted to
receive, store, and distribute ethylene and propylene. Products are stored underground in natural
salt dome caverns and recovered for sale through displacement by brine. When products are
received from storage, the brine is displaced, degassed (controlled by flare), and placed in
specially lined storage ponds for subsequent use. The SLOF process unit (Figure 3-10) includes
filters, product measurement devices, filter-separators, a solid bed dehydration system, brine
degassing facilities, a flare system, brine injection and transfer facilities, and other equipment.

Figure 3-10. Overview of the FHR SLOF pilot test (labeled features: simulated leak points, sensor nodes, and strong detects from the fin-fan leak; FHR SLOF tests included controlled releases and real emissions)
The FHR SLOF pilot test was executed in two parts. A set-up test was conducted from 5/14/2018
to 5/25/2018 and used the LDSN Prototype 1 wireless sensors that were employed in the exploratory
tests. This test informed aspects of sensor design and deployment as well as controlled-release
point placement in the process unit to support the longer-term test to follow. The phase 1 test
also helped Molex further develop the prototype multi-node wireless data-handling hardware
gateway and the initial version of the mSyte™ LDSN software and mobile device, while the team
collaboratively advanced DRF concepts.
The primary SLOF test was executed from 9/10/2018 to 12/7/2018, using ten LDSN Prototype 2
sensor nodes (Figure 3-5). Similar to the exploratory tests at EPA ORD, 48 controlled-release
trials of 1 to 24-hour duration each were conducted in the SLOF tests to assist in understanding
LDSN performance under various conditions. Figure 3-10 illustrates the controlled-release points
(yellow triangles) and sensor node positions (blue circles) used in SLOF. For sense of scale, the
distance between sensor node 1 and sensor 8 in Figure 3-10 is approximately 166 ft. The ten
LDSN nodes were positioned at two elevations (two levels) within the process unit in preparation
for the larger scale multi-level pilot test at FHR CC. To aid in source location algorithm
development, wind speed and direction were measured in relatively free-flowing conditions just
outside of the process unit. As with tests at EPA ORD, controlled releases of isobutylene and
ethylene (separately) were executed from a single position (one release point at a time) that was
moved between locations on multiple levels. Additional details on the SLOF test series are found
in Appendix B.

In addition to controlled release tests using isobutylene (RF =1), the LDSN deployment at SLOF
had the objective to detect real-world process unit leaks of ethylene mix (RF ~ 10). The RF of
real-world leaks at SLOF is only approximately known since the low responding primary
compound (ethylene, RF ~ 10-15) was potentially co-emitted with small amounts of other
compounds having RF < 10. LDSN sensor node spacing in SLOF was smaller due to the lower
RF (average node to node separation of 30 ft). Key goals of the long-term SLOF test series were
to generate important engineering data on LDSN node deployment over a variety of
meteorological conditions, further develop the LDSN source detection and location algorithms,
and explore sensor QA response tests (called bump tests) and automated sensor health
monitoring through mSyte™, further described in Section 5. These goals were largely met at
SLOF, contributing advancement towards CRADA NTO 2, NTO 3, and NTO 4 (Section 1.3).
As described in the comparisons of CWP to LDSN/DRF attributes (Table 2-1), a technical
strength of the latter is in the continuous discovery of significant emissions from non-LDAR
program components. An example of this advantage was found at SLOF. At the initiation of the
testing, the LDSN detected a strong emission source that was previously unknown to the facility.
Prototype LDSN source triangulation through the initial mSyte™ algorithm provided location
data to the FHR LDAR SME as a rudimentary PSL box. As part of the draft DRF, the SME
scanned the PSL with OGI, but no leak was found. The OGI survey was followed by area
scanning with fast hand-held probes (Figure 1-4) and further consultation with LDSN readings.
Within two days, the identification of a difficult-to-detect emission buried inside the cooling
loop of a heat exchanger "fin-fan" set was verified (location F and inset of Figure 3-10). Due to
the nature of this leak, it could not be quantified with M21, as the probe could not directly
contact the buried leak interface. Although an M21 SV was not established, the level of response
observed on multiple sensors to the RF ~ 10 gas stream was indicative of a significant emission.
This equipment is not part of the LDAR program, so it was not subject to previous M21
inspections. This particular leak could not be detected with OGI but was immediately apparent
under LDSN. This delay of repair (DOR) emission could not be mitigated until the next facility
shutdown (required major equipment disassembly). As a consequence, the fin-fan leak became a
dominant feature in SLOF testing and was considered an "interfering source". The fin-fan
emission was used as a test case for source localization optimization and as an interfering source
in controlled-release trials. Additional information on the SLOF test is found in Appendix B.
The presence of the constant fin-fan leak (as opposed to short-term tracer release tests) facilitated
investigation of LDSN detection capability under a range of meteorological conditions. As
discussed in Section 3.1, overnight lulls in wind speed (calms) coupled with compressed
atmospheric boundary layers can significantly enhance leak detection capability. This effect is
observed in Figure 3-11 where the fin-fan leak and other leaks in the SLOF process unit present
a dramatically larger signal to the LDSN as the wind speed (gray trace, secondary axis) falls to
well below 0.5 m/s on this evening. The presence of a significant leak in the process unit was
made obvious by this diurnal effect with strong detections on a range of downwind nodes
occurring as the temporal and spatial accumulation of emitted molecules in the stagnant airmass
is followed by a brief "puff-flow" to the sensor node locations. Although leak detection is
enhanced under these conditions, source location capability via multi-node triangulation becomes
more difficult under extremely low wind speeds and is the subject of future work.

3.4 LDSN/DRF Development 2
As presented in Figure 3-5, several versions of prototype sensors were developed by Molex and
used in different phases of this CRADA project. Figure 3-13 provides a full picture of the pre-
production version of the Molex Sensor used in the FHR CC test. This version was purposely
designed to withstand harsh outdoor environmental conditions and be safe to install in refineries
and other chemical plants where flammable gases may be present. These sensors take gas
readings every second, and they transmit data via Wi-Fi to a local server which then forwards the
data to the cloud via local wired routing. To uniquely identify sensors in the field, each sensor is
assigned a Device ID based on its factory assigned MAC address. This sensor survived harsh
summer conditions in the Gulf Coast and delivered consistent performance, based on regularly-
conducted QA test data. A slightly improved version of the preproduction model was certified by
Underwriters Laboratory (UL) in late 2019 to Class 1 Div. 2 hazardous location designation. In
addition to gas sensors, the LDSN system has an integrated 3D ultrasonic anemometer mounted
in an open field to record the instantaneous wind direction and wind speed. It is currently believed
that one properly sited anemometer that provides accurate prevailing wind data outside of the
process unit is sufficient for most installations. Micro-scale wind flow patterns inside the process
units are highly complex but are fundamentally determined by the prevailing wind outside of the
unit. The leak triangulation (leak location) approach calculates an area of highest probability for
the leak source, usually using multiple days of prevailing wind data and sensor measurements.
Learning algorithms to account for leak source triangulation bias due to specific flow obstructions
in a process unit for a given sensor network configuration are the subject of ongoing research.
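A hypothetical sketch of how a 1 Hz node reading might be packaged for relay from the local server to the cloud is shown below. The report states only the sampling rate, the Wi-Fi/wired relay path, and that each node's Device ID is based on its factory-assigned MAC address, so all field names and the ID scheme here are assumptions.

from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class NodeReading:
    """Illustrative 1 Hz LDSN node reading as it might be packaged for upload.

    Field names are hypothetical; only the 1 Hz sampling, Wi-Fi/wired relay,
    and MAC-based Device ID are taken from the report.
    """
    device_id: str        # derived from the factory-assigned MAC address
    timestamp_utc: str    # ISO-8601 sample time
    pid_raw_mv: float     # raw 10.6 eV PID output voltage (mV)
    ppbe: float           # approximate isobutylene-referenced reading (ppbe)

def make_device_id(mac: str) -> str:
    """Derive a simple Device ID from a MAC address (hypothetical scheme)."""
    return "LDSN-" + mac.replace(":", "").upper()

reading = NodeReading(
    device_id=make_device_id("a4:cf:12:9b:00:3e"),
    timestamp_utc=datetime.now(timezone.utc).isoformat(timespec="seconds"),
    pid_raw_mv=104.7,
    ppbe=128.0,
)
print(json.dumps(asdict(reading)))  # payload forwarded from local server to cloud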
Figure 3-13. Pre-production Molex LDSN sensor node used in the FHR CC test (labels include "Junction box" and "SENSORCON")
maintenance activities that may have an impact on the operation or detection results of the
LDSN. Molex mSyte™ is designed to handle both LDSN and DRF requirements with some of
the key elements presented in Figure 3-14.
Figure 3-15(a) is an mSyte™ screenshot of the output graph of a particular sensor node showing
multiple detection peaks. The mSyte™ algorithm constantly analyzes the sensor data, identifying
and calculating the strength of each detection peak, and estimates the possible location of the
leak source relative to the sensor locations using simultaneously measured wind data. Once the
detection strengths and the number of
detection events reach preset threshold values within
a pre-defined time window, a notification with a PSL
box is generated and sent automatically to designated
personnel via emails or text messages. Figure 3-
15(b) represents a screenshot of a PSL map on the
mSyte™. The darker pink box represents an area
where one or more leaks are most likely to be located
while the lighter pink color defines an area with
lower probability of leaks.
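A minimal sketch of the notification trigger described above is given below, assuming a trailing time window with preset count and strength thresholds. The specific threshold values and window length are placeholders, not the mSyte™ settings.

from datetime import datetime, timedelta

def should_notify(peak_events, now, window_hours=24,
                  min_events=5, min_total_strength=10.0):
    """Illustrative PSL-notification trigger: raise a notification when both
    the count of detection events and their combined strength inside a
    trailing time window exceed preset thresholds.

    peak_events: list of (timestamp, strength) tuples for a node or unit.
    All threshold values here are placeholder assumptions.
    """
    cutoff = now - timedelta(hours=window_hours)
    recent = [(t, s) for t, s in peak_events if t >= cutoff]
    total_strength = sum(s for _, s in recent)
    return len(recent) >= min_events and total_strength >= min_total_strength

now = datetime(2019, 8, 15, 6, 0)
events = [(now - timedelta(hours=h), 2.5) for h in (1, 3, 5, 8, 12, 30)]
if should_notify(events, now):
    print("PSL notification generated; email/text sent to designated personnel")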
Figure 3-14. Molex's mSyte™ data platform (key elements include detection and visualization)
Figure 3-15. Leak detection on mSyte™: (a) sensor signal, and (b) PSL box

In the DRF following each detection notification, an LDAR technician is dispatched to the area
defined by the PSL box to investigate the source(s) of emissions. While M21 equipment is a
reliable way of pinpointing and identifying leaks, a hand-held ppb instrument has proven to be
very effective in narrowing down the source locations given its high sensitivity, mobility, and
proximity to actual leaks (typically <2 ft) in the search process. During our SLOF testing and
FHR Corpus Christi pilot study, the larger leaks were typically found within 5-15 minutes, and
oftentimes multiple leaks were found at each PSL. In the DRF, an LDAR technician can perform
one or more investigations depending on how many leaks or emission events are identified.
These investigations are numbered in sequence starting from 01, in the format notification ID-xx;
19-SD-00005-03, for example, represents the third leak investigation under notification
19-SD-00005. For each leak investigation, the component tag ID (if applicable), leak location,
Method 21 ppm reading and time of the measurement, name of the technician, etc., are all
documented through a mobile app called mSyte™ mobile (see Figure 3-16). If no leak is found,
or if a PSL box is found to be related to an authorized emission, for example, this information is
still entered for the record. A PSL notification cannot be closed without all related investigations
being closed out properly and general threshold settings being met. Figure 3-17 is a sample of the
mSyte™ leak investigation dashboard.
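The investigation record and ID convention described above can be sketched as follows; field names and example values are hypothetical, and only the notification-ID-plus-two-digit-sequence format is taken from the text.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LeakInvestigation:
    """Illustrative DRF investigation record in the spirit of the fields
    captured through mSyte(TM) mobile (field names are hypothetical)."""
    notification_id: str          # e.g., "19-SD-00005"
    sequence: int                 # investigations numbered from 1 per notification
    component_tag: Optional[str]  # tag ID, if applicable
    location: str
    m21_ppm: Optional[float]      # Method 21 reading (None if no leak found)
    measured_at: Optional[str]
    technician: str
    outcome: str                  # e.g., "M21 Completed", "No Source Found"

    @property
    def investigation_id(self) -> str:
        # Notification ID plus a two-digit sequence, e.g., 19-SD-00005-03
        return f"{self.notification_id}-{self.sequence:02d}"

inv = LeakInvestigation("19-SD-00005", 3, "V-1042", "Mid-Crude level 2 pipe rack",
                        2000.0, "2019-08-09 15:45", "LDAR tech", "M21 Completed")
print(inv.investigation_id)  # -> 19-SD-00005-03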
Figure 3-16. LDAR technician demonstrating the mSyte™ mobile device

Figure 3-17. mSyte™ leak investigation dashboard (screenshot listing notifications with columns for notification ID, category, unit, level, created date, PSL updated date, status, investigations, last update, and details)
3.5 FHR CC Pilot Test
The FHR CC pilot tests were conducted in the summer and fall of 2019 in the Mid-Crude and m-
Xylene process units at FHR's Corpus Christi West refinery. The primary objective of the FHR
CC pilot tests was to deploy and test the most advanced version of LDSN/DRF produced to date in
a real-world setting. Due to safety issues, controlled-release testing was not performed in these
tests; as an alternative, information from M21-measured leaks was used to further the
understanding of system performance. Extensive planning and preparation were required to
deploy the LDSN and to interface with the facility on the DRF. Factors such as placement of
sensor nodes within the process units are further discussed in Section 5.
The LDSN was first implemented in the m-Xylene unit with operations beginning on 5/1/2019.
The LDSN was operational in the Mid-Crude unit on 7/1/2019. The pilot test ended in both units
on 11/30/2019. FHR CC Mid-Crude is a typical crude oil refining process unit, similar to that
found in many refineries. The m-Xylene unit is also typical but is a finishing unit (not a full
reforming process), so it exhibits a very high percentage of aromatics in the gas stream. As
described in Section 3.2, these two process units have products and gas streams with 10.6 eV
PID RFs of 2.5 (Mid-Crude, composite mix) and 0.8 (m-Xylene, aromatics). There are also
refinery fuel gas streams in Mid-Crude that are used for process heaters with methane as a
primary constituent. These fuel gas streams are typically not well suited to LDSN monitoring
using the 10.6 eV PID sensor, so they were treated separately in the process unit's monitoring
plan (Section 5.1). For Mid-Crude, 38 pre-production prototype LDSN sensor nodes (Figures 3-5
and 3-13) were installed on six levels. In the smaller m-Xylene unit, ten LDSN nodes were
installed on four levels (Appendix C1). A view of the two process units is provided in Figure 3-18,
with Figure 3-19 presenting conceptual views inside an FHR CC process unit showing the
DRF in action. Exact locations of sensor nodes can be found in Appendix C1.

Figure 3-18. View of Mid-Crude and m-Xylene process units (Mid-Crude: 38 LDSN nodes on 6 levels; m-Xylene: 10 LDSN nodes on 4 levels)
Figure 3-19. Examples of DRF leak quantification (DRF in action, typically one person: quantifying a small leak close to a node, left; discovering a leak under insulation, right)

Unlike the SLOF pilot test, both units were vertically developed, comprising up to six levels of
elevation with maximum heights (excluding towers) of 58 ft for m-Xylene and 92 ft for Mid-Crude.
An accurate 3D model of the process units was not available during the preliminary
sensor placement planning. The CRADA team worked instead with 2D master drawings of the main
floors, with elevated areas shown as inserts on the side of the drawings. The LDSN node
positions for the progressive levels of the Mid-Crude and m-Xylene units are shown in Appendix
C1, Figures C1-1 to C1-11, with small numbered red points representing sensor node locations
and colored circular regions representing a 60 ft detection radius around each sensor node.
The different colors correspond to the different elevations of each sensor node. Node positions were
carefully chosen based on both the location of high densities of LDAR program components and proximity to
a local power source, while avoiding direct impact from steam present in the plant.
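A simplified planar coverage screen in the spirit of the 60 ft radius circles of Appendix C1 is sketched below. Node and component coordinates are hypothetical, and elevation, obstructions, and prevailing winds are deliberately ignored in this simplified check.

from math import dist  # Python 3.8+: Euclidean distance between points

def coverage_gaps(component_xy, node_xy, radius_ft=60.0):
    """Flag components that fall outside the nominal detection radius of every
    sensor node, as a simple planar screen of a draft monitoring plan.

    Coordinates are (x, y) positions in feet on a single level; the 60 ft
    radius mirrors the circles drawn in Appendix C1.
    """
    return [c for c in component_xy
            if all(dist(c, n) > radius_ft for n in node_xy)]

nodes = [(20.0, 30.0), (80.0, 30.0), (50.0, 90.0)]        # hypothetical node positions
components = [(25.0, 40.0), (150.0, 30.0), (55.0, 85.0)]  # hypothetical component cluster centers
print(coverage_gaps(components, nodes))  # -> [(150.0, 30.0)] lies outside all radii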
As detailed in Section 4, historical LDAR data can be an important tool to assist in
understanding LDSN/DRF performance compared to the M21-based CWP. In brief, FHR CC
Mid-Crude and m-Xylene units have 26,667 (2016-2018) and 9,313 (2014-2018) monitored
LDAR components under the CWP, respectively. Excluding the high RF fuel gas-related
components, there are 24,172 components in Mid-Crude and 9,313 components in m-Xylene that
are relevant to the LDSN pilot studies. For this subset of components, considering time
periods where valves, pumps, and connectors were all monitored, a 3-year historical analysis
period can be represented for the Mid-Crude unit. The m-Xylene unit has a longer history of
comprehensive M21 monitoring, so a 5-year historical analysis is appropriate. From 2016
through 2018, a total of 358 reportable leaks (LDSN-relevant) over 500 ppm SV were found in
Mid-Crude (119 per year on average). Over a 5-year period in m-Xylene from 2014 through
2018, 563 reportable leaks were found (112.4 per year on average). Considering leaks with SVs
between 30,000 ppm and 100,000 ppm (highest value reported), there were on average 11.3 and
11 such leaks per year found in Mid-Crude and m-Xylene, respectively.
Figure 3-20 provides an overview of leaks detected in the FHR CC pilot by the LDSN/DRF. A
total of 39 leaks were found by the system in the Mid-Crude unit during the 5-month pilot study.
In the m-Xylene process unit, 71 leaks were found by LDSN/DRF over a 7-month period. The
DT band for isobutylene (RF = 1) from Figure 3-7, superimposed for discussion purposes, is
close to the 10.6 eV LDSN DT band for m-Xylene for the 50-60 ft default node spacing. With an
RF = 0.8, the DTA for m-Xylene is ~ 4,000 ppm, whereas the DT band for the Mid-Crude unit
(RF = 2.5) is shifted higher, with a DTA of ~ 12,500 ppm (Figure 3-8). It is apparent that in both
process units, the LDSN/DRF system was detecting and repairing leaks that were below the
respective DTLs for the units. As detailed in other sections of this report and Appendix E, there
are several factors that contribute to this expected result, including (1) the presence of smaller leaks
located very close to a sensor node, (2) the additive effect of groups of smaller leaks (DTcluster
detections), (3) opportunistic repair of leaks discovered as part of the DRF, (4) enhancements in
leak detection capability due to channeled wind flow or special meteorological conditions, and/or
(5) inaccuracies of the M21 proxy for mass ER (integrated emissions from the leak interface
underestimated by a point concentration measurement).

Figure 3-20. FHR CC pilot test summary. Plot of M21 SVs (log scale, 500 to 100,000 ppm) for leaks found by LDSN/DRF in Mid-Crude and m-Xylene, with symbols for the M21 SV mean, M21 SV median, measured M21 SVs, and OGI survey detects, and the RF = 1 DT band and DTA superimposed. Figure notes: Mid-Crude, 39 leaks found over 5 months (mean SV 24,030 ppm); m-Xylene, 71 leaks found over 7 months (mean SV 14,587 ppm); m-Xylene DTA is ~4,000 ppm (RF = 0.8); Mid-Crude DTA is 12,500 ppm (RF = 2.5); Mid-Crude had 9 leaks with SV > 30,000 ppm and m-Xylene had 11; in both units combined, 38 leaks had SV > 10,000 ppm, with 12 of these detectable by routine OGI survey.
The ability to find and fix small leaks is important to support environmentally sustainable
operations and also in the emissions equivalency calculations described in Section 4. However,
the key benefit of LDSN/DRF is more rapid identification and repair of the larger leaks, which
disproportionately drive overall process unit emission levels. In the m-Xylene unit, a total of 11
leaks were found that exceeded 30,000 ppm SV with two leaks registering at or near the
maximum reported value of 100,000 ppm. In Mid-Crude, nine leaks possessed SVs above 30,000
ppm with seven of these at 100,000 ppm. In both units combined, there were 38 leaks with SV at
10,000 ppm or higher with a total of 12 of these leaks (32%) judged detectable by OGI survey, as
executed by the FHR LDAR SME. An additional five of these 38 leaks were judged by the FHR
SME to be detectable by OGI with "high difficulty" (a term derived by the FHR SME indicating
detection with the OGI camera used in extreme proximity to the component, with knowledge of the
leak presence and exact location identified with other instruments). The lowest SV leak detected
by OGI with high difficulty had an SV of 9,800 ppm. No other leaks below 10,000 ppm were
detectable by OGI. For the ten leaks with SVs > 90,000 ppm, eight of these were "routinely
detectable" by OGI, one leak at 100,000 ppm was in the "difficult to detect" category, and one
leak at 100,000 ppm was not detectable. The routinely detectable and difficult to detect
categories were derived by the FHR SME with the former indicating probable detection during a
standard OGI survey. In follow-on OGI observations of several of the leaks, some were judged
to be more detectable and some less detectable, indicating that conditions at the time of survey play a
factor in the OGI DTs (assuming the leak mass ER remained constant). Overall, 61% of

LDSN/DRF-discovered emissions originated from connectors, 25% from valves, and 14% from
all other sources including pumps, drains, open ended lines, and components or sources that were
not part of the LDAR monitoring program.
Due to resource limitations, it was not possible to execute a complete M21 inspection just prior
to and directly after the FHR CC pilot efforts. However, M21 under the CWP continued to be
executed (independent from DRF testing) as per standard procedures during the pilot tests in
both process units. A comparison of the SV values (in ppm) for leaks detected under routine
M21 inspection and by LDSN/DRF during the pilot studies is contained in Table 3-2, with "N"
indicating the number found in each category. In both cases, the LDSN/DRF approach found
larger numbers of high SV leaks (reflected in higher mean values). As expected, a significant
number of smaller leaks were found in the areas of the process units that were subject to
comprehensive (100%) CWP inspection (a technical advantage of the M21-based CWP, Table 2-
1).
Table 3-2. Comparison of M21 SVs (ppm) for leaks detected by CWP and LDSN/DRF

            Mid-Crude found   Mid-Crude found   m-Xylene found   m-Xylene found
            by LDSN/DRF       by CWP            by LDSN/DRF      by CWP
Average     24030             13302             14587            4352
Median      3463              4892              3020             945
σ           39075             20201             22527            14553
Minimum     582               540               564              500
Maximum     100000            81568             100000           100000
N           39                23                71               59
When assessing this LDSN/DRF to M21 CWP comparison, the following factors must be
considered. Because this was the first test of the technology in a complex refinery unit, some
adjustments to the LDSN detection algorithm acceptance levels were made during the test. As
discussed in Section 5, LDSN adjustments and node placement optimization would be an
expected part of an implementation start up. There were instances where LDSN identified
potential leaks, but draft notification and localization thresholds were not yet reached. This
process was interrupted by the standard M21 sweep through the area finding, documenting, and
repairing leaks. Valuable information was learned regarding some of the larger emissions that
were detected by CWP that were not robustly observed by LDSN/DRF. In total there were four
leaks found by M21 CWP in Mid-Crude that were above the DTU of 24,000 ppm (Figure 3-8).
Similarly, there were four leaks in m-Xylene above DTU of 6,800, using uncorrected M21 data.
A closer examination shows that seven leaks >10,000 ppm went undetected by the LDSN but
were found by M21 CWP in the Mid-Crude unit. Further analyses shows that two of the leaks
took place in areas outside the 50 ft sensor coverage, four were missed due to temporarily
unfavorable wind conditions (a notification was yet to be issued for such a leak but it was found
under the CWP), and one miss was ascribed to an ineffective search in the DRF. As a result of
the analysis, six more sensors are planned to be added to the unit to provide more complete

sensor coverage, and a more effective hand-held VOC detector was employed in the leak search
step of the DRF process. To mitigate the seasonal wind effect, the sensitivity of the LDSN
was increased by reducing the number of plume detections required to generate a detection notification.
Shortly after the sensitivity change, two new PSL notifications were issued in the 4th floor fin-
fan area where three larger leaks were identified by M21 before the DRF (Appendix C3).
4.0 LDSN/DRF to CWP Equivalency
To establish an AWP, the proposed work practice must demonstrate a reduction in regulated
material emissions at least equivalent to the reduction achieved under current federal
requirements [40 CFR 65.8(a) and other regulations]. In this case, LDSN/DRF must deliver the
same or better emissions reductions as obtained under the scheduled M21 LDAR CWP. This is
typically referred to as demonstration of emissions equivalency. This section presents one
possible emissions equivalency analysis approach which utilizes historical M21 SV data from the
FHR CC LeakDAS® (Inspection Logic, Louisville, KY, USA) database acquired over a
multiyear period. While a generalized assessment of the LDSN/DRF concept equivalency to the
CWP is part of CRADA LTOs, the presented analysis considers only the currently realized form
of LDSN that uses a 10.6 eV PID sensor. The following analysis focuses on the FHR CC m-
Xylene (RF = 0.8) and Mid-Crude (RF = 2.5) process units with the nominal 60 ft sensor node
spacing that was employed in the 2019 pilot tests, with four additional sensors that are currently
operating in the Mid-Crude unit. A Monte Carlo emissions simulation approach was selected for
the equivalency evaluation, consistent with the 2000 EPA equipment leak AWP approach and
the later OGI evaluation.31 Two sets of Monte Carlo emissions models are presented, the EPA
ORD analysis and the Molex analysis (see Table 7-2 for acknowledgments). Section 4 consists
of the following subsections:
•	Section 4.1 discusses historical leak inventory data captured in LeakDAS® for the subject
process units. Several types of emissions, their relation to the LDAR work practice, and
how these emissions are tracked are described.
•	Section 4.2 describes the procedures used in the emission equivalency comparison in
general terms, starting from the processing of the LeakDAS® data, followed by the
preparation of the data for Monte Carlo simulations. A nonproprietary version of the leak
data used in the emission simulations is contained in Appendix E2. Section 4.2, with
support from Appendices E and F, describes the EPA ORD and Molex emission
equivalency simulation approaches.
•	Section 4.3 discusses and compares select results for the EPA ORD and Molex emission
equivalency simulations.
•	Section 4.4 describes LDSN/DRF factors associated with different types of LDAR and
non-LDAR program emissions described in Section 4.1. Some of these emission types
that are routinely detected and mitigated under LDSN/DRF were not considered in the
analyses of Sections 4.2-3 and produce additional emission reduction benefits.
•	Section 4.5 summarizes the results presented in Sections 4.3 and 4.4.
As a general note, evaluating AWP performance relative to historic emissions data is not
necessary to demonstrate equivalency to federally required practices (as outlined in 40 CFR 65,

Subpart F).32 The FHR CC process units are subject to both federal and state regulations. In some
cases, state regulations may be more stringent than federal requirements, and the inventories also
reflect additionally protective work practices implemented voluntarily by FHR. Therefore,
comparing AWP performance to historical process unit data is less favorable than a strict
comparison to federal regulations. This analysis can be considered a conservative, protective
evaluation, rather than a definitive determination of LDSN/DRF equivalency to the federal CWP.
4.1	Historical LDAR Monitoring Data for m-Xylene and Mid-Crude
The FHR CC LDAR team maintains extensive historical records on LDAR activities across the
plant in their LeakDAS® database, including a detailed inventory of M21 SV measurements. For
the emission analyses subsequently described, m-Xylene LDAR program component SV
measurements from January 2013 to June 2019 were consolidated to develop a list of active
leaks in the 2014-2018 and 2016-2018 time frames for the m-Xylene and Mid-Crude process
units, respectively. This analysis included leaks from all component categories (e.g. valves,
pumps, connectors), although AVO-discovered program leaks and non-LDAR emissions are not
considered in the primary analysis (see Section 4.5). The Mid-Crude analysis was limited to
three years because connectors were not included in scheduled LDAR monitoring in this unit
prior to 2016. The procedure for identifying historical leaks is discussed in more detail in Section
4.2 and is outlined further in Appendix E1. The lists
of process unit leaks considered in this analysis are included in Appendix E3.
The LeakDAS database includes records for 27,037 and 9,474 unique component tags between
2013 and 2019 for Mid-Crude and m-Xylene respectively. Connectors, which are typically
monitored on an annual basis, represent 64% and 70% of the tracked components while valves,
usually monitored quarterly, comprise 34% and 29%, respectively. The remaining components,
other than pumps, are typically monitored on a quarterly basis as well. Pumps are monitored
monthly, and the database included 121 and 25 unique pump tags for Mid-Crude and m-Xylene,
respectively.
Figure 4-1 presents a high-level summary of the historical leak populations for both the m-
Xylene and Mid-Crude units for their respective modeling time periods. Figures 4-1(a) and (d)
show the portion of leaks by component category. In both units, more than half of the leaks are
from connectors. Given that connectors are subject to less frequent annual monitoring under the
CWP, this category is likely to show more substantial benefits under the proposed AWP's
continuous monitoring. Pumps are subject to monthly monitoring and therefore present less of a
potential benefit due to long periods of inattention. However, their frequent leaks from a small
population (<0.5% of all tracked components) do suggest that continuous monitoring could be
beneficial if pump locations are accounted for in LDSN design.
Figures 4-1(b) and (e) present the general leak SV distribution while Figures 4-1(c) and (f)
present the portion of leak emissions based on leak SV magnitudes for each unit. In each case,
the portion of leaks with SVs greater than double the LDSN DTA is relatively small (<15%).
However, these leaks, which would be both more likely to be detected and detected quickly due
to their magnitude, are responsible for more than 3/4 of emissions for each unit. By identifying
and repairing these leaks rapidly and effectively, LDSN/DRF can more efficiently distribute
LDAR resources than the CWP.

[Figure 4-1 panels: Leaks by Component, Leaks by SV, and Emissions by SV for each unit (component categories: VALVE, PUMP, CONNECT, DRAIN, RELIEF, COMPR).]
Figure 4-1. Historical LDAR leak apportionment by component category, SVs, and emissions
(a-c) m-Xylene, and (d-f) Mid-Crude, respectively
Figure 4-2(a) presents the distribution of primary leak SVs in the process units. The cumulative
emissions profiles are shown in Figure 4-2(b), with 100,000 ppm being the largest reported M21 SV,
as determined by FID. The total number of leaks represented in m-Xylene and Mid-Crude are
568 (5 years) and 299 (3 years), respectively. Note that these do not reflect the total number of
leaks reported, but rather the number of leaks identified as appropriate for LDSN simulation. As
discussed in Section 3.5, fuel gas leaks are excluded from the current analysis. In Figure 4-2(b),
emissions are determined from the as-reported (uncorrected) SV values for the m-Xylene unit using
the Synthetic Organic Chemical Manufacturing Industry (SOCMI) correlation equations, while
the Mid-Crude unit uses the petroleum equations.2 These are the same procedures FHR uses to
report annual emission inventories. In both cases, the process units show similarly skewed
distributions. Within the simulation sample set, leaks with SVs >30,000 ppm represent < 7% of
the detections for these process units and account for approximately 3/4 of the emissions. Leaks
with SVs >10,000 ppm represent about 10% of detections and account for >80% of emissions.
The CWP's systematic approach results in identifying a large population of leaks, which
invariably will include these more significant sources. However, the majority of leaks found are
responsible for a small portion of leak emissions due to the skewed nature of the leak size
distribution.
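As an illustration of how the correlation-equation approach converts a screening value into a mass emission rate, the sketch below applies a power-law correlation of the form kg/hr = A × SV^B, with a constant rate substituted for pegged leaks. The coefficients, pegged rates, and category keys are placeholders for illustration only, not the published SOCMI or petroleum Protocol values.

```python
# Illustrative sketch of SV-to-mass-rate conversion via correlation
# equations of the form kg/hr = A * SV^B, with a constant "pegged" rate
# when the SV is at the instrument upper limit. All coefficients and
# pegged rates below are placeholders, NOT the published Protocol values.

CORRELATIONS = {                 # placeholder (A, B) by component category
    "VALVE":   (2.0e-06, 0.75),
    "CONNECT": (1.5e-06, 0.74),
    "PUMP":    (5.0e-05, 0.61),
}
PEGGED_RATE_KG_HR = {            # placeholder constant rates for pegged SVs
    "VALVE": 0.14, "CONNECT": 0.03, "PUMP": 0.62,
}
PEG_LIMIT_PPM = 100_000          # largest reportable FID screening value

def emission_rate_kg_hr(category: str, sv_ppm: float) -> float:
    """Estimate a leak's mass emission rate from its M21 screening value."""
    if sv_ppm >= PEG_LIMIT_PPM:
        return PEGGED_RATE_KG_HR[category]
    a, b = CORRELATIONS[category]
    return a * sv_ppm ** b

# Example: a 25,000 ppm connector leak active for 45 days
print(f"{emission_rate_kg_hr('CONNECT', 25_000) * 24 * 45:.1f} kg")
```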

[Figure 4-2 axes: (a) FID Screening Value (ppmv); (b) Cumulative Emissions Percentile.]
Figure 4-2. FHR CC historical leak profiles (a) SV distribution, (b) cumulative emissions
A strength of the LDSN/DRF concept is the potential to detect leaks and other emissions that are
either difficult to find under the CWP or are not covered under the LDAR work practice (called
non-LDAR program leaks). There are a variety of reasons why potential leak interfaces may not
be included in scheduled M21 monitoring, and the emissions contributions of these leaks are
comparatively poorly understood. Some potential leak interfaces would normally be part of the
M21-monitoring program but are exempted because they cannot be properly accessed with the
M21 contact inspection probe. Examples of this category are connectors that are covered by
thermal insulation. Some components are difficult to monitor because they are elevated requiring
temporary scaffolding, so they are monitored at a lower frequency. Some leak interfaces, such as
components in heavy liquid service, are part of the LDAR work practice, but due to low expected
emission potential, are not monitored using M21. Emissions from this category may be detected
by process unit operators using AVO procedures,33-34 with each finding and associated repair
captured in LDAR documentation. Non-LDAR program leaks, for example the fin-fan leak of
Section 3.3, and other emissions may also be detected by AVO, or by safety monitoring
equipment in severe cases. As opposed to AVO-detected LDAR program leaks, non-LDAR
program leaks/emissions are not tracked in LDAR documentation. Instead, they follow the
applicable reportable and non-reportable emission event guidelines (e.g. 30 TAC 101.201).
As previously discussed, emissions from these source types can be relatively large and can go
undetected for significant lengths of time. The historical AVO LDAR record offers some
indication of the LDAR program leaks from unmonitored components under the CWP. For m-
Xylene (2014-2018) and Mid-Crude (2016-2018) there were 24 and 42 total AVO leak
detections, respectively, approximately 4% and 12% of total leaks, which might be
detectable by LDSN (Figure 4-3). The distribution of AVO leaks is not clear because, in many
cases, an initial SV measurement was not taken prior to repair. However, pilot results suggest
that AVO leaks may be found at a higher rate under LDSN/DRF than CWP. Moreover, the
presence of these leaks indicates that there is a not-insignificant population of leaks which an

LDSN system may be able to detect far more rapidly than is currently possible.
Figure 4-3. Historical AVO leak identification: (a) Mid-Crude and (b) m-Xylene
While this leak inventory and the associated analysis described in the following sections are
particular to the two process units, the leak data used is typical of, if not fully representative of,
similar process units. Nonetheless, the equivalency results are specific to the two FHR CC units.
However, broader patterns and conclusions about the effect of the LDSN/DRF concept may be
assumed to be generally applicable so long as the method limitations and caveats discussed
across Section 4 are considered.
4.2 Emissions Equivalency Analysis Approach
This section describes an evaluation of leak emissions control under the scheduled M21-based
CWP compared to LDSN/DRF control using a Monte Carlo simulation approach and variations
on detection performance assumptions. This is called the "EPA ORD analysis" and was
developed by Haley Lane as part of her Oak Ridge Institute for Science and Education (ORISE)
NGEM research appointment with EPA ORD (Section 7.2). The EPA ORD analysis differs
slightly in defined terms and approach from the analysis independently developed by Molex,
described later in Section 4.2 and in Appendices E and F.
Demonstrating equivalency relative to M21 is difficult due to associated uncertainties of the
CWP and the unknown uncertainties and performance levels of a proposed AWP. EPA guidance
recommends a Monte Carlo simulation approach as a useful tool for evaluating possible AWP
performance. Monte Carlo simulations evaluate a broader array of scenarios and can elucidate
the impact of expected variation on a method's performance. The U.S. EPA's 2000 guidance
presents a method for establishing AWP equivalency and a threshold: "an alternate work practice
should provide at least the same or better environmental protection as the current work practice
in 67 percent of the simulated tests".31 Two different Monte Carlo simulation models were
developed to evaluate the LDSN/DRF concept equivalency relative to scheduled M21 LDAR:
one by EPA ORD and one by Molex ("EPA model" and "Molex model"). Both models simulate
emissions over multi-year periods in order to study the long-term performance of LDSN/DRF
controls. This section is primarily focused on a summary of the EPA model results, some
comparisons to the Molex approach, and a discussion of the equivalency implications. A more
thorough review of both model methods and results is included in Appendix E1 and Appendix F.
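A minimal sketch of how the 67 percent criterion might be applied to simulation output is shown below; the array names, and the assumption that each iteration yields one total-emissions value per work practice, are illustrative.

```python
import numpy as np

# Minimal sketch of the 67 percent equivalency criterion. Inputs are
# per-iteration total emissions simulated under the proposed AWP
# (LDSN/DRF) and under the CWP (scheduled M21); names are illustrative.

def equivalency_rate(awp_emissions_kg: np.ndarray,
                     cwp_emissions_kg: np.ndarray) -> float:
    """Fraction of Monte Carlo iterations where AWP emissions <= CWP emissions."""
    return float(np.mean(awp_emissions_kg <= cwp_emissions_kg))

# equivalent = equivalency_rate(awp_kg, cwp_kg) >= 0.67   # 2000 EPA guidance threshold
```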
4.2.1 LeakDAS Data Processing
The Monte Carlo models developed by both EPA ORD and Molex evaluated simulated multi-
year emissions using FHR CC historical data. The LeakDAS dataset discussed in Section 4.1 was

chosen in order to test LDSN/DRF effectiveness with the most recent, representative dataset
available for each FHR CC pilot test unit. Historical, process unit-specific SV data was chosen in
lieu of a broader approach in order to focus the analysis on the pilot study and evaluate two
specific, existing embodiments of the LDSN/DRF concept. In contrast, the original EPA
Guidance and OGI AWP analysis used a 1993 dataset of SV measurements from 24 different
units in order to characterize the SV distribution and the 1993 Petroleum Industry Bagging Data
to simulate the SV to mass correlation.35 While this analysis seeks to remain consistent with the
EPA Guidance and OGI AWP approaches wherever possible, the approach was adapted to use
unit-specific historical data, better evaluate LDSN/DRF benefits and limitations, and study
emissions over longer time periods.

[Figure 4-4 summarizes the LeakDAS data processing method: component addition and retirement records and per-tag SV measurement histories (with component locations estimated by Molex) are used to identify leaks, compile leak data, estimate leak start and end dates, remove leaks inactive in the modeling period, and remove fuel gas components, producing separate non-AVO and AVO leak lists.]
Figure 4-4. LeakDAS data processing methods

The LeakDAS inventory is a rich dataset which details a variety of LDAR activities across the
FHR CC facilities over time. This analysis used Thermo Fisher Scientific TVA measurement and
AVO records between the years of 2013 and 2019 to characterize m-Xylene and Mid-Crude leak
profiles. Individual measurements were determined to be unsuitable for this purpose because
leaking components tend to be overrepresented. While a non-leaking component is generally
measured once a monitoring cycle, a leaking component may be measured several times prior to
a successful repair. The dataset was processed in order to develop a list of active leaks between
the years of 2014-2018 and 2016-2018 for m-Xylene and Mid-Crude, respectively. Leaks were
identified based on individual component historical records, and the dataset development
procedure is presented in Figure 4-4. For further discussion of EPA ORD processing methods,
see Appendix E1, and see Appendix F1 for a discussion of Molex methods.
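The per-component leak-identification step can be sketched as follows, assuming a simplified record layout (tag, date, sv, status columns) rather than the actual LeakDAS schema; a leak event opens at the first failing record for a tag and closes at the next passing record.

```python
import pandas as pd

# Hedged sketch of the per-component leak-identification step. A leak
# event opens at the first failing SV record for a tag and closes at the
# next passing record (repair verification). The column names (tag, date,
# sv, status) are illustrative and not the actual LeakDAS schema.

def identify_leaks(records: pd.DataFrame, leak_def_ppm: float = 500) -> pd.DataFrame:
    """Build a list of leak events from chronological per-tag SV records."""
    leaks = []
    for tag, hist in records.sort_values("date").groupby("tag"):
        open_leak = None
        for row in hist.itertuples():
            failing = (row.status == "Fail") or (row.sv >= leak_def_ppm)
            if failing and open_leak is None:        # first failing record
                open_leak = {"tag": tag, "detect_date": row.date, "sv": row.sv}
            elif not failing and open_leak is not None:
                open_leak["end_date"] = row.date     # repair verified
                leaks.append(open_leak)
                open_leak = None
    return pd.DataFrame(leaks)                       # leaks still open are dropped
```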
4.2.2 Emissions Calculations Methods
The equivalency analysis presented in this section derives from the methods of the original AWP
Monte Carlo guidance and the subsequent OGI analysis, but it differs in significant ways. The
original guidance focused on comparing the effectiveness of M21 to methods which had no
reliance on M21. Given that the current DRF assumes M21 will be used to quantify each
individual leak that is found, this approach was unsuitable. Emissions presented in this chapter
can be considered a reflection of the SVs and leak inventory reporting rather than a simulation of
true mass emissions. Moreover, the original guidance did not calculate emissions, instead
focusing on total process unit emission rates. This approach would not reflect the potential
benefits, nor the limitations, of the LDSN/DRF concept because its primary benefit lies in
reducing the duration of large emission sources. The OGI analysis also noted this limitation and
updated the Monte Carlo analysis to compare total emissions, rather than emission rates. The
analysis presented in this report made an additional update by extending the length of modeling
time and considering all LDAR component categories. The OGI assessment limited its scope to
emissions from valves within a single quarter. Early LDSN/DRF modeling indicated that longer
time frames would better demonstrate the long-term equivalency of this new AWP, and the
component sample was broadened to better characterize the process units. Finally, this method
differs from the two previous versions in its focus on process unit specific historical data. While
a more broadly transferable equivalency analysis is possible, it was not within project scope.
The LDSN/DRF concept is a new method, and pilot testing was limited to two process units. The
transferability of the approach is complicated by a variety of variables, but the overall theory of
the method was established by CRADA collaborators. What remains uncertain is the exact level
of performance in any given specific embodiment of the concept. Pilot testing provided
important supporting data. However, the equivalency analysis sought to begin examining the
long-term viability of the proposed AWP from an emissions perspective. Given the significant
unknowns, three different LDSN/DRF emissions scenarios were modeled in comparison to CWP
emissions. These scenarios were based on an understanding of the method theory as outlined in
Section 3 and Appendix F2. A description of modeling terms used in the EPA ORD analysis is
contained in Table 4-1, and terms for the Molex analysis are presented in Table 4-2.

Table 4-1. Description of EPA ORD modeling terms

Detection Threshold - Average (DTA) (same as Table 3-1): Most conservative emission modeling scenario that uses the average RF for the process unit. Assumes individual leaks with SV > the process unit DTA are detected within 3 days and repaired within 7 days. See Table 3-1 for additional notes on DTA determination. This calculation approach does not require information on the position of the sensors or the leaks.

Detection Threshold - Band (DT) (same as Table 3-1): More realistic emission modeling scenario that takes into account distance-to-sensor effects on detection threshold (accounts for DT band width) for single leaking components. See Figure 4.3 for detection distance calculation. Assumes individual leaks with SV > calculated threshold are detected within 3 days and repaired within 7 days.

Detection Threshold - Cluster (DTC) (EPA ORD modeling term): More realistic emission modeling scenario that combines DT with the additive effect of multiple closely spaced leaks forming a cluster (combined SVs of a cluster can also exceed the detection threshold). See Figure 4.3 for calculation description. Assumes individual leaks or leak clusters with SV > calculated threshold are detected within 3 days and repaired within 7 days.
Table 4-2. Description of Molex modeling terms

Detection Threshold Average - Single Tag (DTA(tag)): The DTA(tag) modeling scenario assumes that single tag leaks with SV > DTA are detected in the fugitive emission simulation. The DRF process takes 10 days (3 days to detect and 7 days to repair).

Detection Threshold Average - Cluster (DTA(cluster)): The DTA(cluster) modeling scenario assumes a cluster of leaks with the sum of SVs > DTA is detected in the fugitive emission simulation. The DRF assumes leaks above the cluster repair threshold (3,000 ppm) will be repaired one by one. If no individual leaks above this threshold are found, the model assumes that up to 3 leaks will be repaired under the defined DRF, i.e., 3 days to detect and 7 days to repair.

Detection Threshold Band - Single Tag (DT(tag)): Same as DT in EPA ORD's simulations. This is a more realistic scenario than DTA(tag) since it takes into account distance-to-sensor effects on detection threshold for single leak components.

Detection Threshold Band - Cluster (DT(cluster)): Same as DTC in EPA ORD's simulations. This is a more realistic modeling scenario than DTA(cluster) since it takes into account distance-to-sensor effects. In this scenario, a leak closer to a sensor has a greater contribution to the total detection signal than another leak of the same size within one cluster, and the contributions of these leaks to the detection signal are mathematically modelled.

Equivalency-required Detection Threshold Average (eDTA): eDTA represents a target average detection threshold that LDSN/DRF as a whole must achieve in order to demonstrate M21 CWP equivalency in total fugitive emissions for a given process unit. In Molex's simulation, eDTA is conservatively calculated as the average value of DTA(tag) and DTA(cluster).

Equivalency-required Detection Threshold Upper limit (eDTU): eDTU is the DT value, or smallest leak that should be detected by the sensor network at the farthest distance from a sensor, in order to achieve equivalency to M21. In Molex's simulation, eDTU is conservatively calculated as 1.5 times eDTA.

None of these approaches can be considered the definitive method at this stage of concept
development. Simulations demonstrated that the LDSN/DRF systems would take time to reach
steady-state control (see Section 4.2.3). Therefore, further testing is necessary to establish the exact
mechanism for LDSN/DRF control and the expected performance levels. However, these scenarios
were designed to provide a range of potential performance levels. While the DTA scenario is the
simplest approach, the DTA value itself is an aggregate based on the varying emissions
compounds, the sensor model, and sensor spacing in the process unit. It was chosen as an initial
approach, similar to that applied in the OGI AWP, in order to provide an easy, straightforward
performance test. Note that no modeling tested varying individual leak emissions RFs. However,
testing evaluating varying DTAs is presented in Appendices E and F.
The DT and DTC scenarios were developed in order to consider the additional physical
processes which would impact LDSN detection capabilities. Distance plays an important role in
emission dispersion and has both positive and negative impacts for LDSN/DRF. Smaller leaks
located near sensors, which would be ignored under the DTA scenario, may be detectable in
practice. Conversely, larger leaks, which the DTA scenario would assume are detected, may be
difficult to identify due to location. The facility layout, sensor siting, and local meteorological
conditions would contribute to whether distance effects are a net benefit, and it is important to
account for areas with higher component density when developing LDSNs. The location effects
were modeled using a heuristic distance transformation based on the theoretical distance-
concentration relationship (see Appendix F2 for additional information). The equation below
presents how detection was determined for the DT scenario:
Equation 4-1:   SV / D² ≥ DTA / 1,250
In Equation 4-1, D is the distance between a leaking component and the nearest sensor, and
1,250 is D² for the typical furthest distance anticipated with 50 ft sensor spacing (D = 50/√2 ft). Note
that exact component locations were not known. Instead, general component areas (clusters)
were determined and leak locations were randomly generated within cluster regions for each
simulation iteration. The DTC scenario also used the distance heuristic with an additional effect.
In practice, it was observed that groups of smaller leaks in a concentrated area could produce
detections despite no single leak being above the DTA. The cluster scenario represents this effect
by assuming a detection occurs when the conditions represented in Equation 4-2 are reached.
Equation 4-2:   max_i [ Σ_{j ∈ a} ( SV_j / D_ij² ) ] ≥ DTA / 1,250
In Equation 4-2, the maximum sum of transformed SV for cluster a relative to any sensor i
should be greater than the transformed DTA. Neither of these approaches is necessarily an exact
representation of these physical processes as they function at FHR CC. However, they are
theoretical portrayals which provide additional insight into how the LDSN/DRF will control
emissions from LDAR components. Both the Molex and EPA models tested additional variations
of these scenarios and several other options which are discussed in Appendices E1 and F1.
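A sketch of how the DT and DTC detection tests of Equations 4-1 and 4-2 might be coded is shown below; the leak and sensor coordinates, the 50 ft spacing constant, and the minimum-distance guard are illustrative assumptions rather than the CRADA implementation.

```python
import math

HALF_DIAGONAL_SQ = 1_250   # ft^2: (50 / sqrt(2))^2 for the nominal 50 ft spacing

def detected_dt(sv_ppm: float, dist_ft: float, dta_ppm: float) -> bool:
    """Single-leak, distance-weighted detection test (Equation 4-1)."""
    d2 = max(dist_ft, 1.0) ** 2                      # guard against zero distance
    return sv_ppm / d2 >= dta_ppm / HALF_DIAGONAL_SQ

def detected_dtc(leaks, sensors, dta_ppm: float) -> bool:
    """Cluster detection test (Equation 4-2): for each sensor, sum the
    distance-transformed SVs of every leak in the cluster and compare the
    maximum against the transformed DTA."""
    def d2(p, q):
        return max(math.hypot(p[0] - q[0], p[1] - q[1]), 1.0) ** 2
    best = max(sum(sv / d2(xy, s) for sv, xy in leaks) for s in sensors)
    return best >= dta_ppm / HALF_DIAGONAL_SQ

# Two 4,000 ppm leaks about 25 ft from a sensor: neither is detected alone
# under a 12,500 ppm DTA, but the cluster sum exceeds the transformed threshold.
print(detected_dtc([(4_000, (0, 25)), (4_000, (5, 25))], [(0, 0)], 12_500))
```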
The Monte Carlo model simulated varying leak profiles in order to test a wider array of possible
emissions. Control performance for these leaks was estimated for all three LDSN scenarios and
compared to simulated control under the CWP. A variety of assumptions were made in order to
simplify emissions simulations. These assumptions and their basis are presented below:

(1)	Leak samples are representative
It was assumed that the overall leak datasets created using historical data were reflective of
future leak distributions. Leak samples were created using random sampling with
replacement to test a broader range of potential SV distributions. Leak start times were
randomly generated within the modeling period using a uniform random variate. Samples were limited to
250 (Mid-Crude) and 500 (m-Xylene) leaks. It was assumed that this would be representative
of potential SV distributions which might be detected by LDSN.
(2)	No variability for M21
No variability was modeled for M21 measurements because both methods rely on M21 to
determine the leak size. The model assumes no difference in SV if a leak is found by CWP or
LDSN/DRF, regardless of differences in detection date. The first M21 measurement of a
failing M21 record for a given leak was assumed to be representative of that leak's size up
until the point of detection.
(3)	No variability for LDSN detection
No variability was modeled for LDSN detection. This is an area for further research, but
there was insufficient data to characterize LDSN variability at the time of model creation.
Instead, three separate scenarios were developed to model a range of potential control
scenarios. Additionally, LDSN detection was assumed to correlate directly to M21
measurements. In practice, LDSN detections may correlate more strongly to true mass
emissions than M21 measurements, given the known variability of M21 and the possible
variability of LDSN. This area of variation was not explored in this analysis to simplify the
number of variables within the Monte Carlo models.
(4)	No repeat leakers
The EPA ORD model assumed no repeat leaking components within the modeling period. In
this case, leaks from the same component were each treated as separate, individual leaks
within the same cluster region. This assumption was made to remain consistent with the
previous OGI analysis and avoid issues which arise when simulated leaks "overlap" due to
randomized start times. EPA ORD assumed that leaks which occur after an original leak
cannot be assumed to occur before the first leak from that component. Molex did not make
these assumptions and modeled components with multiple leaks, assuming this is a better
reflection of process unit emissions. This difference in approaches constitutes the primary
difference in EPA ORD and Molex emissions estimates.
(5)	All components subject to the same CWP monitoring schedule
In practice, some components may be subject to varying schedules due to a variety of
regulation intricacies. However, all components of a given category were assumed to be
subject to the same CWP schedule—monthly pump, annual connector, and quarterly
inspections for all other components. Note that both the OGI and EPA AWP Guidance only
evaluated quarterly CWP monitoring because both analyses were limited to valves. The EPA
ORD model assumed that inspections occurred on the first day of a given month, using dates
between 1/1/2014 and 1/1/2019. The Molex model assumed inspections occurred every 30,
90, or 365 days. These approaches resulted in some differences in emission estimates which
were small in the aggregate Monte Carlo results.

(6)	Repair occurs seven days after detection
Regulations require that leaks be repaired within 15 days, and most leaks are repaired on the
first attempt within five days of detection. Seven days was chosen as a conservative
assumption and applied to all four modeling scenarios. Three extra detection days were
added to the LDSN leak repair process.
(7)	Analyze leakers only
Emissions for components with SVs less than 500 ppm were not considered, consistent with
the OGI analysis, because the level of control is assumed to be equal under both work
practices. In practice, it is possible that leaks which might be detected under LDSN due to
location or cluster effects would not be identified under the CWP. However, this possibility
was ignored due to the inclusion of Assumption (2). The DRF requires that any detected leak
above the leak definition (typically 500 ppm) will be repaired.
(8)	Constant mass emission rates
For the purposes of this analysis, emission rates were assumed to be constant for the entirety
of a leak's duration. Emission rates were based on the first SV measured as a failing M21
record for historical leaks. For an exploration of non-constant emission rates, see the Molex
analysis presented in Appendix Fl.
While the CWP emissions are generally reflective of what would be reported emissions, they are
not directly comparable to those reported numbers. First, the set of historical leaks used for
simulation was a subset of the overall leaks due to limitations of the LDSN system (e.g.
emissions from sources with unfavorable RFs). Emissions from components with SVs less than
500 ppm were not included because the control level was assumed to be equal under both work
practices. Additionally, the component emissions were based on the first failed screening value
measurement for a given leak rather than the entire history of SVs. This was done both as a
simplifying assumption and to maintain consistency across model scenarios. The Molex and
EPA ORD emissions estimates are also not directly comparable, primarily due to the difference
in approach with respect to components with repeating leaks. For further discussion of these
differences, see Appendix E1.
4.2.3 Monte Carlo Simulation
The four different LDAR control scenarios (CWP, DTA, DT, and DTC) were incorporated into
an emissions model that was tested using Monte Carlo simulations. The EPA ORD simulation
steps are outlined in Figure 4-5, and Figure 4-6 presents the Molex simulation approach. Both
the Molex and EPA ORD simulations varied the leak population sample, leak start times, and the
component locations (within bounded cluster regions) for each simulation iteration. No error was
applied to the SV measurements or control performance. None of the evaluated scenarios should
be considered a definitive representation of LDSN/DRF or CWP emissions. Instead, they
represent a likely spectrum of possible LDAR leak control efficacy in comparison to CWP
performance. The potential benefits from non-LDAR leaks were not explored in this analysis,
and AVO leaks were excluded due to the lack of preceding inventory measurements, as
discussed in Section 4.2.1.
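A simplified sketch of one simulation iteration is given below to make the mechanics concrete: leaks are sampled with replacement, assigned random start days, and assigned detection dates under a scheduled-M21 CWP and under a DTA-style LDSN/DRF scenario. The helper names, the single CWP inspection interval, and the kg/day rate function are illustrative simplifications of the EPA ORD and Molex implementations described in Appendices E and F.

```python
import random

# Hedged sketch of one Monte Carlo iteration comparing CWP and LDSN/DRF
# (DTA-scenario) emissions for a sampled leak set. Names and schedule
# logic are illustrative simplifications of the full models.

DETECT_DAYS, REPAIR_DAYS = 3, 7          # DRF assumption: 3 d detect + 7 d repair

def simulate_iteration(historical_leaks, n_leaks, period_days,
                       dta_ppm, cwp_interval_days, rate_kg_per_day):
    sample = random.choices(historical_leaks, k=n_leaks)     # with replacement
    cwp_total = awp_total = 0.0
    for leak in sample:
        start = random.uniform(0, period_days)               # random start day
        # CWP: found at the next scheduled M21 inspection, then repaired
        next_insp = (start // cwp_interval_days + 1) * cwp_interval_days
        cwp_end = min(next_insp + REPAIR_DAYS, period_days)
        # LDSN/DRF (DTA scenario): found only if the SV exceeds the DTA
        if leak["sv"] >= dta_ppm:
            awp_end = min(start + DETECT_DAYS + REPAIR_DAYS, period_days)
        else:
            awp_end = period_days                             # never detected
        rate = rate_kg_per_day(leak)                          # constant-rate assumption
        cwp_total += rate * (cwp_end - start)
        awp_total += rate * (awp_end - start)
    return cwp_total, awp_total
```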

Monte Carlo Simulations of Historical LDAR Data for FHR CC
Step 1: Simulate leak screening values for chosen FHR CC processing unit using historical
inventory data
Step 2: Randomly generate leak start time and location
Step 3: Simulate leak detection and repair
Step 4: Calculate leak emissions
Step 5: Sum simulated processing unit emissions for different scenarios and compare
total emissions from AWP scenarios to CWP emissions
Figure 4-5. EPA ORD Monte Carlo emissions model
[Figure 4-6 summarizes the Molex model flow: an original leak profile (leak events) and associated clusters form a sample pool (tag + leak event index) that is sampled N times (N = total number of cluster leak events); a leak growth model produces a new leak profile and updated clusters, which feed the emission calculation of total emissions.]
Figure 4-6. Molex Monte Carlo emissions model
In order to better understand the effects of the different emissions scenarios, individual
simulation iterations were reviewed. Modeled emissions presented in Figures 4-7 and 4-8 are
from a single EPA ORD model iteration but can be considered representative of the overall
emissions and control patterns observed.

Figure 4-7. Example EPA ORD Simulation of daily emissions: (a) Mid-Crude, (b) m-Xylene
The emission rates shown in Figure 4-7 illustrate several LDSN/DRF strengths and constraints.
Figure 4-7 shows the modeled daily emissions from the 3 scenarios compared to historical
LDAR management. The m-Xylene unit (b) demonstrates faster detection of larger leaks, most
clearly at the beginning of both 2014 and 2015. Similar performance can be seen in the Mid-
Crude unit (a) mid-way through 2016. Additionally, the m-Xylene DTC model appears to reach a
relatively steady state of control where the baseline level of daily emissions is reasonably stable
after an initial adjustment period. This is due to low average DTA and DRF constraints which
require repair of any leaks found with SVs greater than 3,000 ppm. It also illustrates that this
adjustment period may need to be accounted for when developing performance thresholds. The
DTA and DT model scenarios do not have a mechanism for fixing leaks which do not trigger an
alert under each scenario's' detection threshold. This results in a baseline of emissions from
smaller leaks which become more prominent as time goes on. It is a conservative modeling
choice to not allow for repair of some leaks which do not trigger a system detection given the
proposed DRF requirements (see Section 6).
Conversely, the Mid-Crude unit (a), demonstrates less effective control. The AWP scenarios are
more similar for Mid-Crude than for m-Xylene, and the baseline daily emission rate increases
steadily over the modeling duration. There may be some level of steady state, or near-steady
state, achieved for DT and DTC, but a longer modeling period with additional data would be
necessary to confirm. These examples illustrate a less effective embodiment of the LDSN/DRF

concept, and this is apparent in the net cumulative emissions as well (see Figure 4-8(a)). The
modeled m-Xylene unit emissions (Figure 4-8(b)) show significant reductions relative to
historical emissions for two of the three scenarios and modest reductions with DTA assumptions.
Modeled Mid-Crude emissions also demonstrate equivalency for the DT and DTC cases but fail
to demonstrate equivalency for the DTA scenario. For Mid-Crude DTA, the increasing daily
emissions baseline produced by smaller leaks which are not detected becomes dominant relative
to the reductions from quickly repaired larger leaks. The DT and DTC performance demonstrates
the importance of considering distance effects both when designing a LDSN and when
evaluating equivalency.
Figure 4-8. Example EPA ORD simulation of cumulative emissions: (a) Mid-Crude, (b) m-Xylene
Both the Molex and EPA ORD Monte Carlo models were used to evaluate three different
LDSN/DRF control levels and analyze a variety of different system constraints and model
limitations. The historical results presented are reflective of an average EPA ORD simulation
iteration, and cumulative results are discussed in Section 4.3. For a discussion of additional
model and control scenario analysis, see Appendices E1 and F1.
4.3 Simulation Results
This section presents results from the EPA ORD and Molex Monte Carlo models which
simulated emissions under LDSN/DRF LDAR programs in order to evaluate if the existing FHR
CC systems would produce equivalent control to the CWP. The simulations focused on leaks
originating from LDAR program (non-AVO) components, given the relative lack of knowledge

about other leak sources and in order to remain consistent with previous equivalency approaches.
For a discussion of emissions sources not evaluated in this simulation, see Section 4.4.
The results of this analysis suggest that the LDSN/DRF systems installed at the FHR CC facility
would be equivalent under the current embodiments. Equivalency summary results are presented
in Table 4-3. In 10,000 iteration simulations, the m-Xylene unit demonstrated equivalency rates
greater than the 67% threshold across all three scenarios for both the Molex and EPA ORD
models. In contrast, the Mid-Crude unit did not establish equivalency for the DTA scenario in
either model. This is due primarily to the unit having a DTA more than double that of m-Xylene
(12,500 ppm). The DTA scenario was the most simplistic; it assumed that no leaks with SVs
below the DTA would be detected by the system. Pilot test results suggest that this assumption is
not borne out in practice. However, the lower level of equivalency does reflect the Mid-Crude
system's lower level of control relative to the m-Xylene unit.
Table 4-3. Equivalency simulation results

Processing Unit    m-Xylene              Mid-Crude
Model              EPA       Molex*      EPA       Molex*
n                  10,000    10,000      10,000    10,000
DTA                78.0%     100%        20.7%     59.0%
DT                 99.8%     100%        72.4%     93.2%
DTC**              100%      100%        92.5%     92.8%

* Non-growth emissions model
** EPA and Molex cluster models reflect different DRFs
While the summary results indicate that the LDSN/DRF concept can demonstrate equivalency,
the net emission results provide additional insight. Figure 4-9 presents boxplots of the EPA ORD
simulated net emissions for the LDSN/DRF scenarios compared to scheduled-M21 LDAR. In
this presentation, an equivalent simulation iteration would be less than or equal to 0 kg net
emissions. The relative levels of control under each LDSN/DRF scenario are clearly
demonstrated in the differences in median net emissions. All six datasets display significantly
more left-tail outliers with skewness ranging from 0.26-0.42. Additionally, the net emissions
illustrate the added emissions reductions needed to achieve equivalency or additional benefits of
LDSN/DRF. The 67th percentile of net emissions can be considered an indicator of the
additional reductions necessary to achieve the equivalency threshold. The 67th percentile of
Mid-Crude DTA net emissions is 1,377 kg, approximately 40% of average M21 emissions. In
contrast, this same metric for the DT and DTC scenarios is -151 and -835 kg, approximately 5%
and 25% of M21 emissions respectively. It would take a significant reduction in overall
emissions from LDAR component leaks for the DTA simulation to demonstrate equivalency.
However, this simulation does not consider non-LDAR or AVO leaks, which provide additional
benefits (see Section 4.4 for discussion). The median net emissions reductions for m-Xylene
represent 15%, 49%, and 76% of scheduled-M21 for DTA, DT, and DTC, respectively. While
the results indicate that the pilot study LDSN/DRF systems can demonstrate equivalency, the
significant differences in scenario simulated net emissions indicate that further research is
necessary to better understand the practical levels of control achieved under LDSN/DRF.
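A small sketch of this percentile indicator, assuming arrays of per-iteration AWP and CWP emission totals, is given below; the variable names are illustrative.

```python
import numpy as np

def net_emissions_indicator(awp_kg: np.ndarray, cwp_kg: np.ndarray) -> float:
    """67th percentile of per-iteration net emissions (AWP minus CWP).
    A value at or below zero corresponds to meeting the 67% equivalency
    threshold; a positive value indicates the additional reduction needed."""
    return float(np.percentile(awp_kg - cwp_kg, 67))
```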

Figure 4-9. LDSN/DRF simulated net emissions compared to M21 LDAR (a) Mid-Crude and (b) m-
Xylene
Evaluating LDAR performance relative to scheduled-M21 without conducting handheld
monitoring of all components is complicated. Tracking the number of leaks repaired under
LDSN/DRF may present one avenue for evaluating emissions control performance in practical
implementations, and simulation results can provide guidance in this area. Figure 4-10 displays
boxplots of the percentage of leaks repaired in the Mid-Crude (a) and m-Xylene (b) units from
10,000 iterations of the EPA ORD model. While all but one of the m-Xylene LDSN/DRF
scenarios demonstrated 67% or greater equivalency, only the m-Xylene DTC scenario repaired a
similar number of leaks to simulated scheduled-M21. The broader range of the Mid-Crude DTC
scenario suggests that the m-Xylene unit DTC scenario was limited by the number of simulated
leaks, rather than LDAR performance. The m-Xylene DT scenario was able to demonstrate
equivalency for 99.8% of the simulation iterations despite fixing on average just over half the
leaks repaired under DTC and CWP scenarios (average leak repair counts are 253 for DT, 431
for DTC, and 435 for CWP). The m-Xylene DTA and Mid-Crude DT and DTC scenarios also
demonstrated equivalency with average repair rates of 25%, 21%, and 43%, respectively. These
results demonstrate one of the theorized strengths of the LDSN/DRF concept. Despite generally
responding to and repairing significantly fewer leaks than under the CWP, the proposed work
practice still results in greater emissions reductions due to its ability to quickly respond to larger
leaks. This further emphasizes the importance of understanding the impacts of these significant
sources.

[Figure 4-10 shows the percentage of leaks repaired under the CWP, DTA, DT, and DTC scenarios for each unit.]
Figure 4-10. Emissions scenario leak repairs (a) Mid-Crude, (b) m-Xylene
Pegged leaks, those with screening values greater than the upper limit of the portable device in
use, represent a significant portion of leak inventory emissions. This is due both to their
magnitude and to the way in which their emissions are estimated. While other leak emission rates
are calculated using correlation equations, pegged leak emissions are characterized using
constant emission rates for specific component categories. The pegged emission rates provided
in guidance are significantly larger than what would be used under the correlation equations.
Much like for M21 overall, these leaks have been shown to vary widely in mass emission rates.
However, the guidance methods were used in this analysis to remain consistent with inventory
practices. Pegged leaks are both the most likely leaks to be detected under LDSN, so long as the
emitted compounds have a favorable RF, and the most important to repair quickly. When
considering leak repair rates as a method for evaluating performance, it is important to assess the
effects of these larger sources explicitly.
The original leak datasets (methods outlined in Section 4.2.1 and distribution discussed in
Section 4.1) included 18 and 12 pegged leaks in m-Xylene and Mid-Crude, respectively. Due to
the Monte Carlo model's randomized sampling with replacement, the simulations tested a wider
range of pegged values. A summary comparing equivalency rates to the number of simulated
pegged leaks is presented in Figure 4-11. Both DTA scenarios show notable correlation between
performance and the number of pegged values in a simulation, although it is not a perfect
correlation [see m-Xylene (b) at 15 pegged leaks]. The m-Xylene (b) DT and DTC scenarios do
not show much correlation. However, this is due to the overall high rates of equivalency. The
Mid-Crude (a) DT and DTC scenarios do illustrate a correlation similar to that observed for the
DTA scenarios. Notably, the scenarios indicate diminishing returns above a certain level. The
shift begins when approximately 85% equivalency is reached, at 11 and 15 pegged leaks
respectively. Therefore, results suggest the LDSN/DRF systems in these units are likely
achieving equivalency if pegged leaks are being found at a rate similar to historical records,
despite an overall reduction in the number of leaks repaired. Tracking the number of pegged
leaks may be a useful performance target because a well-functioning system should be able to
detect large leaks at rates equal to or higher than those achieved under CWP, and that metric is a
strong indicator for equivalency.

[Figure 4-11 plots equivalency rates for the DTA, DT, and DTC scenarios against the Number of Pegged Leaks per simulation, with the number of simulations shown on the secondary axis.]
Figure 4-11. Pegged leak relation to LDSN/DRF equivalency (a) Mid-Crude, (b) m-Xylene
4.4 Additional LDSN/DRF Benefits
The emission equivalency discussion presented here considers only the projected performance of
LDSN/DRF in comparison to fugitive emission control achieved using the CWP. In reality, 24/7
emissions monitoring can provide additional emission reduction benefits over a traditional M21-
based LDAR monitoring program that is executed on a specific set of components on a periodic
schedule (with high temporal latency).
As one example of added benefit, AVOs are leak detections that are determined by facility
operators or, in more serious cases, by safety monitoring systems indicating a potentially
dangerous situation. Some LDAR program components (e.g. equipment in heavy liquid service
or connectors that are part of instrumentation or are covered by insulation) are not routinely
monitored under the CWP. Emissions from these components may be found by AVO, but not
until they reach a detectable threshold that could be well above the leak definition SV. Under the
proposed LDSN/DRF approach, current AVO detection capability will not decrease. Since it can
be assumed (and has been demonstrated in this CRADA) that sufficiently sensitive LDSN/DRF
can detect emissions below the AVO threshold, a positive emissions reduction benefit is inherent
in the approach. This added benefit is not accounted for in the current equivalency modeling.
A second example of added emission reduction benefit not considered in the current equivalency
analysis centers on timely detection and mitigation of emissions from components or equipment
that are not covered by the CWP. A good example of this is the fin-fan leak described in
Section 3.3. This leak was significant in size, was not a monitored LDAR component, was not
detected by AVO, could not be detected by OGI, but was immediately obvious to LDSN/DRF
monitoring. This LDSN/DRF detection will likely lead to emissions reductions (when the leak
source is ultimately repaired or replaced) compared to the "undetected scenario". Several other
examples of detections and repairs of emissions from non-LDAR program components were
documented in pilot testing.

4.5 Conclusions
This section presented an approach for evaluating the equivalency of the FHR CC LDSN/DRF
systems compared to CWP at the facility. Monte Carlo simulations demonstrated that high levels
of LDAR control and emissions reduction can be achieved under the proposed work practices.
Models for the m-Xylene unit were shown to be equivalent to or better than CWP for all
LDSN/DRF scenarios, and significant emissions reductions were observed when distance effects
were incorporated into the simulations. The Mid-Crude unit also demonstrated greater than 67%
equivalency for two of the three emissions control scenarios in comparison to scheduled-M21.
Non-LDAR and AVO sources represent another significant category of emissions that is less
effectively controlled under the CWP, and the continuous, extensive monitoring employed by
LDSNs offers an additional benefit for these sources.
This equivalency analysis was limited to the process units included in the CRADA pilot study and
was not designed to provide conclusions about other potential LDSN installations. However, the
results did provide insight into what should be considered when developing broader, more
transferable analyses. The SV distribution, in particular the portion of emissions generated by
larger and "pegged" leaks, is especially important to evaluate when considering the proposed work
practice. Additionally, the portion of emissions which originate from components subject to longer
monitoring intervals or no monitoring will influence the impact of LDSN/DRF. The site layout
and the feasibility of siting sensors close enough to a sufficient majority of components may also
determine whether a system is cost effective. Distance plays an important role in equivalency
determinations. The typical emissions compounds are also key, and a poorly designed system may
not be able to achieve a sufficiently low DTA with the sensor used in this study. Future equivalency
analyses and testing may be able to develop a more transferable relationship between eligible gas
streams, DRF procedures, distance, and system DTAs.
Ultimately, modeling demonstrated that the LDSN/DRF approach may take time to reach a level
of steady-state control. While smaller leaks may not be found at high rates initially, pre-existing
AVO and non-LDAR leaks may be more prevalent during the early stages after installation.
Additional and longer-term research is necessary to better understand the variation of and degree
of control possible under LDSN/DRF. However, Monte Carlo simulations established that the
LDSN/DRF concept can demonstrate significant emission reductions relative to CWP despite
repairing fewer leaks than under CWP, thereby improving LDAR efficiency.
5.0 LDSN/DRF Methods and Quality Assurance
This section describes important aspects of LDSN/DRF methods and QA procedures that were
developed under the CRADA through 2019. These procedures capture current knowledge and
development status, with focus on the specific pilot implementations at the FHR CC process
units. The purpose of this section is to summarize the procedures that were used, and are
continuing to be refined, that will ensure ongoing LDSN/DRF emission control performance.
The information in this section, along with supporting appendices, completes the documentation
of research and development activities in pursuit of NTOs 1, 2, 3, and 4, described in Section
1.3. With the delivery of this report, all near-term CRADA objectives have been successfully
met.

Work remains to develop a transferable embodiment of LDSN/DRF and to refine system design
criteria (e.g. optimal node density) and automated QA and calibration procedures for general
applications. We must further our understanding of some regulatory aspects of LDSN/DRF, such
as emission inventory reporting, which would necessarily change under any potential NGEM-
based LDAR AWP (Section 2). The CRADA team will continue to learn from and implement
feedback received from facility operations during the pilot tests of LDSN/DRF. The team must
also continue to communicate these environmentally beneficial NGEM advances to the
international technical community as part of scientific discourse,12-13 and through peer-reviewed
literature, with this report representing a significant milestone. The work that remains to be done
is reflected in the LTOs described in Section 1.3, with progress towards these LTOs the subject
of Section 6 of this report.
Section 5 comprises five subsections that provide an overview of method progress to date.
•	Section 5.1 discusses sensor placement and LDAR program component coverage design under the LDSN/DRF monitoring plan that is part of an overall fugitive emission management plan for a facility.
•	Section 5.2 describes the LDSN node preparation, factory calibration, and in-field sensor functionality checks (called bump tests) that were developed and performed as part of the pilot tests to help ensure sensor operation. The use of the mSyte™ mobile interface to assist the operator in performing the bump test is described, along with mSyte™ system health monitoring.
•	Section 5.3 explains the procedures facility personnel follow under the DRF in response to an LDSN leak detection generating a PSL. The combination of fast survey checks to identify emission points and M21 measurements to document the leak level is described. The DRF rules that help diagnose and document cluster detections and other real-world factors are discussed.
•	Section 5.4 discusses advanced QA concepts that leverage the automated data archiving
and analysis inherent in the LDSN/DRF approach along with the QA feedback provided
by the M21-documented leaks under the DRF. This valuable information can help ensure
ongoing systems performance and assist in understanding inventories.
•	Section 5.5 provides an example of how LDSN-type data (from mSyte™ or any similar
such system) may be analyzed by third parties as part of an independent auditing QA
function.
5.1 LDSN/DRF Monitoring Plan Design
LDSN/DRF can represent an important part of a facility's overall fugitive emission management
plan but is likely not the only LDAR tool employed. In complex industrial facilities, not all areas
and types of chemical service will be monitored by LDSN/DRF at a given time. Examples of these situations include phased LDSN deployments (LDSN coverage not yet present), large distances between a small number of LDAR components (unfavorable LDSN cost/benefit), limited chemical detectability (insufficient sensor response), and other similar considerations. An example from the FHR CC pilot test
relates to the fuel gas components for a high methane-content stream that could not be easily
monitored with the current 10.6 eV PID embodiment of LDSN. In these cases, a facility could

implement an LDSN/DRF solution for portions of a facility while continuing to follow the LDAR
CWP using traditional M21 or the OGI AWP in other areas. Figure 5-1 provides an overview of
different elements that could potentially comprise a facility's fugitive emission management plan.
Facilities would maintain records that clearly demonstrate which portions of the facility were
complying with M21 CWP and which were utilizing the LDSN/DRF, for example.
[Figure 5-1 content: Key elements of a fugitive emission management plan include the Leak Detection Sensor Network (LDSN); the Detection Response Framework (DRF); traditional LDAR M21 monitoring and work practices (M21 CWP); AVO programs; the OGI AWP; voluntary/supplemental OGI; other "non-LDAR" monitoring such as carbon canister breakthrough, cooling tower monitoring (El Paso Method), and RCRA containers; and non-LDAR monitors such as fenceline, personnel, and IH monitoring. The figure is not intended to convey that sites should or must do all of these; the flexibility to determine and implement the appropriate approach for various situations is important.]
Figure 5-1. Key elements of a fugitive emission management plan
The design and implementation of the LDSN monitoring plan is the first step in successful use of
LDSN/DRF as a key part of a facility's overall fugitive management strategy. The installation of
LDSN nodes, supporting wireless communication infrastructure, and wind measurements in a
complex refinery unit requires significant planning and engineering execution in collaboration
with facility operations. Most critical among the many details that need to be considered is the
strategic placement of sensor nodes within the process unit to achieve optimal coverage for the
subset of components that are defined to be part of the LDSN monitoring plan. In general, component density, historical LDAR results (if available), and area meteorological conditions must be considered to optimize sensor placement. Additional factors such as emitted gas properties,
wind flow obstructions and flow channeling within the process unit, multi-level placement
strategies, and potential interfering effects (e.g. proximity to steam emissions) are also
considered. The pilot tests at SLOF and FHR CC helped the team better understand these factors, with an LTO to develop general method guidance for LDSN implementation planning (Section 6).
Figure 5-2 provides an example of a 3-dimensional view of sensor node placement planning for
the FHR CC Mid-Crude pilot test, with further details provided in Appendix C. The spacing
shown is based on the default 50-60 ft criterion determined from the tracer release studies discussed in Section 3.2, with vertical separation differences of ± 20 ft determined by the process unit levels.
One important aspect of the LDSN monitoring plan is that it may be adjusted as information on
LDSN performance becomes available. For example, the initial deployment plan for FHR CC
Mid-Crude (Appendix CI) was adjusted based on LDSN performance information provided by
the characterization of the leaks as part of the DRF (Section 3.4, Appendix C2). This LDSN QA
information, coupled with further spatial analysis of sensor placement near key component
clusters, resulted in the addition of 6 strategically placed sensor nodes to ensure complete LDSN
coverage (Appendix C3).
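Coverage screening of this kind lends itself to simple automation during monitoring plan design. The sketch below is a minimal illustration only, not the CRADA team's planning tool: the component and node coordinates are hypothetical, and the 60 ft horizontal coverage radius is assumed from the default 50-60 ft spacing criterion discussed above.

```python
import math

# Hypothetical coordinates (x, y, z) in feet; not actual FHR CC layout data.
nodes = [(0, 0, 10), (55, 0, 10), (0, 55, 30), (55, 55, 30)]
components = {"P-101 flange": (20, 30, 12), "V-210 valve": (95, 90, 35)}

COVERAGE_RADIUS_FT = 60  # assumed horizontal criterion based on the 50-60 ft default spacing

def nearest_node_distance(comp, node_list):
    """Horizontal distance from a component to the closest sensor node."""
    return min(math.hypot(comp[0] - n[0], comp[1] - n[1]) for n in node_list)

for tag, xyz in components.items():
    d = nearest_node_distance(xyz, nodes)
    status = "covered" if d <= COVERAGE_RADIUS_FT else "NEEDS ADDITIONAL NODE"
    print(f"{tag}: nearest node {d:.0f} ft -> {status}")
```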

Figure 5-2. Example of LDSN node placement in FHR CC Mid-Crude
5.2 LDSN Node Pre-deployment and In-field QA Testing
Once LDSN node locations were planned and supporting infrastructure placed, the next step was
to install Molex-certified sensors. The team utilized a pre-production version of the sensor for
the 2019 pilot tests (Figures 3-5 and 3-13). Although still approaching final commercial form,
this sensor and the mSyte™ platform were highly developed, allowing for a realistic assessment
of system preparation, deployment, and long-term operational QA methods. Figure 5-3 outlines
the major procedural steps that were followed in the construction and operation of the LDSN. As
the manufacturer of the technology, Molex QA-screened all incoming components, produced the sensors, and performed comprehensive pre-deployment operation checks on all sensor nodes deployed in the FHR CC pilot. For these tests, Molex used a zero point and 500 ppb isobutylene (two-point) factory calibration procedure to prepare the sensors for use. Higher-range tests were performed on the 10.6 eV PID to establish linearity up to 3 ppm (the extent of the expected range), but these tests were not applied to 100% of the FHR CC pilot sensors. The establishment
of uniform minimum requirements for factory calibration of LDSN sensors is further discussed
in Section 6 on generalized methods.
[Figure 5-3 content: flowchart of LDSN fabrication and implementation steps. Incoming Component Screen (screen node parts against specifications); Factory Assembly Check (produce and verify the LDSN nodes); Factory Calibration (calibrate the sensor system at 0 and 500 ppb; test LDSN nodes used for pilot tests); Field Deployment Tests (functionality checks of electronics and bump test of sensor); Field Validation (bump tests on schedule; record keeping on sensor response trends); mSyte™ Node Health Monitoring (fault error codes and abnormal performance detection).]
Figure 5-3. LDSN fabrication and implementation procedures

The LDSN nodes were installed in the process units, and comprehensive operational tests of the
sensors and wireless communication capability were conducted. As part of these commissioning
tests, an in-field sensor functionality and calibration check called a "bump test" was performed
(Figure 5-4, Appendix D6). In brief, the operator attaches a special bump test fixture to the
sensor inlet. The fixture is connected via ¼-inch diameter polyurethane tubing to a small-format (easily carried), certified 500 ppb isobutylene cylinder that is fitted with an on-demand regulator. The operator uses the mSyte™ mobile device to verify the identity of the sensor node to be tested, starts the bump test procedure within the mSyte™ mobile app, and then releases the test gas
for a short time period administered by the mobile app.
As indicated in Figure 5-4, the sensor responds to the sudden change in concentration (the
bump), and the amplitude of change in the sensor reading is compared to a preset threshold value
determined in factory calibration. A successful bump test is shown by a "Pass" message on the
mobile app, which currently indicates the response of the sensor exceeds 50% of the nominal
value of the standard. If the first test was not successful, the test was repeated up to two
additional times. If the sensor continued to fail, it was recalibrated or replaced with a pre-
calibrated sensor. All bump tests are segregated from leak detections, and the test time, test gas, test results, and the operator who performed the test are automatically logged in the mSyte™ database as a QA record and displayed to the operator.
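For illustration only, the sketch below mirrors the pass logic described above (response exceeding 50% of the 500 ppb nominal standard, with up to two repeat attempts before recalibration or replacement). The readings and function names are hypothetical and do not represent the mSyte™ implementation.

```python
NOMINAL_PPB = 500          # certified isobutylene bump standard
PASS_FRACTION = 0.50       # current pass criterion: response > 50% of nominal
MAX_ATTEMPTS = 3           # first test plus up to two repeats

def bump_test_passes(baseline_ppb, peak_ppb):
    """Return True if the bump amplitude exceeds the pass threshold."""
    return (peak_ppb - baseline_ppb) > PASS_FRACTION * NOMINAL_PPB

# Hypothetical attempt data: (baseline, peak) sensor readings in ppb.
attempts = [(210, 380), (205, 520)]

result = "FAIL - recalibrate or replace sensor"
for i, (baseline, peak) in enumerate(attempts[:MAX_ATTEMPTS], start=1):
    if bump_test_passes(baseline, peak):
        result = f"PASS on attempt {i}"
        break
print(result)
```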
Due to the modular design of the Molex sensor element, replacing a malfunctioning sensor with a factory-certified replacement in the field is an easy procedure, requiring no tools. To change a sensor, the lower metal cylinder PID housing shown in the "Pre-production LDSN" image of
Figure 3-5 is unplugged, and a replacement sensor and housing are plugged back in. Information
on the sensor change is updated in mSyte™, and the sensor is allowed to equilibrate for two
hours before the commissioning bump test is performed.
[Figure 5-4 content: photographs of an LDSN node fitted with the bump test fixture and of an operator viewing test progress on the mSyte™ mobile device, plus a concentration-versus-time trace showing the sensor response to the 500 ppb isobutylene bump test as analyzed, recorded, and reported by mSyte™.]
Figure 5-4. LDSN node bump test QA check

For CRADA research, bump tests were conducted more frequently than will be required in a
standardized LDSN method. For the exploratory tests and SLOF pilot, bump tests were
conducted once per day, in part because the test team was on site and the number of sensors was low. Based on the demonstrated reliability of the sensors during early testing and the
implementation of additional automated sensor health monitoring parameters, the bump tests
were conducted less frequently during the FHR CC pilot tests (once per week). For the long-term
SLOF and FHR pilot tests, over 180 individual bump tests were performed. These tests show an
average of a 12% drop in sensor sensitivity over three months of testing, suggesting sufficient
stability in sensor performance for continual use.
For larger LDSN installations that might include hundreds of sensors spread over significant area
and acreage, the resources expended to perform bump tests can be significant. For example, it
could require up to one full time equivalent (FTE) just to conduct weekly bump tests for 1,000
sensors across a refinery. With final-form continuous sensor health monitoring, cross-node data
comparisons, and associated automatic notifications, it is envisioned that once per quarter bump
testing will be sufficient to maintain LDSN operational performance for this specific sensor type
(10.6 eV PID). The health of each sensor is continuously monitored for power outage, loss of
data transmission, and sensor baseline levels. The current status of each sensor is available on mSyte™. Historical data is also logged in the database. Any failure or significant deviation from
preset threshold values will result in a notification being sent to appropriate facility personnel.
Failed sensors should be reset, repaired, or replaced.
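The labor estimate above can be reproduced with simple arithmetic; the per-test time in the sketch below is an assumed value for illustration, not a measurement from the pilot tests.

```python
SENSORS = 1000
MINUTES_PER_BUMP_TEST = 2.5   # assumed, including walking between nodes
FTE_HOURS_PER_WEEK = 40

weekly_hours = SENSORS * MINUTES_PER_BUMP_TEST / 60       # ~42 hours per week
quarterly_equiv_hours = weekly_hours / 13                 # spread over a 13-week quarter
print(f"Weekly bump testing: {weekly_hours:.0f} hr/week "
      f"(~{weekly_hours / FTE_HOURS_PER_WEEK:.1f} FTE)")
print(f"Quarterly bump testing: ~{quarterly_equiv_hours:.1f} hr/week equivalent")
```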
5.3 The DRF Procedure
The DRF describes the LDAR tools and procedures facility personnel use to respond to PSL
notifications delivered by the LDSN. The DRF and the LDSN work together to locate, assess,
and produce auditable records of discovered leaks and associated repairs under the envisioned
AWP. As illustrated in Figures 1-2 and 1-4, fast survey gear like hand-held probes and OGI,
along with calibrated M21, are part of the DRF and assist in identifying and documenting
emissions. Through the DRF, facility personnel gather and input key metadata into the LDSN to inform the system of maintenance activities and to document performance over time. The interaction of the
LDSN system and facility personnel is facilitated by the mSyte™ mobile device carried by the
LDAR technician into the process units (Figure 5-5). Just as the LDSN system improved over
time with design iterations and continued use, the long-term pilot tests informed aspects of the
DRF. Captured here are key components of the DRF employed in the pilot tests along with easily
envisioned augmentations that will be incorporated into the DRF as use continues. Advanced
LDSN and DRF topics are described in Section 6 in the context of CRADA LTOs.

[Figure 5-5 content: Basic DRF procedures. The LDSN system monitors a process unit 24x7. Upon repeated low-level detections, the system estimates a potential source location (PSL) and sends a notification to designated managers. An LDAR technician is dispatched to the PSL to investigate the emissions event using a fast survey tool. The leak source is validated and the leak size recorded via the mobile app and an M21 analyzer. The leak source is repaired and the leak event is documented. The DRF includes specific procedures to continue the manual leak search if smaller leaks are first detected. All leaks identified are documented and repaired.]
Figure 5-5. Basic DRF procedures and mobile interface
The LDSN system automatically detects, categorizes, and approximates the location of emissions
in the monitored process unit based on VOC and meteorological measurements. The LDSN
notifies selected facility personnel of detected emission anomalies so that appropriate action can
be taken under the DRF. Those notifications are automatically classified as one of three separate
categories.
Category 1 notifications are for larger potentially impactful emissions that need prompt response
by facility personnel. These are anticipated to be associated with significant maintenance activities or unplanned emissions. The Category 1 notification protocol may include direct
alarms for operators, for example phone notifications, and/or e-mail notification to appropriate
personnel. During the FHR CC pilot test, all Category 1 notifications were the result of planned
maintenance activities that were authorized by the facility's Maintenance, Start-up, Shutdown
permits.
Instead of direct alarms to operators, Category 2 and Category 3 notifications are accompanied
by a PSL box. The PSL box concept is introduced in Sections 1.4.2 and 3.4, with examples from
FHR CC provided in Appendix C2. A discrete serialized identification number is automatically
assigned to each new PSL. The PSL box serves as a visual representation of the area in which
there is high probability that a fugitive emission anomaly has occurred. Category 2 notifications
are typically medium to large leaks that can be efficiently identified during the DRF process.
Category 3 notifications are often smaller and/or intermittent emissions that are more challenging
to locate. Category classifications are useful when prioritizing the order in which multiple PSLs
should be investigated. PSLs continue to update periodically as additional sensor readings and
meteorological data are collected and analyzed. These updates can be especially helpful for
Category 3 PSLs which can require more resources to pinpoint the potential leak source.
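As a simplified illustration of how the three notification categories could be used to prioritize DRF investigations (the actual mSyte™ classification and notification logic is proprietary and not reproduced here), the sketch below sorts open PSLs so that lower-numbered categories are investigated first; the PSL identifiers and ages are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Notification:
    psl_id: str        # serialized PSL identifier (hypothetical values below)
    category: int      # 1 = prompt response, 2 = medium/large leak, 3 = small/intermittent
    hours_open: float  # time since the PSL was issued

open_psls = [
    Notification("PSL-0457", category=3, hours_open=30.0),
    Notification("PSL-0458", category=2, hours_open=4.0),
    Notification("PSL-0459", category=2, hours_open=12.0),
]

# Investigate lower category numbers first; break ties by how long the PSL has been open.
for n in sorted(open_psls, key=lambda n: (n.category, -n.hours_open)):
    print(f"{n.psl_id}: Category {n.category}, open {n.hours_open:.0f} h")
```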
The purpose of the PSL investigation is to locate the associated potential leak source(s) so that

source identification can be made, and repair protocols initiated as required. To most efficiently
locate the leak target, the technician uses emission screening gear that is easily deployed. This
includes hand-held probe equipment such as VOC sniffers, OGI, or other appropriate detectors
for the chemicals of interest (Figure 5-6). Once identified, the source is measured with calibrated
M21-equipment to document the leak's peak concentration value and initiate repair procedures.
Relevant information is entered into the mobile device and is recorded for regulatory, operational, and QA purposes. The mobile app and electronic records are the primary documentation method; paper records are allowed to prevent delays when mobile devices or electronic systems are temporarily unavailable. Other metadata, such as maintenance activities that may generate emissions, can be entered into the system to facilitate intelligent LDSN operation. Through machine learning and other capability improvements, the efficiency and effectiveness of LDSN operation are expected to increase over time.
As the technician searches for the leak, each component measured by M21 that exhibits a peak
SV above the regulatory leak definition is considered an actionable leak and is repaired. Once a
leak has been detected that in the judgement of the technician is presumed to be the leak
responsible for the generation of the PSL, the search portion of the investigation is complete.
Once the leak has been effectively repaired, the PSL is allowed to close automatically after a
specified time unless additional sensor data triggers the PSL to update. If additional impactful leaks remain undetected during the investigation, the system will automatically generate a new or updated PSL, thereby triggering a new investigation.
In some cases, a PSL notification is triggered by multiple small leaks that are in close proximity to each other, a result of the leak cluster aggregation effect. While each of the leaks in the cluster may individually be below the lower band of the DT, the combined effects of their fugitive emissions can be detected by the LDSN. The same closure rules apply: once the technician judges that the leaks responsible for the PSL have been found and effectively repaired, the PSL is allowed to close automatically after a specified time unless additional sensor data triggers the PSL to update. If additional small leaks remain undetected during the investigation, the system will automatically generate a new or updated PSL once the leak cluster aggregation is detected, triggering a new investigation.
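The closure behavior described in the two paragraphs above can be summarized compactly. The sketch below is a simplified rendering of that rule with an arbitrarily chosen quiet period; the actual "specified time" is a system setting not given in this report.

```python
from datetime import datetime, timedelta

QUIET_PERIOD = timedelta(hours=24)   # assumed closure delay for illustration only

def psl_should_close(repair_time, detection_times, now):
    """Close the PSL only if no new detections have arrived since repair for a full quiet period."""
    new_detections = [t for t in detection_times if t > repair_time]
    if new_detections:
        return False                  # new signal keeps (or re-opens) the PSL
    return now - repair_time >= QUIET_PERIOD

repair = datetime(2019, 8, 8, 14, 0)
detections = [datetime(2019, 8, 5, 10, 0), datetime(2019, 8, 6, 9, 30)]  # all pre-repair
print(psl_should_close(repair, detections, now=datetime(2019, 8, 10, 8, 0)))  # True
```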

Figure 5-6. DRF in action on a leak found under insulation
5.4 Ongoing QA of LDSN
In addition to automated sensor health monitoring and periodic bump tests, a powerful QA
concept inherent in the LDSN/DRF approach is the systematic review of archived data. Since all
sensor, wind, and DRF leak diagnostic information (measured by M21) are retained for a five-
year period, it is possible to recreate and analyze specific PSL notifications and the effects of
repair procedures, for both operational and QA benefit. Figure 5-7 provides an example of
LDSN/DRF informetrics where time-resolved leak detection data for two sensors in the jet fuel
(RF < 1) area of the Mid-Crude process unit are examined along with wind data and DRF
findings. In the upper panel (a), hourly average wind speed and direction data are plotted on the
primary and secondary y-axes, respectively. In the lower panel (b), summations of all hourly
peak detections are plotted on the y-axis, with the temporal location of several events noted. For
this LDSN metric, the mSyte™-determined amplitudes of all leak detection peaks (in units of
ppbe) are summed for each hour. In this real-world example, sensor nodes (A) and (B) were
relatively quiet until 10:00 on 8/5/2019 when leak signal appeared on both units, emanating from
an upwind source located to the southeast of the centroid of the nodes. A PSL notification was
issued by mSyte,™ and the LDAR technician under the DRF identified and adjusted an 18.9-
inch circumference control valve in heavy liquid service (on 8/8/2019, reducing the initial M21
leak SV of 3,600 ppm to < 700ppm). The valve was formally repaired (tested below 500 ppm
SV) on 8/22/2019. As a method development factor of note, the leak signal was lower on both
sensors during high wind speeds (hourly average over 6.0 m/s) prior to the DRF action. Sensor
(A) remained relatively quiet under similar wind conditions, indicating that this leak's mass ER was reduced by the first repair attempt.

[Figure 5-7 content: (a) hourly average wind speed and wind direction versus time; (b) hourly sums of peak detections (ppbe) versus time for Node A and Node B, annotated with the valve packing leak (3,560 ppm), the two connectors tightened (2,500 and 3,100 ppm), and the PSL notification.]
Figure 5-7. Example of LDSN data review: (a) meteorological data, (b) LDSN data
The simultaneous observation of detected leak signal on two spatially separated LDSN nodes,
followed by a directed DRF action that likely resulted in the significant cessation of emissions
and sensor signal, represents a powerful QA check of both LDSN detection capability and
mSyte™ algorithm triangulation fidelity. On 8/10/2019, signal again appeared on Sensor (B),
and the DRF response found two connector leaks (2.4-inch plugs), with SVs of 2,500 ppm and
3,100 ppm. One leak was repaired immediately and the other within one day. Under changing
wind conditions sensor signal from a new direction begins to appear days later (green arrows).

Because the automated processing and summary of LDSN/DRF data are easy to accomplish through programming code, many types of performance metrics, summary statistics, and comparisons can be envisioned and are the subject of ongoing methods development research.
Table 5-1 provides an example summary using the data from Figure 5-7 for the two primary
periods of interest from 8/5/2019 to 8/9/2019 and 8/10/2019 to 8/13/2019. For the first leak event
ascribed to the valve (3,600 ppm SV), around 60% of the 96 total monitoring hours exhibited at
least one leak detection peak for Sensor (A) and (B), with a total number of detects of 1,077 and
922, respectively. A metric called "LDSN Signal Rate" can be defined as the total summation of
all hourly LDSN peak detections, divided by the number of hours where detections occurred (for
a subject time period). The rate is around 1,600 and 1,300 ppbe/hr for Sensors (A) and (B),
respectively, for the valve leak event. The average and maximum peak heights are also easily
calculated and, with the signal rate data, indicate a slightly greater plume to sensor overlap for
Sensor (A) under these conditions.
Table 5-1. Summary of LDSN data review for Figure 5-7

Node | Period of Interest | Monitoring Period (hrs) | Percentage of Hours with Detects (%) | Total Number of Detects (N) | LDSN Signal Rate (ppbe/hr) | Average Peak Height (ppbe) | Max Peak Height (ppbe)
A | 8/5/19 0:00 to 8/9/19 0:00 | 96 | 65 | 1077 | 1581 | 89.6 | 4865
B | 8/5/19 0:00 to 8/9/19 0:00 | 96 | 56 | 922 | 1298 | 76.0 | 4381
A | 8/10/19 0:00 to 8/13/19 0:00 | 72 | 26 | 49 | 60 | 23.3 | 364
B | 8/10/19 0:00 to 8/13/19 0:00 | 72 | 76 | 1601 | 3029 | 104.1 | 6084
Whereas the first leak event from 8/5/2019 to 8/9/2019 produced somewhat similar signal on the two sensors, the second leak event was much more strongly observed by Sensor (B). This is a consequence of the exact position of the leaks and the specific wind flow within the unit. Although a comparison of the Sensor (B) hourly LDSN peak detection sums for events one and two does not show a major difference to the eye, the LDSN signal rate metric is a factor of two greater for the latter. This comports with the presence of two connector leaks (2,500 ppm and 3,100 ppm SVs) for event two, with the separation distances from the leaks to Sensor (B) approximately the same.
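The LDSN Signal Rate and the other Table 5-1 columns are simple to reproduce from archived peak data. The sketch below assumes a hypothetical list of hourly summed peak amplitudes and individual peak heights for one node over a period of interest; it illustrates the metric definitions above and is not the mSyte™ informetrics code.

```python
# Hypothetical hourly records for one node: (hour_index, sum of detection peak amplitudes, ppbe).
hourly_peak_sums = [(0, 0.0), (1, 310.5), (2, 0.0), (3, 1240.0), (4, 95.2)]
peak_heights = [88.0, 120.5, 310.5, 45.2]   # individual peak amplitudes (ppbe), hypothetical

monitoring_hours = len(hourly_peak_sums)
hours_with_detects = sum(1 for _, s in hourly_peak_sums if s > 0)
total_signal = sum(s for _, s in hourly_peak_sums)

percent_hours_with_detects = 100.0 * hours_with_detects / monitoring_hours
signal_rate = total_signal / hours_with_detects        # ppbe per hour with detections
avg_peak = sum(peak_heights) / len(peak_heights)
max_peak = max(peak_heights)

print(f"Hours with detects: {percent_hours_with_detects:.0f}%")
print(f"LDSN Signal Rate: {signal_rate:.0f} ppbe/hr")
print(f"Average / max peak height: {avg_peak:.1f} / {max_peak:.1f} ppbe")
```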
5.5 Independent QA Audits of LDSN/DRF
One of the more powerful aspects of a performance-based sensor network leak detection
approach is that the raw time-resolved data from the sensor nodes during 24/7 monitoring is
continually archived and available for later analysis by either the company supplying the sensor
network, the facility operating the network, or a third party for compliance or other auditing
purpose. Although there is cost associated with data storage, the need for transparency and

auditability takes priority. Based on the amount of data generated during the pilot and the current
cost to store and access this type of data, an expectation to maintain up to five years of sensor
reading raw data (mV or ppbe) does not seem unreasonable. That data could be maintained in a
manner that is easily viewed electronically and reasonably available for export upon request by
the appropriate authority.
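A rough storage calculation supports that expectation. The record size assumed below (a timestamped reading with a few status fields) is illustrative, not a measured mSyte™ record size.

```python
SENSORS = 1000
SAMPLE_RATE_HZ = 1
BYTES_PER_SAMPLE = 16            # assumed: timestamp + ppbe value + status flags
SECONDS_PER_YEAR = 365 * 24 * 3600
YEARS = 5

raw_bytes = SENSORS * SAMPLE_RATE_HZ * SECONDS_PER_YEAR * YEARS * BYTES_PER_SAMPLE
print(f"Uncompressed 5-year archive: ~{raw_bytes / 1e12:.1f} TB "
      f"({raw_bytes / (1e9 * SENSORS):.1f} GB per sensor)")
```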
The spectrum of independent analysis options for the recorded raw data is the subject of ongoing
research, but a variety of statistical calculations on individual nodes as well as comparisons
between nodes are easily envisioned. Figure 5-8 provides an example of how sensor network
data, from the current or any similar system, may be independently analyzed. In this case a
modified version of EPA's open-source SPod fenceline sensor analysis software17,25,36 is used to
process 24 hours of data from the ten sensor nodes present in the m-Xylene units during a
particular day when a significant leak signal was present on two sensor nodes (4 and 5).
[Figure 5-8 content: (a) stacked plots of 24 hours of LDSN data from the 10 sensor nodes of the m-Xylene unit processed by an open-source EPA SPod fenceline sensor algorithm, with significant leak signal impacting nodes 4 and 5 on this day; (b) an example daily summary indicating "leak signal strength" (open-source signal metric mean and median, and the Molex signal metric, by node) that can be directly compared with installed system detection data without knowledge of the Molex LDSN proprietary algorithm, supporting performance-based method development.]
Figure 5-8. Assessment of Molex LDSN data with an open-source approach
In this example, the EPA ORD open-source algorithm automatically processes 24-hours of 1 Hz
time-resolved data from the ten independent nodes and applies a baseline-correction algorithm
that may be used to analyze baseline offsets if desired. The time series analysis from the open-
source software is shown in Figure 5-8(a). The open-source software can calculate a variety of
node-specific statistics and can determine sensor noise floors for time periods when leak signal is
not present. The software can look for sensor operational anomalies indicating possible sensor
maintenance issues (e.g. baseline discontinuities or unrealistically low or high sensor noise
floors). In Figure 5-8(b) the mean, median, and standard deviation (± 1σ error bars) of the overall

24-hour time series for each of the nodes (called the open-source signal metric) is compared to a
Molex signal metric, a normalized form of 24-hours of summed peak data, similar to that
discussed in Figure 5-7. In this case, data derived from the Molex proprietary sensor signal
processing are similar to that independently determined from the raw data.
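A third-party cross-check of this kind reduces to computing simple daily statistics from the raw 1 Hz series and placing them beside the vendor-reported metric. The sketch below uses synthetic data and a hypothetical vendor value for illustration; it is neither the EPA SPod software nor the Molex algorithm.

```python
import statistics
import random

random.seed(1)
# Synthetic 24 hours of 1 Hz baseline-corrected readings (ppbe) for one node,
# with an injected midday "leak" period to mimic nodes 4 and 5 in Figure 5-8.
series = [abs(random.gauss(0, 5)) for _ in range(86400)]
for i in range(40000, 46000):
    series[i] += 300.0

daily_mean = statistics.mean(series)
daily_median = statistics.median(series)
daily_stdev = statistics.pstdev(series)

vendor_metric = 25.0   # hypothetical normalized vendor "signal metric" for the same day
print(f"Open-source metrics: mean={daily_mean:.1f}, median={daily_median:.1f}, "
      f"stdev={daily_stdev:.1f} ppbe; vendor metric={vendor_metric:.1f}")
```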
With this approach, Molex, or another leak detection sensor network company, can provide the
archived data and a representation of their leak detection analysis, without disclosure of
proprietary signal processing algorithms, to a third party who can assess and document system
performance through independent analysis of the data. This third-party verification potential is a
significant advantage of many emerging NGEM approaches. Variations of this independent analysis are the subject of method development efforts for potential use in a performance-based Other Test Method (OTM).
6.0	Summary and Future Work
An executive summary of LDAR Innovation CRADA work to date and areas for future research
are provided in the following sections:
•	Section 6.1 provides a bullet-list overview of major accomplishments and findings.
•	Section 6.2 describes future work related to the development of transferable methods.
•	Section 6.3 discusses design criteria for development of standardized methods.
•	Section 6.4 discusses aspects of emission reporting and permit representations.
•	Section 6.5 illustrates the power of LDSN/DRF found in forward research.
6.1	Major Accomplishments and Findings to Date
The major accomplishments and findings of this CRADA to date are as follows:
•	A specific embodiment of an innovative LDAR approach called LDSN/DRF was
developed and real-world tested in three working process units. Operational and QA
methods were established, and the emission reduction performance against the CWP was
determined. The business case for LDSN/DRF was evaluated. With the completion of
this report, all CRADA NTOs described in Section 1.3 were successfully met.
•	Key aspects of LDSN leak detection performance include the sensitivity of the sensor to
the emitted gas stream, the node density of the network, and the factors that affect plume
transport. These factors must be fully evaluated in the formulation of an LDSN monitoring
approach as part of a facility's overall fugitive emissions management plan (Section 5.1).
For example, the current embodiment of LDSN based on a 10.6 eV PID cannot
effectively monitor methane-dominant streams such as fuel gas, so other LDAR
monitoring is required in these areas of the process unit.
•	Although the LDSN/DRF system focuses on rapid detection and mitigation of larger
leaks soon after they occur, many smaller leaks are also found and fixed as a result of
node proximity effects (DT band), leak clusters, and the DRF design.

•	Tools such as hand-held probes and OGI provide value in rapidly locating leaks detected
by LDSN. The facility's monitoring plan and the process unit's DRF specify the tools
and procedures used in a process unit to maximize operational efficiency and data
quality. The mSyte™ mobile device is a key enabling technology for LDSN/DRF.
•	LDSN/DRF found and fixed leaks of various sizes (M21 SVs). Approximately 30% of
leaks found with M21 SVs > 10,000 ppm were judged detectable by OGI survey.
•	Feedback from facility operations on LDSN/DRF was positive. The detection of
potentially hazardous emissions from non-LDAR program components (such as leaks
under insulation) was described as a major benefit by facility operations.
•	When considering all aspects of emission reduction performance, safety and operations
benefit, and total cost of ownership, LDSN/DRF is judged to be superior to 100% manual
M21 inspection under the CWP (see Section 2 for attribute comparison).
•	As described subsequently, additional work remains to understand and develop
transferable methods for LDSN/DRF and better understand regulatory hurdles and
additional benefits found in this NGEM approach. These subjects represent LTOs for the
CRADA (Section 1.3).
6.2 Development of an LDSN/DRF OTM
Whereas the NTOs of this CRADA focused on the development and assessment of an innovative
LDAR approach for specific FHR process units, the LTOs of this research center on
transferability and exploration of the ultimate potential of this NGEM method. The primary
aspects of these objectives can be phrased in the form of a question.
•	Is it possible to develop a standardized performance-based method for LDSN (that works
with a DRF) that could be generally applied to refineries, chemical facilities, and
potentially smaller facilities like oil and gas production operations using other forms of
sensors?
This question can be expressed more efficiently as follows.
•	Is it possible to develop an EPA OTM for LDSN/DRF?
The EPA's Office of Air Quality Planning and Standards (OAQPS) Category C OTM system37
allows for submission of novel approaches for application-specific emissions quantification and
source management work practice measurements. The submitted OTM is reviewed by OAQPS,
and potentially by outside reviewers, and feedback to the submitter(s) is provided. When (if)
OAQPS deems appropriate, the OTM is posted on the EPA's Emissions Measurement Center
public website,37 along with all relevant supporting information (e.g. this report and follow-on
work from this group and others).
To date there are 42 OTMs posted on a wide range of topics. A posted OTM may be downloaded
and used by any interested party, and feedback on the method is encouraged in the spirit of open
exchange and scientific advancement. One purpose of the OTM repository is to serve as staging
area for "methods in process" that OAQPS may draw upon (with appropriate modification) for
promulgation in support of future rule making activities or work practice development. Other
organizations or regulatory bodies may download and utilize a posted OTM for their own

purposes and may consult with OAQPS on the appropriateness of a given OTM for a specific
application and intended use of data. An OTM that is found to have technical performance or
implementation issues may be modified or removed at the discretion of EPA OAQPS.
EPA ORD, Molex, and FHR may work together under the LDAR Innovation CRADA to create
and submit an LDSN/DRF OTM for posting consideration by EPA OAQPS. It is also possible
for another group outside of this CRADA to submit a similar concept to OAQPS at any time.
The work performed in this CRADA to date provides a solid foundation for exploration of a
standardized, performance-based form of the LDSN/DRF approach that may be used beyond the
specific process units that were part of the work described in this report. Some critical aspects for consideration in development of an OTM, as well as some potential future benefits of this technology, are subsequently described.
When considering elements of a performance-based OTM for leak detection sensor networks, an
issue of scope of application must be considered. This CRADA has focused on leak detection
relevant to LDAR monitoring programs in refineries as an alternative to the M21-based CWP,
but other applications are possible. For example, the same 10.6 eV PID sensor node and mSyte™
infrastructure would be useful for monitoring condensate storage tank emissions at upstream oil
and gas sites. We do not yet know if a future OTM developed by this team will include this or
other similar applications because even though the hardware and software are useful, the
procedures and action levels will differ significantly from the designed use case. The same
argument can be made for chemical facility fugitive applications that involve high-toxicity
hazardous compounds where the sensitivity of a sensor network approach may be outside of the
performance envelope thus far investigated. The remainder of this section focuses on the primary
application of LDSN/DRF for refinery process units.
6.3 System Design Criteria and Method Standardization
The most critical elements of LDSN/DRF monitoring plan design for facility process units
include sensor selection and sensor node placement. This is a core feature for development of an
OTM. As detailed in Section 3, the DT of an individual sensor is a function of several factors
including the distance of the leak from that sensor, the responsiveness of the chemicals of
interest, the wind conditions, etc. In general, leaks that are closer to a sensor can be detected at
smaller emission rates than leaks that are farther away. Instead of having a single method
detection level like most analytical test methods, the LDSN sensors have a "DT band" that can
be represented by M21-type ppm values across the sensor coverage radius.
The DTU is the upper limit of the detection threshold band and is the DT value (in ppm) that
represents the smallest leak (on average) that could be detected by the sensor network at the
furthest distance away from a sensor. A DTU of 18,000 ppm would indicate that on average, all
ongoing leaks > 18,000 ppm of the target chemicals of interest would trigger a PSL notification
to facility personnel. In practice, the DRF process can identify actionable leaks (e.g. >500 ppm)
that are still below the lower band of the DT range. Detections below the DT lower band are a
function of proximity to the sensor node (in general the closer to the sensor the lower the
detection limit), leak cluster aggregation effect (multiple smaller leaks whose combined
emissions trigger PSL notifications), and/or opportunistic detection of leaks during the
investigation portion of the DRF process.
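The DT band concept can be illustrated with a toy distance-to-threshold relationship. The linear form and coefficients below are placeholders chosen only so the numbers line up with the 18,000 ppm DTU example; actual DT bands are determined empirically from controlled-release testing (Section 3), not from a formula like this.

```python
# Toy model: the M21-equivalent detection threshold (ppm) grows with leak-to-sensor distance.
def toy_detection_threshold_ppm(distance_ft, base_ppm=500.0, growth_per_ft=350.0):
    """Placeholder relationship; real DT bands come from controlled-release testing."""
    return base_ppm + growth_per_ft * distance_ft

coverage_radius_ft = 50
dtl = toy_detection_threshold_ppm(5)                    # near-node lower end of the band
dtu = toy_detection_threshold_ppm(coverage_radius_ft)   # band upper limit at the coverage edge

print(f"Toy DT band: {dtl:.0f} ppm (close to node) to {dtu:.0f} ppm (DTU at {coverage_radius_ft} ft)")
print("Coverage adequate" if dtu <= 18000 else "Coverage inadequate: add nodes")
```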

Another critical aspect of LDSN/DRF OTM development is sensor selection. Sensor selection is
based on the responsivity of the chemicals of interest. The sensor(s) used in an effective LDSN
should have response factors of < 10 for the targeted LDAR applicable process streams. Note
that this is the same response factor required by M21. For response factors >10, leaks either
could not be detected or the node density necessary to achieve acceptable DT performance would
be resource prohibitive. As described in Section 2.3, the response factor is the ratio of the known
concentration of a VOC compound to the measured reading of an appropriately calibrated sensor.
If the process stream is a mixture, the response factor is calculated for the average composition
of the process stream. Average stream compositions might be based on sample data, feed/product
specifications, or process knowledge. Response factors should be based on published data, test
results, or generally accepted calculation methodologies.
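A hedged sketch of a mixture response factor screen follows. The mole fractions and single-compound RF values are hypothetical, and the reciprocal mole-fraction-weighted combination is one commonly used convention for estimating a mixture RF; site-specific calculations should rely on published data or accepted methodologies as noted above.

```python
# Hypothetical average stream composition (mole fraction) and 10.6 eV PID response factors.
stream = {
    "toluene":   {"fraction": 0.40, "rf": 0.5},
    "n-heptane": {"fraction": 0.35, "rf": 2.8},
    "n-octane":  {"fraction": 0.25, "rf": 1.8},
}

# One common convention: the mixture RF is the reciprocal of the
# mole-fraction-weighted sum of 1/RF for each component.
rf_mixture = 1.0 / sum(c["fraction"] / c["rf"] for c in stream.values())

print(f"Estimated mixture RF = {rf_mixture:.2f}")
print("Eligible for LDSN (RF < 10)" if rf_mixture < 10 else "Not eligible with this sensor")
```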
Sensor node placement should follow a site assessment, design, optimization, and installation
process such that all components within the LDSN boundary that were previously M21 CWP
applicable will have sensor coverage. Sensor coverage is defined by a DTU of 18,000 ppm or
less. Figure 6-1 shows an example sensor layout plan.
Figure 6-1. Typical sensor layout plan
6.4 Emission Reporting and Permit Representations
Another area for future research is emission reporting and permit representations for NGEM
systems in general. The generally accepted practice for estimating fugitive emissions from
components that have been monitored in accordance with EPA M21 is to utilize correlation
equations for converting the screening values into mass emission rates. This approach is included
in Section 2.1.2 as an advantage of the M21-based CWP due to mandated 100% screening. For
non-monitored components, the mass emission rates are estimated as the product of emission

factors and control efficiencies. Additional background and details for each of these methods can
be found in Reference 2, EPA Protocol for Equipment Leak Emission Estimates. Since most
industrial facilities will include monitored and unmonitored fugitive components, the air
emission inventory will include both methodologies.
New Source Review and Prevention of Significant Deterioration permit applications use the
latter approach. For air permits that must be issued prior to the commencement of construction
activities, there is no opportunity to conduct M21 monitoring in advance of the permit
application submittal.
For the portions of the facility in Texas complying with M21 CWP, fugitive emission estimates
can continue to follow the methodology specified in the Texas Commission on Environmental Quality's (TCEQ's) Emissions Inventory Guidelines (RG-360).38 For portions of the facility
utilizing LDSN/DRF, the estimated fugitive emissions can be based on an estimate of the
number of fugitive emission components, the emission factors found in TCEQ's Air Permit
Technical Guidance for Chemical Sources - Fugitive Guidance (APDG 6422),39 and LDSN/DRF
control efficiencies that are appropriate for a continuous fugitive monitoring system (28LAER
level control percentages).
For future projects or activities that require new or amended permits and include additional
LDAR type components that will be covered by an LDSN/DRF system, the permit application
should include fugitive emission representations based on an estimate of the net fugitive
emission component count changes, emission factors found in TCEQ's Air Permit Technical
Guidance for Chemical Sources - Fugitive Guidance (APDG 6422), and LDSN-DRF control
efficiencies that are appropriate for a continuous fugitive monitoring system (28LAER level
control percentages).
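The permit representation described above reduces to a component-count calculation. The counts, emission factors, and control percentage in the sketch below are placeholders for illustration only; actual values would come from the cited TCEQ guidance and the approved LDSN/DRF control efficiency.

```python
# Hypothetical component counts and emission factors (lb/hr/component); not TCEQ values.
components = {
    "valves (gas)":         {"count": 400,  "ef_lb_hr": 0.0089},
    "connectors":           {"count": 2500, "ef_lb_hr": 0.00083},
    "pumps (light liquid)": {"count": 12,   "ef_lb_hr": 0.0439},
}
CONTROL_EFFICIENCY = 0.97   # placeholder continuous-monitoring control percentage

uncontrolled_tpy = sum(c["count"] * c["ef_lb_hr"] for c in components.values()) * 8760 / 2000
controlled_tpy = uncontrolled_tpy * (1 - CONTROL_EFFICIENCY)
print(f"Represented fugitive emissions: {controlled_tpy:.2f} tpy "
      f"(uncontrolled {uncontrolled_tpy:.1f} tpy)")
```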
6.5 The Power of LDSN/DRF - Forward Research
Future research may enable new capabilities in emission monitoring as additional potential
benefits associated with this and other NGEM technologies can be envisioned. Once a facility
has an operating LDSN that can automatically record and analyze information on a 24/7 basis,
powerful data use concepts can be explored. As described in Sections 2 and 4.4, not only are
more effective LDAR programs delivered from a business perspective, but emissions from leaks
and malfunctions that are not part of the LDAR program can be found and mitigated, providing
for significant environmental benefit. Future advanced forms of process monitoring and
emissions accounting can be considered where transient events, such as actuating pressure relief
devices or planned maintenance activities can be observed from both an emissions and
operations point of view. An example of such a transient event is shown in Figure 6-2 where an
emission is observed and tracked over multiple sensors in the network over time. This type of
analysis serves as both an advanced QA check of LDSN operation and potentially the basis for a
whole-facility monitoring concept for the future, benefiting industry, regulators, and
communities alike.

[Figure 6-2 content: TE (transient event) observations do several things: (1) provide a QA check for a subset of nodes, (2) inform wind flow patterns within the unit, and (3) provide information on sources of emissions outside of the LDAR program that can improve operations and inventory knowledge (future research). Panel (a) shows an example of a TE detection in Mid-Crude on September 3, 2019 with winds from the east at ~5 m/s; panel (b) shows no signal on upwind nodes, then the TE appears and propagates through the LDSN over time.]
Figure 6-2. Example of a TE detection
7.0 Acronyms and Acknowledgments
Table 7-1 lists the Acronyms and abbreviations used in this report. Table 7-2 acknowledges the
contributions of individuals working on this project.
Table 7-1. Acronyms and abbreviations

Acronym or Abbreviation | Full Detail
AVO | audio, visual, or olfactory leak/emission detection
AWP | alternative work practice
AWS | Amazon Web Services
CC | FHR Corpus Christi West Refinery (or FHR CC), the site of pilot testing
CEMM | Center for Environmental Measurement and Modeling (EPA ORD)
CFD | computational fluid dynamics (flow field model simulations)
CFR | Code of Federal Regulations
CRADA | cooperative research and development agreement
CWP | current work practice
DOR | delay of repair
DRF | detection response framework
DT | Detection Threshold (DT band), general - Table 3.1, modeling - Table 4.1
DTA | The midpoint of the DT band, general - Table 3.1, modeling - Table 4.1

DTC | DT band with cluster detects, EPA modeling term - Table 4.1
DTL | DT band lower limit, general term - Table 3.1
DTU | DT band upper limit, general term - Table 3.1
DT(±1σ) | DT band uncertainty, general term - Table 3.1
DT(cluster) | Molex equivalency modeling term - Table 4.2
DT(tag) | Molex equivalency modeling term - Table 4.2
DTA(cluster) | Molex equivalency modeling term - Table 4.2
DTA(tag) | Molex equivalency modeling term - Table 4.2
eDTA | Molex equivalency modeling term - Table 4.2
eDTU | Molex equivalency modeling term - Table 4.2
ECE | Emissions Control Efficacy, general term - Table 3.1
EPA | U.S. Environmental Protection Agency
ER | emission rate (typically by mass)
eV | electron Volt
FHR | Flint Hills Resources Corpus Christi Refinery LLC (the company)
FHR CC | Flint Hills Resources Corpus Christi West Refinery, site of pilot testing
FHR SLOF | Flint Hills Resources Sour Lake Olefins Facility, site of pilot testing
FID | flame ionization detector
FLIR | forward looking infrared
FTE | full time equivalent
g/hr | grams per hour
hr | hour(s)
HAP | hazardous air pollutant
HC | hydrocarbon
Hz | Hertz (1 Hz is a rate of once per second)
ID | identification
IP | ionization potential
IR | infrared
LDAR | leak detection and repair
LDSN | leak detection sensor network
LDL | lower detection limit
LPG | light petroleum gas
LTO | longer-term objective (for CRADA 914-16)
M21 | EPA Method 21
MDL | method detection limit
MET | meteorological
MFC | mass flow controller
min | minutes
Molex | Molex, LLC
mSyte™ | Molex prototype LDSN data upload and management platform
NGEM | next generation emissions measurement
NTO | near-term objective (for CRADA 914-16)
OAQPS | Office of Air Quality Planning and Standards
OAR | Office of Air and Radiation
OGI | optical gas imaging

ORD | Office of Research and Development
ORISE | Oak Ridge Institute of Science and Education
OTM | Other Test Method
PDF | portable document format
PID | photoionization detector
ppbv | part(s) per billion by volume (also listed as ppb)
ppbe | ppb equivalent (a sensor's digital output unit)
ppmv | part(s) per million by volume (also listed as ppm)
PSL | potential source location (leak detection output from mSyte™ LDSN)
QA | quality assurance
QAM | quality assurance manager
QC | quality control
R6 | EPA Region 6
RF | response factor
RH | relative humidity
RPD | relative percent difference
RSD | relative standard deviation
RTP | Research Triangle Park
sccm | standard cubic centimeter per minute
SV | screening value (M21-determined peak concentration)
SLOF | FHR Sour Lake Olefins Facility (or FHR SLOF), the site of pilot testing
SME | subject matter expert
S/N | signal to noise ratio
SOCMI | Synthetic Organic Chemical Manufacturing Industry
Stdev | standard deviation
σ | standard deviation
3σ | 3 times the standard deviation (sensor noise floor)
SSD | solid state drive
TBD | to be determined
TCEQ | Texas Commission on Environmental Quality
UL | Underwriters Laboratory
U.S. | United States
VOC | volatile organic compound
Table 7-2 provides a summary of contributor effort using the following project-specific role key: (A) report authorship, (B) report editing, (C) concept development, (D) EPA ORD equivalency model development, (E) Molex emission simulation and equivalency model development, (F) LDSN/mSyte™ hardware and software development, (G) data acquisition and analysis, (H) project/program management, (I) quality assurance, (J) technical consultation, (PI) organization principal investigator.

Table 7-2. Acknowledgment of contributions

Name | Affiliation | Contribution Area
Eben Thoma | EPA/ORD/CEMM | A, B, C, D, G, H, I, PI (EPA)
Haley Lane | ORISE (EPA ORD) | A, B, C, D*, G, I
Carlowen Smith | ORISE (EPA ORD) | B
Megan MacDonald | ORISE (EPA ORD) | B
Libby Nessley | EPA/ORD/CEMM | B, I, J
Casey Bray | EPA/OAR/AQAD | I
Jason Dewees | EPA/OAR/AQAD | J
Wenfeng Peng | Molex LLC | A, B, C, E, F, G, H, I, PI (Molex)
Ling-Ying Lin | Molex LLC | A, B, C, E, F, G, I
Alex Chernyshov | Molex LLC | A, B, C, E, F, G
Miao Xu | Molex LLC | A, B, C, E
Dave Massner | Molex LLC | C, F, G
Arun RajuGanesan | Molex LLC | C, F
Ray Haskell | Molex LLC | F, H
Ben Madoff | Molex LLC | F
Barry Kelley | Koch Industries | A, B, C, E, F, G, H, I, PI (FHR)
Mike Clausewitz | Koch Industries | A, B, C, G, I, J
Deb Cartwright | Flint Hills Resources | A, B, C, G, H, I
Kurt Anderson | Flint Hills Resources | A, B, C, G, H, I
Roland Guzman | Flint Hills Resources | G, I
Austin Narverud | Flint Hills Resources | B, C, J
Brent Sorensen | Flint Hills Resources | B, H

*H. Lane led the development of the EPA ORD equivalency analysis as part of her NGEM ORISE research appointment with EPA ORD.
8.0 References and Endnotes
1.	EPA, U. S., Reference Method 21, Determination of volatile organic compound leaks. U.S.
Government Printing Office, Washington, D.C.: Federal Register 40 CFR Part 60, Appendix
A-7 , Electronic Code of Federal Regulations, 1990.
2.	EPA, U. S., Protocol for equipment leak emission estimates. U.S. EPA, Office of Air and
Radiation, Office of Air Quality Planning and Standards, Research Triangle Park, North
Carolina 27711: 1995.
3.	EPA, U. S., Leak detection and repair, A best practices guide. U.S. EPA, Office of
Compliance, Office of Enforcement and Compliance Assurance, 1200 Pennsylvania
Avenue, NW, Washington, DC 20460 2007; pp 2014-02.
4.	EPA, U. S., Alternative work practice to detect leaks from equipment. U.S. Code of Federal Register: 40 CFR Parts 60, 63, 65; 73 FR 78199, EPA-HQ-OAR-2003-0199, pp. 78199-78219, 2008.

5.	EPA, U. S., Technical support document, optical gas imaging protocol. U.S. Code of Federal Register: 40 CFR Part 60, Appendix K, EPA-HQ-OAR-2010-0505-4949, 2015.
6.	Furry, D.; Harris, G.; Ranum, D.; Anderson, E.; Carlstrom, V.; Sadik, W.; Shockley, C.;
Siegell, J.; Smith, D., Evaluation of instrument leak detection capabilities for smart LDAR
application: Refinery testing. Environmental Progress & Sustainable Energy: An Official
Publication of the American Institute of Chemical Engineers 2009, 28 (2), 273-284.
7.	Ravikumar, A. P.; Wang, J.; Brandt, A. R., Are optical gas imaging technologies effective
for methane leak detection? Environmental Science & Technology 2016, 51 (1), 718-724.
8.	Robinson, D. R.; Luke-Boone, R.; Aggarwal, V.; Harris, B.; Anderson, E.; Ranum, D.; Kulp, T. J.; Armstrong, K.; Sommers, R.; McRae, T. G., Refinery evaluation of optical imaging to locate fugitive emissions. Journal of the Air & Waste Management Association 2007, 57 (7), 803-810.
9.	Albertson, J. D.; Harvey, T.; Foderaro, G.; Zhu, P.; Zhou, X.; Ferrari, S.; Amin, M. S.;
Modrak, M.; Brantley, H.; Thoma, E. D., A mobile sensing approach for regional
surveillance of fugitive methane emissions in oil and gas production. Environmental science
& technology 2016, 50 (5), 2487-2497.
10.	Snyder, E. G.; Watkins, T. H.; Solomon, P. A.; Thoma, E. D.; Williams, R. W.; Hagler, G.
S. W.; Shelow, D.; Hindin, D. A.; Kilaru, V. J.; Preuss, P. W., The changing paradigm of air
pollution monitoring. Environmental science & technology 2013, 47 (20), 11369-11377.
11.	Somov, A.; Baranov, A.; Spirjakin, D.; Spirjakin, A.; Sleptsov, V.; Passerone, R., Deployment and evaluation of a wireless sensor network for methane leak detection. Sensors and Actuators A: Physical 2013, 202, 217-225.
12.	W. Peng, D. M., L. Lin, A. Chernyshov, B. Kelley, M. Clausewitz, D. Cartwright, A. Narverud, K. Anderson, E. Thoma, In A Sensor Network System for Process Unit Emissions Monitoring, Proceedings of the 2019 Air and Waste Management Association Air Quality Measurement Methods and Technology Conference, Durham, NC, April 2-4. U.S. EPA Science Inventory, Record 345869, https://cfpub.epa.gov/si/, Accessed February 18, 2020.
13.	W. Peng, D. M., L. Lin, A. Chernyshov, B. Kelley, M. Clausewitz, D. Cartwright, K. Anderson, H. Lane, E. Thoma, In An Automated Sensor Network System and Innovative Approach for VOC Leak Detection, 2019 American Fuel and Petrochemical Manufacturers (AFPM) Environmental Conference, Salt Lake City, UT, October 27-29, 2019; U.S. EPA Science Inventory, Record 348039, https://cfpub.epa.gov/si/, Accessed February 18, 2020.
14.	Weimer, J.; Krogh, B. H.; Small, M. J.; Sinopoli, B., An approach to leak detection using
wireless sensor networks at carbon sequestration sites. International Journal of Greenhouse
Gas Control 2012, 9, 243-253.
15.	Murphy, C. F.; Allen, D. T., Hydrocarbon emissions from industrial release events in the
Houston-Galveston area and their impact on ozone formation. Atmospheric Environment
2005, 39 (21), 3785-3798.
16.	Smith, R., Detect Them Before They Get Away: Fenceline Monitoring's Potential To Improve Fugitive Emissions Management. Tulane Environmental Law Journal 2015, 433-

453.
17.	Thoma, E.; George, I.; Duvall, R.; Wu, T.; Whitaker, D.; Oliver, K.; Mukerjee, S.; Brantley, H.; Spann, J.; Bell, T., Rubbertown Next Generation Emissions Measurement Demonstration Project. International Journal of Environmental Research and Public Health 2019, 16 (11), 2041.
18.	Webster, M.; Nam, J.; Kimura, Y.; Jeffries, H.; Vizuete, W.; Allen, D. T., The effect of
variability in industrial emissions on ozone formation in Houston, Texas. Atmospheric
Environment 2007, 41 (40), 9580-9593.
19.	Zavala-Araiza, D.; Lyon, D.; Alvarez, R. A.; Palacios, V.; Harriss, R.; Lan, X.; Talbot, R.; Hamburg, S. P., Toward a functional definition of methane super-emitters: Application to natural gas production sites. Environmental Science & Technology 2015, 49 (13), 8167-8174.
20.	Endnote_1:, The EPA ORD National Risk Management Research Laboratory (NRMRL)
was the original signatory for CRADA #914-16. EPA ORD NRMRL became EPA ORD
CEMM in October 2019, with no change in personnel, for continued execution of the
CRADA.
21.	Endnote_2:, Information on the Federal Technology Transfer Act and the CRADA process
with EPA ORD can be found at https://www.epa.gov/ftta/collaborating-epa-through-federal-
technology-transfer-act. Accessed February 18, 2020.
22.	Endnote_3:, Elements of standardized methods generally refer to factors such as instrument
technology performance (e.g. sensor sensitivity), siting criteria, calibration and QA, general
documentation, and use limitations, presented in the context of a performance-based method
(not limited to the specific embodiment demonstrated in this CRADA). The DRF is
envisioned to be application specific (it differs by process unit), but the presence of a DRF to,
for example, provide QA feedback to ensure ongoing operation of an LDSN is believed to
be required and therefore will likely be part of a standardized method in some form.
23.	API, Analysis of Refinery Screening Data, API Publication No. 310, American Petroleum
Institute: Washington, DC, 1997.
24.	Endnote_4:, Published 10.6 eV RFs for individual compounds can vary by manufacturer.
Table 2-2 contains 10.6 eV PID RF values from Technical/Application Article 02 by Ion
Science Inc., 2019, https://www.ionscience.com/wp-content/uploads/2017/03/TA-Q2-Ion-
Science-PID-Response-Factors-UK-V1.14.pdf, Accessed March 21, 2020.
25.	Thoma, E. D.; Brantley, H. L.; Oliver, K. D.; Whitaker, D. A.; Mukerjee, S.; Mitchell, B.;
Wu, T.; Squier, B.; Escobar, E.; Cousett, T. A., South Philadelphia passive sampler and
sensor study. Journal of the Air & Waste Management Association 2016, 66 (10), 959-970.
26.	Stovern, M.; Murray, J.; Schwartz, C.; Beeler, C.; Thoma, E. D., Understanding oil and gas
pneumatic controllers in the Denver-Julesburg basin using optical gas imaging. Journal of
the Air & Waste Management Association 2020, 70 (4), 468-480.
27.	Thermo, TVA 1000 Response Factors, P/N 50039, Version 8-23-2000, Thermo
Environmental Instruments Inc., Franklin, Massachusetts;
http://www.petersonenvironmental.com/ThermoTVA100QResponseFactors.pdf, Accessed
February 18, 2020.
28.	Zeng, Y.; Morris, J., Detection limits of optical gas imagers as a function of temperature
differential and distance. Journal of the Air & Waste Management Association 2019, 69 (3),
351-361.
29.	Zeng, Y.; Morris, J.; Sanders, A.; Mutyala, S.; Zeng, C., Methods to determine response
factors for infrared gas imagers used as quantitative measurement devices. Journal of the
Air & Waste Management Association 2017, 67 (11), 1180-1191.
30.	Endnote_5:, In the fourth quarter of 2019, Motiva (headquartered in Houston, TX, a
subsidiary of the Saudi Arabian national oil company Saudi Aramco) acquired the SLOF
site from FHR.
31.	EPA, U. S., Monte Carlo Simulation Approach for Evaluating Alternative Work Practices
for Equipment Leaks. Office of Air and Radiation, Office of Air Quality Planning and
Standards, Draft report. Attachment C to EPA-HQ-OAR-2003-0199-0008, Statistical
Methods and Analysis of Sample Size Determination for a Generic Alternative Method.
1999.
32.	EPA, U. S., Alternative Means of Emission Limitation, (40 CFR 65.102), Subpart F -
Equipment Leaks, Electronic Code of Federal Regulations, https://gov.ecfr.io/cgi-bin/text-
idx?SID=9f493e42caba260d8ee3ce742f537eec&mc=true&node=se40.17.65 1102&rgn=di
v8, Accessed April 29, 2020.
33.	EPA, U. S., Standards: Pumps, Valves, Connectors, and Agitators in Heavy Liquid Service;
Instrumentation Systems; and Pressure Relief Devices in Liquid service., (40 CFR 63.169),
Subpart H—National Emission Standards for Organic Hazardous Air Pollutants for
Equipment Leaks, Electronic Code of Federal Regulations, https://gov.ecfr.io/cgi-bin/text-
idx?SID=09693ea334c7dc0a99aafcd5d381c353&mc=true&node=se40.11.63 1169&rgn=di
v8, Accessed April 29, 2020.
34.	EPA, U. S., Instrument and Sensory Monitoring for Leaks [65.107(b)(4), 65.107(e)(1)(v), &
65.104(a)(2)], Subpart F -Equipment Leaks, Electronic Code of Federal Regulations,
https://gov.ecfr.io/cgi-bin/text-
idx?SID=9f493e42caba260d8ee3ce742f537eec&mc=true&node=sp40.17.65.f&rgn=div6#se
40.17.65 1104, Accessed April 29, 2020.
35.	API, 1993 Study of Refinery Fugitive Emissions from Equipment Leaks, Volumes I, II, and
III, Radian DCN No. 93-209-081-02, Prepared by Radian Corporation for Western States
Petroleum Association, Glendale, CA, and American Petroleum Institute, Washington, D.C.
1994.
36.	Thoma E.D., M. B. A., Squier B., Daly R., Brantley H. L., Deshmukh P., Wille, G., Cansler
J., U.S. EPA Science Communication, Science in Action Fact Sheet: SPod Fenceline
Sensors Under Development, Next Generation Emissions Measurements, 2016,
https://www.epa.gov/air-research/spod-fact-sheet, Accessed February 18, 2020.
37.	EPA, U. S., United States Environmental Protection Agency Air Emission Measurement
Center (EMC), EMC Other Test Methods, https://www.epa.gov/emc/emc-other-test-
methods, Accessed February 18, 2020.
38.	TCEQ, 2013 Emission Inventory Guidelines, Appendix A: Technical Supplements, 2015,
https://www.tceq.texas.gov/assets/public/comm exec/pubs/rg/rg360/rg36012/appendix a.pdf,
Accessed February 18, 2020.
39.	TCEQ, Air Permit Technical Guidance for Chemical Sources, Fugitive Guidance, APDG
6422, 2018,
https://www.tceq.texas.gov/assets/public/permitting/air/Guidance/NewSourceReview/fugiti
ve-guidance.pdf, Accessed February 18, 2020.

-------
Appendix A: Exploratory Tests
Appendix A consists of two summary presentations reviewing select aspects of the two
exploratory tests conducted at the EPA RTP outdoor test range. Appendix A1 (pages A2 to A16)
reviews RTP Test 1 (conducted 11/27/17 to 12/7/17). Appendix A2 (pages A17 to A51) summarizes
RTP Test 2, conducted 2/26/18 to 3/7/18.
The following was originally developed as part of internal CRADA team communications and has
been modified in places to remove proprietary information and to improve clarity for use in this
report. This appendix is not intended to capture all efforts performed in these tests but provides
the reader with a sense of the type of work accomplished.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix A Page 1

-------
Appendix A1
RTP Test 1 Summary
Tests Conducted November 27, 2017 to December 7, 2017
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 2

-------
Outline
• Intro to PID and sensors used in Test 1
• Review and analysis of test data
• Summary of Test 1
•	What we have learned
•	Next steps
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 3

-------
What is PID?
A photoionization detector (PID) is a type of sensor that uses vacuum UV light to ionize gas
molecules and then measures the resulting electric current to determine the gas concentration.
[Figure: PID schematic showing the UV lamp, lamp window, and electrodes; a gas molecule M is ionized (M → M+ + e-) and the resulting current is measured as the signal.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix A Page 4

-------
CMI Technologies Evaluated
TVA-1000B by Thermo Fisher (Method 21)
OGI camera by FLIR (AWP)
Photoionization Detector (PID)
•	Falco Area Monitors by Ion Science
•	Molex unit, white cap sensor, diffusion model
•	EPA unit, white cap sensor, sample draw model
•	Falco Area Monitor by Ion Science (sample drawing, EPA unit)
•	S-Pod fenceline monitor by EPA
•	PID data acquisition systems by Molex
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 5

-------
EPA Test Range
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 6

-------
Test Set-up
•	100% Isobutylene was used as the test gas.
•	Mass flow controllers were used to control the leak rate.
•	Every leak was measured by TVA-1000B/M21 (some also by FLIR OGI).
•	All sensors were placed together in one or two locations, while the leak location differed between tests.
[Photos: mass flow controllers and valves used in the test setup.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 7

-------
Overall Layout
• 22 tests (one aborted) with 15 sensors. Of these, 18 tests had all sensors at 4.5 ft off the ground, and 4 tests had half the sensors at 4.5 ft and half at 13 ft above the ground.
• Bubbles mark leak locations; bubble size corresponds to maximum mass flow rate. Dotted lines connect leaks to sensors (located on the grey bar).
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 8

-------
Statistics of Wind Data
There was positive (from leak towards sensor) and negative wind of varying speed during each test.
[Figure: histogram of wind speed (ft/min) in the direction from leak to sensor. Positive values indicate wind from leak to sensor, negative from sensor to leak; blue bars are sensors at 4.5 ft, brown bars at 13 ft.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 9

-------
Sensor Output Plots
•	Multiple detects were observed during this 1-hour test session.
•	All sensors responded in the same way regardless of type and sensitivity.
Test conditions: 10 sccm leak rate; dry gas meter; TVA = 8,000 ppm; leak was 4' high and 7' east of the sensors.
[Figure: 29NOV2017 Test 1 — overlaid time series of all sensor outputs.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 10

-------
Distance and Leak Rate Test
•	Valve Bonnet as the leak source 3'6" above the ground and 37'10" NE of sensors.
•	Significant gas detects were observed at a leak rate of 5.0 sccm and even as low as 2.5 sccm.
[Figure: sensor time series for 29NOV2017 Test 6 (5.0 sccm, TVA = 11,000 ppm) and Test 7 (2.5 sccm, TVA = 5,300 ppm).]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 11

-------
Foggy Morning (High RH) Test
•	20 sccm leak, valve bonnet, TVA > 100,000 ppm; leak was 22" high and 40' SE of the sensors. Intermittent fan.
•	Sensors responded to gas while their baseline output was drifting.
[Figure: 5DEC2017 Test 1 — sensor time series showing gas responses on top of drifting baselines.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 12

-------
Forced Air Flow Test
• Directional air flow generated by a fan from the leak toward the sensors created a large gas response on some sensors.
[Figure: 30NOV2017 Test 4 — sensor time series with the fan-on period marked. Test conditions: 13 sccm leak rate, valve bonnet, TVA = 43,000 ppm, leak 22' high and 33' SE of the sensors.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 13

-------
Wind Effects
• Below are plots of sensor readings (Blue, left y-axis) and wind speed from leak to sensor (Red, right y-axis)
from Test 1, 2, and 3 on November 29. Test 1 and 3 seem to show a possible relationship between wind
speed and detection, but Test 2 has no obvious relationship.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 14

-------
Summary (what we have learned)
•	The PID does detect gas leaks remotely in fluctuating wind conditions. Each detection is a
statistically significant change in the sensor output within a short time period, e.g., seconds to
minutes (a simplified detection sketch follows this list).
•	Our preliminary test results show the PID can reliably detect gas leaks at 5sccm at 20C within a
radius of 30ft.
•	The preliminary test shows the correlation between the leak rate and the signal response size.
The relationship between them is, however, dependent on many factors.*
•	Limited trials with intentional obstruction between emission source and sensors show sensors
were still able to detect leaks.
•	Some sensors showed drift in the baseline due to environmental changes. Even so, the PID
sensors were still found to respond to gas.
•	The PID sensor was found to respond to the exhaust from a forklift truck running nearby.
•	The PID was able to detect small leaks outside the normal operating regime of the OGI.
* Factors may include atmospheric stability, wind speed, plume-sensor overlap statistics, distance between sensor and
source, sensor responsivity for a mixture of compounds, time resolution of the sensor, impedance of gas transport, etc.
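The "statistically significant change" criterion above can be illustrated with a simple rolling-baseline threshold. The sketch below is purely illustrative and is not the detection algorithm used by the CRADA team; the window length, threshold, and synthetic trace are hypothetical.

```python
import statistics

def detect_events(signal, window=60, n_sigma=4.0):
    """Flag indices where a reading rises n_sigma standard deviations above a
    rolling baseline.  `signal` is a list of evenly spaced sensor readings;
    the window length and threshold are hypothetical, for illustration only."""
    events = []
    for i in range(window, len(signal)):
        baseline = signal[i - window:i]
        mu = statistics.mean(baseline)
        sigma = statistics.stdev(baseline) or 1e-6  # guard against a perfectly flat baseline
        if signal[i] > mu + n_sigma * sigma:
            events.append(i)
    return events

# Synthetic trace: flat baseline followed by a short plume "hit"
trace = [100.0] * 120 + [100.0 + 5.0 * k for k in range(10)] + [100.0] * 60
print(detect_events(trace))
```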
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 15

-------
Next Steps
•	Study correlation between gas response and wind etc.
•	Build test prototype for further study
•	Evaluate each type of PID sensor in the lab
•	Develop test plans for the next few months
•	Identify action items prior to each test
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 16

-------
Appendix A2
RTP Test 2 Summary
Tests Conducted February 26, 2018 to March 7, 2018
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix A Page 17

-------
Outline
Test Plan: Objectives, Procedures, Set-up, Data Method
Test Results: Geospatial, Impedance, Wind, Gas Type, Device Ops
Summary
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix A Page 18

-------
Test Objectives
• Further validate a highly sensitive photoionization detector (PID) technology for detecting plumes of
volatile organic compounds (VOC).
• Explore spatial distribution application of sensors and feasibility of triangulating back to the leak
source. Gain insights into variables that will help establish a relationship between each gas response
and the leak source, and the significance of variables impacting remote sensor leak detection.
• Obtain knowledge about ethylene detection and wireless sensor operation, and information required
for EPA's Quality Assurance Project Plan (QAPP) to further document the CRADA's Phase 1 activities.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix A Page 19

-------
Test Matrix
Test ID | Test # Planned | Layout | Gas | Leak Rate, sccm | Variable
1 | 1 | A-0 | iso-butylene | 2.5 | Leak Rate
2 | 2 | A-0 | iso-butylene | 5.0 | Leak Rate
3 | 3 | A-0 | iso-butylene | 10 | Leak Rate
4 | 4 | A-0 | iso-butylene | 20 | Leak Rate
5 | 9 | A-1 | iso-butylene | 10 | Leak Point
6 | 10 | B-0 | iso-butylene | 10 | Spatial Distribution
7 | 11 | B-1 | iso-butylene | 10 | Leak Point
8 | 12 | C-0 | iso-butylene | 10 | Spatial Distribution
9 | 13 | C-1 | iso-butylene | 10 | Leak Point
10 | 14 | D-0 | iso-butylene | 10 | Spatial Distribution
11 | 15 | D-1 | iso-butylene | 10 | Leak Point
12 | 16 | E (5ft) | iso-butylene | 10 | Spatial Distribution
13 | — | E (5ft) | Ethylene | 10 | Gas
14 | 17 | F (10ft) | iso-butylene | 10 | Spatial Distribution
15 | — | F (10ft) | Ethylene | 10 | Gas
16 | 18 | G (20ft) | iso-butylene | 10 | Spatial Distribution
17 | — | G (20ft) | Ethylene | 10 | Gas
18 | — | H (30ft) | iso-butylene | 10 | Spatial Distribution
19 | 19 | N/A | N/A | N/A | Heat Strip/Operation
Note: Tests 5-8 (ethylene leak rate tests) in the original plan were replaced by Tests 13, 15, and 17.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 20

-------
Sensor Layout Examples
Blue dots: sensors @ 5' height
Green dots: sensors @ 10' height
Red dot: leak point
Gray boxes: buildings
Scale: grid lines 10 ft
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 21

-------
Test Set-up
•	Either 100% iso-butylene or 100% ethylene was used as the test gas, and a 4" valve was used as the leaking component.
The leak rate was controlled by an Alicat mass flow controller.
•	The test area structure was de-energized and monitored constantly with area gas monitors. A manual power shutoff
device was placed at the power output in case one of the monitors alarmed at 10% LEL.
•	Sensor placement, leak source location, and gas or leak rate were adjusted according to the test plan and actual test results.
•	Detection results were monitored via the data acquisition system and displayed for real-time evaluation.
[Photos: isobutylene gas tank, mass flow controllers, leaking valve, and wireless sensor transmitters.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 22

-------
RTP test site with wireless sensors mounted on tripods
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 23

-------
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 24
Leak partly surrounded by impedance
Leak in open field & Method 21 inspection
EPA SPod and Molex sensors at same location

-------
Molex Data Processing Method
•	RAW Data: sensor data, SPod data, test plan, layouts, and notes
•	AWS Database: consolidate and organize files into one location; allow searching the full database by test numbers, assets, timestamps, etc.
•	Python Analysis: qualification of test results, relationships between variables, and visualization and illustration of results
•	Tableau Visualization: interactive dashboards to visualize many aspects of the test data
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 25

-------
Leak Rate Interpretation
Iso-butylene (C4H8, MW = 56.11)
•	10 sccm = 24.2 mg/min, or 1.452 g/hr @ 50F, 1 atm
•	10 sccm = 23.7 mg/min, or 1.422 g/hr @ 60F, 1 atm
•	1 gallon liquid = 2.226 kg @ 0.588 g/ml = 1565 hours or 65 days @ 10 sccm, 60F
Ethylene (C2H4, MW = 28.05)
•	10 sccm = 12.1 mg/min, or 0.726 g/hr @ 50F, 1 atm
•	10 sccm = 11.8 mg/min, or 0.708 g/hr @ 60F, 1 atm
•	FLIR GasFindIR minimum detection for ethylene: 4.4 g/hr in a controlled setting (source: http://www.flir.co.uk/ogi/display/?id=30866)
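The mass rates in these bullets follow directly from the ideal gas law. As a reproducibility aid (not part of the original presentation), the sketch below converts a volumetric leak rate in sccm to mg/min and g/hr using the molecular weights and reference temperatures listed above.

```python
R = 82.057  # ideal gas constant in cm^3*atm/(mol*K)

def sccm_to_mass(sccm, mol_weight_g, temp_f, pressure_atm=1.0):
    """Convert a volumetric flow (cm^3/min at temp_f and pressure_atm) to
    (mg/min, g/hr) for a gas of the given molecular weight."""
    temp_k = (temp_f - 32.0) * 5.0 / 9.0 + 273.15
    mol_per_min = pressure_atm * sccm / (R * temp_k)  # n = PV / RT
    mg_per_min = mol_per_min * mol_weight_g * 1000.0
    return mg_per_min, mg_per_min * 60.0 / 1000.0

# Reproduces the bullets above: ~24.2/23.7 mg/min for iso-butylene and
# ~12.1/11.8 mg/min for ethylene at 50 F and 60 F, respectively.
for gas, mw in (("iso-butylene", 56.11), ("ethylene", 28.05)):
    for t_f in (50, 60):
        mg_min, g_hr = sccm_to_mass(10, mw, t_f)
        print(f"{gas} @ {t_f}F: {mg_min:.1f} mg/min, {g_hr:.3f} g/hr")
```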
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 26

-------
Sensitivity Checks
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 27

-------
Bump Test,
Diffusion
Mode
¦ In passive sampling mode, sensors still showed gas responses similar to each other.
Proximity test: sensors grouped together 17 ft from the leak, isobutylene @ 10 sccm, 2/28
[Figure: overlaid time series for Sensors #1 through #10 during the proximity test.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 28

-------
Leak Rate
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 29

-------
Leak Rate Tests
[Figure: sensor time series for Test 1 (2.5 sccm), Test 3 (10.0 sccm), and Test 4 (20.0 sccm).]
-------
Leak Rate vs. Detection
Although weather conditions kept changing during the tests, the graph still shows that the detection signal increased with increasing leak rate overall.
[Figure: detection signal vs. mass flow rate (sccm) with a linear fit for Sensor #5 in Tests 1-3 (Layout A-0, 20 ft, isobutylene, rain, 2/26); background and Tests 1-3 are plotted.]
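A linear fit like the one shown in this figure takes only a few lines of NumPy. In the sketch below the leak rates are the tested values, but the detection signals are hypothetical placeholders, since the measured responses are proprietary.

```python
import numpy as np

# Leak rates (sccm) for the background point and Tests 1-3; the detection
# signals are hypothetical placeholders, not the measured (proprietary) values.
leak_rate = np.array([0.0, 2.5, 5.0, 10.0])
signal = np.array([1.0, 6.0, 12.0, 26.0])

slope, intercept = np.polyfit(leak_rate, signal, 1)  # ordinary least-squares line
print(f"signal ~= {slope:.2f} * leak_rate + {intercept:.2f}")
```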
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 31

-------
Leak Distance
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 32

-------
Distance Change
Sensor #5, 5 ft height, isobutylene, 10 sccm
[Figure: Sensor #5 time series at increasing leak distances — Test 16, layout E (5 ft), 3/5; Test 17, layout F (10 ft), 3/7; Test 18, layout G (20 ft), 3/7; Test 19, layout H (30 ft), 3/7/2018.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 33

-------
Multi-Peak Detection
¦ At close range, some detections have two or more overlapping peaks.
[Figure: Sensor 6 time series from Test 16 (layout E, 5 ft, isobutylene, 10 sccm, 3/5/2018) showing multiple overlapping detection peaks.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 34

-------
Sensors Aligned w/ Leak
Sensor #8 showed lower detection than sensor #10, which is closer to the leak point.
Test 10, B-0, isobutylene, 10 sccm; the leak, sensor #10, and sensor #8 are lined up at 0, 20 ft, and 50 ft, respectively, at 5 ft height.
[Figure: time series for Sensor #8 and Sensor #10, with the 20 ft and 30 ft spacings annotated.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 35

-------
Long Distance
Sensors 58 ft away showed visible detections.
[Figure: time series for Sensor #3 and Sensor #9 during Test 10 (B-0, isobutylene, 10 sccm, overcast, 2/28); detection events A through F are visible.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 36

-------
Leak Distance vs. Detection
¦ The longer the distance, the lower the detection magnitude. The width of the detection peak appears to increase with distance.
[Figure: peak height vs. distance with a polynomial fit (left) and peak width vs. distance with a linear fit (right) for Tests 11-15.]
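One simple way to extract the peak height and peak width plotted in this figure from a raw sensor trace is sketched below. The half-maximum width definition and the synthetic trace are assumptions for illustration, not the exact procedure used here.

```python
def peak_height_and_width(trace, baseline, dt_s=1.0):
    """Return (height above baseline, full width at half maximum in seconds)
    for the largest peak in a trace sampled every dt_s seconds."""
    height = max(trace) - baseline
    half = baseline + height / 2.0
    above = [i for i, v in enumerate(trace) if v >= half]
    width_s = (above[-1] - above[0]) * dt_s if above else 0.0
    return height, width_s

# Synthetic triangular peak on a baseline of 100: height 100, FWHM 10 s
trace = ([100.0] * 20 + [100.0 + 10.0 * k for k in range(1, 11)]
         + [200.0 - 10.0 * k for k in range(1, 11)] + [100.0] * 20)
print(peak_height_and_width(trace, baseline=100.0))
```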
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 37

-------
Sensor Height
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 38

-------
Sensors @ Different Heights
¦ Sensors at 0, 5, and 10 ft heights all detected gas, though #7 showed higher detection because of its shorter distance to the leak point.
[Figure: time series for Sensors 3, 7, and 10 during Test 17 (Layout F, isobutylene, 10 sccm, 3/7).]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 39

-------
30 ft
The sensor at 10 ft height had the lowest detection, possibly because the molecular weight of the gas became a bigger factor in gas distribution with increasing time (a longer distance means more time to travel).
[Figure: Test 19, Layout H, isobutylene, 10 sccm, 3/7 — time series for Sensors 3, 7, and 10 at 30 ft from the leak.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 40

-------
Impedance
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 41

-------
Sensor Behind Building
[Figure: Test 3, A-0, isobutylene, 10 sccm, rain, 2/26/2018 — time series for a sensor located behind a building relative to the leak.]
-------
Leak Inside Mezzanine
[Figure: Test 13, C-1, isobutylene, 10 sccm, rain, 3/1 — time series for a sensor at 5 ft height behind impedance, with the leak inside a mezzanine.]
-------
Wind
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 44

-------
Ethylene
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 45

-------
Ethylene Leak @ 10ft Radius
Ethylene was detected from sensors placed 10
ft away from the leak point.
[Figure: Test 17, layout F, ethylene, 10 sccm, 3/5 — sensor time series showing detections at a 10 ft radius.]
-------
Ethylene vs. iso-Butylene @ 20 ft Radius
Visible detections were observed from sensors 20 ft away from the ethylene leak. The detection signals were about 1/10 to 1/5 of isobutylene's.
[Figure: time series for Sensors 3, 5, 7, 8, and 10 — Test 18a (Layout G, isobutylene, 10 sccm, 3/7) and Test 18b (Layout G, ethylene, 10 sccm, 3/6).]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 47

-------
Device Comparison
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 48

-------
Method 21 Techniques
¦ Data suggests current M21 measurements are inaccurate. TVA
has better accuracy than MiniRAE. Dilution tubes should not
be used when gas concentrations are within TVA measuring
range.
Isobutylene, ppm | MiniRAE, ppm | TVA, ppm | RF | TVA w/ RF, ppm | TVA w/ Dilution Probe, ppm | TVA Dilution w/ RF, ppm
101 | 104 | 137 | 0.66 | 90 | 140 | 92
510 | 412 | 692 | 0.64 | 443 | 650 | 416
2011 | 1258 | 3070 | 0.58 | 1781 | 2500 | 1450
5050 | 2465 | 10200 | 0.45 | 4590 | 6750 | 3038
9999 | 3300 | 31800 | 0.24 | 7632 | 14400 | 3456
Notes: 1. Testing was performed by Mike Clausewitz on March 1, 2018.
2. The MiniRAE was calibrated to 100 ppm isobutylene; the TVA was calibrated to 500, 2,000, and 10,000 ppm methane.
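The two RF-corrected columns in the table are simply the raw TVA readings multiplied by the isobutylene response factor listed for that reading level. The sketch below reproduces those columns from the tabulated values; it is a worked check of the arithmetic, not a general calibration procedure.

```python
# Rows taken from the table above: (isobutylene ppm, TVA ppm, RF, TVA w/ dilution probe ppm)
rows = [
    (101,  137,   0.66, 140),
    (510,  692,   0.64, 650),
    (2011, 3070,  0.58, 2500),
    (5050, 10200, 0.45, 6750),
    (9999, 31800, 0.24, 14400),
]

for true_ppm, tva, rf, tva_dilution in rows:
    tva_rf = tva * rf                    # "TVA w/ RF" column
    tva_dilution_rf = tva_dilution * rf  # "TVA Dilution w/ RF" column
    print(f"true={true_ppm:5d}  TVA*RF={tva_rf:7.0f}  TVA(dilution)*RF={tva_dilution_rf:7.0f}")
```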
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix A Page 49

-------
Summary (what we have learned)
•	The PID has continuously demonstrated near-source detectability of VOC emissions. At 40-60°F and low wind speeds,
visible detections were observed for an isobutylene leak at 10 sccm (approximately 1.4 g/hr) at a distance of 50 ft without
obstruction.
•	Sensor detection is affected by leak rate, distance, height, impedance, and wind etc. Sensors close to the leak source
have large detection signals. Sensors farther away and /or behind impedance have smaller detection signals.
•	Detection of gases 5ft below and 5ft above the leak point suggests that a gas not only travels laterally, but also disperses
vertically.
•	There appear to be good correlations between sensor response and wind direction. In general, sensors downwind
respond to gas and when there are multiple sensors aligned downwind, the one closer to the leak shows higher
response than the one farther away.
•	A sensor network provides enhanced detection coverage, and info about probable location of the leak source.
•	An ethylene leak at 10 sccm, or 0.71 g/hr @ 60F, was detected by sensors 20 ft away at 1/10 to 1/5 of the magnitude of
isobutylene's response. This leak rate is about 1/6 of the reported minimum detection limit (4.4 g/hr) of FLIR gas
imaging cameras.
•	Double or multi-peak detection often occurs when a sensor is close to the leak source.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix A Page 50

-------
Next Steps
•	Continue studying impact of different variables and relationships between them
•	Update Test 2 Summary
•	Develop plan for Test 3 Phase 1 test
•	Sour Lake safety plan
•	Sour Lake Management of Change (MOC)
•	Update EPA Quality Assurance Project Plan (QAPP)
•	Develop Molex PID transmitter for Test 3
•	Develop algorithms to calculate detection signals on mSyte
•	Continue testing PID sensors in the lab
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix A Page 51

-------
Appendix B: FHR SLOF Tests
Appendix B consists of two summary presentations reviewing select aspects of the two LDSN Pilot
tests conducted at FHR SLOF (called Test 3). Appendix B1 (pages B2 to B26) reviews SLOF Test 3.1,
conducted 5/14/18 to 5/23/18. Appendix B2 (pages B27 to B55) summarizes SLOF Test 3.2,
conducted 9/10/18 to 12/7/18. These presentations were developed as an internal CRADA
communication and have been modified for use in this report.
The following was originally developed as part of internal CRADA team communications and has
been modified in places to remove proprietary information and to improve clarity for use in this
report. This appendix is not intended to capture all efforts performed in these tests but provides
the reader with a sense of the type of work accomplished.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix B Page 1

-------
Appendix B1
Sour Lake Test 3.1 Summary
Tests Conducted May 14, 2018 to May 23, 2018
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 2

-------
Outline
Test Plan
Objectives
Test Site
Info
Procedure
and Layouts
Test Results
Background
Controlled
Releases
Wind
Correlation
Elevation
Summary
Results and
Conclusions
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix B Page 3

-------
Test Plan & Setup
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 4

-------
Test Objectives
•	Conduct experiments to optimize sensor setup, including total number of sensors and geospatial grid
arrangement.
•	Continue to validate feasibility of triangulation of leak source location.
•	Conduct experiments to evaluate the background level of ethylene and possible impact to Test 3.2.
Validate and improve the test method and experimental plan for Test 3.2.
•	Continue to gain additional knowledge about the design of a robust field sensor device.
•	Test WIFI and other hardware and infrastructure required for Test 3.2.
•	Develop infrastructure SOW for Test 3.2 - sensor locations, mounting requirements, conduit/wire
scope, remote power relay scope, other scope.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 5

-------
Test Site Introduction
•	Flint Hills Resources Port Arthur's (FHR's)
Sour Lake Olefins Facility (SLOF) is a bulk
storage facility located in Sour Lake, TX.
The facility is permitted to receive, store,
and distribute ethylene and propylene.
•	Products are stored in underground
storage caverns and recovered from the
caverns for sale through displacement by
brine.
•	The equipment used at the SLOF includes
filters, product measurement devices,
filter-separators (liquid coalescers), a solid
bed dehydration system, brine degassing
facilities, a flare system, brine injection
and transfer facilities, brine ponds, air
compressors, utility equipment, and a
control building.
* The marked location is the designated test area.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 6

-------
Additional Info During Test
Plant shut down, but 1500 psi of ethylene pressure remained in the pipes.
Maintenance activities on and off.
Interference from adjacent plant.
High temp, high humidity; low wind speeds most of the time.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 7

-------
General Test Procedure
Position sensors per planned layout.
Calibrate equipment and bump test sensors.
Place leak valve in designated location and
barricade the leak area.
Record sensor background readings.
Release gas at a preset flow rate and record
sensor readings.
Perform Method 21 measurements.
Stop gas flow and bag the leak point.
Move on to the next leak location or test.
EPA SPod Wind
Sensor
Omega Transmitter &
Refitted Housing
Gas & Mass Flow Controllers	3/4" Valve (Leak Component)
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 8

-------
Layout
A
Note: All sensors were placed at 5' above grade except sensors #9 at 11' and #10 at 21'
respectively. Leak points L2,L4,L8 were 3' above grade, the rest were 1' above grade.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 9

-------
Test
Matrix
Test # | Layout | Leak Location | Gas Type | Leak Rate, SCCM | Test Time, Minutes
0 (5/15) | A (1) | N/A | Background | 0 | 120
1 (5/15) | A (1) | L1 | Iso-butylene | 10 | 30
2 (5/15) | A (1) | L2 | Iso-butylene | 10 | 30
3 (5/16) | A (1) | L3 | Iso-butylene | 10 | 30
4 (5/16) | A (1) | L4 | Iso-butylene | 10 | 30
5 (5/16) | A (1) | L5 | Iso-butylene | 10 | 30
6 (5/16) | A (1) | L6 | Iso-butylene | 10 | 30
7 (5/16) | A (1) | L7 | Iso-butylene | 10 | 30
8 (5/15, 16) | A (1) | L8 | Iso-butylene | 10 | 30
9 (5/16) | A (1) | L9 | Iso-butylene | 10 | 30
10 (5/16) | A (1) | L10 | Iso-butylene | 10 | 30
1 (5/17) | A (1) | L10 | Iso-butylene | 200, 0 | 15, 15
2 (5/17) | A (1) | L10 | Iso-butylene | 150, 0 | 15, 15
3 (5/17) | A (1) | L10 | Iso-butylene | 100, 0 | 15, 15
4 (5/17) | A (1) | L10 | Iso-butylene | 50, 0 | 15, 15
5 (5/17) | A (1) | L10 | Iso-butylene | 20, 0 | 15, 15
6 (5/17) | A (1) | L10 | Iso-butylene | 10, 0 | 15, 15
1 (5/18) | C | L11 | Iso-butylene | 100, 0 | 60, 15
2 (5/18) | C | L12 | Iso-butylene | 100, 0 | 60, 15
1 (5/21) | D | L13 | Iso-butylene | 30, 0 | 180, 15
2 (5/21) | D | L14 | Iso-butylene | 30, 0 | 90, 15
1 (5/22) | D | L14 | Iso-butylene | 30, 0 | 60, 15
2 (5/22) | D | L15 | Iso-butylene | 30, 0 | 120, 15
1 (5/23) | E | L16 | Ethylene | 100, 0 | 60, 30
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 10

-------
Layout
C
Note: Same as layout A except sensor #9 was relocated and set at 5' above grade. The
leak points L11-L12 were 1' above grade.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 11

-------
Layout
D
Note: All the sensors and leak points were placed at 5' and 1' above grade, respectively.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 12

-------
Background Test
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 13

-------
Overnight Background Test (Layout A, 5-15-2018)
Significant detects were witnessed without controlled gas releases in the area.
[Figure: overnight sensor time series with no controlled gas releases in the area.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 14

-------
Overnight Background Test (Layout D, 5-21-2018)
Sensor detects tracked each other most of the time. Sensors #9 and #10 were consistently the highest, suggesting a major leak source near these two sensors.
[Figure: overnight time series for all sensors; Sensors #9 and #10 show the highest readings.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 15

-------
Leak Source Identification
[Photos: the flare, the fin fan bank, and the top of the fin fan.]
Initially, we thought a flare on the west
side had caused the detection signals.
Upon investigation with portable
devices including the TVA, we were able
to pinpoint the leak source to a fin fan
bank above a degassing drum.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 16

-------
Overnight Background Test (layout D, 5-22-2018)
A close examination of one cluster reveals that all the sensors showed detects, including sensors #1 and #4, which were 120-150 ft away from the fin fan leak.
[Figure: overnight sensor time series with a zoomed view of one detection cluster.]

-------
Controlled Gas Releases
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 18

-------
30 sccm Isobutylene Leak Test (Layout D, L15, 5-22-2018 Test 2)
While the fin fan leak (represented by sensors #9 and #10) continued to show up in the sensor data, we observed significant detections on sensors #4 and #8 that were not part of the fingerprint of the fin fan leak.
[Figure: time series highlighting Sensor #8 (55' N of the leak) and Sensor #4 during the controlled release.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix B Page 19

-------
Leak Source Confirmation
The controlled release L15 was located 5' north of sensor #1, but south of #8 and southwest of #4. Therefore, we are confident that the detection signals of sensors #4 and #8 came from this leak source.
[Figure: site map (north up) showing the SPod, the sensor locations, the fin fan leak, and the controlled-release leak location.]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 20

-------
Wind Correlation
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 21

-------
Elevation
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 22

-------
Layout E
•	Test was performed in an open area
upwind of the fin fan.
•	Sensors were paired and placed at varying distances from a 100 sccm ethylene leak at 1' above grade.
•	In each pair, sensors were set at 1' and 5' above grade, respectively.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 23

-------
100 sccm Ethylene Test (Layout E, L16, 5-23-2018 Test 1)
[Figure: sensor time series during the 100 sccm ethylene release.]
-------
Sensors at the same elevation as the leak, i.e. 1' above grade, appeared to have higher detection signals than those
at a higher elevation (5') within the tested distance of 34' from the leak location. Gases heavier than ethylene are
likely to follow the same rule.
[Figure: schematic of paired sensors at 1' and 5' above grade at distances up to 34' from the leak, with the observed detection signal magnitudes.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 25

-------
Results Summary
•	Leak detectability is determined by 1) leak size, 2) sensor performance, 3) gas transport (meteorological
conditions) and 4) capability of sensor collaboration in the network.
o Test 3.1 continues to advance our understanding of elements 1 through 3 while we begin work on element 4.
o Test 3.1 yielded positive, encouraging results supporting the area sensor concept.
•	An unplanned, significant leak that was not part of the Method 21 program was discovered immediately
at the beginning of Test 3.1 by our sensor network.
o This leak was not detectable by OGI due to its location (fin fan).
•	Overnight (extended time period) testing shows diurnal meteorological conditions that will play an
important part in detection strategies.
o Night or stable conditions appear to be more favorable for detectability.
o Data shows that sensors placed as far as 130-150 ft away in the plant responded to the fin fan leak.
•	Proven capability to discern 30 SCCM (4.0 g/hr @ 90F) isobutylene leaks against background emissions
(fin fan leak).
•	The equivalency argument for regulatory method use is developing and remains viable.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report — Appendix B Page 26

-------
Appendix B2
SLOF Test 3.2 Summary
Tests Conducted September 10, 2018 to December 7, 2018
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 27

-------
Outline
Test Plan: Objectives, Sensor Info, Sensor Layout, Test Matrix
Test Results: New Leak Detection, Controlled Releases, Sensor Elevation, Sensor Performance
Summary: Conclusions & Recommendations
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix B Page 28

-------
Test Plan & Set-Up
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 29

-------
Test Objectives
•	Field Test Molex 1st Gen Sensor Hardware System
•	Validate performance & durability of 1st Gen Sensors
•	Validate MET sensors correlation to detection & leak location
•	Test upload data to Molex mSyte data platform
•	Continue to develop algorithms to detect and triangulate new leaks (offline)
•	Single & Multiple leak releases
•	Test for mobile vehicle emissions
•	Longer release times - (1) 24-hour test
•	Blind leak test - challenge data science model
•	Capture requirements for a site assessment for sensor array configuration
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 30

-------
Test 3.2 Layout
•	Yellow pins - Sensor Locations
•	Red Triangles - Leak Locations
(Sensors are about 3' above grade)
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 31

-------
Test 3.2 Schedule
Test Name | Date | Test Focuses
Test 3.2.1 | Sep 10 - Sep 21, 2018 | Collected sensor performance data; collected data for algorithm development; validated new bump test method
Test 3.2.2 | Oct 15 - Oct 19, 2018 | Tested sensor durability; tested leak elevation
Test 3.2.3 | Dec 3 - Dec 7, 2018 | Tested sensor durability; investigated sensor placement vs. gas species

-------
Test Matrix (1)
Test ID | Leak Location | Leak Elevation, ft | Gas Type | Gas Leak Rate, SCCM | Test Duration, Hr
Coverage Validation
2-1 | LL1 | 3 | Isobutylene | 50 | 1.5
2-2 | LL2 | 21 | Isobutylene | 50 | 1
2-3 | LL3 | 10 | Isobutylene | 50 | 1
2-4 | LL4 | 4.5 | Isobutylene | 50 | 1
2-5 | LL5 | 6 | Isobutylene | 50 | 1
Min Detect of Isobutylene
3-1 | LL1 | 3 | Isobutylene | 10 | 1
3-2 | LL1 | 3 | Isobutylene | 20 | 1
3-3 | LL1 | 3 | Isobutylene | 30 | 1
3-4 | LL1 | 3 | Isobutylene | 40 | 1
Min Detect of Ethylene
4-1 | LL1 | 3 | Ethylene | 25 | 1
4-2 | LL1 | 3 | Ethylene | 50 | 1
4-3 | LL1 | 3 | Ethylene | 75 | 1
4-4 | LL1 | 3 | Ethylene | 100 | 1
4-5 | LL1 | 3 | Ethylene | 150 | 1
4-6 | LL1 | 3 | Ethylene | 200 | 1
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 33

-------
Test Matrix (2)
Test ID | Leak Location | Leak Elevation, ft | Gas Type | Gas Leak Rate, SCCM | Test Duration, Hr
Test groups: Data for Algorithm Development; Extended Release Test; Multiple Leak Test; Extended Release Test @ High Elevation
Leak locations: LL6, LL7, LL8, LL9, LL10, LL1, LL1 & LL7, LL3 & LL8, LL2, LL12
Leak elevations (ft): 2.5, 4.9, 3, 4.5, 4.5, 3 & 4.5, 3 & 4.5, 6, 3, 4.5, 3, 4.5, 21, 23
Gas type: Isobutylene (all tests)
Leak rates (SCCM): 30, 30, 50, 30, 30, 30, 30 & 30, 30 & 50, 30, 30, 30, 30, 50, 50, 50
Test durations (hr): 2, 2, 2, 1, 2, 24, 2, 2, 2, 2, 2, 2, 2, 24
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 34

-------
Test Matrix (3)
Test ID | Leak Location | Leak Elevation, ft | Gas Type | Gas Leak Rate, SCCM | Test Duration, Hr
Data for Algorithm Development
12-1 | LL7 | 4.75 | Isobutylene | 50 | 1
12-2 | LL8 | 3 | Isobutylene | 50 | 2
12-3 | LL10 | 4.5 | Isobutylene | 50 | 2
Blind Test
13-1 | — | 10 | Isobutylene | 30 | 2
13-2 | — | 5 | Isobutylene | 30 | 2
13-3 | — | 5 | Isobutylene | 20 | 2
13-4 | — | 5 | Isobutylene | 30 | 2
Response Factor Test
14-1 | LL7 | 4.75 | Isobutylene | 30 | 2
14-3 | LL7 | 4.75 | Ethylene | 150 | 2
14-6 | LL7 | 4.75 | Ethylene | 300 | 1
14-7 | LL7 | 4.75 | Ethylene | 300 | 2
14-8 | LL7 | 4.75 | iso-butylene | 30 | 2
17-1 | LL5 | 6 | Ethylene | 500 | 1
17-2 | LL5 | 6 | Ethylene | 250 | 1
17-3 | LL5 | 6 | Ethylene | 100 | 1
Leak Elevation Test
16-1 | LL12 | 23 | Isobutylene | 50 | 4.5
16-2 | LL12 | 23 | Isobutylene | 100 | 2.5
16-3 | LL3 | 10 | Isobutylene | 50 | 7
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 35

-------
Sensors and Equipment
[Screenshot: the Molex mSyte asset integrity monitoring platform showing the Sour Lake sensor channels.]
-------
Controlled Gas Release Set-Up
Method 21 Measurement
•	Controlled gas release testing was done the same way as in Test 3.1.
•	Isobutylene or Ethylene was used as the test gas.
•	Each gas release was quantified and recorded with EPA Method 21.
Gas and Mass Flow controllers	4" Valve as Leak Component
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 37

-------
Test Observers
•	Eben Thoma (EPA ORD CRADA PI) and Jason Dewees from the EPA discussed the sensors and investigated the fin fan leak. Mike Miller, EPA R6, conducted a QA audit.
•	Beth Akers, Xuan Zhao, and Emily Johnson from TCEQ observed sensor validation tests. Lucinda Legel dropped in to provide guidance.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 38

-------
New Leak Detection
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 39

-------
New Leak Detection in September
9/11/2018, LL1, 30 sccm, isobutylene
Sensor 4 showed some small peaks, even though it was fairly far away from the fin fan and the controlled release point.
[Figure: time series for sensors 1 through 11 during the 9/11/2018 LL1 release.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 40

-------
Valve Bonnet Leak Found
•	A portable PID and AVO led us to a 6" valve (LDAR tag #18542) between S1 and S4.
•	The leak was found within 5-10 min.
•	TVA = 60,000 ppm. OGI was difficult.
[Photo: view from S1 back to the valve leak (~40').]
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 41

-------
New Leak Detection in October
Random detects from S1, which didn't seem to correlate with the …
[Figure: October time series for sensors 1, 2, 3, 4, 6, 7, 8, 9, 10, and 11.]
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 42

-------
Valve Plug Leak Found
•	Within 5-10 min, we were able to narrow down the leak location via AVO and a portable PID
(PID > 100 ppm).
•	The TVA flamed out; the IR camera recorded a plume from a vent hole in the valve plug.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 43

-------
Controlled Releases & Triangulation
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 44

-------
Wind data comparison of all wind sensors
•	All five wind sensors showed consistent wind patterns.
•	The "A" Met station would be the best location to represent the wind conditions at the Sour Lake facility.
[Figure: wind roses for the five meteorological stations (Q (NE), R (SW), S (NW), T (SE), and A; sensor heights of 13 ft to 29.5 ft) overlaid on the site map, with time series of average wind speed and direction. Data source: 2018/10/15 00:00 - 2018/10/17 00:00.]
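As a brief illustration of how readings from multiple wind stations can be compared, the sketch below computes circular means of wind direction, which avoid the 360°/0° wrap-around problem. This is a generic approach, not the analysis used in the report, and the station values are hypothetical.

```python
import numpy as np

def circular_mean_deg(directions_deg):
    """Mean wind direction (degrees) computed on the unit circle,
    avoiding the 359/1 degree wrap-around problem."""
    rad = np.deg2rad(directions_deg)
    return np.rad2deg(np.arctan2(np.sin(rad).mean(), np.cos(rad).mean())) % 360.0

# Hypothetical 10-minute wind directions (degrees) from two stations.
station_a = np.array([350, 355, 2, 10, 5])
station_q = np.array([340, 352, 358, 8, 12])

# Directional offset wrapped to [-180, 180) degrees.
bias = (circular_mean_deg(station_q) - circular_mean_deg(station_a) + 180) % 360 - 180
print(f"Mean direction A: {circular_mean_deg(station_a):.1f} deg")
print(f"Mean direction Q: {circular_mean_deg(station_q):.1f} deg")
print(f"Offset of Q relative to A: {bias:.1f} deg")
```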
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 45

-------
Wind vs. Temperature
In general, lower temperatures, lower wind speeds, and more variable wind directions were observed at night.
Data source : 2018/12/2 18:00 - 2018/12/7 06:00
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 46

-------
Leak Location Approximation
> Molex's leak location approximation algorithm successfully triangulated a fin fan leak, a valve leak, and a controlled release of 30 sccm isobutylene at the location marked "LL1".
[Figure: plan view of the sensor network (Sensors 1-11) with shaded boxes marking the possible leak areas triangulated for the fin fan leak, the valve leak, and the LL1 controlled release; axes in feet. Data source: 2018/9/19 9:30 - 2018/9/19 12:00.]
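Molex's location algorithm is proprietary and is not reproduced here. The sketch below is only a generic, first-order illustration of combining sensor positions and responses (a response-weighted centroid); the coordinates and readings are hypothetical, and a real implementation would also use wind direction and timing.

```python
import numpy as np

# Hypothetical sensor (x, y) positions in feet and time-averaged PID responses
# (ppb above baseline). Sensors closer to / downwind of the leak read higher.
sensor_xy = np.array([[20, 10], [60, 15], [100, 20], [140, 40], [60, 70]], dtype=float)
response = np.array([5.0, 120.0, 40.0, 8.0, 15.0])

# Response-weighted centroid as a first-order leak location estimate.
weights = response / response.sum()
estimate = (weights[:, None] * sensor_xy).sum(axis=0)
print(f"Estimated leak location: x = {estimate[0]:.1f} ft, y = {estimate[1]:.1f} ft")
```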
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix B Page 47

-------
Elevation Study
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 48

-------
23-ft Elevation Test
Test 16-1: LL1.2 (23 ft elevation), 50 sccm isobutylene, 12/5/2018.
[Figure: time series of LDSN sensor responses (including sensor_9) during the elevated controlled release.]

-------
Gas Plumes of Different Leaks
•	Lighter-than-air gases tend to rise, while heavier-than-air gases tend to sink.
•	The molecular weight effect decreases with increasing wind speed.
[Figure panels: gas plumes for a leak size of 3.6 g/hour (left) and 36 g/hour (×10, right).]
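A small sketch of the molecular-weight effect noted above: comparing a gas's molecular weight to that of air (about 28.97 g/mol) indicates whether an undiluted plume tends to rise or sink in calm air. The gases listed are standard values chosen only for illustration, not a calculation from the report.

```python
# Relative vapor density = gas molecular weight / molecular weight of air (~28.97 g/mol).
# Values < 1 tend to rise in calm air; values > 1 tend to sink. Effect weakens with wind.
M_AIR = 28.97
gases = {"methane": 16.04, "ethylene": 28.05, "isobutylene": 56.11, "m-xylene": 106.17}

for name, mw in gases.items():
    rel = mw / M_AIR
    tendency = "rises" if rel < 1 else "sinks"
    print(f"{name:12s} MW = {mw:6.2f} g/mol  relative density = {rel:.2f}  ({tendency} in calm air)")
```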
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 50

-------
Sensor Performance
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 51

-------
Bump Test Data
| Date | Sensor 1 | Sensor 2 | Sensor 3 | Sensor 4 | Sensor 5 | Sensor 6 | Sensor 7 | Sensor 8 | Sensor 9 | Sensor 10 | Sensor 11 |
| 9/10/2018 | 316 | 286 | 242 | 225 | 296 | 448 | 280 | 225 | 211 | 259 | 398 |
| 9/11/2018 | 259 | 256 | 259 | 257 | 268 | 321 | 230 | 275 | 369 | 301 | 270 |
| 9/12/2018 | 275 | 203 | 269 | 209 | 275 | 354 | 233 | 258 | 314 | 156 | 272 |
| 9/13/2018 | 301 | 307 | 342 | 236 | 344 | 372 | 317 | 316 | 329 | 271 | 299 |
| 9/14/2018 | 268 | 286 | 314 | 191 | 307 | 325 | 278 | 266 | 314 | 254 | 285 |
| 9/17/2018 | 273 | 158 | 310 | 213 | 283 | 318 | 280 | 289 | 289 | 206 | 275 |
| 9/18/2018 | 280 | 326 | 311 | 218 | 279 | 324 | 287 | 288 | 309 | 222 | 284 |
| 9/19/2018 | 274 | 280 | 306 | 199 | — | 312 | 282 | 281 | 300 | 234 | 266 |
| 9/20/2018 | 200 | 223 | 242 | 164 | — | 265 | 220 | 235 | 243 | 208 | 247 |
| 9/21/2018 | 197 | 227 | 243 | 208 | — | 242 | 221 | 229 | 240 | 221 | 223 |
| 10/15/2018 | 236 | 253 | 261 | — | 275* | 268 | 237 | — | 237 | 207 | 222 |
| 10/16/2018 | 253 | 234 | 275 | 181 | 294 | 293 | 261 | 251 | 261 | 243 | 212 |
| 10/17/2018 | 265 | 138 | 280 | 83 | 144 | 288 | 267 | 241 | 267 | 227 | 191 |
| 10/18/2018 | 276 | 289 | 282 | 203 | 311 | 328 | 285 | 284 | 285 | 274 | 267 |
| 10/19/2018 | 271 | 282 | 290 | 181 | 296 | 316 | 279 | 276 | 279 | — | 260 |
| 12/3/2018 | 260 | 259 | 269 | 210 | 259 | 313 | 257 | 259 | 254 | 260 | 230 |
| 12/4/2018 | 155 | 155 | 166 | 139 | 157 | 192 | 159 | 238 | 155 | 163 | 194 |
| 12/5/2018 | 130 | 161 | 167 | 143 | 159 | 203 | 161 | 230 | 159 | 166 | 204 |
| 12/6/2018 | 102 | 155 | 148 | 131 | 160 | 192 | 154 | 242 | 138 | 158 | 207 |
| 12/7/2018 | 125 | 138 | 156 | 74 | 132 | 150 | 147 | 213 | 128 | 136 | 197 |
* Sensor was replaced due to an issue with wireless communication.
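As a quick consistency check on the bump-test data, the sketch below computes the network-wide mean and relative standard deviation for two rows of the table above. This is an illustrative calculation, not one prescribed by the report.

```python
import numpy as np

# Two rows from the bump-test table (9/10/2018 and 10/18/2018); blanks omitted.
bump = {
    "9/10/2018": [316, 286, 242, 225, 296, 448, 280, 225, 211, 259, 398],
    "10/18/2018": [276, 289, 282, 203, 311, 328, 285, 284, 285, 274, 267],
}

for date, values in bump.items():
    arr = np.array(values, dtype=float)
    rsd = 100.0 * arr.std(ddof=1) / arr.mean()  # relative standard deviation across sensors
    print(f"{date}: network mean = {arr.mean():.0f}, RSD = {rsd:.0f}%")
```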
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 52

-------
Sensor Stability over 3 Months
Sensor response to 500 ppb isobutylene vs. time
[Figure: bump-test responses of Sensors 1-4 and 6-11 plotted over the deployment period, with the network average and a linear trend line fitted to the average (y = -0.3307x + 14604).]
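A minimal sketch of the trend fit shown in the figure: a least-squares line through network-average bump responses versus time. The averages below are approximate values consistent with the table on the previous page, and the fitted coefficients will differ in scale from the figure's equation because the figure's x-axis appears to use spreadsheet date serial numbers rather than day offsets.

```python
import numpy as np

# Approximate network-average bump responses on three test dates (illustrative).
dates = np.array(["2018-09-10", "2018-10-15", "2018-12-07"], dtype="datetime64[D]")
avg_response = np.array([290.0, 244.0, 145.0])

x = (dates - dates[0]).astype(float)           # days since the first bump test
slope, intercept = np.polyfit(x, avg_response, 1)
print(f"Fitted trend: response = {slope:.3f} * (days) + {intercept:.1f}")
```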
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix B Page 53

-------
Summary & Recommendations
Proprietary Information has been removed from this section
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix B Page 54

-------
Summary of Findings and Progress
✓ Found two large component leaks between LDAR cycles at SLOF. Through these exercises, we developed a preliminary leak-search process using a portable PID and Method 21 instruments that allows us to quickly narrow down, pinpoint, and validate the leak source.
✓ Sensor density determines sensor sensitivity, the time to detect large leaks, and the ability to differentiate new leaks from authorized emission sources. Under similar wind conditions, the sensor detection signal is a function of leak size, distance, and gas type.
✓ Developed and validated software algorithms for leak location approximation. Operating offline, Molex's model successfully triangulated a valve bonnet leak, a valve plug leak, and the fin fan leak, as well as several controlled gas releases at SLOF.
✓ Bump test results were fairly consistent, with an average change of -12% in sensor sensitivity over 3 months (illustrated below), suggesting excellent stability of the sensors in the field.
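The sketch below illustrates how an average percent change in sensitivity between two bump tests could be computed; the input values are hypothetical placeholders chosen for illustration, not the report's data or its exact calculation.

```python
import numpy as np

# Hypothetical bump-test responses for three sensors on the first and last test days.
first = np.array([300.0, 280.0, 260.0])
last = np.array([265.0, 245.0, 230.0])

pct_change = 100.0 * (last - first) / first
print("Per-sensor change (%):", np.round(pct_change, 1))
print(f"Average change over the period: {pct_change.mean():.1f}%")
```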
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report — Appendix B Page 55

-------
Progress on LDAR Innovation
Appendix C: FHR CC Tests
Pages C1 to C59
Appendix C: FHR CC Tests
Appendix C consists of three sets of figures and tables summarizing select aspects of the FHR CC
pilot tests conducted in 2019. Appendix C1 (pages C2 to C13) presents the sensor node layout
planning for the Mid-Crude and m-Xylene process units. Appendix C2 (pages C14 to C48) provides
summary tables of LDSN/DRF information and figures illustrating the potential source location (PSL)
boxes determined by mSyte™, along with M21 screening values (SVs) determined under the DRF.
Appendix C3 (pages C49 to C59) presents an updated version of the sensor locations that will be
used in the process units on an ongoing basis. In the case of Mid-Crude, this includes six additional
sensors that improve leak detection coverage based on pilot test results (described in the report).
The following was originally developed as part of internal CRADA team communications and has
been modified in places to remove proprietary information and to improve clarity for use in this
report. This appendix is not intended to capture all efforts performed in these tests but provides the
reader with a sense of the type of work accomplished.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix C Page 1

-------
Appendix C1
LDSN Node Layouts for FHR CC Tests
Tests Conducted May 1, 2019 to November 30, 2019 (m-Xylene unit)
and July 1, 2019 to November 30, 2019 (Mid-Crude Unit)
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 2

-------
FHR CC Mid-Crude Process Unit (Level 1)
Ground level (A)
Figure C1-1. Mid-Crude sensor locations (ground level, level 1). The sensor nodes are represented by the red dots and are sequentially numbered for each process unit. The colored circle surrounding each sensor node indicates a 60 ft radius and represents the approximate detection coverage of an individual LDSN node for an RF = 1 gas. These are the approximate node locations during the 2019 pilot test and do not reflect any additional nodes added after the pilot test (described in the report).
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 3

-------
FHR CC Mid-Crude Process Unit (Level 2)
2nd level (B)
Figure C1-2. Mid-Crude sensor locations (level 2). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 4

-------
FHR CC Mid-Crude Process Unit (Level 3)
3rd level (C)
Figure C1-3. Mid-Crude sensor locations (level 3). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 5

-------
FHR CC Mid-Crude Process Unit (Level 4)
4th level (D)
Figure C1-4. Mid-Crude sensor locations (level 4). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 6

-------
FHR CC Mid-Crude Process Unit (Level 5)
5th level (E)
Figure C1-5. Mid-Crude sensor locations (level 5). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 7

-------
FHR CC Mid-Crude Process Unit (Level 6)
6th level (F)
Figure C1-6. Mid-Crude sensor locations (level 6). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 8

-------
FHR CC Mid-Crude Process Unit - All Levels
Figure C1-7. Three-dimensional representation of the LDSN nodes at different elevations within the Mid-Crude unit, reflecting Figures C1-1 through C1-6. The colors represent different elevations within the process unit. These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 9

-------
FHR CC m-Xylene Process Unit (Level 1)
Figure C1-8. m-Xylene sensor locations (level 1). The sensor nodes are represented by the red dots and are sequentially numbered for each process unit. The colored circle surrounding each sensor node indicates a 60 ft radius and represents the approximate detection coverage of an individual LDSN node for an RF = 1 gas. These are the approximate node locations during the 2019 pilot test and do not reflect any additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 10

-------
FHR CC m-Xylene Process Unit (Level 2)
Figure C1-9. m-Xylene sensor locations (level 2). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 11

-------
FHR CC m-Xylene Process Unit (Level 3)
Figure C1-10. m-Xylene sensor locations (level 3). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 12

-------
FHR CC m-Xylene Process Unit (Level 4)
Figure C1-11. m-Xylene sensor locations (level 4). These are the node locations during testing and do not reflect the additional nodes added after the pilot test.
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 13

-------
Appendix C2
Summary of LDSN PSLs and DRF SVs
Tests Conducted May 1, 2019 to November 30, 2019 (m-Xylene unit)
and July 1, 2019 to November 30, 2019 (Mid-Crude Unit)
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 14

-------
FHR CC Mid-Crude Process Unit PSL Summary
Table C2-1. PSL summary table for the LDSN/DRF pilot test in the FHR CC Mid-Crude process unit from 7/1/2019 to 11/30/2019. Table is sorted by ascending DRF SV values.
| LDSN PSL ID | PSL/DRF Investigation ID | Process Unit Level | DRF Date Found | Date Repaired | LDAR Component Tag | Type of Component | ≈ Dist. to LDSN Node (ft) | DRF SV (ppm) | OGI Detection Result |
| 19-SD-00045 | 3 | 1 | 11/6/2019 | 11/8/2019 | 105180A.1 | Connector | 23 | 582 | No Detect |
| 19-SD-00011 | 1 | 2 | 7/11/2019 | 7/24/2019 | 112357 | Connector | 36 | 626 | No Detect |
| 19-SD-00014 | 4 | 1 | 10/30/2019 | 11/4/2019 | 111636 | Valve | 19.2 | 664 | No Detect |
| 19-SD-00030 | 3 | 2 | 11/6/2019 | 11/20/2019 | 114788A | Valve | 46.7 | 681 | No Detect |
| 19-SD-00056 | 2 | 1 | 11/26/2019 | 11/27/2019 | 102078 | Pump | 50 | 713 | No Detect |
| 19-SD-00059 | 1 | 2 | 11/5/2019 | 11/20/2019 | 106351 | Valve | 48 | 756 | No Detect |
| 19-SD-00042 | 1 | 1 | 8/26/2019 | 9/4/2019 | 112800 | Connector | 38 | 792 | No Detect |
| 19-SD-00043 | 1 | 1 | 8/27/2019 | 8/28/2019 | 113910B.1 | Connector | 24 | 1262 | No Detect |
| 19-SD-00045 | 1 | 1 | 9/5/2019 | 9/9/2019 | AVO 018748 | Connector | 44 | 1652 | No Detect |
| 19-SD-00058 | 2 | 1 | 11/4/2019 | 11/7/2019 | 118887 | Valve | 48 | 1783 | No Detect |
| 19-SD-00012 | 1 | 1 | 7/18/2019 | 7/18/2019 | Near 112144 | OEL | 72 | 1896 | No Detect |
| 19-SD-00021 | 1 | 1 | 7/11/2019 | 7/15/2019 | 105282.1 | Connector | 24 | 2186 | No Detect |
| 19-SD-00033 | 2 | 1 | 8/12/2019 | 8/13/2019 | AVO 018726 | Connector | 25 | 2451 | No Detect |
| 19-SD-00014 | 8 | 1 | 11/19/2019 | 11/20/2019 | 112011.1 | Connector | 63 | 2572 | No Detect |
| 19-SD-00052 | 1 | 3 | 11/6/2019 | 11/10/2019 | 108260.1 | Connector | 23 | 2742 | No Detect |
| 19-SD-00030 | 2 | 2 | 9/18/2019 | 9/21/2019 | AVO 018786 | Connector | 46.7 | 2836 | No Detect |
| 19-SD-00045 | 2 | 1 | 10/30/2019 | 10/30/2019 | AVO 018989 | Connector | 38.6 | 2946 | No Detect |
| 19-SD-00033 | 3 | 1 | 8/12/2019 | 8/14/2019 | AVO 018728 | Connector | 25 | 3082 | No Detect |
| 19-SD-00052 | 3 | 4 | 11/6/2019 | 11/7/2019 | Non LDAR | Non LDAR | 40 | 3153 | No Detect |
| 19-SD-00014 | 6 | 1 | 11/5/2019 | 11/7/2019 | AVO 018235 | Connector | 18.4 | 3463 | No Detect |
| 19-SD-00058 | 1 | 1 | 10/29/2019 | 10/31/2019 | AVO 018987 | Connector | 30.5 | 3520 | No Detect |
| 19-SD-00033 | 1 | 1 | 8/8/2019 | 8/22/2019 | AVO 018740 | Valve | 25 | 3559 | No Detect |
| 19-SD-00025 | 5 | 1 | 9/11/2019 | 9/12/2019 | 105803.1 | Connector | 52 | 3654 | No Detect |
| 19-SD-00052 | 4 | 4 | 11/6/2019 | 11/7/2019 | Non LDAR | Non LDAR | 40 | 4535 | No Detect |
| 19-SD-00053 | 1 | 4 | 10/9/2019 | 10/10/2019 | Near 109280 | Non LDAR | 42 | 4751 | No Detect |
| 19-SD-00014 | 5 | 1 | 11/5/2019 | 11/7/2019 | 111575 | Compressor | 19 | 5526 | No Detect |
| 19-SD-00014 | 7 | 1 | 11/19/2019 | 11/20/2019 | 112082 | Valve | 72 | 7287 | No Detect |
| 19-SD-00014 | 3 | 1 | 9/18/2019 | 9/24/2019 | AVO 018724 | Valve | 20 | 9951 | No Detect |
| 19-SD-00020 | 2 | 1 | 7/2/2019 | 7/17/2019 | AVO 018743 | Connector | 15 | 15231 | No Detect |
| 19-SD-00038 | 1 | 2 | 9/12/2019 | 9/17/2019 | AVO 018985 | Connector | 10 | 18330 | No Detect |
| 19-SD-00042 | 3 | 1 | 8/26/2019 | 8/27/2019 | 119746.1 | Connector | 54 | 30876 | No Detect |
| 19-SD-00020 | 1 | 1 | 7/11/2019 | 7/14/2019 | 103056R.1 | Connector | 25 | 93106 | Moderate Detect |
| 19-SD-00024 | 1 | 1 | 7/23/2019 | 7/24/2019 | 108940.1 | Connector | 63 | 100000 | Easy Detect |
| 19-SD-00014 | 2 | 1 | 7/30/2019 | 8/19/2019 | 111575 | Compressor | 19 | 100000 | Moderate Detect |
| 19-SD-00014 | 9 | 1 | 7/23/2019 | 7/24/2019 | 111961.1 | Connector | 63 | 100000 | No Detect |
| 19-SD-00014 | 1 | 5 | 11/19/2019 | 11/25/2019 | 111972 | Pump | 48 | 100000 | Easy Detect |
| 19-SD-00042 | 2 | 1 | 8/26/2019 | 8/27/2019 | 119674.1 | Connector | 49 | 100000 | Moderate Detect |
| 19-SD-00052 | 2 | 4 | 11/6/2019 | 11/7/2019 | Non LDAR | Non LDAR | 38.6 | 100000 | Difficult Detect |
| 19-SD-00056 | 1 | 1 | 10/24/2019 | 10/24/2019 | Non LDAR | Non LDAR | 5 | 100000 | Easy Detect |
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 15

-------
FHR CC Mid-Crude Process Unit

19-SD-00012
Created : 2019-06-08 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-1. Mid-Crude PSL 19-SD-00012
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 16

-------
FHR CC Mid-Crude Process Unit

19-SD-00014
Created : 2019-06-10 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-2. Mid-Crude PSL 19-SD-00014
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 17

-------
FHR CC Mid-Crude Process Unit
19-SD-00020
Created : 2019-06-26 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-3. Mid-Crude PSL 19-SD-00020
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 18

-------
FHR CC Mid-Crude Process Unit
19-SD-00021
Created : 2019-07-05 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-4. Mid-Crude PSL 19-SD-00021
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 19

-------
FHR CC Mid-Crude Process Unit
19-SD-00025
Created : 2019-07-21 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-5. Mid-Crude PSL 19-SD-00025
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 20

-------
FHR CC Mid-Crude Process Unit
19-SD-00033
Created : 2019-08-07 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-6. Mid-Crude PSL 19-SD-00033
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 21

-------
FHR CC Mid-Crude Process Unit
19-SD-00056
Created : 2019-10-23 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-7. Mid-Crude PSL 19-SD-00056
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 22

-------
FHR CC Mid-Crude Process Unit

19-SD-00042
Created : 2019-08-23 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-8. Mid-Crude PSL 19-SD-00042
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 23

-------
FHR CC Mid-Crude Process Unit
19-SD-00043
Created : 2019-08-23 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-9. Mid-Crude PSL 19-SD-00043
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 24

-------
FHR CC Mid-Crude Process Unit
19-SD-00045
Created : 2019-09-03 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-10. Mid-Crude PSL 19-SD-00045
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 25

-------
FHR CC Mid-Crude Process Unit
19-SD-00059
Created : 2019-10-31 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-11. Mid-Crude PSL 19-SD-00059
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 26

-------
FHR CC Mid-Crude Process Unit
19-SD-00058
Created : 2019-10-25 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-12. Mid-Crude PSL 19-SD-00058
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 27

-------
FHR CC Mid-Crude Process Unit
19-SD-00011
Created : 2019-06-07 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 2
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-13. Mid-Crude PSL 19-SD-00011
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 28

-------
FHR CC Mid-Crude Process Unit
19-SD-00030
Created : 2019-08-06 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 2
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-14. Mid-Crude PSL 19-SD-00030
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 29

-------
FHR CC Mid-Crude Process Unit
19-SD-00038
Created : 2019-08-14 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 2
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-15. Mid-Crude PSL 19-SD-00038
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 30

-------
FHR CC Mid-Crude Process Unit
19-SD-00052
Created : 2019-10-08 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 4
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-16. Mid-Crude PSL 19-SD-00052
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 31

-------
FHR CC Mid-Crude Process Unit
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000 - 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-17. Mid-Crude PSL 19-SD-00053
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix C Page 32

-------
FHR CC Mid-Crude Process Unit
19-SD-00024
Created : 2019-07-18 19:00:00
Site/Unit: Mid-Crude
Elevation Level: 5
Detection Category: 3
Leak size (ppm)
•	> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
•	Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-18. Mid-Crude PSL 19-SD-00024
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report - Appendix C Page 33

-------
FHR CC m-Xylene Process Unit PSL Summary
Table C2-2. PSL summary table (1 of 2) for the LDSN/DRF pilot test in the FHR CC m-Xylene process unit from 5/1/2019 to 11/30/2019. "DRF-Pre" entries represent leaks detected by the LDSN prior to implementation of the PSL ID system. Table is sorted by ascending DRF SV values.
| LDSN PSL ID | PSL/DRF Investigation ID | Process Unit Level | DRF Date Found | Date Repaired | LDAR Component Tag | Type of Component | ≈ Dist. to LDSN Node (ft) | DRF SV (ppm) | OGI Detection Result |
| 19-SD-00028 | 15 | 1 | 11/18/2019 | 11/19/2019 | 249694 | Valve | 6 | 564 | No Detect |
| 19-SD-00029 | 1 | 1 | 8/6/2019 | 8/19/2019 | 253195 | Connector | 26 | 583 | No Detect |
| 19-SD-00029 | 5 | 1 | 8/20/2019 | 8/21/2019 | 253213 | Drain | 26 | 590 | No Detect |
| 19-SD-00028 | 4 | 1 | 8/22/2019 | 8/23/2019 | 253442.2 | Connector | 24 | 643 | No Detect |
| 19-SD-00029 | 8 | 1 | 10/15/2019 | 10/18/2019 | 253342 | Valve | 15 | 665 | No Detect |
| 19-SD-00051 | 1 | 3 | 10/12/2019 | 10/22/2019 | 251183 | Connector | 17 | 676 | No Detect |
| 19-SD-00029 | 3 | 1 | 8/12/2019 | 8/13/2019 | 252682.1 | Connector | 42 | 692 | No Detect |
| 19-SD-00050 | 6 | 2 | 10/28/2019 | 11/4/2019 | 252885 | Valve | 11 | 828 | No Detect |
| 19-SD-00028 | 14 | 1 | 10/22/2019 | 10/23/2019 | 253677.1 | Connector | 35 | 863 | No Detect |
| 19-SD-00057 | 7 | 2 | 11/21/2019 | 11/23/2019 | 250925 | Valve | 16 | 890 | No Detect |
| 19-SD-00028 | 7 | 1 | 9/17/2019 | 9/19/2019 | 248889.1 | Connector | 15 | 894 | No Detect |
| 19-SD-00035 | 7 | 1 | 11/21/2019 | 11/27/2019 | 252776 | Valve | 45 | 903 | No Detect |
| 19-SD-00035 | 6 | 1 | 10/22/2019 | 10/30/2019 | 252796 | Valve | 45 | 924 | No Detect |
| 19-SD-00055 | 4 | 1 | 11/18/2019 | 11/21/2019 | 252574 | Valve | 51 | 1071 | No Detect |
| 19-SD-00028 | 9 | 1 | 9/17/2019 | 9/19/2019 | 248875.1 | Connector | 15 | 1073 | No Detect |
| 19-SD-00055 | 1 | 1 | 10/17/2019 | 10/21/2019 | 252577 | Valve | 51 | 1073 | No Detect |
| 19-SD-00057 | 3 | 2 | 11/21/2019 | 11/26/2019 | 250968 | Connector | 16 | 1103 | No Detect |
| 19-SD-00057 | 4 | 2 | 11/21/2019 | 11/26/2019 | 250964 | Connector | 16 | 1254 | No Detect |
| 19-SD-00029 | 11 | 1 | 11/11/2019 | 11/21/2019 | AVO 018950 | Connector | 18 | 1316 | No Detect |
| 19-SD-00028 | 17 | 1 | 11/18/2019 | 11/19/2019 | 249906 | Valve | 24 | 1320 | No Detect |
| DRF-Pre | — | — | — | — | AVO 018732 | Connector | — | 1332 | — |
| 19-SD-00035 | 2 | 2 | 8/29/2019 | 9/3/2019 | 252228 | Valve | 32 | 1341 | No Detect |
| 19-SD-00055 | 6 | 1 | 11/18/2019 | 11/21/2019 | 252555 | Valve | 51 | 1342 | No Detect |
| 19-SD-00028 | 12 | 1 | 10/11/2019 | 10/16/2019 | 248881 | Connector | 15 | 1371 | No Detect |
| 19-SD-00055 | 5 | 1 | 11/18/2019 | 11/21/2019 | 252602 | Valve | 42 | 1398 | No Detect |
| 19-SD-00029 | 2 | 1 | 8/6/2019 | 10/11/2019 | Near 253317 | Non LDAR | 16 | 1607 | No Detect |
| DRF-Pre | — | — | — | — | 253329.1 | Connector | — | 1635 | — |
| DRF-Pre | — | — | — | — | AVO 018733 | Connector | — | 1842 | — |
| Leak ID #5 | 3 | 1 | 5/8/2019 | 5/9/2019 | AVO 018744 | Connector | 15 | 1848 | No Detect |
| 19-SD-00050 | 3 | 2 | 10/15/2019 | 10/17/2019 | 252827 | Valve | 11 | 1903 | No Detect |
| 19-SD-00047 | 2 | 1 | 9/18/2019 | 9/24/2019 | 249354.1 | Connector | 17 | 2048 | No Detect |
| 19-SD-00028 | 13 | 1 | 10/22/2019 | 10/24/2019 | 248701 | Connector | 9 | 2050 | No Detect |
| 19-SD-00055 | 3 | 1 | 10/21/2019 | 11/5/2019 | 252682 | Valve | 42 | 2072 | No Detect |
| 19-SD-00028 | 11 | 1 | 10/11/2019 | 10/16/2019 | 248835 | Pump | 9 | 2109 | No Detect |
| 19-SD-00028 | 18 | 1 | 11/18/2019 | 11/21/2019 | 249883.1 | Connector | 24 | 3014 | No Detect |
| 19-SD-00028 | 1 | 1 | 8/6/2019 | 8/8/2019 | 253410.1 | Connector | 15 | 3020 | No Detect |
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 34

-------
FHR CC m-Xylene Process Unit PSL Summary
Table C2-3. PSL summary table (2 of 2) for the LDSN/DRF pilot test in the FHR CC m-Xylene process unit from 5/1/2019 to 11/30/2019. "DRF-Pre" entries represent leaks detected by the LDSN prior to implementation of the PSL ID system. Table is sorted by ascending DRF SV values.
| LDSN PSL ID | PSL/DRF Investigation ID | Process Unit Level | DRF Date Found | Date Repaired | LDAR Component Tag | Type of Component | ≈ Dist. to LDSN Node (ft) | DRF SV (ppm) | OGI Detection Result |
| 19-SD-00057 | 2 | 2 | 10/28/2019 | 11/6/2019 | 250735 | Connector | 6 | 3315 | No Detect |
| 19-SD-00028 | 6 | 1 | 8/29/2019 | 9/3/2019 | 249216.1 | Connector | 27 | 3555 | No Detect |
| 19-SD-00029 | 10 | 1 | 10/15/2019 | 10/17/2019 | 253195 | Connector | 26 | 3718 | No Detect |
| 19-SD-00035 | 3 | 2 | 8/29/2019 | 9/4/2019 | 252164.1 | Connector | 57 | 3861 | No Detect |
| 19-SD-00050 | 5 | 3 | 10/22/2019 | 11/5/2019 | 252945 | Valve | 34 | 4465 | No Detect |
| 19-SD-00028 | 10 | 1 | 10/1/2019 | 10/7/2019 | 248701 | Other (Lid) | 9 | 5590 | No Detect |
| 19-SD-00028 | 16 | 1 | 11/18/2019 | 11/21/2019 | AVO 018951 | Connector | 10 | 8381 | No Detect |
| Leak ID #5 | 1 | 1 | 5/8/2019 | 5/9/2019 | AVO 018730 | Connector | 15 | 9776 | Difficult Detect |
| 19-SD-00028 | 8 | 1 | 9/17/2019 | 9/19/2019 | 248881 | Connector | 15 | 10285 | No Detect |
| 19-SD-00050 | 2 | 1 | 10/12/2019 | 10/17/2019 | AVO 018787 | Connector | 5 | 10724 | No Detect |
| 19-SD-00035 | 5 | 1 | 10/11/2019 | 10/16/2019 | 252783 | Connector | 45 | 11416 | Moderate Detect |
| 19-SD-00029 | 7 | 1 | 8/26/2019 | 8/28/2019 | 253255.1 | Connector | 16 | 14383 | No Detect |
| 19-SD-00028 | 19 | 1 | 11/18/2019 | 11/21/2019 | 250012 | Connector | 25 | 15834 | No Detect |
| 19-SD-00050 | 4 | 3 | 10/15/2019 | 10/17/2019 | 252945 | Valve | 34 | 16733 | Easy Detect |
| 19-SD-00028 | 5 | 1 | 8/26/2019 | 8/28/2019 | 248720.1 | Connector | 9 | 17437 | No Detect |
| 19-SD-00050 | 1 | 3 | 10/1/2019 | 10/8/2019 | AVO 018727 | Connector | 50.4 | 17988 | Difficult Detect |
| 19-SD-00057 | 1 | 2 | 10/28/2019 | 11/8/2019 | AVO 018988 | Connector | 10 | 18850 | No Detect |
| 19-SD-00057 | 5 | 2 | 11/21/2019 | 11/26/2019 | AVO 018990 | Connector | 15 | 19987 | No Detect |
| 19-SD-00047 | 5 | 1 | 10/28/2019 | 11/8/2019 | 249905 | Connector | 24 | 21031 | No Detect |
| 19-SD-00029 | 6 | 1 | 8/22/2019 | 8/23/2019 | 253343.2 | Connector | 15 | 21040 | No Detect |
| 19-SD-00008 | 1 | 3 | 6/13/2019 | 8/7/2019 | 252951 | Valve | 34 | 22797 | Difficult Detect |
| 19-SD-00054 | 1 | 1 | 10/16/2019 | 10/30/2019 | 252782 | Valve | 45 | 23820 | Moderate Detect |
| 19-SD-00055 | 2 | 1 | 10/17/2019 | 10/31/2019 | 252552 | Valve | 51 | 24102 | No Detect |
| 19-SD-00001 | 1 | 2 | 6/13/2019 | 10/22/2019 | 250735 | Connector | 6 | 28317 | No Detect |
| 19-SD-00047 | 4 | 1 | 10/22/2019 | 10/23/2019 | Non LDAR | Non LDAR | 5 | 30102 | No Detect |
| 19-SD-00035 | 4 | 1 | 9/25/2019 | 9/26/2019 | 252783 | Connector | 45 | 35690 | No Detect |
| 19-SD-00029 | 9 | 1 | 10/15/2019 | 10/17/2019 | AVO 018741 | Connector | 20.6 | 38147 | No Detect |
| 19-SD-00029 | 4 | 1 | 8/15/2019 | 8/24/2019 | 252682 | Valve | 42 | 38878 | No Detect |
| Leak ID #5 | 2 | 1 | 5/8/2019 | 5/9/2019 | AVO 018721 | Connector | 15 | 48893 | Difficult Detect |
| 19-SD-00028 | 2 | 1 | 8/6/2019 | 8/6/2019 | 253399.1 | OEL | 15 | 50042 | Moderate Detect |
| 19-SD-00047 | 1 | 1 | 9/18/2019 | 9/24/2019 | 249331.1 | Connector | 17 | 66720 | No Detect |
| 19-SD-00028 | 3 | 1 | 8/15/2019 | 8/19/2019 | 253329.1 | Connector | 15 | 71744 | No Detect |
| 19-SD-00035 | 1 | 1 | 8/12/2019 | 8/13/2019 | 252766 | Connector | 45 | 72203 | No Detect |
| DRF-Pre | — | — | — | — | AVO 018731 | Connector | — | 98000 | Easy Detect |
| 19-SD-00057 | 6 | 2 | 11/21/2019 | 11/25/2019 | AVO 017056 | Connector | 10 | 100000 | Easy Detect |
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 35

-------
FHR CC m-Xylene Process Unit
19-SD-00005
Created : 2019-06-06 19:00:00
Site/Unit: m-Xylene
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-19. m-Xylene PSL 19-SD-00005
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 36

-------
FHR CC m-Xylene Process Unit
19-SD-00028
Created : 2019-07-27 19:00:00
Site/Unit: m-Xylene
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-20. m-Xylene PSL 19-SD-00028
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 37

-------
FHR CC m-Xylene Process Unit
19-SD-00029
Created : 2019-08-05 19:00:00
Site/Unit: m-Xylene
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
•	> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
•	Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-21. m-Xylene PSL 19-SD-00029
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 38

-------
FHR CC m-Xylene Process Unit
19-SD-00047
Created : 2019-09-17 19:00:00
Site/Unit: m-Xylene
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-22. m-Xylene PSL 19-SD-00047
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 39

-------
FHR CC m-Xylene Process Unit
19-SD-00055
Created : 2019-10-16 19:00:00
Site/Unit: m-Xylene
Elevation Level: 1
Detection Category: 3
Leak size (ppm)
•	> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
•	Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-23. m-Xylene PSL 19-SD-00055
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 40

-------
FHR CC m-Xylene Process Unit
19-SD-00001
Created : 2019-06-05 19:00:00
Site/Unit: m-Xylene
Elevation Level: 2
Detection Category: 2
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-24. m-Xylene PSL 19-SD-00001
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 41

-------
FHR CC m-Xylene Process Unit
19-SD-00006
Created : 2019-06-06 19:00:00
Site/Unit: m-Xylene
Elevation Level: 2
Detection Category: 2
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-25. m-Xylene PSL 19-SD-00006
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 42

-------
FHR CC m-Xylene Process Unit
19-SD-00008
Created : 2019-06-07 19:00:00
Site/Unit: m-Xylene
Elevation Level: 2
Detection Category: 2
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-26. m-Xylene PSL 19-SD-00008
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 43

-------
FHR CC m-Xylene Process Unit
19-SD-00035
Created : 2019-08-08 19:00:00
Site/Unit: m-Xylene
Elevation Level: 2
Detection Category: 2
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-27. m-Xylene PSL 19-SD-00035
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 44

-------
FHR CC m-Xylene Process Unit
19-SD-00054
Created : 2019-10-15 19:00:00
Site/Unit: m-Xylene
Elevation Level: 2
Detection Category: 2
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-28. m-Xylene PSL 19-SD-00054
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 45

-------
FHR CC m-Xylene Process Unit
19-SD-00057
Created : 2019-10-25 19:00:00
Site/Unit: m-Xylene
Elevation Level: 2
Detection Category: 2
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-29. m-Xylene PSL 19-SD-00057
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 46

-------
FHR CC m-Xylene Process Unit
19-SD-00050
Created : 2019-09-26 19:00:00
Site/Unit: m-Xylene
Elevation Level: 3
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-30. m-Xylene PSL 19-SD-00050
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 47

-------
FHR CC m-Xylene Process Unit
19-SD-00051
Created : 2019-09-26 19:00:00
Site/Unit: m-Xylene
Elevation Level: 3
Detection Category: 3
Leak size (ppm)
> 50,000
5,000 - 50,000
1,000- 5,000
500 - 1,000
Gas sensor
Cat. 2 PSL
Cat. 3 PSL
Scale: 10 ft x 10 ft
Figure C2-31. m-Xylene PSL 19-SD-00051
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 48

-------
Appendix C3
Revised LDSN Node Layouts for FHR CC
Appendix C3 presents an updated version of the sensor locations that
will be used in the process units on an ongoing basis. In the case of
Mid-Crude, this includes six additional sensors that improve leak
detection coverage based on pilot test results (described in the report).
Proprietary Information has been removed from this summary - LDAR Innovation CRADA Report -- Appendix C Page 49

-------
FHR CC Mid-Crude Process Unit (Level 1)
Figure C3-1. Mid-Crude sensor locations (level 1). The four added nodes are beige colored.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 50

-------
FHR CC Mid-Crude Process Unit (Level 2)
Figure C3-2. Mid-Crude sensor locations (level 2). The added node is beige colored.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 51

-------
FHR CC Mid-Crude Process Unit (Level 3)


Figure C3-3. Mid-Crude sensor locations (level 3).
-------
FHR CC Mid-Crude Process Unit (Level 4)
Figure C3-4. Mid-Crude sensor locations (level 4). The added node is beige colored.
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 53

-------
FHR CC Mid-Crude Process Unit (Level 5)
Figure C3-5. Mid-Crude sensor locations (level 5).
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 54

-------
FHR CC Mid-Crude Process Unit (Level 6)
Figure C3-6. Mid-Crude sensor locations (level 6).
-------
FHR CC m-Xylene Process Unit (Level 1)
Figure C3-7. m-Xylene sensor locations (level 1).
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 56

-------
FHR CC m-Xylene Process Unit (Level 2)

Figure C3-8. m-Xylene sensor locations (level 2).
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 57

-------
FHR CC m-Xylene Process Unit (Level 3)
Figure C3-9. m-Xylene sensor locations (level 3).
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 58

-------
FHR CC m-Xylene Process Unit (Level 4)
Figure C3-10. m-Xylene sensor locations (level 4).
Proprietary Information has been removed from this summary — LDAR Innovation CRADA Report - Appendix C Page 59

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Pages D1 to D68
Appendix D
Test Methods and Procedures

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D1: EPA Method 21 Procedures
Pages D2 to D8
Appendix D1
EPA Method 21 Procedures

-------
Method 21
—Determination of Volatile Organic Compound Leaks
1.0 Scope and Application
1.1 Analytes.
Analyte
CAS No.
Volatile Organic Compounds (VOC)
No CAS number assigned.
1.2	Scope. This method is applicable for the determination of VOC leaks from process equipment.
These sources include, but are not limited to, valves, flanges and other connections, pumps and
compressors, pressure relief devices, process drains, open-ended valves, pump and compressor seal
system degassing vents, accumulator vessel vents, agitator seals, and access door seals.
1.3	Data Quality Objectives. Adherence to the requirements of this method will enhance the quality
of the data obtained from air pollutant sampling methods.
2.0 Summary of Method
2.1 A portable instrument is used to detect VOC leaks from individual sources. The instrument
detector type is not specified, but it must meet the specifications and performance criteria contained in
section 6.0. A leak definition concentration based on a reference compound is specified in each
applicable regulation. This method is intended to locate and classify leaks only, and is not to be used as a
direct measure of mass emission rate from individual sources.
3.0 Definitions
3.1	Calibration gas means the VOC compound used to adjust the instrument meter reading to a
known value. The calibration gas is usually the reference compound at a known concentration
approximately equal to the leak definition concentration.
3.2	Calibration precision means the degree of agreement between measurements of the same
known value, expressed as the relative percentage of the average difference between the meter readings
and the known concentration to the known concentration.
3.3 Leak definition concentration means the local VOC concentration at the surface of a leak source
that indicates that a VOC emission (leak) is present. The leak definition is an instrument meter reading
based on a reference compound.
3.4 No detectable emission means a local VOC concentration at the surface of a leak source,
adjusted for local VOC ambient concentration, that is less than 2.5 percent of the specified leak definition
concentration, that indicates that a VOC emission (leak) is not present.
3.5 Reference compound means the VOC species selected as the instrument calibration basis for
specification of the leak definition concentration. (For example, if a leak definition concentration is 10,000
ppm as methane, then any source emission that results in a local concentration that yields a meter
reading of 10,000 on an instrument meter calibrated with methane would be classified as a leak. In this
example, the leak definition concentration is 10,000 ppm and the reference compound is methane.)
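The following is not part of Method 21; it is a small worked illustration of the definitions above, using the method's own 10,000 ppm example leak definition and a hypothetical local ambient reading, to show how the 2.5 percent "no detectable emission" criterion is applied.

```python
# Illustrative application of sections 3.4 and 3.5 (values below are hypothetical
# except the 10,000 ppm example leak definition given in the method text).
leak_definition_ppm = 10_000       # leak definition concentration, as methane
local_ambient_ppm = 300            # hypothetical local ambient VOC concentration
reading_at_source_ppm = 520        # hypothetical maximum meter reading at the source

threshold = 0.025 * leak_definition_ppm            # 2.5 percent of the leak definition
adjusted = reading_at_source_ppm - local_ambient_ppm

print(f"'No detectable emission' threshold: {threshold:.0f} ppm above ambient")
print("No detectable emission" if adjusted < threshold else "Detectable emission present")
```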

-------
3.6 Response factor means the ratio of the known concentration of a VOC compound to the
observed meter reading when measured using an instrument calibrated with the reference compound
specified in the applicable regulation.
3.7 Response time means the time interval from a step change in VOC concentration at the input of
the sampling system to the time at which 90 percent of the corresponding final value is reached as
displayed on the instrument readout meter.
4.0 Interferences [Reserved]
5.0 Safety
5.1 Disclaimer. This method may involve hazardous materials, operations, and equipment. This test
method may not address all of the safety problems associated with its use. It is the responsibility of the
user of this test method to establish appropriate safety and health practices and determine the
applicability of regulatory limitations prior to performing this test method.
5.2 Hazardous Pollutants. Several of the compounds, leaks of which may be determined by this
method, may be irritating or corrosive to tissues (e.g., heptane) or may be toxic (e.g., benzene, methyl
alcohol). Nearly all are fire hazards. Compounds in emissions should be determined through familiarity
with the source. Appropriate precautions can be found in reference documents, such as reference No. 4
in section 16.0.
6.0 Equipment and Supplies
A VOC monitoring instrument meeting the following specifications is required:
6.1	The VOC instrument detector shall respond to the compounds being processed. Detector types
that may meet this requirement include, but are not limited to, catalytic oxidation, flame ionization, infrared
absorption, and photoionization.
6.2	The instrument shall be capable of measuring the leak definition concentration specified in the
regulation.
6.3 The scale of the instrument meter shall be readable to ±2.5 percent of the specified leak
definition concentration.
6.4 The instrument shall be equipped with an electrically driven pump to ensure that a sample is
provided to the detector at a constant flow rate. The nominal sample flow rate, as measured at the
sample probe tip, shall be 0.10 to 3.0 l/min (0.004 to 0.1 ft3/min) when the probe is fitted with a glass wool
plug or filter that may be used to prevent plugging of the instrument.
6.5 The instrument shall be equipped with a probe or probe extension or sampling not to exceed 6.4 mm (1/4 in) in outside diameter, with a single end opening for admission of sample.
6.6 The instrument shall be intrinsically safe for operation in explosive atmospheres as defined by
the National Electrical Code by the National Fire Prevention Association or other applicable regulatory
code for operation in any explosive atmospheres that may be encountered in its use. The instrument
shall, at a minimum, be intrinsically safe for Class 1, Division 1 conditions, and/or Class 2, Division 1
conditions, as appropriate, as defined by the example code. The instrument shall not be operated with
any safety device, such as an exhaust flame arrestor, removed.
7.0 Reagents and Standards

-------
7.1 Two gas mixtures are required for instrument calibration and performance evaluation:
7.1.1 Zero Gas. Air, less than 10 parts per million by volume (ppmv) VOC.
7.1.2 Calibration Gas. For each organic species that is to be measured during individual source
surveys, obtain or prepare a known standard in air at a concentration approximately equal to the
applicable leak definition specified in the regulation.
7.2 Cylinder Gases. If cylinder calibration gas mixtures are used, they must be analyzed and
certified by the manufacturer to be within 2 percent accuracy, and a shelf life must be specified. Cylinder
standards must be either reanalyzed or replaced at the end of the specified shelf life.
7.3 Prepared Gases. Calibration gases may be prepared by the user according to any accepted
gaseous preparation procedure that will yield a mixture accurate to within 2 percent. Prepared standards
must be replaced each day of use unless it is demonstrated that degradation does not occur during
storage.
7.4 Mixtures with non-Reference Compound Gases. Calibrations may be performed using a
compound other than the reference compound. In this case, a conversion factor must be determined for
the alternative compound such that the resulting meter readings during source surveys can be converted
to reference compound results.
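The following is not part of Method 21; it sketches how a conversion factor (section 7.4) might be applied to convert survey readings from an alternatively calibrated instrument to reference-compound equivalents. The factor and readings are hypothetical and would be determined experimentally for the specific instrument and compound pair.

```python
# Hypothetical conversion of a survey reading to reference-compound equivalents.
conversion_factor = 1.8          # hypothetical: reference-equivalent ppm per meter ppm
meter_reading_ppm = 5_200        # survey reading on the alternatively calibrated meter
leak_definition_ppm = 10_000     # example leak definition from section 3.5

reference_equivalent_ppm = meter_reading_ppm * conversion_factor
print(f"Reference-equivalent reading: {reference_equivalent_ppm:.0f} ppm")
print("Leak" if reference_equivalent_ppm >= leak_definition_ppm else "Below leak definition")
```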
8.0 Sample Collection, Preservation, Storage, and Transport
8.1 Instrument Performance Evaluation. Assemble and start up the instrument according to the
manufacturer's instructions for recommended warmup period and preliminary adjustments.
8.1.1	Response Factor. A response factor must be determined for each compound that is to be
measured, either by testing or from reference sources. The response factor tests are required before
placing the analyzer into service, but do not have to be repeated at subsequent intervals.
8.1.1.1 Calibrate the instrument with the reference compound as specified in the applicable
regulation. Introduce the calibration gas mixture to the analyzer and record the observed meter reading.
Introduce zero gas until a stable reading is obtained. Make a total of three measurements by alternating
between the calibration gas and zero gas. Calculate the response factor for each repetition and the
average response factor.
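The following is not part of Method 21; it is a minimal sketch of the response factor calculation described in section 8.1.1.1, using hypothetical readings.

```python
# Response factor = known concentration of the test compound / observed meter reading,
# repeated three times and averaged (section 8.1.1.1). Readings are hypothetical.
known_concentration_ppm = 500.0
observed_readings_ppm = [430.0, 445.0, 438.0]   # three repetitions

rfs = [known_concentration_ppm / r for r in observed_readings_ppm]
avg_rf = sum(rfs) / len(rfs)
print("Per-repetition response factors:", [round(rf, 2) for rf in rfs])
print(f"Average response factor: {avg_rf:.2f}  (must be < 10 per section 8.1.1.2)")
```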
8.1.1.2 The instrument response factors for each of the individual VOC to be measured shall be less
than 10 unless otherwise specified in the applicable regulation. When no instrument is available that
meets this specification when calibrated with the reference VOC specified in the applicable regulation, the
available instrument may be calibrated with one of the VOC to be measured, or any other VOC, so long
as the instrument then has a response factor of less than 10 for each of the individual VOC to be
measured.
8.1.1.3 Alternatively, if response factors have been published for the compounds of interest for the
instrument or detector type, the response factor determination is not required, and existing results may be
referenced. Examples of published response factors for flame ionization and catalytic oxidation detectors
are included in References 1-3 of section 17.0.
8.1.2	Calibration Precision. The calibration precision test must be completed prior to placing the
analyzer into service and at subsequent 3-month intervals or at the next use, whichever is later.

-------
8.1.2.1 Make a total of three measurements by alternately using zero gas and the specified
calibration gas. Record the meter readings. Calculate the average algebraic difference between the meter
readings and the known value. Divide this average difference by the known calibration value and multiply
by 100 to express the resulting calibration precision as a percentage.
8.1.2.2 The calibration precision shall be equal to or less than 10 percent of the calibration gas
value.
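The following is not part of Method 21; it is a minimal sketch of the calibration precision calculation in section 8.1.2, using hypothetical meter readings.

```python
# Calibration precision: average algebraic difference between the meter readings and the
# known calibration gas value, divided by the known value and expressed as a percentage.
known_value_ppm = 500.0
meter_readings_ppm = [485.0, 510.0, 492.0]      # three alternating cal-gas measurements

avg_difference = sum(r - known_value_ppm for r in meter_readings_ppm) / len(meter_readings_ppm)
precision_pct = abs(avg_difference) / known_value_ppm * 100.0
print(f"Calibration precision: {precision_pct:.1f}% (must be <= 10% per section 8.1.2.2)")
```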
8.1.3 Response Time. The response time test is required before placing the instrument into service.
If a modification to the sample pumping system or flow configuration is made that would change the
response time, a new test is required before further use.
8.1.3.1 Introduce zero gas into the instrument sample probe. When the meter reading has stabilized,
switch quickly to the specified calibration gas. After switching, measure the time required to attain 90
percent of the final stable reading. Perform this test sequence three times and record the results.
Calculate the average response time.
8.1.3.2	The instrument response time shall be equal to or less than 30 seconds. The instrument
pump, dilution probe (if any), sample probe, and probe filter that will be used during testing shall all be in
place during the response time determination.
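The following is not part of Method 21; it is a minimal sketch of checking the average response time against the 30-second limit in section 8.1.3, using hypothetical measurements.

```python
# Average of three measured times to reach 90 percent of the final stable reading
# must be 30 seconds or less (section 8.1.3.2). Times below are hypothetical.
measured_times_s = [22.0, 25.0, 24.0]

avg_response_time = sum(measured_times_s) / len(measured_times_s)
print(f"Average response time: {avg_response_time:.1f} s")
print("PASS" if avg_response_time <= 30.0 else "FAIL: exceeds 30 second limit")
```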
8.2 Instrument Calibration. Calibrate the VOC monitoring instrument according to section 10.0.
8.3 Individual Source Surveys.
8.3.1 Type I—Leak Definition Based on Concentration. Place the probe inlet at the surface of the
component interface where leakage could occur. Move the probe along the interface periphery while
observing the instrument readout. If an increased meter reading is observed, slowly sample the interface
where leakage is indicated until the maximum meter reading is obtained. Leave the probe inlet at this
maximum reading location for approximately two times the instrument response time. If the maximum
observed meter reading is greater than the leak definition in the applicable regulation, record and report
the results as specified in the regulation reporting requirements. Examples of the application of this
general technique to specific equipment types are:
8.3.1.1	Valves. The most common source of leaks from valves is the seal between the stem and
housing. Place the probe at the interface where the stem exits the packing gland and sample the stem
circumference. Also, place the probe at the interface of the packing gland take-up flange seat and sample
the periphery. In addition, survey valve housings of multipart assembly at the surface of all interfaces
where a leak could occur.
8.3.1.2	Flanges and Other Connections. For welded flanges, place the probe at the outer edge of
the flange-gasket interface and sample the circumference of the flange. Sample other types of
nonpermanent joints (such as threaded connections) with a similar traverse.
8.3.1.3	Pumps and Compressors. Conduct a circumferential traverse at the outer surface of the
pump or compressor shaft and seal interface. If the source is a rotating shaft, position the probe inlet
within 1 cm of the shaft-seal interface for the survey. If the housing configuration prevents a complete
traverse of the shaft periphery, sample all accessible portions. Sample all other joints on the pump or
compressor housing where leakage could occur.
8.3.1.4 Pressure Relief Devices. The configuration of most pressure relief devices prevents
sampling at the sealing seat interface. For those devices equipped with an enclosed extension, or horn,
place the probe inlet at approximately the center of the exhaust area to the atmosphere.

-------
8.3.1.5 Process Drains. For open drains, place the probe inlet at approximately the center of the
area open to the atmosphere. For covered drains, place the probe at the surface of the cover interface
and conduct a peripheral traverse.
8.3.1.6 Open-ended Lines or Valves. Place the probe inlet at approximately the center of the
opening to the atmosphere.
8.3.1.7 Seal System Degassing Vents and Accumulator Vents. Place the probe inlet at
approximately the center of the opening to the atmosphere.
8.3.1.8 Access door seals. Place the probe inlet at the surface of the door seal interface and
conduct a peripheral traverse.
8.3.2 Type II—"No Detectable Emission". Determine the local ambient VOC concentration around
the source by moving the probe randomly upwind and downwind at a distance of one to two meters from
the source. If an interference exists with this determination due to a nearby emission or leak, the local
ambient concentration may be determined at distances closer to the source, but in no case shall the
distance be less than 25 centimeters. Then move the probe inlet to the surface of the source and
determine the concentration as outlined in section 8.3.1. The difference between these concentrations
determines whether there are no detectable emissions. Record and report the results as specified by the
regulation. For those cases where the regulation requires a specific device installation, or that specified
vents be ducted or piped to a control device, the existence of these conditions shall be visually confirmed.
When the regulation also requires that no detectable emissions exist, visual observations and sampling
surveys are required. Examples of this technique are:
8.3.2.1 Pump or Compressor Seals. If applicable, determine the type of shaft seal. Perform a survey
of the local area ambient VOC concentration and determine if detectable emissions exist as described in
section 8.3.2.
8.3.2.2 Seal System Degassing Vents, Accumulator Vessel Vents, Pressure Relief Devices. If
applicable, observe whether or not the applicable ducting or piping exists. Also, determine if any sources
exist in the ducting or piping where emissions could occur upstream of the control device. If the required
ducting or piping exists and there are no sources where the emissions could be vented to the atmosphere
upstream of the control device, then it is presumed that no detectable emissions are present. If there are
sources in the ducting or piping where emissions could be vented or sources where leaks could occur,
the sampling surveys described in section 8.3.2 shall be used to determine if detectable emissions exist.
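For illustration only, the no-detectable-emissions determination of section 8.3.2 reduces to comparing the difference between the source-surface reading and the local ambient reading against the limit in the applicable regulation. The threshold used below is a hypothetical placeholder, not a value taken from this method.

    def no_detectable_emissions(source_reading_ppmv, local_ambient_ppmv, nde_limit_ppmv):
        # "No detectable emissions" if the concentration increase over the local
        # ambient background is below the applicable regulation's limit
        # (nde_limit_ppmv is a placeholder argument).
        return (source_reading_ppmv - local_ambient_ppmv) < nde_limit_ppmv

    # Example with an assumed 500 ppmv regulatory threshold:
    print(no_detectable_emissions(520.0, 35.0, 500.0))   # -> True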
8.3.3 Alternative Screening Procedure.
8.3.3.1 A screening procedure based on the formation of bubbles in a soap solution that is sprayed
on a potential leak source may be used for those sources that do not have continuously moving parts,
that do not have surface temperatures greater than the boiling point or less than the freezing point of the
soap solution, that do not have open areas to the atmosphere that the soap solution cannot bridge, or that
do not exhibit evidence of liquid leakage. Sources that have these conditions present must be surveyed
using the instrument technique of section 8.3.1 or 8.3.2.
8.3.3.2 Spray a soap solution over all potential leak sources. The soap solution may be a
commercially available leak detection solution or may be prepared using concentrated detergent and
water. A pressure sprayer or squeeze bottle may be used to dispense the solution. Observe the potential
leak sites to determine if any bubbles are formed. If no bubbles are observed, the source is presumed to
have no detectable emissions or leaks as applicable. If any bubbles are observed, the instrument
techniques of section 8.3.1 or 8.3.2 shall be used to determine if a leak exists, or if the source has
detectable emissions, as applicable.

-------
9.0 Quality Control

Section   Quality control measure                    Effect
8.1.2     Instrument calibration precision check     Ensure precision and accuracy, respectively, of
10.0      Instrument calibration                     instrument response to standard.

10.0 Calibration and Standardization
10.1 Calibrate the VOC monitoring instrument as follows. After the appropriate warmup period and
zero internal calibration procedure, introduce the calibration gas into the instrument sample probe. Adjust
the instrument meter readout to correspond to the calibration gas value.
Note: If the meter readout cannot be adjusted to the proper value, a malfunction of the analyzer is indicated
and corrective actions are necessary before use.
11.0 Analytical Procedures [Reserved]
12.0 Data Analyses and Calculations [Reserved]
13.0 Method Performance [Reserved]
14.0 Pollution Prevention [Reserved]
15.0 Waste Management [Reserved]
16.0 References
1. DuBose, D.A., and G.E. Harris. Response Factors of VOC Analyzers at a Meter Reading of
10,000 ppmv for Selected Organic Compounds. U.S. Environmental Protection Agency, Research
Triangle Park, NC. Publication No. EPA 600/2-81-051. September 1981.
2. Brown, G.E., et al. Response Factors of VOC Analyzers Calibrated with Methane for Selected
Organic Compounds. U.S. Environmental Protection Agency, Research Triangle Park, NC. Publication
No. EPA 600/2-81-022. May 1981.
3. DuBose, D.A., et al. Response of Portable VOC Analyzers to Chemical Mixtures. U.S.
Environmental Protection Agency, Research Triangle Park, NC. Publication No. EPA 600/2-81-110.
September 1981.
4. Handbook of Hazardous Materials: Fire, Safety, Health. Alliance of American Insurers.
Schaumburg, IL. 1983.
17.0 Tables, Diagrams, Flowcharts, and Validation Data [Reserved]

-------
Appendix D2
FHR TVA-1000B FID Calibration Procedure
The following procedure refers to the model TVA-1000B (Thermo Fisher
Scientific, Waltham, MA, USA) flame ionization detector (FID) typically used
for EPA Method 21 (M21) Leak Detection and Repair (LDAR) inspections in
Flint Hills Resources (FHR) facilities. This procedure also applies to
alternate EPA Method 21 FID systems; for example, it is
used for the model Phx42™, manufactured by LDARtools (Dickinson, TX,
USA), discussed in Appendix D3. The TVA-1000B FID was the M21
instrument used for the exploratory tests (Appendix A), the SLOF tests
(Appendix B), and in generation of the historical LDAR data discussed in this
report. The Phx42™ was the predominant M21 instrument used in the 2019
FHR CC Pilot tests. At the end of this procedure, a typical example of an
instrument calibration record set is provided.
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D2: FHR TVA Cal. Procedures
Page D9 to D18

-------
f FLINT HILLS
resources*
Corpus Christi Refineries
Environmental Policy/Procedure Manual
Issued: December 2001
Revision Date: January 27, 2015
Revision No: 1
Last Reviewed: September 25, 2014
Environmental Policy/Procedure Manual
Calibration of TVA1000-B
Procedure No: FEP-AQ-100
Page: 1 of 5
I. Purpose
The purpose of this procedure is to outline the necessary steps for performing the daily calibration
and drift test on the TVA 1000-B analyzer.
II. Scope
This procedure is to be used by all LDAR Environmental Monitoring Technicians.
III.	Pre-requisites
1.	Ensure certification sheets for calibration gases are filed in the correct location and
accurately reflect the calibration gas bottles.
2.	Update the Daily Calibration Form (found on the info-net under environmental policies,
procedures, forms and tools, Program: LDAR) to reflect current gas concentrations
and Cylinder Identification.
3.	Verify the calibration gas cylinders contain enough gas to last a full shift.
IV.	Procedure
1.0 Before Beginning Calibration
1.	Ensure filters are clean so possible contamination does not interfere with calibration
results & flow of instrument.
2.	Ensure hydrogen tank is filled.
3.	Ensure probe assembly is properly attached.
4.	Check machine filter and ensure flame arrestor is tightly secured.
5.	Make sure program settings are set correctly. Reference set-up guide in "check off
list" box as needed.
6.	If a dilution probe will be calibrated to the machine that is being set up, check the
100,000 ppm box next to "scale". All machines without a dilution probe will have the
10,000 ppm box checked.
7.	Make sure all "blanks" and check boxes are filled in appropriately on the Daily
Calibration form.
2.0 Calibration Steps
1. Turn machine on and open hydrogen supply valve. Allow 5 minutes for electronic
warm-up and to ensure proper hydrogen flow.
2.	Then press 1 = Run. This will ignite the unit and display readings. If a flame out
message appears, clear the message (press exit), wait a moment, and repeat this
step.
3.	Check the instrument sample flow rate using the rotameter. The flow rate must be
between 0.1 and 3.0 liters/minute to pass per Method 21. Document the flow rate on the
Daily Calibration form.
4.	Allow at least 30 minutes for warm-up.
5.	Unplug machine.
6.	Press exit until main menu appears.
7.	Press 2 = Set up.
8.	Press 1 = Calibration.
9.	Press 3 = Zero to zero the instrument.
10.	Introduce zero gas into the analyzer through the probe.
11.	Press enter to start.
12.	Wait for minimal change in values (about 15 seconds). Typically, the sample is stable
when the first two digits of the reading do not change for 4-5 seconds. The values or
counts should be between 2000 and 5000.
13.	Press enter to accept, and then press 1 to save.
14.	Exit to Main Menu, Press 1 = Run. Record the reading with Zero Gas still attached to
the analyzer through the probe.
15.	Document results on "Daily Calibration Log".
16.	Exit to Main Menu.
17.	Press 2 = Set up.
18.	Press 1 = Calibration.
19.	Press 4 = Span.
20.	Follow screen prompts. The first gas introduced is the 500 ppm (methane).
21.	Wait for readings to stabilize at least 15 seconds.
22.	Press enter to accept, and then press 1 to save.
23.	Exit to Main Menu, press 1 = Run, and record your reading with the 500 ppm gas still
attached to the analyzer through the probe.
24.	Before moving on to the next calibration gas, introduce the Zero gas again to zero out
the analyzer.
25.	Repeat Steps 16-24 for the other gases. The other gases used for calibration are
2000 ppm and 10,000 ppm.
26.	On analyzers specifically used for carbon canisters, a 100 ppm gas will be used for
span 1, with the 500 ppm, 2000 ppm, and 10,000 ppm gases following in that order.
27.	The TVA must be calibrated within 10% of the known calibration value per Method 21.
28.	Per FHR-CC internal policy, the TVA should be calibrated within ±4% of the known
calibration value (see the sketch following this list).
29.	Calibration results will be documented on the "Daily Calibration Log". Place your
initials in appropriate column after calibrating analyzer, print your name in appropriate
column and record the time.
30.	Exit to main menu. Press 2 = Set up, 1 = Calibrate and 5 = RFO
31.	Confirm that the response factor says "RFO: DEFAULT". If not, set to this value.
32.	If for any reason the wrong gas is introduced to the wrong span, repeat Steps 6-23.
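The acceptance criteria in steps 27 and 28 can be checked with a few lines of Python. This sketch is illustrative only and is not part of the FHR procedure; the function name and example values are assumptions.

    def calibration_check(meter_reading_ppmv, cal_gas_value_ppmv,
                          m21_limit_pct=10.0, fhr_limit_pct=4.0):
        # Percent difference of the span reading from the certified gas value,
        # compared against the Method 21 10% limit and the FHR-CC +/-4% target.
        pct_diff = (meter_reading_ppmv - cal_gas_value_ppmv) / cal_gas_value_ppmv * 100.0
        return pct_diff, abs(pct_diff) <= fhr_limit_pct, abs(pct_diff) <= m21_limit_pct

    # Example: a 2,035 ppmv reading on a 2,020 ppmv standard is about +0.7%,
    # which passes both criteria.
    print(calibration_check(2035.0, 2020.0))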
3.0 Calibrating Dilution Probe
Check the dilution probe assembly for loose fittings, worn out O-rings, and the hose for
cracks or tears.
The Dilution Probe operates by drawing ambient air through an activated charcoal
scrubber, then through a length of tubing, then through the dilutor sidearm, and finally into
the TVA. The dilutor orifice allows samples to be drawn at a fixed ratio of 10:1. The standard
probe allows 100 mL of air through the sidearm for each 10 mL of air through the dilution probe,
extending the usable range of the TVA from 10,000 ppm to 100,000 ppm (a worked sketch of the
dilution arithmetic follows the numbered steps below).
1. To begin calibrating the Dilution Probe, place Dilutor fitting on the Close Area Sampler
of the TVA.
2.	Introduce 10,000 ppm gas or equivalent.
3.	Using the Fine Metering Valve of the Dilution Probe, adjust reading to 10% of
calibration gas value.
4.	Note: Opening the Fine Metering Valve admits more dilution air, lowering the reading;
adjusting it toward the closed position restricts the dilution air flow, raising the reading.
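For illustration only (not part of this procedure), the 10:1 dilution arithmetic described above works out as follows; the function names are assumptions.

    DILUTION_RATIO = 10.0  # standard dilutor orifice, per the description above

    def target_meter_reading(cal_gas_value_ppmv, ratio=DILUTION_RATIO):
        # Calibration target of step 3: adjust the Fine Metering Valve until the
        # reading is 10% of the calibration gas value.
        return cal_gas_value_ppmv / ratio

    def true_concentration(meter_reading_ppmv, ratio=DILUTION_RATIO):
        # With the dilution probe in line, the analyzer sees roughly one-tenth of the
        # sampled concentration, so the actual value is the reading times the ratio.
        return meter_reading_ppmv * ratio

    print(target_meter_reading(10_000.0))  # -> 1000.0 ppmv expected on the meter
    print(true_concentration(1_500.0))     # -> 15,000 ppmv actual at the probe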
4.0 Calibration Drift Analysis Test (CDAT)
The CDAT is conducted at the end of each workday. FHR has the option of also
performing a mid-day CDAT. The results are documented in the "Calibration Drift
Assessment Test" section of the "Daily Calibration Log". This test indicates how far the
instrument has drifted off its calibration.
1.	Have analyzer in "Run" mode.
2.	Introduce 500 ppm gas to Analyzer through probe. Document reading on the
appropriate CDAT column of the "Daily Calibration Log".
3.	Introduce the zero gas to the analyzer and repeat step 2, introducing the 500 ppm gas
and documenting your second reading. Zero out the analyzer a third time and
introduce the 500 ppm gas a third time, again documenting the reading on the "Daily
Calibration Log".
4.	Take the average of the three documented 500 ppm gas readings by adding them
together and dividing by 3. Follow the CDAT % formula on the back of the daily
calibration form to determine the percentage of drift from the original calibration value
(see the sketch following this list).
5.	Repeat the same steps for the 2000 ppm and 10,000 ppm gases.
6.	If FHR is performing the optional mid-day CDAT and the drift is ±4% or greater on any
of the three gases, the analyzer must be re-calibrated and documented on
calibration paperwork.
7.	If the drift is at -10% or lower on any of the three gases, this will constitute a negative
drift. When a negative drift occurs, each component that was monitored after the
most recent calibration that resulted in readings of 50 ppm or greater must be re-
monitored. Notify FHR data managers and the FHR technician in charge of
calibrations for the week. Reference the negative drift guidelines on how to build an
"over 50's" package and for each FHR technician's responsibilities.
8.	If, for whatever reason, a CDAT cannot be performed due to a mechanical failure, all
components must be re-monitored. Inform the data-managers as soon as possible
so they can review data in quarantine before it is processed.
9.	Reference approved wording and examples on both negative drifts and mechanical
failures for proper documentation.
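The exact CDAT % formula is printed on the back of the Daily Calibration Form and is not reproduced here. The sketch below assumes a common form of the calculation (percent difference of the average of the three drift readings from the original calibration value) and the action thresholds of steps 6 and 7; it is illustrative only.

    def cdat_drift_pct(drift_readings_ppmv, calibration_value_ppmv):
        # Drift of the average of the three CDAT readings from the value recorded
        # at the original calibration, as a percentage (assumed formula).
        avg = sum(drift_readings_ppmv) / len(drift_readings_ppmv)
        return (avg - calibration_value_ppmv) / calibration_value_ppmv * 100.0

    def cdat_actions(drift_pct, mid_day=False):
        # Step 6: on an optional mid-day CDAT, recalibrate at +/-4% or greater.
        # Step 7: a drift of -10% or lower triggers re-monitoring of components
        # that read 50 ppm or greater since the last acceptable calibration.
        recalibrate = mid_day and abs(drift_pct) >= 4.0
        remonitor_over_50s = drift_pct <= -10.0
        return recalibrate, remonitor_over_50s

    drift = cdat_drift_pct([486.0, 490.0, 483.0], 512.0)   # about -5.0%
    print(drift, cdat_actions(drift, mid_day=True))        # recalibrate on a mid-day check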
V. References
Refer to FHR form FEF-AQ-100 (Daily Calibration Form).
Refer to the LDAR Daily Checklist in the Environmental Share Drive.
Refer to set up guide and TVA 1000B Quick Start and Calibration Guide.
Environmental Procedure Approval
This procedure has been reviewed and approved for distribution by Flint Hills Corpus Christi, LLC.
Environmental Director	Date

-------
Flint Hills
Environmental Policy/Procedure Manual
Issued: December 2001
Revision No: 2.0
Daily Calibration Form
Form No: FEF-AQ-100
Date Instrument Calibrated: [handwritten]    Technician Signature: [signed]
Instrument Model: TVA 1000B    Instrument ID#: 69443

Calibration Gas Standards
                                  Zero       500 ppmv           2,000 ppmv          10,000 ppmv
Calibration Gas Bottle
Certification No. (Lot Number):   856697     TLBH-150A-500-3    TLBH-150A-2000-1    TLBH-150A-9950-1
Calibration Gas Concentration
(ppmv):                           0          512                2020                9940

Checklist (items marked by hand): checked the probe filter; checked the machine filter; checked o-rings;
program spans set correctly; checked expiration dates of calibration gases; TVA start time recorded.

Instrument Readings of Calibration Gases
[The form records start-of-shift, optional mid-shift CDAT, mid-shift calibration (if needed), and
end-of-shift CDAT readings for the zero, 500 ppmv, 2,000 ppmv, and 10,000 ppmv (with dilution probe)
gases; the handwritten entries are not legible in this scanned reproduction.]

Instrument sample flow rate = [handwritten] liters/min. Must be between 0.1 and 3 liters/min to pass.
Notes:
1.	Recalibrate if any instrument readings vary greater than 10% from the gas standards.
2.	A calibration drift assessment test (CDAT) must be performed at the end of the monitoring shift and a mid-shift CDAT is
optional.
3.	If the CDAT instrument reading is less than 10% of the gas standard, re-monitor all components screened greater than 50 ppm
since the last acceptable calibration check.
4.	If a dilution probe is to be used during the monitoring shift, it must be calibrated with the instrument and recorded on this form.
Do not use the probe if it does not calibrate within the above standards. Use 10,000 ppm gas to calibrate the dilution probe.
Comments:

-------
f FLINT HILLS
resources*
Corpus Christi Refineries	Program: LDAR
Daily Calibration Form
Date Instrument Calibrated: [handwritten]    Instrument Model (check one): TVA / PID / Other [marked by hand]
Instrument ID#: [handwritten]    Scale: 10,000 PPM / 100,000 PPM / Other [marked by hand]
Technician initials: [handwritten]    Name (print): ____________

Calibration Gas Standards
                                  Zero      500 ppmv           2,000 ppmv          10,000 ppmv
Calibration Gas
Cylinder Identification:          856697    TLBH-150A-500-3    TLBH-150A-2000-1    TLBH-150A-9950-1
Calibration Gas Concentration
(ppmv):                           0         512                2020                9940
Calibration Gas
Certification Date:               See Gas Certification (all columns)

Checklist (items marked by hand): checked the flame arrestor prior to starting TVA; checked o-rings and
connections are airtight; checked the probe filter; checked the machine filter; program spans set correctly;
checked expiration dates of calibration gases.
Start time for TVA warm up: [handwritten]
Instrument sample flow rate: [handwritten] liters/min. (Must be between 0.1 and 3 liters/min to pass)
Notes:
1.	The battery charger should be disconnected from the instrument during all tests (Calibration, CDAT and Quarterly tests).
2.	If a dilution probe is to be used during the monitoring shift, it must be calibrated with the instrument and recorded on this form.
Use 10,000 ppmv gas to calibrate the dilution probe.
3.	A CDAT must be performed at the end of the monitoring shift and a mid-shift CDAT is optional. This applies to LDAR leak
definitions only (500 ppmv, 2000 ppmv and 10,000 ppmv).
4.	If the CDAT shows a negative drift of more than 10 percent from the last calibration value, then all components screened
greater than 50 ppmv since the last acceptable calibration check must be remonitored. This applies to LDAR leak definitions
only (500 ppmv, 2000 ppmv and 10,000 ppmv).
5.	FHR Corpus Christi only uses purchased certified calibration gases.
6.	In the comment section below, describe any corrective action taken if the meter readout could not be adjusted to correspond to
the calibration gas value in accordance with section 10.1 of Method 21 of appendix A-7 of this part.
Comments:
QA/QC review completed by (print name):	Date:
Issued: December 2001
Revision Date: January 27, 2015 (No. 4)
Last Review: September 25, 2014
Form No FEF-AQ-100
Page 1 of 2

-------
FLINT HILLS
resources
Corpus Christi Refineries	Program: LDAR
Instrument Readings of Calibration Gases
Instrument Calibration
Technician initials: [handwritten]    Technician name (print): [handwritten]    Time: [handwritten]
Calibration in-house target is between +/- 4% of the calibration gas value.
[Handwritten meter readings for the zero, 500 ppmv, 2,000 ppmv, and 10,000 ppmv (with dilution probe)
gases are recorded here; the entries are not legible in this scanned reproduction.]
-------

FLINT HILLS
resources*
Corpus Christi Refineries
Environmental Policy/Procedure Manual
Issued: December 2001
Revised: 5/2005 (Revision No. 2)
Reviewed: 5/2013
Instrument Certification Form
Form No. FEF-AQ-101

Certifier's Name: [handwritten]    Instrument ID#: [handwritten]    Date of Test: [handwritten]
Certification type (one marked by hand): ROUTINE 3-MONTH CERTIFICATION / REPLACEMENT INSTRUMENT /
MAINTENANCE PERFORMED
Zero Air Cylinder Number: [handwritten]

Calibration Test Results
[Completed example. For each calibration gas cylinder (500 ppm, 2,000 ppm, and 10,000 ppm mixtures), the
form records the cylinder number and expiration date and, for three tests, the zero air reading, the
calibration gas mixture value (A), the meter reading (B), and the absolute delta (A-B)(1). The three deltas
are totaled (C) and D = ((C/3)/A)*100 is calculated; if (D) is less than or equal to 10%, the machine
passes. The handwritten entries are not fully legible in this scanned reproduction.]

Response Time Test
[For each calibration gas mixture, the form records the 90% response target (calibration gas mixture (A) x 0.90),
three measured response times in seconds (B), their total (E), and the average (E)/3 = (F). If all values for (F)
are less than or equal to 30 seconds, the machine passes. The maximum/highest response time obtained during
the tests and the accessories attached during the test are also recorded.]

Sample Flow Test
Instrument sample flow rate = [handwritten] liters/min. Must be between 0.1 and 3 liters/min to pass.

Certification
INSTRUMENT PASSES CERTIFICATION: YES / NO [marked by hand]
This instrument meets the specifications outlined in reference Method 21 and is therefore certified for use by:
Signature: ____________________    Date: ____________

(1) In this case, the absolute difference is always a positive value.

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D3: Phx42™ Cal. Procedures
Page D19 to D35
Appendix D3
Phx42™ FID Use and Calibration Procedures
The user instruction manual, maintenance information, and response
factors for the EPA Method 21 (M21) flame ionization detector (FID), model
Phx42™, manufactured by LDARtools (Dickinson, TX, USA), can be found at
the web address below, last accessed February 21, 2019 (version of the
manual used in the FHR CC 2019 studies).
http://www.ldartools.com/#resources
Calibration Procedure
The calibration procedures for the LDARtools Phx42™ begin on page 4
of the manual. See Appendices D1 and D2 for further information on FHR
M21 FID calibration procedures and record keeping.
Clarification on use
The LDARtools Phx42™ was the primary M21 screening value
quantification tool used during the 2019 FHR CC pilot studies, but early
CRADA tests and historical M21 data were primarily acquired with the TVA-
1000B (Thermo Fisher Scientific, Waltham, MA, USA).

-------
LDARtools
Cal5.0 Manual
Rev. Date: November 26, 2018
To confirm that this is the most current
version, please go to
www.LDARtools.com/#support, then
check the Resources section.
OR
Go to the Customer Support Portal and
Open the "Docs" section.

-------
Table of Contents
Settings	2
Cylinder Set-up and Management	2
Extension Probes	3
Daily Calibration	4
Drift	5
Precision Calibration	6
Exporting Calibration Records via USB	7
Automatic wireless export of Calibration Records	7
Manual wireless export of Calibration Records	7
Managing Updates	8
Update Cal5.0	8
Update the Device Settings	8
Update License	8
Changing Wireless Network	9
Troubleshooting	10
Responding to Errors	10
Reporting an Issue	12
Exporting Logs	13
Shutdown/Restart	13
1

-------

Settings
Cylinder Set-up and Management
NOTE: A Cal5.0 Parameter Form is available for you to use to develop and document your
Calibration Settings. See LDARtools.com>Support>Resources.
1. Add a Cylinder:
a.	Tap Menu (the "hamburger" icon).
b.	Tap Manage Cylinders.
c.	Tap Add Cylinder.
d.	Fill in the fields appropriate to your cylinder:
i.	Certification day - The day your cylinder was certified
ii.	Expiration date - The day the certification on the cylinder will expire
iii.	Manufacturer's or Cylinder Serial Number - The ID you want to use to
refer to this cylinder
iv.	Actual PPM - The actual PPM of the cylinder
v.	Target PPM - The target PPM or the Leak Definition associated with this
cylinder
e.	Tick the following checkboxes that apply:
i.	The Calibration checkbox means that this cylinder will be used for Daily
and Precision.
ii.	The Drift checkbox means that this cylinder will be used for all drifts.
f.	Continue adding cylinders until finished. (NOTE: At this point, Cal5.0
will automatically assign the ports from least to greatest concentrations.)
2

-------
g.	Verify that each cylinder's port number corresponds to the port number its
gas line is connected to in the back. (The Port number is in the top left
corner of every cylinder.)
h.	Tap Done.
2.	Deleting a Cylinder
a.	Tap the Delete button next to the cylinder.
3.	Temporarily Suspending the use of a Cylinder
a.	Uncheck the Calibration and Drift checkboxes.
b.	Tap Done.
4.	Adding Technicians:
a.	Tap Menu.
b.	Tap Manage Technicians.
c.	Tap the textbox and enter the Technician's name.
d.	Tap Add.
e.	Tap Done.
5.	Removing Technicians:
a.	Tap Menu.
b.	Tap Manage Technicians.
c.	Tap the X next to a technician's name.
d.	Tap Done.
Extension Probes
In order to calibrate extension probes on a SpanBox5 you will need to
contact support@ldartools.com for instructions on modifying the box settings.
NOTE: The response time of the phx42 with an extension probe exceeds the cut-off for the
default settings and may result in the units being assigned to the wrong port during
calibration.
3

-------
Daily Calibration
1.	Confirm that you have been certified according to the Cal5.0 Mastery Document.
2.	For the initial use of the SpanBox5, a best practice is to "assign" each phx42 to a specific
port on the SpanBox5. You should label each port with the serial number of the phx42
that you will use on that port for all Daily and Precision Calibration and Drifts.
3.	Ignite the phx42 using the 3-button-press ignition before attaching the device to the
SpanBox5.
NOTE: Throughout the calibration process, avoid kinking, bending, or blocking probes at
any time!
4.	Check the Probe Flow of all your units and record the results for later entry into Cal5.0.
Do this before you attach the phx42s to the SpanBox5.
5.	Tap Daily Calibration.
6.	Select phx42(s) on Connection screen, then tap Next.
If a connection or firmware error appears, tap Yes to Retry Connection.
After a successful Connection, Self-check will begin.
If there are any Self-check issues, acknowledge them by marking the associated Check
Boxes. Then tap OK.
7.	Tap Next.
8.	Verify that the cylinder information is correct, then tap Next on the Cylinders screen.
Otherwise, tap Manage Cylinder to remove or add cylinders.
At the four-minute mark of the warm-up, the Hunting process will administer gases
through each port and ID the phx42 that "recognizes" that gas has been released. This
will confirm which port each phx42 is on.
Once the warm-up period is over, the unit will start calibrating the assigned cylinders.
Afterwards, the calibration confirmation process will verify if the unit is calibrated to
each gas correctly.
Once the units pass or fail Calibration, the Maintenance Report screen will appear.
9.	Select your Tech Name from the dropdown and sign, then tap Next.
10.	Check the box next to any non-critical self-check issues that may have occurred.
11.	Select Tech Name from the dropdown list, sign, then tap Next.
4

-------
12.	Tap OK.
13.	Daily Calibration of Filter Detection
Immediately after the last step of your calibration/confirmation process or right before your
phx42 goes out into the field:
1.	Remove the probe filter.
2.	Wait 5 seconds (the pumps will shut off).
3.	Replace the same probe filter and continue with your work day.
REMINDER: If you do not do this within 30 minutes of calibration you will receive ERROR
CODE 24.
Drift
1.	Make sure that the phx42 is ignited and warmed-up before attaching it to the SpanBox5.
2.	Make sure that the phx42 is attached to its assigned port: THE SAME PORT with which
it was calibrated earlier in the day.
3.	Tap the Drift box on the Home Screen.
Analyzers that have a Daily Calibration from today will appear.
On the right of each analyzer, the latest EOD and Noon drift results will be shown.
4.	Check the boxes next to the analyzers to be drifted:
If the analyzer can't be discovered, it will be greyed out.
If the analyzer is in range, wait a minute for the SpanBox5 to discover and
connect.
5.	Check either Noon or End of Day at the bottom.
6.	Tap Next.
If a drift of the same type (Noon or EOD) has already been performed on a unit, it will
prompt you to confirm that the previous drift will be overwritten. Only do this if you are
an experienced user and understand the consequence.
7.	If the Cylinder information is correct, tap Next. Otherwise, tap Manage Cylinders.
The Drift process will begin.
Units will be drifted to the selected cylinders that were assigned.
5

-------
If you are running a standard drift, each gas will be applied once.
If you are running a VVa, each gas will be applied 3 times.
If a unit fails on one gas, it will continue testing any other gases you might have selected.
8. Once complete, select a Tech from dropdown, sign, then tap OK.
Precision Calibration
1.	Make sure that each phx42 is attached to its assigned port: THE SAME PORT with
which it was calibrated earlier in the day.
2.	Tap the Precision Calibration box on the Home screen.
Analyzers that have a Daily Calibration from today will show.
On the right of each analyzer, the latest Precision Calibration result will be shown.
3.	Check the boxes next to the analyzers to be Precision Calibrated.
If analyzer cannot be discovered, it will be greyed out.
4.	Tap Next.
REMINDER: If you ever run an extra Precision Calibration on the same day, you will be
warned that you will be overwriting the previous record for that day. Only do this if you
are an experienced user and understand the consequence.
The Calibration Precision process will automatically begin.
Each gas response time will be tested 3 times each.
If a unit fails at any time, it will be skipped for the remainder of the tests but remain on-
screen. The other units will continue.
A notice at the end of the Precision Cal will display the phx42 number and what gas it
failed at on the Cal Precision Report screen.
5.	Select Tech from dropdown, sign, and tap OK.
6

-------
Exporting Calibration Records via USB
1.	Insert a USB drive into SpanBox5.
2.	On the Home Screen, tap Export Records.
3.	Select the desired date range.
4.	Tap OK.
5.	Wait until the "Successfully exported records to USB" prompt appears.
6.	Tap OK.
7.	Remove the USB from SpanBox5 and connect it to a computer.
The Record files can be found in the Cal5 Records folder with either of these filename
formats:
•	Cal5 Record DD.MM.YYYY-DD.MM.YYYY
•	Cal5 Record DD.MM.YYYY
Automatic wireless export of Calibration Records
Cal5.0 can be scheduled to transmit your calibration records to your LTI Desktop Manager or
Chateau database daily at a time of your choosing.
Please contact support@ldartools.com for sync credentials and setup instructions.
Manual wireless export of Calibration Records
Cal5.0 can manually transmit calibration records to your LTI Desktop Manager or Chateau
database if sync credentials have been enabled.
Obtain your credentials from your Site Supervisor or contact support@ldartools.com to enable
this feature.
1.	On the Home screen, tap Reports.
2.	Tap Manual Sync.
3.	Select the desired date range.
4.	Tap OK.
5.	Once upload process is complete, tap OK.
7

-------
Managing Updates
Update Cal5.0
1.	Tap Menu.
2.	Tap Update Cal5.0.
If a newer version is available, it will show a prompt to download the update.
3.	Tap OK.
Once the download is complete, it will automatically update and restart Cal5.0.
Update the Device Settings
1.	Email support@ldartools.com with the desired change.
LDARtools Technical Support will complete the change.
2.	Once you are notified of the change, tap Menu.
3.	Tap Settings.
4.	Wait, then once the updated prompt appears, tap OK.
5.	Tap Menu.
6.	Tap Home.
7.	Tap OK to close the prompt.
Update License
1.	Tap Menu.
2.	Tap License.
3.	Tap Update Online.
8

-------
4.	Once the Success prompt appears, tap OK.
5.	Tap Done.
Changing Wireless Network
1.	Request the Super-Secret Password from support@ldartools.com.
2.	Tap Menu.
3.	Tap Manage Networks.
4.	Enter the Super-Secret Password.
5.	Tap OK.
6.	Tap Search.
7.	Select the Wi-Fi name.
8.	Enter the password, then tap Connect.
9.	Tap Test Connection.
10.	Once successful, tap Done.
9

-------
Troubleshooting
Responding to Errors

Error: Could not get unit and product settings from support portal or database cache.
Two things are happening:
•	This phx42 is not being calibrated on its assigned SpanBox5.
•	This SpanBox5 can't communicate with the LDARtools Support Portal to download settings.
1.	Check if the SpanBox5 can grab settings from the Support Portal:
a.	Tap Menu.
b.	Tap Settings.
c.	Wait, then do either:
•	If a successful prompt appears, tap OK. Try calibrating again; there may have been a
temporary loss of network connection.
•	If you do not get a successful prompt, then proceed to the next step.
2.	Verify that the SpanBox5 is connected to a wireless network:
a.	Follow Steps 1-8 of the Changing Wireless Network section. Once the wireless network
is selected and a Disconnect button appears, the network is connected.
b.	Tap Test Connection to verify that the connection is good.
c.	Do either:
•	If successful, navigate to the Home screen, then start Daily Calibration.
•	If unsuccessful, submit a software support ticket and be sure to attach the Cal5.0 logs.
10

-------
Error: No phx
mapped to port
1.	Contact Support if you do not use 10k ppm gas. This is the first
step!
2.	Check the gas lines.
Verify that all gases going from left to right (when facing the back
of the SpanBox5) are arranged in order of least to greatest
concentration.
REMINDER: SpanBox5 uses the highest span gas to determine the
phx42 port assignment.
3.	Verify that the gases selected in the Manage Cylinders screen are
the ones connected to the SpanBox5:
a.	Open the Manage Cylinders screen.
The Port number is in the top left corner of every cylinder.
b.	Verify that each cylinder's port number corresponds to the
port number its gas line is connected to in the back.
4.	If these are correct and the problem persists, submit a software
support ticket and be sure to attach the Cal5.0 Logs.
Critical Self-check
Issues
Critical Self-check
issues prevent the
calibration process.
1.	If the problem is a Sample Pump Shutdown error:
a.	Check for tears and leaks in the probe.
b.	Perform Probe Integrity test.
c.	If there is a probe problem, submit a Hardware support
ticket. (Do not use the 42App to report this issue.)
Describe the problem in the comment field.
2.	If problem is an HPH2 error:
a.	Fill the analyzer with Hydrogen and rerun the Daily
Calibration.
b.	If the problem persists, connect with the 42App.
c.	Run the Self-check and Report an Issue on the Support
Portal.
d.	Attach the phx42 logs from the SpanBox5 to the Support
ticket.
3.	If the problem is neither of the above:
a.	Connect with the 42App.
b.	Run the Self-check and Report an Issue on the Support
Portal.
c.	Attach the phx42 logs from the SpanBox5 to the Support
ticket.
11

-------
Non-Critical Self-Check Issues
1.	If this is a Charge or Battery issue:
a.	Make sure the units are on chargers overnight.
b.	If the battery shows "uncharged" after 12 hours on the
charger, connect to the 42App.
c.	Run the Self-check and Report an Issue on the Support
Portal.
d.	Attach the phx42 logs from the SpanBox5 to the Support
ticket.
2.	If this is a Heater issue:
a.	Connect with the 42App.
b.	Run the Self-check and Report an Issue on the Support
Portal.
c.	Attach the phx42 logs from the SpanBox5 to the Support
ticket.
Failed Connection / Signal Strength Problems
1.	If Bluetooth strength is not OK after 60 seconds, go back to
the Home screen, then continue.
2.	If that does not resolve it, submit a software support ticket and
be sure to attach the Cal5.0 Logs.
Flame-out or ignition
problems during
warm-up
1.	Connect with the 42App.
2.	Run the Self-check.
3.	Do either:
•	If it passes, put it back on the SpanBox5 and calibrate.
•	If it fails, Report an Issue on the Support Portal.
Reporting an Issue
1.	Tap Menu.
2.	Tap Report an Issue.
3.	Enter Support Portal Credentials.
4.	Select type of issue:
a.	Cal5.0 issue
b.	Analyzer issue
5.	If Analyzer issue, select serial number from dropdown.
6.	Select Date of issue.
12

-------
7.	In the Issue text box, enter a description of the issue.
8.	Tap submit.
Exporting Logs
NOTE: When reporting an issue through Cal5.0, logs will automatically be attached. Only use
the steps below if you are submitting updated logs for an existing support case.
1.	Tap Menu.
2.	Tap Copy Log files to USB.
3.	Wait until the "Successfully exported logs to USB" popup appears.
4.	Tap OK.
5.	Remove the USB from SpanBox5 and connect it to a computer.
6.	On the computer's File Explorer, navigate to the USB drive.
The logs can be found in the Logs folder. The file name includes the phx42 serial number
followed by the type of Log file and the date.
Shutdown/Restart
Note: Only shutdown the SpanBox5 when instructed to do so by LDARtools support or when
preparing to move the unit.
1.	Tap Menu.
2.	Scroll down Menu.
3.	Tap Shutdown/Restart.
13

-------
5/21/2019
DailyCal Report
Analyzer: phx42-1745    Calibration date/time: 4/15/2019 10:47 AM    Tech ID: Mike Charles
Warm up start: 10:31 AM    Ext. Probe: No    Probe Flow: 0.25

Cylinder        Cyl. Mfr. ID   Conc.      Reading    Pass   Drift 1 (time; readings a/b/c; Pct 1; Pass)   Drift 2 (time; readings a/b/c; Pct 2)   Probe Flow Pass   Warm Up
Zero 0          5625377Y       0.00       1.12       Yes    -                                             -                                       -                 15 min
Methane 500     EB0106513      495.00     497.00     Yes    11:51 AM; 496/496/503; 0.3; Yes               4:51 PM; 504/501/502; 1.1               Yes               15 min
Methane 2000    CC400156       2,021.00   2,013.00   Yes    11:53 AM; 2014/2007/2018; 0.0; Yes            4:52 PM; 1996/2005/1998; -0.7           Yes               15 min
Methane 10000   CC82984        9,890.00   9,738.00   Yes    11:54 AM; 9712/9768/9809; 0.3; Yes            4:53 PM; 9741/9748/9771; 0.2            Yes               16 min
Personnel have performed the required daily calibration, for all working Analyzers on site, to every calibration span gas.
Zero gas was used to purge after the initial calibration process and between all samples during the drift assessment
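The drift percentages (Pct 1 and Pct 2) in the report above are consistent with comparing the mean of the three drift readings for each gas against that gas's daily calibration reading. The Python sketch below reproduces that arithmetic; it is an inference from the tabulated values, not documented Cal5.0 behavior.

    def drift_pct(drift_readings_ppmv, daily_cal_reading_ppmv):
        # Percent difference of the mean of the three drift readings from the
        # reading recorded at the daily calibration.
        mean = sum(drift_readings_ppmv) / len(drift_readings_ppmv)
        return (mean - daily_cal_reading_ppmv) / daily_cal_reading_ppmv * 100.0

    # Methane 2000 row: end-of-day readings 1996, 2005, 1998 against the 2,013 ppmv
    # calibration reading give about -0.7, matching the tabulated Pct 2.
    print(round(drift_pct([1996, 2005, 1998], 2013), 1))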

-------
Precision Calibration Record
Page 1 of 2
Analyzer: phx42-1745 (Anlz SN phx42-1745)    Tech ID: Mike Charles

Cal. Date            Gas       Span    Conc   Pass   Test 1   Test 2   Test 3   Prec.     Resp 1   Resp 2   Resp 3   Avg Resp   Cyl. Man. ID   Cyl. Exp. Date
4/15/2019 10:54 AM   Methane   500     495    Yes    501      500      502      1.212%    03.77    03.92    03.77    03.82      EB0106513      9/18/2026
4/15/2019 10:55 AM   Methane   2000    2021   Yes    2006     2024     2018     0.247%    03.82    04.01    03.73    03.85      CC400156       10/16/2022
4/15/2019 10:56 AM   Methane   10000   9890   Yes    9838     9821     9832     0.603%    04.08    03.81    03.96    03.95      CC82984        8/1/2020
Zero gas was used to purge after the initial calibration process and between all samples during the drift assessment and precision calibration processes.
Signature: ____________________   Print: ____________________   Date: ____________   Field Supervisor
Signature: ____________________   Print: ____________________   Date: ____________   Site Administrator
Signature: ____________________   Print: ____________________   Date: ____________   Client Rep. (optional)

-------
Appendix D4
MiniRAE 3000 and Cub PID Information
and Procedures
The hand-held probe (HHP) MiniRAE 3000 10.6 eV photoionization
detector (PID), manufactured by Honeywell RAE Systems Inc. (Sunnyvale,
CA, USA), and the Cub personal VOC monitor (10.6 eV PID), manufactured
by Ion Science Inc. (Stafford, TX, USA), may be used as supporting
diagnostics (non-critical data) for assessment of leaks/emissions in Test 4.
A product brochure and a calibration procedure for the MiniRAE 3000 are
contained on the following pages D32-D34. A product brochure for the Cub
is contained on pages D35-D36.
The user's manual for the MiniRAE 3000 is found at the following web
address, last accessed February 21, 2019 (version used in the FHR CC pilot tests):
https://www.raesystems.com/sites/default/files/content/resources/3G%20PID%20User%27s%20Guide%20Rev%20A English O.pdf
The user's manual for the Ion Science Cub is found at the following web
address, last accessed February 21, 2019 (version used in the FHR CC pilot tests):
https://www.ionscience.com/products/cub-personal-voc-detector/#downloads
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D4: PID Info and Procedures
Page D36 to D44

-------
Honeywell
THE POWER OF CONNECTED
ppbRAE 3000
Portable Handheld VOC Monitor
The compact ppbRAE 3000 + is a comprehensive VOC gas monitor and
datalogger for hazardous environments. The ppbRAE 3000 + is one of
the most advanced handheld VOC monitors available for parts-per-billion
detection. This third-generation patented PID device monitors VOCs
using a photoionization detector with a 9.8 eV or 10.6 eV UV-discharge lamp.
The built-in wireless modem allows
real-time data connectivity with the
command center located up to two
miles/3 km away (with optional RAELink3
portable modem) from the detector.
Workers can easily measure VOCs
and wirelessly transmit readings
up to 2 miles/3 km away.
Accurate VOC measurement in all operating conditions
Easy access to lamp and sensor
in seconds without tools
Patented sensor and lamp autocleaning
reduces maintenance
Monitors real-time readings and location of people
Low Cost of Ownership: 3-year 10.6 eV lamp Warranty
BLE module and dedicated APP for
enhanced datalogging function
FEATURES & BENEFITS
APPLICATIONS
Proven PID Technology
•	3-second response time
•	Extended range from 1 ppb to 10,000 ppm with highly acute linearity
•	Humidity compensation with integral humidity and temperature sensors
•	Reflex PID Technology™
Oil & Gas
HazMat
Industrial Safety
Civil Defense
Environmental & Indoor Air Quality
Integrated
•	High connectivity capability through multiple wireless module options
•	Integrated Correction Factors list of 220 compounds—more than any other PID
•	Includes flashlight for dark conditions
•	Large graphic display presents gas type,
Correction Factor and concentration
Durable
•	Easy access to battery, lamp and sensor in seconds without tools
•	Rugged housing withstands use in harsh environments
•	IP-67 waterproof design for easy cleaning and decontamination
-------
Instrument Specifications
Size
10" L x 3.0" W x 2.5" H (25.5x7.6x6.4 cm)
Weight
26 oz(738 g)
Sensors
Photoionization sensor with standard 10.6 eV or optional 9.8 eVlamp
Battery
•	Rechargeable, external field-replaceable Lithium-Ion battery pack
•	Alkaline battery adapter
Running time
16 hours of operation (12 hours with alkaline battery)
Display Graphic
4 lines, 28 x 43 mm, with LED backlight for enhanced display readability
Keypad
1 operation and 2 programming keys, 1 flashlight on/off
Direct Readout
Instantaneous reading
•	VOCs as ppm by volume or mg/m³
•	STEL, TWA and PEAK
•	Battery and shutdown voltage
•	Date, time, temperature
Alarms
95 dB (at 12"/30 cm) buzzer and flashing red LED to indicate exceeded preset limits
•	High: 3 beeps and flashes per second
•	Low: 2 beeps and flashes per second
•STEL and TWA: 1 beep and flash per second
•Alarms latching with manual override or automatic reset
•	Additional alarm for low battery and pump stall
EMI/RFI
Highly resistant to EMI/RFI
Compliant with EMC Directive 89/336/EEC
IP Rating
•	IP-67 unit off and without flexible probe
•	IP-65 unit running
Datalogging
Standard 6 months at one-minute intervals
Calibration
Two-point or three-point calibration for zero and span.
Reflex PID Technology™
Calibration memory for 8 calibration gases
Sampling Pump
•	Internal, integrated flow rate at 500 cc/min
•	Sample from 100' (30m) horizontally and vertically
Low Flow Alarm
• Auto pump shutoff at low-flow condition
Communication &
Data Download
• Download data and upload instrument set-up from PC through charging cradle or
using BLE module and dedicated APP
•Wireless data transmission through built-in RF modem
Wireless Network
Mesh RAE Systems Dedicated Wireless Network
Wireless Range
(Typical)
Up to 15 ft (5 m) for BLE
EchoView Host: LOS > 660 ft (200 m)
ProRAE Guardian & RAEMesh Reader: LOS > 660 ft (200 m)
ProRAE Guardian & RAELink3 Mesh: LOS > 330 ft (100 m)
Safety
Certifications
•	US and Canada: UL, cUL, Classified as Intrinsically Safe for use in Class 1, Division
1, Groups A, B, C, D
•	Europe: ATEX Ex II 2G Ex ia IIC/IIB T4
•	IECEx: Ex ia IIC/IIB T4
Temperature
-4° to 113° F (-20° to 50° C)
Humidity
0% to 95% relative humidity (non-condensing)
Instrument Specifications
Warranty
3-year warranty for 10.6 eV lamp,
1 year for pump, battery, and instrument
Wireless Frequency
ISM license-free band, IEEE 802.15.4
Sub-1 GHz
IEEE 802.11 b/g 2.4 GHz
Wireless Approvals
FCC Part 15, CE R&TTE, Others¹
Radio Module
Supports Bluetooth or RM900 or BLE
¹ Contact RAE Systems for country-specific
wireless approvals and certificates.
Specifications are subject to change.
Sensor Specifications
Gas Monitor   Range              Resolution   Response Time T90
VOCs          0 to 9999 ppb      1 ppb        < 3 s
              10 to 99 ppm       0.01 ppm     < 3 s
              100 to 999 ppm     0.1 ppm      < 3 s
              1000 to 9999 ppm   1 ppm        < 3 s
MONITOR ONLY INCLUDES:
•	ppbRAE 3000 + Monitor
•	Wireless communication module built in,
as specified
•	Charging/download adapter
•	Organic vapor zeroing kit
•	Tedlar® bag for calibration
•	Flex-I-Probe™
•	External filter
•	Rubber boot with straps
•	Alkaline battery adapter
•	Lamp-cleaning and tool kit
•	Soft leather case
MONITOR WITH ACCESSORIES KIT:
•	Hard transport case with pre-cut foam padding
•	Charging/download cradle
•	5 Porous metal filters and O-rings
•	Gas outlet port adapter and tubing
OPTIONAL CALIBRATION KIT ADDS:
•	10 ppm isobutylene calibration gas, 34L
•	Calibration regulator and flow controller
OPTIONAL GUARANTEED
COST-OF-OWNERSHIP PROGRAM:
•	4-year repair and replacement warranty
•	Annual maintenance service
Attachments
Durable black rubber boot with straps
For more information
www.honeywellanalytics.com
www.raesystems.com
Europe, Middle East, Africa
Life Safety Distribution GmbH
Tel: 00800 333 222 44 (Freephone number)
Tel: +41 44 943 4380 (Alternative number)
Middle East Tel: +971 4 450 5800 (Fixed Gas Detection)
gasdetection@honeywell.com
Americas
Honeywell Analytics Distribution Inc.
Tel: +1 847 955 8200
Toll free:+1800 538 0363
detectgas@honeywell.com
Honeywell RAE Systems
Phone:+1 408 952 8200
Toll Free: +1 888 723 4800
Datasheet_ppbRAE 3000_+_DS-1018-_EN
©2018 Honeywell International Inc.
Asia Pacific
Honeywell Analytics Asia Pacific Tel:
+82 (0) 2 6909 0300
India Tel: +91124 4752700
China Tel: +86 10 5885 8788-3000
analytics.ap@honeywell.com
Technical Services
EMEA: HAexpert@honeywell.com
US: ha.us.service@honeywell.com
AP: ha.ap.service@honeywell.com

-------
Mini Rae 3000 Calibration Procedures
1.	To turn on, hold down the "ON/Mode" key for a couple of seconds. When the PID is turned on it will
go through a short series of information screens. When the word Ready appears across your screen,
press the "Y/+" key. At this point it will go through a short warm-up phase and straight into the
survey mode. The PID is now ready to calibrate.
2.	Next, hold down the "MODE" and "N/-" keys at the same time for a couple of seconds. You will
see "Calibrate/select gas?" across your screen; press "Y/+" key, it will take you to "Fresh air
calibrate?" press " Y/+" key again. Connect your zero air to end of probe; the mini will do an
automatic 20 second countdown. After 20 seconds are complete leave your bag on the probe and
it will show the reading/results for zero gas. You can wait until it prompts you to the next gas or
you can press "Y/+" to move on to your gas calibration.
2.1 You will see "Span Cal?" connect your gas to end of probe, the Mini Rae will do an automatic
20 second countdown; after 20 seconds are complete leave your bag on probe and it will show
your reading/results. The Mini Rae will not record your results so you must remember and
document. *Note: See step 4.1 for multiple gasses.
3.	If you are only calibrating 1 span gas (100 ppm) you are complete with calibration. Press the "Mode"
key until it takes you to the display screen. The PID is ready to use. For other span gases proceed to
step 4.
4.	For multiple span calibrations: after you have completed the first gas calibration, your screen
should take you to "Select Cal Memory" here press "Y/+" key. It will show you the cal memory
you are currently on. Example "Mem #??" Here you will press "N/-" key to take you to your next
cal memory. Your screen should read "Mem #??" press "Y/+" then press "Y/+" again to save
memory. The PID will ask questions "Change span value" and "Modify cal memory" "Change
correction factor" you should press "N/-" to these questions. You will be asked for "fresh air cal?"
If you have calibrated you zero gas in the first span you can skip this by pressing "N/-" key. Now
repeat step 2.1.
4.1 Repeat step 4 to calibrate multiple spans.
5.	Once you have calibrated all span gases you can return to the display screen by pressing the "Mode" key
until it appears. To change the cal memory you are in, return to the calibration screen. Press "Y/+"
when the "Calibrate/select gas?" screen appears; then use your "N/-" key to scroll until you find the
Select Cal Memory option. Use "Y/+" or "N/-" to select the memory you want. When you reach the memory
you want, save it by pressing the "Y/+" key. Now use the Mode key to return to the display screen.

-------
f
Flint Hills
RESOURCES'
Environmental Policy/Procedure Manual	Daily Calibration Form
Issued: December 2001	Form No: FEF-AQ-100
Revision No: 2.0	Page: 1 of 1
Date Instrument Calibrated: [handwritten]    Technician Signature: [signed]
Instrument Model: Mini Rae 3000    Instrument ID#: 592-912541 (Rental Unit from Airgas)

Calibration Gas Standards
                                   Zero      100 ppmv
Calibration Gas Bottle
Certification No. or Lot No.:      856697    854049
Calibration Gas Concentration
(ppmv):                            0         101

Checklist (items marked by hand): checked the probe filter; checked the machine filter; checked o-rings;
program spans set correctly; checked expiration dates of calibration gases.

Instrument Readings of Calibration Gases
[The form records start-of-shift, optional mid-shift CDAT, mid-shift calibration (if needed), and
end-of-shift CDAT readings for the zero and 100 ppmv gases; the handwritten entries are not fully
legible in this scanned reproduction.]

Instrument sample flow rate = [handwritten] liters/min. Must be between 0.1 and 3 liters/min to pass.
Notes:
1.	Recalibrate if any instrument readings vary greater than 10% from the gas standards.
2.	A calibration drift assessment test (CDAT) must be performed at the end of the monitoring shift and a mid-shift CDAT is
optional.
3.	If the CDAT instrument reading is less than 10% of the gas standard, re-monitor all components screened greater than 50 ppm
since the last acceptable calibration check.
4.	If a dilution probe is to be used during the monitoring shift, it must be calibrated with the instrument and recorded on this form.
Do not use the probe if it does not calibrate within the above standards. Use 10,000 ppm gas to calibrate the dilution probe.
Comments:

-------
Flint Hills
RESOURCES
Environmental Policy/Procedure Manual	Daily Calibration Form
Issued: December 2001	Form No: FEF-AQ-100
Revision No: 2.0    Page: 1 of 1
Date Instrument Calibrated: [handwritten]    Technician Signature: [signed]
Instrument Model: Mini Rae 3000    Instrument ID#: [handwritten]

Calibration Gas Standards
                                   Zero            100 ppmv
Calibration Gas Bottle
Certification No.:                 [handwritten]   [handwritten]
Calibration Gas Concentration
(ppmv):                            0               100

Checklist (items marked by hand): checked the probe filter; checked the machine filter; checked o-rings;
program spans set correctly; checked expiration dates of calibration gases.

Instrument Readings of Calibration Gases
[The form records start-of-shift, optional mid-shift CDAT, mid-shift calibration (if needed), and
end-of-shift CDAT readings for the zero and 100 ppmv gases; the handwritten entries are not fully
legible in this scanned reproduction.]

Instrument sample flow rate = [handwritten] liters/min. Must be between 0.1 and 3 liters/min to pass.
Notes:
1.	Recalibrate if any instrument readings vary greater than 10% from the gas standards.
2.	A calibration drift assessment test (CDAT) must be performed at the end of the monitoring shift and a mid-shift CDAT is
optional.
3.	If the CDAT instrument reading is less than 10% of the gas standard, re-monitor all components screened greater than 50 ppm
since the last acceptable calibration check.
4.	If a dilution probe is to be used during the monitoring shift, it must be calibrated with the instrument and recorded on this form.
Do not use the probe if it does not calibrate within the above standards. Use 10,000 ppm gas to calibrate the dilution probe.
Comments:

-------
FLINT HILLS
resources
Corpus Christi Refineries
Environmental Policy/Procedure Manual
Issued: December 2001
Revised: 5/2005 (Revision No. 2)
Reviewed: 5/2013
Instrument Certification Form
Form No. FEF-AQ-101

Certifier's Name: [handwritten]    Instrument ID#: [handwritten]    Date of Test: [handwritten]
Certification type (one marked by hand): ROUTINE 3-MONTH CERTIFICATION / REPLACEMENT INSTRUMENT /
MAINTENANCE PERFORMED
Zero Air Cylinder Number: [handwritten]

Calibration Test Results
[Completed example for the MiniRAE 3000. For each calibration gas cylinder the form records the cylinder
number and expiration date and, for three tests, the zero air reading, the calibration gas mixture value (A),
the meter reading (B), and the absolute delta (A-B)(1). The three deltas are totaled (C) and
D = ((C/3)/A)*100 is calculated; if (D) is less than or equal to 10%, the machine passes. The handwritten
entries are not fully legible in this scanned reproduction.]

Response Time Test
[For each calibration gas mixture, the form records the 90% response target (calibration gas mixture (A) x 0.90),
three measured response times in seconds (B), their total (E), and the average (E)/3 = (F). If all values for (F)
are less than or equal to 30 seconds, the machine passes. The maximum/highest response time obtained during
the tests is also recorded.]

Sample Flow Test
Instrument sample flow rate = [handwritten] liters/min. Must be between 0.1 and 3 liters/min to pass.

Certification
INSTRUMENT PASSES CERTIFICATION: YES / NO [marked by hand]
This instrument meets the specifications outlined in reference Method 21 and is therefore certified for use by:
Signature: ____________________    Date: ____________

(1) In this case, the absolute difference is always a positive value.

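The pass/fail arithmetic used on this form can be illustrated with a short sketch. The R function below is illustrative only and is not part of the FHR form or procedure; its inputs are one calibration gas level (A), the three meter readings (B), the three response times, and the sample flow rate.

# Illustrative sketch only (not part of the FHR form): pass/fail arithmetic for
# one calibration gas level. 'cal_gas_A' is the cal gas mixture value (A) in ppmv,
# 'meter_B' holds the three meter readings (B), 'response_s' the three response
# times in seconds, and 'flow_lpm' the sample flow rate in liters/min.
certification_check <- function(cal_gas_A, meter_B, response_s, flow_lpm) {
  C <- sum(abs(cal_gas_A - meter_B))           # total of the three absolute deltas
  D <- ((C / 3) / cal_gas_A) * 100             # D = ((C/3)/A)*100
  list(D_percent        = D,
       calibration_pass = D <= 10,                     # passes if D <= 10%
       response_pass    = mean(response_s) <= 30,      # average response time <= 30 s
       flow_pass        = flow_lpm >= 0.1 && flow_lpm <= 3)
}

# Example with hypothetical readings for the 500 ppm gas:
# certification_check(500, c(498, 503, 495), c(3.5, 4.1, 4.5), 0.5)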
-------
Cub Personal PID Monitor
Personal safety
monitor for
hazardous &
toxic gases.
The world's smallest, lightest, most sensitive
personal PID monitor for hazardous and toxic
compounds.
Best available photoionization (PID) detection
•	PID independently verified as best performing on the market
•	Unrivaled sensitivity detects down to ppb levels
•	Widest range detects gases 1 ppb - 5,000 ppm
•	In-built humidity resistance with no need to compensate
•	Anti-contamination design for extended field operation
•	Measures 480 selectable compounds (10.6 eV lamp)
Safety
•	Fastest (<13 second) response to hazardous gases & vapors
•	Clear audio, visual and vibrating alarms
•	Large LCD display for clear readings
•	Longest battery life (16 hours)
•	Meets ATEX, IECEx (US & Canadian certification pending)
Ease of use
•	Smallest, lightest personal PID monitor available
•	Simple one button operation
•	Intuitive software and simple calibration routine
•	Easy to service
•	Easily upgrade your instrument
Low cost operation
•	Inexpensive consumables and parts
•	Free 2 year warranty when instrument registered online
Unrivaled Detect
www.ionscienceusa.com

-------
Cub is the world's smallest, lightest personal PID monitor for the accurate detection of volatile organic and total aromatic compounds. With market leading parts-per-billion (ppb) sensitivity, Cub gives an indication of harmful gases, including benzene, before they reach levels which are harmful.
Cub is available in three distinct variants: ppm, ppb and TAC mode. Choose a ppb or ppm instrument with 10.6 eV lamp for accurately detecting a wide range of VOCs dependent on your sensitivity requirements. Cub™ with 10.0 eV lamp gives accurate detection of total aromatic compounds (TACs) down to ppb levels.
Small, compact and lightweight, Cub is robust yet comfortable and unobtrusive to wear. Cub has a dynamic range of 1 ppb to 5,000 ppm, the widest on the market, measuring 480 selectable compounds.
When worker exposure exceeds pre-set limits the instrument's audible, vibrating and flashing LED alarms alert you to the gases present. Readings are displayed in ppb and ppm on its bright, back-lit LCD display with selectable data logging time.
Upgradeable ppb sensitivity can be purchased quickly and easily. CubDoc docking stations are available for USB communication, charging and calibrating your instrument, dependent on your requirements.
Cubs linked together within their docking stations.
The instrument's PID sensor technology has been independently verified as
best performing for speed, accuracy and humidity resistant operation. Its
unique Anti-contamination and patented Fence Electrode Technology provide
extended run time in the most challenging environments, giving you
accurate results you can truly rely on.
Applications include
•	Industrial hygiene • Chemical and petrochemical plants • Oil & gas
•	Pharmaceuticals • Health & safety • Hazardous materials • First response
•	Environmental
Accessories
Cub is supplied with an exclusive range of accessories.
Visit www.ionscience.com/cub for more info.
Distributed by:
CUB TECHNICAL SPECIFICATION
SENSITIVITY
0.001 ppm (isobutylene equivalent) *Model dependent
0.002 mg/m3 (isobutylene equivalent) *Model dependent
ACCURACY
±5% display reading + one digit
RESPONSE TIME
<13 seconds (T90)
APPROVALS
Europe: ATEX: CE, Ex II 1G, Ex ia IIC T4; -20 °C < Ta < 55 °C
IECEx: Ex ia IIC T4; -20 °C < Ta < 55 °C
China: Ex ia IIC T4-20°C
-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D5: OGI Procedures
Page D45 to D58
Appendix D5
Optical Gas Imaging Procedures

-------
Optical Gas Imaging Procedures
March 2019
Page 1 of 11
General Optical Gas Imaging Procedures
March 2019

-------
Optical Gas Imaging Procedures
March 2019
Page 2 of 11
Contents
1.0	Scope and Applicability	3
2.0	Summary of Method	3
3.0	Definitions	3
4.0	Health and Safety Warnings	3
5.0	Cautions	4
6.0	Interferences	4
7.0	Personnel Qualifications	4
8.0	Equipment and Supplies	4
9.0	Procedure	5
10.0	Data and Records Management	10
11.0	Quality Control and Quality Assurance	10
12.0	References and Supporting Documentation	11
Figures
Figure 1. GF320 back of hand grip with Power, Menu Joystick, and Menu buttons	6
Figure 2. GF320 top of hand grip with Auto/Manual/HSM, Focus/Zoom, and Save/Start-Stop Recording
buttons	7
Figure 3. GF320 top of camera with Programmable button, Temperature Range, and Camera Dial	7
Figure 4. GF320 front left side of camera with Manual Image Focus, Visible Image, and Laser buttons ... 8
Figure 5. GF320 back of camera with battery, battery release latch, battery compartment release, and
camera SD card and data storage	8
Tables
Table 1. GF320 Infrared Camera Buttons, Locations, and Functions
5

-------
Optical Gas Imaging Procedures
March 2019
Page 3 of 11
1.0 Scope and Applicability
1.1	The purpose of this procedure is to describe the use and operation of the FLIR GF320
infrared (IR) camera during field measurement activities. The FLIR GF320 is used to
locate leaks of hydrocarbon vapors from simulated leaking valves and connectors.
1.2	This procedure is for personnel using the GF320 for leak screening purposes.
1.3	This procedure is a general guide. For more complete information on FLIR GF320 series
IR camera functionality, please consult the manufacturer's manual.
1.4	Please consult the quality assurance project plan (QAPP) for test-specific procedures and
record keeping requirements.
2.0 Summary of Method
The GF320 is designed to identify emissions of hydrocarbon gases using an IR detector. The
detector has been designed to capture IR radiation emitted within a narrow IR spectral range
corresponding to several hydrocarbon gases. The images of these hydrocarbon emissions, which
appear as either black or white "smoke" depending on polarity settings and background, are
displayed in a viewfinder in real time. Lower emission detection limits can often be obtained using
the manual mode or high-sensitivity mode (HSM). The user should be familiar with these operating
modes prior to use in the field.
In typical use, the GF320 capabilities are implemented for two main purposes: (1) the IR
camera [or optical gas imager (OGI)] is used to conduct a visual site survey to identify any potential
hazards to health and safety, and (2) to perform leak detection surveys of equipment with potential emissions.
This procedure was prepared by persons deemed technically competent by management based on
their knowledge, skills, and abilities. The procedure has been validated in practice and reviewed in
print by a subject matter expert.
3.0 Definitions
QAPP quality assurance project plan
HSM high-sensitivity mode
IR infrared
OGI optical gas imaging
SD secure digital
4.0 Health and Safety Warnings
4.1	This document does not attempt to address all potential safety concerns associated with the
use of the IR camera. It is the responsibility of the user of this procedure to establish
appropriate safety and health practices.
4.2	Always observe proper safety procedures when using the IR camera. The operator must be
familiar with the safety aspects associated with the instrument's operation, as outlined in
the instrument's operating manual.

-------
Optical Gas Imaging Procedures
March 2019
Page 4 of 11
4.3	The IR camera is not intrinsically safe and might require additional site-specific monitoring
prior to operation. Refer to any pertinent site-specific health and safety plans for guidelines
on safety precautions. These guidelines, however, should only be used to complement the
judgment of an experienced professional.
4.4	Minimize exposure to potential health hazards by use of protective clothing, and eyewear.
5.0 Cautions
5.1	If the camera is not working properly and a reboot of the camera does not alleviate the
problem, red-tag it and remove it from use.
5.2	Operate the camera only in appropriate environmental conditions. Temperatures above
50 °C (122 °F) can damage the camera.
5.3	Read the operational manual and fully understand all aspects of its use before attempting
to operate the IR camera.
5.4	Avoid pointing the camera at strong energy sources, including the sun, as these could
negatively affect the detector.
6.0 Interferences
6.1	For accurate measurements, allow the camera a 15-minute cool down period before use.
6.2	Avoid pointing the camera at strong energy sources, including the sun, as these could
negatively affect accuracy and possibly damage equipment.
6.3	Steam plumes and very hot backgrounds present potential interferences.
7.0 Personnel Qualifications
7.1	Field personnel must only operate equipment for which they are trained and authorized to
use. Personnel must be trained by qualified operators.
7.2	Operators of the instrument must read the manufacturer's manual and this operating
procedure and demonstrate that they fully understand how to properly operate the
instrument.
7.3	Before use, the training of the operator must be suitably documented. The level of training
(proficiency of operator) is related to data quality objectives of the QAPP.
8.0 Equipment and Supplies
•	Infrared camera (FLIR Systems model GF320)
•	Infrared camera telephoto lens (FLIR Systems GF320 fixed lens; Standard 24° x 18°)
•	FLIR Systems user's manual (FLIR GF3xx Series)
•	Spare rechargeable lithium-ion batteries
•	Laptop with secure digital (SD) storage

-------
Optical Gas Imaging Procedures
March 2019
Page 5 of 11
9.0	Procedure
9.1	Camera Startup
Table 1 provides a summary of IR camera operations. Users should refer to the GF320 IR camera
operator's manual for additional information on keypad and button functions, level/gain control,
and thermal image recording.
Table 1. GF320 Infrared Camera Buttons, Locations, and Functions
Button
Location
Function
S
Top of hand grip
Allows the user to save an image when in picture mode or start
and stop the recording of video when in video mode.
A/M
Top of hand grip
Switches camera mode between Automatic - Level and Gain,
Manual - Level and Gain, and High Sensitivity Mode (HSM).
FOCUS/ZOOM
Top of hand grip
Allows the user to focus the image when in video or picture
mode and zoom in on an image that has been saved.
P
Top of camera
A programmable button that can be set to control the color
palette, change zoom factor, invert polarity, or hide/show
graphics.
Temperature Range
Top of camera
Allows the user to adjust the temperature range. Typical setting
is 50-140 °F.
Menu
Back of hand grip
Allows the user to bring up the menu. Pressing menu a second
time closes the menu or backtracks.
Menu Joystick
Back of hand grip
Allows the user to navigate menus. Pushing in selects menu
option.
Power
Back of hand grip
Allows the user to turn the camera on and off.
Visible Image
Front left side of camera
Toggles the camera between visible imagery and IR imagery.
Laser
Front left side of camera
Allows the user to aim/identify. Holding down produces a laser
dot for aiming/identifying.
Camera Dial
Back left side of camera
Used to switch between camera mode, video mode, file
archive, program, and settings.
To operate the infrared camera:
1.	Ensure that a fully charged battery has been inserted into the camera.
2.	Turn the power on and allow 5 minutes for the camera to reach operating temperature. The
message in the viewfinder will read "Cool down in progress" during this time.
3.	After cool-down is complete, verify that the date and time settings are correct.
4.	To set the date and time, turn the camera dial to settings, use the menu joystick to navigate to
"set date and time," and press joystick to change the date and time.
5.	Remove the lens cap and select AUTO, MANUAL, or HSM mode by pressing the A/M button:
•	In AUTO mode, the camera sets the level and gain based on scene content, which can also
be described as the temperature of the objects in the scene.
•	In MANUAL mode, the user adjusts the level and gain manually to optimize the image in
the viewfinder. Adjusting level and gain is done by using the menu joystick (up/down
adjusts level; left/right adjusts gain).

-------
Optical Gas Imaging Procedures
March 2019
Page 6 of 11
• In HSM mode, level and gain are set by the camera along with a higher image integration
rate to allow imaging of smaller leaks.
6.	Adjust the focus using the FOCUS/ZOOM button (or the black ring near the lens) to produce
the clearest thermal image.
7.	Ensure the camera is functioning properly (operation verification) by viewing the presence of
a hydrocarbon plume through the eyepiece. This can be accomplished by the use of a butane
lighter or other hydrocarbon source. Document this verification in the field logbook.
The camera is now ready for thermal imaging.
Figures 1-5 show the control buttons for the Infrared Camera.
Figure 1. GF320 back of hand grip with Power, Menu Joystick, and Menu buttons.

-------
Optical Gas Imaging Procedures
March 2019
Page 7 of 11
Figure 2. GF320 top of hand grip with Auto/Manual/HSM, Focus/Zoom, and Save/Start-Stop Recording buttons.
Figure 3. GF320 top of camera with Programmable button, Temperature Range, and Camera Dial.

-------
Optical Gas Imaging Procedures
March 2019
Page 8 of 11
Manual
Image
Focus
Laser
Visible
Image
Figure 4. GF320 front left side of camera with Manual Image Focus, Visible Image, and Laser buttons.
Battery	Battery release
latch
Figure 5. GF320 back of camera with battery, battery release latch, battery compartment release, and camera
SD card and data storage.

-------
Optical Gas Imaging Procedures
March 2019
Page 9 of 11
9.2	Safety Site Survey
1.	Use the GF320 to identify any substantial hydrocarbon vapor plumes that might pose significant
danger to health and safety by scanning the site upon arrival.
2.	Once the safety site survey is completed and no hazardous conditions are identified, begin OGI
survey as per QAPP.
9.3	Component Leak Source Identification
Upon completion of the safety site survey procedure, execute screening as per the QAPP. Screen
the components for hydrocarbon-emitting leak sources using the IR camera.
1.	For each test, thoroughly inspect the entire component from different angles so as to achieve
different backgrounds.
2.	When a leak source is observed to be emitting (releasing hydrocarbon emissions), document
visibility (1. Easy, 2. Moderate, 3. Difficult) as per the QAPP.
3.	Execute any required video saving and associated record keeping as per the QAPP.
9.4	Reserved Section
1.	N/A

-------
Optical Gas Imaging Procedures
March 2019
Page 10 of 11
9.5 Camera Battery Replacement
1.	Open the battery compartment, located at the back of the camera, by using the battery door
release located just to the right of the battery compartment.
2.	Press down on the battery release latch (the small red lever located in the top right corner of
the battery compartment).
3.	Remove the old battery.
4.	Insert a new battery with contacts up, facing towards the camera body.
5.	Close the battery compartment door.
10.0 Data and Records Management
Information generated by field personnel must be organized and accounted for in accordance with
established records management procedures of the QAPP. Data recorded in the project forms and
electronic files shall include, but are not limited to, the following:
•	IR camera serial number
•	Project name
•	Camera operator name
•	Date and time
•	Recorded video file number
•	Description of recorded video
•	Weather conditions
•	Remarks
11.0 Quality Control and Quality Assurance
Important factors in establishing quality requirements include the sensitivity and specificity of the
detection system used. Quality requirements include ensuring that equipment is ready for use as
follows:
•	Operating instructions and/or manuals from the manufacturer are available for each piece of
equipment.
•	Equipment used for field activities must be handled, transported, shipped, stored, and operated
in a manner that prevents damage and deterioration. Equipment must be handled, maintained,
and operated in accordance with the manufacturer's operating instructions.

-------
Optical Gas Imaging Procedures
March 2019
Page 11 of 11
Field personnel are responsible for maintaining a central, comprehensive list of all field equipment
subject to this procedure. The equipment inventory list for each instrument or piece of equipment
must include the following:
•	The description/identity of the equipment (e.g., IR camera, IR camera lens)
•	Manufacturer's or vendor's name
•	Equipment serial number or other manufacturer identification number
•	The manufacturer's instructions or a reference to their location
•	Record of damage, malfunctions, modifications, and/or repairs to the equipment
Any problems or abnormalities observed during use of the instrument must be recorded in the field
notebook. If the instrument does not appear to be operating properly, red-tag it and remove it from
service. Record pertinent information in the project logbook, including date, time, video file
number and description, location, and camera operator name.
12.0 References and Supporting Documentation
NEICPROC/11-005. FLIR GasFindIR GF320 Infrared Cameras. 1-23-2012.
FLIR GF3xx Series User's Manual, Publ. No. T559157, Revision a506, December 21, 2010.
FLIR Systems, http://www.flir.com (last accessed December 13, 2016). Users can view training
videos and download manuals from FLIR Systems.

-------
Optical Gas Imaging Daily Validation and Log Sheet
Daily Validation
Date and Time:
Project Name:
IR Camera Serial Number:
Camera Operator Name:
Weather Conditions:
Validation test 1 distance:
Validation test 1 recorded video file number:
Validation test 2 distance:
Validation test 2 recorded video file number:
Description of recorded video:
Remarks:
Video Recording
Time:
Camera Operator Name:
Recorded video file number:
Weather Conditions:
Description of recorded video:
Remarks:

-------
Optical Gas Imaging Daily Validation and Log Sheet
Video Recording
Time:
Camera Operator Name:
Recorded video file number:
Weather Conditions:
Description of recorded video:
Remarks:
Video Recording
Time:
Camera Operator Name:
Recorded video file number:
Weather Conditions:
Description of recorded video:
Remarks:
Video Recording
Time:
Camera Operator Name:
Recorded video file number:
Weather Conditions:
Description of recorded video:
Remarks:

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D6: Node Field Test Procedures
Page D59 to D62
Appendix D6
Node Field Test Procedures -
Sensor Bump Test and Calibration
Molex Gas Sensor Bump Test Procedure (used in FHR CC Pilot test)
Equipment
•	Calibration gas cylinder (500 ppb isobutylene balanced with zero air)
•	Positive flow regulator, single stage, 0.3 - 8.0 LPM selectable flow
•	Calibration cup
•	Calibration tubing (with air conditioning assembly)
•	Mobile device with "mSyte" application installed
PPE
Compliance with the worksite's requirements
Frequency
Bump tests are to be executed once per quarter or as per specific project guidance (e.g., the QAPP).
Pass/Fail Threshold Limits
±50% of the nominal value of the gas
Preparation
Hold the regulator, turn the calibration gas cylinder in a clockwise direction to tighten; connect either end
of the calibration tubing to the regulator's barb fitting; connect the other end of the calibration tubing to the
calibration cup.
Page D59

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D6: Node Field Test Procedures
Page D59 to D62
Procedure
1.	Open the "mSyte" application, select the target device, and click the "Bump Test" button in the "mSyte"
application to enter the "Bump Test" mode.
2.	Click the "Start" button to initiate the actual bump test; the first countdown clock (10 s) will be activated.
3.	When an "Apply Gas" message appears, turn on the calibration gas cylinder, set the flow rate to 0.3
LPM, and place the calibration cup over the sensor orifice; push the calibration cup onto the orifice to
secure the connection.
4.	The second countdown clock (30 s) will be activated.
5.	Once the countdown clock is finished, the mobile device will display a "Pass" or "Fail" result; the device
will return to the "Operation" mode. Turn off the calibration gas and remove the calibration cup from the
sensor orifice.
(Screenshots: mSyte "Bump Test" result screens for sensor SourLake_8 (500 ppb isobutylene), showing an example "PASS" result at 69% of the calibration threshold and an example "FAIL" result at 39% of the calibration threshold, each with "Repeat Test" and "Done" options.)
Page D60

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D6: Node Field Test Procedures
Page D59 to D62
6.	Perform another bump test if the first bump test result is "Fail."
7.	Follow the attached process flow chart for additional operations if the sensor doesn't pass the bump test.
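As a minimal sketch of the pass/fail rule stated above (within ±50% of the nominal 500 ppb isobutylene value), the following illustrative R function checks a reading against that threshold; it is an assumption-based sketch, not the mSyte application logic.

# Illustrative sketch only (assumed logic, not the mSyte application): a bump
# test passes when the sensor reading is within +/-50% of the nominal
# calibration gas value (500 ppb isobutylene for this procedure).
bump_test_pass <- function(reading_ppb, nominal_ppb = 500, tolerance = 0.5) {
  abs(reading_ppb - nominal_ppb) <= tolerance * nominal_ppb
}

# Examples:
# bump_test_pass(345)   # TRUE,  reading within 250-750 ppb
# bump_test_pass(190)   # FALSE, reading below 250 ppb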
Molex Gas Sensor Calibration Procedure
Equipment List
•	Calibration gas cylinder (500 ppb isobutylene balanced with zero air)
•	Positive flow regulator, single stage, 0.3 - 8.0 LPM selectable flow
•	Calibration cup
•	Calibration tubing (with air conditioning assembly)
•	Mobile device with "mSyte" application installed
PPE
Compliance with the worksite's requirements
Frequency
Once a year or when a sensor fails the bump test, whichever comes first.
Page D61

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D6: Node Field Test Procedures
Page D59 to D62
Preparation
Hold the regulator, turn the calibration gas cylinder in a clockwise direction to tighten; connect either end
of the calibration tubing to the regulator's barb fitting; connect the other end of the calibration tubing to the
calibration cup.
Procedure
1.	Open the "mSyte" application, select the target device, and click the "Calibration" button in the "mSyte"
application to enter the "Calibration" mode.
2.	Click the "Start" button to initiate the actual calibration process; the first countdown clock (30 s) will be
activated.
3.	When an "Apply Gas" message appears, turn on the calibration gas cylinder, set the flow rate as "0.3"
LPM, and place the calibration cup over the sensor orifice, push the calibration cup onto the orifice to
secure the connection.
4.	The second countdown clock (90s) will be activated.
5.	Once the countdown clock is finished, the mobile device will display a "Pass" or "Fail" result; the device
will return to the "Operation" mode. Turn off the calibration gas and remove the calibration cup from the
sensor orifice.
6.	Perform another calibration if the calibration result is "Fail".
7.	Follow the attached flow chart for additional operations if calibration fails again.
Page D62

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D7: Simulated Leak Procedures
Page D63 to D68
... 90% of the trials. A two-stage CGA590 regulator was fitted to the gas tank to release gas at a reduced pressure. A gas flow control box was connected to the gas delivery line between the gas tank and the leaking component using polyurethane tubing. The gas flow control box consisted of four MC series digital mass flow controllers (MFCs) of varying flow rate ranges, i.e., 0-10 sccm (standard cubic centimeters per minute), 0-50 sccm, 1-500 sccm, and 0-2,000 sccm, made by Alicat Scientific, Inc., Tucson, AZ.
Alicat MC series MFCs are based on the accurate measurement of volumetric flow. The volumetric flow rate is determined by creating a pressure drop across a unique internal restriction, known as a laminar flow element, and measuring the differential pressure across it. Per the manufacturer's datasheet, http://www.alicat.com/documents/specifications/Alicat Mass Controller Specs.pdf (last accessed 03/14/2020), the mass flow controllers can be operated at -10 to 60 °C and accurately control the flow rate of gas to ± 0.4% of reading + 0.2% of full scale. They also have a list of gases built in to correct for possible errors when used for a gas different from the calibration gas, typically nitrogen. In these tests, we set the gas type to either isobutylene or ethylene, depending on which gas was used. All the mass flow controllers used in the test had a Class 1 Div. 2 rating. A picture of the Alicat MFC is shown in Figure D7-1 below with the
Page D63

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D7: Simulated Leak Procedures
Page D63 to D68
actual set of controllers used in testing shown in Figure D7-2. Note these MFCs are specially designed to handle isobutylene gas.
Figure D7-1. Alicat Scientific MC series MFC
Figure D7-2. Set of Alicat Scientific MC series MFCs used in tests
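As an illustration of the accuracy specification quoted above (±0.4% of reading + 0.2% of full scale), the short R sketch below computes the implied flow uncertainty; the function name and example values are illustrative only.

# Illustrative arithmetic only: flow uncertainty implied by the quoted Alicat MC
# specification of +/-(0.4% of reading + 0.2% of full scale).
mfc_uncertainty_sccm <- function(setpoint_sccm, full_scale_sccm) {
  0.004 * setpoint_sccm + 0.002 * full_scale_sccm
}

# Example: a 10 sccm controlled release on the 0-50 sccm controller
# mfc_uncertainty_sccm(10, 50)   # +/- 0.14 sccm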
Page D64

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D7: Simulated Leak Procedures
Page D63 to D68
In the exploratory tests (Appendix A), a flow rate between 5 and 20 sccm was primarily used. In the FHR SLOF tests (Appendix B), the presence of an interfering source (fin fan leaks) necessitated the use of higher test gas release flow rates. When no pressure builds up downstream, the actual flow rate displayed by the MFC is an indication of the leak rate of the gas through the leaking component. For the purpose of comparison and documentation, every leak was measured by the TVA-1000B FID and the MiniRAE 3000 PID following the EPA M21 procedure, and by a FLIR model GF320 OGI camera. The TVA-1000B and MiniRAE PID instruments were calibrated every day before use as per M21 requirements.
Isobutylene and ethylene are flammable gases. They have a lower explosive limit (LEL) of 1.8%
and 2.7% by volume, respectively. For safety reasons, the test site and particularly the leak point
were monitored by four Industrial Scientific Corporation model Radius BZ1 area monitors. Each
monitor is equipped with an oxygen sensor, a combustible gas sensor that has a resolution of 1%
LEL, and a PID sensor that has 1 ppm resolution. The oxygen and combustible gas sensors have their low-level alarm points set at 19.0% O2 and 10% LEL, respectively. As specified in the CRADA Safety Plan, all the safety monitors were bump tested each day with known-concentration gases before use to ensure proper working order.
For Test 3 at SLOF, 100% isobutylene and 100% ethylene were released, controlled, and monitored utilizing a CGA510 single-stage regulator and Alicat Scientific MFCs. Portable LEL monitors were set up at the release gas cylinders, the Alicat flow controller unit, and the leak point locations. A manual power shutoff of the flow controller would have been performed if LEL readings reached 10% LEL. Controlled releases were only performed while the test team was on site. A total of ten (10) sensors and one (1) spare sensor were used for Test 3.1. Sensor placement, leak location, and isobutylene or ethylene mass flow rates were adjusted based on actual test results. Detection results were monitored in real time for the SLOF test via a data acquisition system and displayed for evaluation and adjustments. This testing approach helped the team better understand sensor node spatial relationship to detectability, mass flow rate and concentration correlations, impedance impact on plume characteristics, and PID performance for remote detection of near-source emissions under changing meteorological conditions.
Example Test Procedures Matrix for Test 3.1 (Appendix B1)
This section contains information on the test procedures used for Test 3.1 (an example)
Test Procedure: Geospatial Sensor Placement
Purpose: Study correlation between sensor locations and detectability for both isobutylene and
ethylene. Test and optimize sensor locations so that any leak within the defined perimeter can be
detected by one or more of the sensors.
Test Preparation (daily):
1.	Conduct Safety Meeting (Safety Plan)
2.	Establish Communication Protocol and Task Assignments
3.	Review Test Plan, Procedure and Safety Measures

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D7: Simulated Leak Procedures
Page D63 to D68
4.	Conduct parameter space assessment in single sensor node mode
5.	Set up and calibrate equipment, baseline test sensors against background
6.	Conduct functional test and sensor bump test (dry run)
Data Collection Procedure and Instructions:
1. Establish and record primary test set up parameters
1.1. Sensors / emission geospatial position for Test #1 (performed for each node)
Each controlled release or background experiment (Test) was documented with a layout map containing the physical locations and separation measurements for each sensor node and emission point. Here the layout map for Test #1 (and all subsequent tests with the same sensor locations) is signified as "A" in Table D7-1. The controlled release or documented physical leak position (uniformly called "leak location") is recorded in the layout map and carries the Test # (e.g., L1 for Test #1).
1.2.	Gas type (or known natural leak details)
1.3.	Mass flow rate to be used for the trial (or details of known natural leak)
2. Initiate recording of other test parameters (data logger)
2.1.	Meteorological Data
2.2.	Sensor Baseline Readings
Figure D7-3. Example of placement geometry
3. Start gas flow at predetermined flow rate for release gas (if required)

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Unded Stalub	1 1
f	'J nrtod Stales
rtlOlGX	r	HILLS	^bWCrrIP",lL'ttior Appendix D7: Simulated Leak Procedures
¦* ra s ° u 'c ° 8	Page D63 to D68
3.1.	Start gas flow and record gas flow start time
3.2.	Wait 5 min and then perform Method 21 and record TVA & MiniRAE readings,
and IR Camera results.
3.3.	Record sensor readings and the reading time
3.4.	The Molex sensors and SPod will be electronically data logging
3.5.	Continue taking readings until the predetermined test time (e.g., 30 min) is completed
4.	Stop gas flow and record time ending Test #1
5.	Reposition sensors and/or leak to geospatial position #2 for Test #2 (or subsequent tests if
necessary).
6.	Examine controlled release gas type change requirements for Test #2 (or subsequent test).
If no controlled release gas type change is required, repeat Step 1 to document the new
layout and gas release parameters for Test #2, repeat Step 2 (if not already operating),
and repeat Steps 3-5 to complete Test #2 (use the same procedure for subsequent tests).
7.	If a controlled release gas type change is required, switch the source gas, reset the gas
type in the MFC as required, and perform the required line purging procedure after the gas change.
8.	With the new controlled release gas type, repeat Step 1 to document the new layout and gas
release parameters for Test #2, repeat Step 2 (if not already operating), and repeat Steps
3-5 to complete Test #2 (use the same procedure for subsequent tests).
Example Test Matrix for Test 3.1
This section contains an example of the controlled release test matrix used for Test 3.1
(Appendix Bl). Compared to SLOF Test 3.2 (Appendix B2), Test 3.1 was more exploratory in
nature with the results obtained during initial days of testing providing feedback on decisions for
testing on subsequent days. Controlled release tests were conducted over a base 30-minute time
frame, which could extend to 2 hours, depending on conditions. Due to changing daily
meteorological conditions, the exact deployment plan (sensor and leak placements) was not
rigidly set beforehand but was determined at the beginning of each day based on the wind
forecast and thoroughly documented. Other parameters, such as sensor operating settings, were
varied. A basic example test matrix is shown in Table D7-1, with meteorological conditions (e.g., wind speed and wind direction) recorded by the onsite met station and the EPA SPod (not described here). See Appendix B for additional details.
Page D67

-------
Progress on LDAR Innovation
Appendix D: Test Methods
Appendix D7: Simulated Leak Procedures
Page D63 to D68
Table D7-1. Example Planned Test Matrix for Test 3.1 (initial release rates increased to 30 sccm due to fin fan leak interference)

Test #  | Layout | Leak Location | Gas Type   | Leak Rate (sccm) | Test Time (min)
1       | A      | N/A           | Background | 0                | 120
2       | A      | L1            | ISO        | 10               | 30
3       | A      | L2            | ISO        | 10               | 30
4       | A      | L3            | ISO        | 10               | 30
5       | A      | L4            | ISO        | 10               | 30
6       | A      | L5            | ISO        | 10               | 30
7       | A      | L6            | ISO        | 10               | 30
8       | A      | L7            | ISO        | 10               | 30
9       | A      | L8            | ISO        | 10               | 30
10      | A      | L9            | ISO        | 10               | 30
11      | A      | L10           | ISO        | 10               | 30
12-16   | B      | Selected*     | ISO        | 10               | 30
17-18   | C      | Selected*     | ISO        | 10               | 60
19-20   | D      | Selected*     | ISO        | 10               | 60
21      | E      | Selected*     | ISO        | 10               | 60
22      | E      | N/A           | Background | 0                | Weekend
23-33   | E      | L1-L10        | ETH        | 20               | 60
34-35   | F      | Selected*     | ETH        | 20               | 60
36      | F      | Selected*     | ISO        | 10               | 60

* Leak locations adjusted to match new sensor locations
Page D68

-------
Progress on LDAR Innovation
Appendix E: EPA ORD Equivalency Simulations
Page E1 to E41
Appendix E
EPA ORD Equivalency Simulations
Appendix E-1

-------
Appendix E1
EPA ORD Simulation Approach
Progress on LDAR Innovation
Appendix E: EPA ORD Equivalency
Simulations
Appendix E1: EPA ORD Simulation
Approach
Page E2 to E28
Appendix E-2

-------
Appendix El
EPA ORD Simulation Approach
Appendix E1
EPA ORD Simulation Approach
Table of Contents
1	Introduction	5
2	Previous Guidance	6
3	LeakDAS Inventory	7
3.1	Inventory Characteristics	7
3.2	Methods	12
3.3	Leak Sample Characteristics	16
4	Monte Carlo Model	18
4.1	Methods	18
4.2	Model Comparison	22
4.3	Model Sensitivity	24
5	Simulation Results	25
6	References	28
Appendix E-3

-------
Appendix El
EPA ORD Simulation Approach
Figures
Figure 3-1: LeakDAS TVA Measurement Distribution	10
Figure 3-2: LeakDAS Data Processing Method	13
Tables
Table 3-1: Mid-Crude Unit LeakDAS Records Summary	8
Table 3-2: m-Xylene Unit LeakDAS Records Summary	9
Table 3-3: Mid-Crude Unit Component Records Summary	11
Table 3-4: m-Xylene Unit Component Records Summary	12
Table 3-5: Mid-Crude Unit Leak Summary	16
Table 3-6: m-Xylene Unit Leak Summary	17
Table 4-1: EPA ORD Model Input Variables	20
Table 4-2: EPA ORD and Molex Simulation Comparison, Median Emissions	22
Table 4-3: EPA ORD and Molex Simulation Comparison, Equivalency	23
Table 4-4: Model Sensitivity, emissions distribution statistics	24
Table 5-1: Simulation results	25
Table 5-2: Simulation results, alternate CWP scenario	25
Table 5-3: Simulation results, LDSN DTA	26
Table 5-4: Simulation results, detection time	27
Appendix E-4

-------
Appendix El
EPA ORD Simulation Approach
1 Introduction
To establish an alternate work practice (AWP), the proposed leak detection and repair (LDAR)
approach must demonstrate a reduction in regulated material emissions at least equivalent to the
reduction achieved under current federal requirements (40 CFR 65.8(a)). This Appendix outlines
an equivalency determination approach which compares LDSN/DRF emissions reductions to
those achieved under the current work practice (CWP), scheduled Method 21 (M21) LDAR.
Previous guidance provides a rubric for evaluating equipment leak control relative to the CWP,
but previous models were designed such that they cannot comprehensively evaluate the strengths
and weaknesses of the LDSN/DRF concept. A new Monte Carlo model, rooted in previous
approaches, was developed in order to better evaluate the long-term controls of LDSN/DRF and
reflect the continued use of M21 Toxic Vapor Analyzer (TVA) measurements. Additionally,
historical process unit-specific data was used to better reflect the specific conditions of the Flint
Hills Resources, Corpus Christi (FHR CC) pilot study. This site-specific equivalency approach
was deemed most suitable by the CRADA team for evaluation of the LDSN/DRF concept at the
current stage of development. Development of a less specific and more transferable equivalency
evaluation approach for LDSN/DRF is the subject of ongoing research.
This Appendix includes additional support for the equivalency analysis presented in Section 4 of
the report. Appendix E is structured as follows:
El EPA ORD Simulation Approach (written summary)
•	Section 2 discusses previous equivalency approaches developed for evaluating new Leak
Detection and Repair (LDAR) methods.
•	Section 3 describes the historical database used to characterize processing unit leaks, data
processing methods, and the simulation leak dataset characteristics.
•	Section 4 describes the Monte Carlo model approach and discusses model sensitivities.
•	Section 5 reviews emissions simulation results.
E2 EPA ORD Software Code (R code)
E3 LeakDAS Database (.csv tables)
Appendix E-5

-------
Appendix El
EPA ORD Simulation Approach
2 Previous Guidance
Two different equivalency approaches were reviewed when developing the pilot study
LDSN/DRF equivalency evaluation. The first is the 1999 EPA document Monte Carlo
Simulation Approach for Evaluating Alternative Work Practices for Equipment Leaks (referred
to as "1999 EPA Approach" in this Appendix) (U.S. EPA, 1999). This document presents a
Monte Carlo simulation tool available for petitioners to use when proposing AWPs to replace the
CWP (scheduled, manual M21 LDAR management for all components). The tool was designed
to compare M21 to other means of detection—distinguishing between the 'true mass emission
rate' and the emission rate derived from screening value (SV) correlations.
While the emphasis on mass emissions is important given the significant uncertainty associated
with M21, it was not as easily applicable to the proposed LDSN/DRF method without
introducing additional uncertainty in comparisons. In this case, both the CWP and proposed
alternative are dependent on M21 to characterize individual leaks. Since M21 is part of the
LDSN/DRF approach and part of the CWP, it serves as a stable point of comparison for site-
specific evaluation with existing historical data.
The difference in approach is focused on distribution and efficiency of M21-based LDAR
resources rather than elimination of M21 entirely. Given this difference, true mass emissions
were not considered for the approach presented in this report, and M21 measurement uncertainty
was not included. This was a simplifying assumption, but it was deemed a reasonable
simplification because both the CWP and LDSN/DRF would have the same level of uncertainty
in individual M21 leak measurements. The focus on M21 uncertainty instead of leak duration
limited the suitability of the 1999 EPA Approach for this analysis. However, the document
provides important initial guidance. The 1999 EPA Approach establishes a Monte Carlo
modeling framework which creates simulated facility leak profiles, models different leak
detection methods, and applies a useful equivalency benchmark wherein at least 67% of
simulations must demonstrate emissions equivalency for an AWP to be defined as equivalent.
While the 1999 EPA Approach created a helpful framework, later work provides adjustments
more applicable for the LDSN/DRF method.
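As a minimal sketch of how the 67% benchmark can be applied (assuming paired simulation runs and cumulative emissions as the comparison metric; this is not the 1999 EPA tool or the Appendix E2 code), consider:

# Minimal sketch (not the 1999 EPA tool or the Appendix E2 code): 'cwp' and 'awp'
# are hypothetical vectors of simulated cumulative emissions, one element per
# paired Monte Carlo run. The AWP is judged equivalent when at least 67% of runs
# show AWP emissions at or below CWP emissions.
meets_equivalency <- function(cwp, awp, benchmark = 0.67) {
  stopifnot(length(cwp) == length(awp))
  frac <- mean(awp <= cwp)          # fraction of runs demonstrating equivalency
  list(fraction_equivalent = frac, equivalent = frac >= benchmark)
}

# Example with made-up emissions from 1,000 runs:
# set.seed(1); meets_equivalency(rlnorm(1000, 5, 1), rlnorm(1000, 4.9, 1))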
The 2000 document Monte Carlo Simulation Evaluation of Gas Imaging Technology (referred to
in this report as "OGI Evaluation") built upon the equivalency determination method presented
in the 1999 EPA Approach using an updated version of the original simulation code. Similar to
the original model, the OGI Evaluation considered M21 uncertainties relative to a detection
method dependent on 'true mass emissions.' Unlike LDSN, the proposed OGI AWP still
depended on scheduled monitoring, but the method did evaluate the impact of different
monitoring intervals to allow for a higher detection threshold. The OGI Evaluation also updated
the equivalency determination to compare cumulative emissions instead of cumulative emission
rates. It also ignored emissions from components which would not be subject to repair under
either work practice by assuming equivalent control levels would be achieved—simplifying
modeling. While the introduction of emissions and varying leak durations was very important for
evaluating LDSN/DRF, the OGI Evaluation was limited to estimating quarterly emissions. The
Appendix E-6

-------
Appendix El
EPA ORD Simulation Approach
approach did not seek to demonstrate long-term equivalency, and LDSN/DRF modeling
suggested contributions from smaller leaks are not always insignificant if the number of
unrepaired leaks increases with time. Ultimately, incorporating leak emissions was an important
factor in developing an LDSN/DRF equivalency analysis.
While each of the previous approaches provided some guidance in equivalency determination
methods, neither was thought to be the single best approach for evaluating the LDSN/DRF
concept. For example, the FHR CC pilot study process units include a variety of component
categories in their LDAR programs which are often subject to different monitoring intervals.
Both the 1999 and 2000 approaches were limited to modeling valves subject to quarterly
monitoring. Moreover, an LDSN's ability to detect leaks is inherently based on the sensor
locations chosen for a specific system. Neither previous analysis created space for more specific
evaluations, instead focusing on the most broadly applicable leak detection simulations. The
equivalency determination approach presented in this appendix seeks to address the limitations
of the previous analyses and provide a method which considers the particulars of the LDSN/DRF
concept.
3 LeakDAS Inventory
The LeakDAS inventory is a database system which FHR uses as a tracking tool for all LDAR
activities which occur at FHR, CC West. Records from 2013-2019 for the two pilot study
process units, Mid-Crude and m-Xylene, were used to characterize leaks subject to LDAR
management. Multiple years were chosen to provide sufficient data for a long-term multiyear
analysis, and the most recent available data was used in order to most closely reflect the current
process units. Both TVA measurement and AVO detection records were reviewed because both
must be considered in order to effectively compare the CWP and LDSN/DRF concept.
The inventory of TVA measurements was processed to produce a final sample of process unit
leak SVs more similar to the 1993 OAG multi-unit dataset used in the 1999 EPA Approach and
OGI Evaluation [American Petroleum Institute, 1995; Epperson, 2000]. This section outlines the
dataset characteristics and processing methods used.
3.1 Inventory Characteristics
The LeakDAS inventory for the FHR CC Mid-Crude and m-Xylene units includes 148,692 and
106,275 relevant LDAR records, respectively, between the years of 2013-2019. Initial
investigations of the dataset were limited to the years 2014-2018 in order to create a 5-year leak
profile for each unit. However, data from adjacent years was ultimately incorporated in order to
better characterize leak events which occurred over the 5-year period. A summary of Mid-Crude
LeakDAS measurements is included in Table 3-1, and a summary for m-Xylene measurements is
included in Table 3-2.
In both units, there were very few AVO-Visual detections, and the majority of TVA
measurements were very small (< 10 ppmv). For comparison, the leak definition for a component
Appendix E-7

-------
Appendix E1
EPA ORD Simulation Approach
is, at minimum, defined as 500 ppmv. Measurements over a component threshold are recorded as
"Failed" and comprised 1.24% and 1.55% of TVA measurements in Mid-Crude and m-Xylene,
respectively. Leaks are also slightly overrepresented in the LeakDAS database relative to their
physical presence because a component with an active leak may be measured several times prior
to repair (accounted for in the analysis). In contrast, a component without an active leak is
unlikely to be measured more than once in a monitoring interval. Some components, for example
in heavy liquid service or under insulation, are not routinely monitored and leaks may only be
detected through AVO. The low proportion of "failed" TVA measurements in comparison to the
vast number of measurements executed illustrates the significant inefficiencies associated with
the CWP. While the comparatively small population of active leaks in a process unit during a
given period may be found effectively under the CWP, the majority of scheduled-M21 LDAR
efforts are focused on non-leaking components.
Table 3-1: Mid-Crude Unit LeakDAS Records Summary

Database Records Summary, Unit: Mid-Crude
Component Class          | Valve  | Pump  | Drain | Relief Valve | Connector | Compressor | Piping | Total
Total Measurements       | 66,261 | 3,319 | 5,656 | 2,462        | 70,844    | 149        | 1      | 148,692
Failed Measurements      | 428    | 204   | 27    | 4            | 1,156     | 27         | 1      | 1,847
AVO-Visual Records       | 30     | 95    | 0     | 0            | 56        | 13         | 1      | 195
Calendar Year
2013                     | 10,434 | 493   | 879   | 293          | 1,775     | 12         | 0      | 13,886
2014                     | 11,297 | 574   | 838   | 363          | 2,840     | 22         | 0      | 15,934
2015                     | 9,283  | 446   | 1,099 | 333          | 5,968     | 21         | 0      | 17,150
2016                     | 6,368  | 375   | 731   | 303          | 13,679    | 21         | 0      | 21,477
2017                     | 12,381 | 437   | 721   | 344          | 16,723    | 20         | 1      | 30,627
2018                     | 8,387  | 517   | 696   | 414          | 15,097    | 30         | 0      | 25,141
2019                     | 8,111  | 477   | 692   | 412          | 14,762    | 23         | 0      | 24,477
TVA Measurements
0 ppmv                   | 2.5%   | 5.3%  | 9.5%  | 14%          | 3.4%      | 3.7%       |        | 3.5%
1 - <10 ppmv             | 87%    | 64%   | 76%   | 82%          | 84%       | 31%        |        | 85%
10 - <100 ppmv           | 8.8%   | 21%   | 12%   | 4.1%         | 9.3%      | 32%        |        | 9.4%
100 - <1,000 ppmv        | 1.3%   | 6.0%  | 1.7%  | 0.1%         | 1.9%      | 21%        |        | 1.7%
1,000 - <10,000 ppmv     | 0.3%   | 2.6%  | 0.2%  | 0.08%        | 0.9%      | 10%        |        | 0.7%
10,000 - <100,000 ppmv   | 0.02%  | 0.6%  | 0%    | 0%           | 0.1%      | 0.7%       |        | 0.08%
100,000 ppmv             | 0.01%  | 0.4%  | 0%    | 0%           | 0.08%     | 1.5%       |        | 0.05%
Appendix E-8

-------
Appendix E1
EPA ORD Simulation Approach
Table 3-2: m-Xylene Unit LeakDAS Records Summary

Database Records Summary, Unit: m-Xylene
Component Class          | Valve  | Pump  | Drain | Relief Valve | Connector | Compressor | Piping | Total
Total Measurements       | 67,388 | 2,143 | 925   | 807          | 35,011    | 0          | 1      | 106,275
Failed Measurements      | 774    | 140   | 3     | 3            | 727       |            | 1      | 1,648
AVO-Visual Records       | 6      | 22    | 0     | 0            | 24        |            | 1      | 53
Calendar Year
2013                     | 9,398  | 303   | 129   | 111          | 5,088     |            | 0      | 15,029
2014                     | 9,891  | 319   | 126   | 136          | 5,431     |            | 0      | 15,903
2015                     | 9,372  | 331   | 143   | 115          | 5,225     |            | 0      | 15,186
2016                     | 10,556 | 315   | 136   | 112          | 6,911     |            | 1      | 18,031
2017                     | 9,258  | 310   | 129   | 112          | 4,047     |            | 0      | 13,856
2018                     | 9,162  | 288   | 129   | 113          | 4,164     |            | 0      | 13,856
2019                     | 9,751  | 277   | 133   | 108          | 4,145     |            | 0      | 14,414
TVA Measurements
0 ppmv                   | 3.2%   | 1.3%  | 3.9%  | 6.4%         | 3.0%      |            |        | 3.2%
1 - <10 ppmv             | 68%    | 40%   | 78%   | 82%          | 76%       |            |        | 70%
10 - <100 ppmv           | 25%    | 35%   | 16%   | 11%          | 17.5%     |            |        | 22%
100 - <1,000 ppmv        | 3.7%   | 18%   | 2.2%  | 0.4%         | 2.4%      |            |        | 3.5%
1,000 - <10,000 ppmv     | 0.4%   | 4.6%  | 0.2%  | 0.1%         | 1.1%      |            |        | 0.7%
10,000 - <100,000 ppmv   | 0.03%  | 1.5%  | 0%    | 0%           | 0%        |            |        | 0.1%
100,000 ppmv             | 0.01%  | 0%    | 0%    | 0%           | 0%        |            |        | 0.04%
Appendix E-9

-------
Appendix El
EPA ORD Simulation Approach
Figure 3-1: LeakDAS TVA Measurement Distribution. (a) All TVA Measurements; (b) Leak TVA Measurements (fraction of database records by SV bin, m-Xylene and Mid-Crude).

-------
Appendix E1
EPA ORD Simulation Approach
The significant variation in connector monitoring was a limitation in the LeakDAS dataset. In
order to reflect the alteration in monitoring practices, the Mid-Crude leak sample was limited to
2016-2018 while the m-Xylene dataset reflects leaks from 2014-2018. See Section 2.2 for
additional discussion of leak dataset processing method development.
Table 3-3: Mid-Crude Unit Component Records Summary

Component Records Summary, Unit: Mid-Crude
Component Class        | Valve | Pump | Drain | Relief Valve | Connector | Compressor | Piping | Total
Total Components       | 9,325 | 145  | 220   | 120          | 17,553    | 4          | 1      | 27,368
Leaking Components     | 149   | 52   | 13    | 2            | 619       | 4          | 1      | 840
AVO Tags               | 31    | 84   | 0     | 1            | 80        | 0          | 1      | 197
Fuel Gas Components    | 443   | 0    | 0     | 0            | 1,540     | 0          | 0      | 1,983
Chemical State
GV                     | 2,881 | 0    | 0     | 114          | 6,076     | 4          | 0      | 9,075
HL                     | 31    | 84   | 0     | 1            | 38        | 0          | 0      | 154
LL                     | 6,413 | 61   | 220   | 5            | 11,439    | 0          | 1      | 18,139
Components Added
2013                   | 14    | 17   | 0     | 1            | 19        | 0          | 0      | 51
2014                   | 103   | 13   | 0     | 2            | 958       | 0          | 0      | 1,076
2015                   | 1,101 | 9    | 1     | 0            | 10,797    | 0          | 0      | 11,908
2016                   | 2     | 11   | 0     | 0            | 10        | 0          | 0      | 23
2017                   | 3,050 | 36   | 0     | 55           | 4,513     | 0          | 1      | 7,655
2018                   | 7     | 14   | 0     | 0            | 10        | 0          | 0      | 31
2019                   | 7     | 10   | 0     | 0            | 55        | 0          | 0      | 72
Total                  | 4,284 | 110  | 1     | 58           | 16,362    | 0          | 1      | 20,816
Components Removed
2013                   | 21    | 11   | 1     | 1            | 20        | 0          | 0      | 54
2014                   | 43    | 18   | 34    | 0            | 30        | 0          | 0      | 125
2015                   | 265   | 20   | 5     | 0            | 196       | 0          | 0      | 486
2016                   | 20    | 10   | 0     | 0            | 74        | 0          | 0      | 104
2017                   | 907   | 24   | 7     | 27           | 2,841     | 0          | 1      | 3,807
2018                   | 119   | 14   | 0     | 0            | 215       | 0          | 0      | 348
2019                   | 1     | 7    | 0     | 0            | 7         | 0          | 0      | 15
Total                  | 1,376 | 104  | 47    | 28           | 3,383     | 0          | 1      | 4,939
Appendix E-11

-------
Appendix El
EPA ORD Simulation Approach
Table 3-4: m-Xylene Unit Component Records Summary

Component Records Summary, Unit: m-Xylene
Component Class        | Valve | Pump | Drain | Relief Valve | Connector | Compressor | Piping | Total
Total Components       | 2,764 | 25   | 32    | 29           | 6,631     | 0          | 1      | 9,482
Leaking Components     | 187   | 14   | 3     | 1            | 339       |            | 1      | 545
AVO Tags               | 0     | 0    | 0     | 0            | 26        |            | 1      | 27
Fuel Gas Components    | 76    | 0    | 0     | 0            | 0         |            | 0      | 76
Chemical State
GV                     | 636   | 0    | 0     | 26           | 1,018     |            | 0      | 1,680
HL                     | 0     | 0    | 0     | 0            | 2         |            | 0      | 2
LL                     | 2,128 | 25   | 32    | 3            | 5,611     |            | 1      | 7,800
Components Added
2013                   | 1     | 0    | 0     | 0            | 18        |            | 0      | 19
2014                   | 41    | 0    | 0     | 1            | 99        |            | 0      | 141
2015                   | 10    | 0    | 0     | 0            | 59        |            | 0      | 69
2016                   | 302   | 0    | 0     | 0            | 1,661     |            | 1      | 1,964
2017                   | 3     | 0    | 1     | 0            | 10        |            | 0      | 14
2018                   | 31    | 1    | 0     | 2            | 42        |            | 0      | 76
2019                   | 49    | 0    | 0     | 0            | 99        |            | 0      | 148
Total                  | 437   | 1    | 1     | 3            | 1,988     |            | 1      | 2,431
Components Removed
2013                   | 1     | 0    | 0     | 0            | 3         |            | 0      | 4
2014                   | 16    | 0    | 0     | 0            | 26        |            | 0      | 42
2015                   | 0     | 0    | 0     | 0            | 42        |            | 0      | 42
2016                   | 35    | 0    | 0     | 0            | 727       |            | 1      | 763
2017                   | 429   | 0    | 0     | 0            | 2,251     |            | 0      | 2,680
2018                   | 18    | 1    | 0     | 3            | 39        |            | 0      | 61
2019                   | 3     | 0    | 0     | 0            | 24        |            | 0      | 27
Total                  | 502   | 1    | 0     | 3            | 3,112     |            | 1      | 3,619
3.2 Methods
In order to produce lists of SVs for leaks which occurred between 2014-2018 and 2016-2018 for
the m-Xylene and Mid-Crude units, respectively, the historical LeakDAS data needed to be
evaluated chronologically on a component-by-component basis. An algorithm was developed for
reviewing each component's measurement history and identifying individual leaks. This
algorithm is outlined in Figure 3-2 and the R code used to process the measurement data is
included in Appendix E2 (referred to as Identify Leaks). Both the input files and the output leak
sample datasets are included in Appendix E3.
Appendix E-12

-------
Appendix El
EPA ORD Simulation Approach
(Figure content: flowchart of the LeakDAS data processing method. Data inputs, including component locations estimated by Molex, LeakDAS SV measurements, component addition and retirement records, and the leak components list, feed the Identify Leaks step (estimate start/end dates, remove leaks inactive in the modeling period, remove fuel gas components) and the Compile Leak Data step, which splits results into non-AVO and AVO leak lists. Example component measurement histories illustrate how runs of "Fail" records are bounded by "Pass" records.)
Figure 3-2: LeakDAS Data Processing Method
Appendix E-13

-------
Appendix El
EPA ORD Simulation Approach
Seven input files, three for each process unit and one additional, are used to analyze unit leak
histories. These files are available in Appendix E3 and include:
1.	LeakDAS LDAR Records provided by FHR. A condensed version which provides all
data relevant to the Identify Leaks is included in this report.
2.	LeakDAS Deletion Records provided by FHR. A condensed version which provides
data relevant to the Identify Leaks, limited to data from 2013-2019, is included in this
report.
3.	Component Location and Cluster Designations developed by Molex using site maps
and LeakDAS records provided by FHR. A summary file is included in Appendix E3.
See Appendix F for additional information.
4.	Emission Factor Correlation Equation Constants based on 1995 EPA Guidance and
FHR inventory practices [U.S. EPA, 1995].
The LDAR records are used to create a list of unique components with discovered leaks during
2013-2019. Non-leaking components are not evaluated because emissions from components with
SV less than 500 are assumed to have equivalent levels of control under both the CWP and
LDSN/DRF. The fields 'ComponentTag', 'Pass Fail', and 'LocationDescription' are used to
develop the list of leaking components. 'LocationDescription' is necessary to distinguish
between AVO leaks without unique component tags (labeled as only "AVO").
Identify Leaks reviews the LeakDAS history for each of the identified components sequentially.
The algorithm recognizes a series of "Failed" measurements for a given component as a single
leak and records the first, maximum, and average SV for the leak. The code also includes a
process for bounding the leak duration. In cases where the set of "Failed" SVs is preceded and
followed by "Passed" measurements, those inspection dates are recorded (ex. Tag 1 and Tag 3 in
Figure 3-2). If this condition is not fulfilled for either boundary, the "DateAdded" and any
deletion records are considered. In these cases, those component specific dates are used unless
they are beyond the bounds of the dataset time period (2013-2019). If no other suitable dates are
identified, the dates of 1/1/2013 and 1/1/2019 are used. This condition primarily affects leaks
outside of the simulation periods (2014-2018 and 2016-2018) for which a preceding
measurement may have been in 2012 or the subsequent measurement had not yet occurred.
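A minimal R sketch of the leak identification step described above is shown below. It assumes a single component's chronologically sorted measurement history with Date, SV, and Status columns; it is illustrative only and is not the Appendix E2 Identify Leaks code.

# Illustrative sketch only (not the Appendix E2 Identify Leaks code): collapse a
# single component's chronologically sorted measurement history into leak events.
# 'history' is assumed to be a data.frame with Date, SV, and Status columns,
# where Status is "Pass" or "Fail". Each unbroken run of "Fail" records is
# treated as one leak, keeping its first, maximum, and mean SV.
identify_component_leaks <- function(history) {
  runs   <- rle(history$Status == "Fail")
  ends   <- cumsum(runs$lengths)
  starts <- ends - runs$lengths + 1
  fails  <- which(runs$values)
  do.call(rbind, lapply(fails, function(i) {
    sv <- history$SV[starts[i]:ends[i]]
    data.frame(first_sv   = sv[1],
               max_sv     = max(sv, na.rm = TRUE),
               mean_sv    = mean(sv, na.rm = TRUE),
               first_fail = history$Date[starts[i]],
               last_fail  = history$Date[ends[i]])
  }))
}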
The leak bounds outlined above are used to estimate start and end dates for a given leak. This is
necessary to create a leak dataset representative of active leaks during the simulation period for a
given process unit. The start date is estimated to be halfway between the initial leak bound and
the first "Failed" record. This method was chosen based on the assumption that the occurrence of
leaks is equally likely at any given point in time. It is also consistent with Texas environmental
guidance for emissions estimation which allow for using half the interval between measurements
assuming a constant emission rate [Texas Commission on Environmental Quality, 2020]. Repair
dates are conservatively assumed to be the end time determined above. This was decided because
often a first "Passed" measurement for a component is conducted immediately after a successful
repair attempt. Leaks with start dates after 1/1/19 or repair dates prior to 1/1/14 or 1/1/16 for m-
Xylene and Mid-Crude, respectively, are included in the all.R files but are excluded from the
Monte Carlo leak datasets. An exception is made for components with leaks found in 2019 with
Appendix E-14

-------
Appendix El
EPA ORD Simulation Approach
preceding "Passed" measurements in 2018 because a change in the connector monitoring month
in 2019 resulted in more connectors having estimated leak start dates in January 2019 rather than
in late 2018, as had been the case for previous years. See Figure 3-2 for several examples of
different cases and Appendix E2 for the processing code.
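The start and end date estimation described above can be sketched as follows (illustrative only, with hypothetical dates; the Appendix E2 code handles the additional bounding cases discussed above):

# Illustrative sketch only (hypothetical dates): the leak start date is taken
# halfway between the preceding bound (e.g., the last "Passed" measurement) and
# the first "Failed" measurement, and the repair date is conservatively taken as
# the end bound (e.g., the first subsequent "Passed" measurement).
estimate_leak_dates <- function(prior_bound, first_fail, end_bound) {
  start <- prior_bound + (first_fail - prior_bound) / 2
  data.frame(start_date    = start,
             end_date      = end_bound,
             duration_days = as.numeric(end_bound - start))
}

# Example:
# estimate_leak_dates(as.Date("2016-05-20"), as.Date("2017-05-13"), as.Date("2017-05-17"))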
A constant emission rate was calculated for each leak using the SOCMI and refinery correlation
equations, for the m-Xylene and Mid-Crude units respectively, consistent with the 1995 EPA
Emissions Guidance and FHR Inventory practices [U.S. EPA, 1995]. Correlation equation
selection was based on component category and the chemical state designation, as outlined in the
1995 Guidance. In cases where a designation was not clear, FHR correlation designations were
used. See the emission factors input file, provided in Appendix E3, for specific designations.
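A minimal sketch of the emission rate calculation is shown below. The 1995 EPA correlation equations take a power-law form, ER (kg/hr) = A x SV^B; the file name and column names used here are placeholders for the emission factors input file in Appendix E3, not its actual layout.

# Illustrative sketch only: the 1995 EPA correlation equations take the form
# ER (kg/hr) = A * SV^B, with A and B depending on the correlation set (SOCMI or
# refinery), component category, and chemical state. The file name and column
# names below are placeholders, not the actual layout of the Appendix E3 input file.
leak_emission_rate <- function(sv_ppmv, category, chem_state,
                               ef_file = "emission_factor_constants.csv") {
  ef  <- read.csv(ef_file, stringsAsFactors = FALSE)
  row <- ef[ef$Category == category & ef$ChemicalState == chem_state, ][1, ]
  row$A * sv_ppmv ^ row$B      # constant emission rate (kg/hr) assumed for the leak
}

# Example (placeholder arguments):
# leak_emission_rate(7540, "VALVE", "LL")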
Identify Leaks includes four additional filters applied at various points within the processing
algorithm. Components in Mid-Crude labeled "Pre Flash" in the 'Drawing' field are removed
because they are located in an area outside the process unit that is not covered by the
current LDSN system. Components in the fuel gas area are removed based on location
descriptors if the 'ChemicalState' field is "GV" (gas/vapor). This filter removes leaks from
components whose emissions are assumed to be primarily natural gas, because the current
sensors cannot effectively detect methane, the primary constituent. Under a new LDAR program,
these components would still be subject to scheduled M21 despite broader LDSN/DRF
implementation; for an LDSN implementation using the current sensor (a 10.6 eV PID that does
not respond to methane), continued comprehensive M21 monitoring of fuel gas components under
the CWP is specified in the facility's fugitive management plan. Components for which
no location could be determined are also removed, eliminating 14 Mid-Crude and 31 m-Xylene
leaks (approximately 5% of each unit's final leak list). AVO leaks are not removed but are instead
directed to a separate file for independent analysis. AVO leaks are not included in this specific
Monte Carlo analysis due to the lack of reliable initial SV measurements and the added
uncertainty in estimating leak start times.
The final Identify Leaks code is available in Appendix E2 and output files are included in
Appendix E3.
3.3 Leak Sample Characteristics
In total, the processing method identified 1,115 and 904 unique leaks between 2013-2019 and
produced a final list of 285 and 568 leaks for simulation for Mid-Crude and m-Xylene,
respectively. A summary of the leak datasets is included in Table 3-5, Mid-Crude, and Table 3-6,
m-Xylene. The m-Xylene unit demonstrated a relatively similar leak rate to Mid-Crude, despite
the latter having almost three times the number of tagged components. Ultimately, the Mid-
Crude unit simulation sample set is significantly smaller due to both the shortened 3-year
simulation period and the much larger population of fuel gas components (excluded from the
analysis). Both units demonstrated skewed SV distributions. However, due to the lower LDSN
DTA, 25% of leaks within the m-Xylene dataset are greater than the DTA, 4,000 ppmv, while
only 8% are above the DTA for Mid-Crude, 12,500 ppmv. The units demonstrate similar
proportions of pegged SVs (100,000 ppmv), but it is notable that m-Xylene pegged values are
almost entirely connector leaks while Mid-Crude had pegged leaks from a variety of component
categories over a shorter time duration. Pump leaks, the category with the shortest CWP
monitoring interval, comprised about 10% of both datasets while connector leaks, the category
with the largest interval, comprised more than half of all leaks. Both Monte Carlo input files are
provided in Appendix E3.
Table 3-5: Mid-Crude Unit Leak Summary

Component Class | Valve | Pump | Drain | Relief Valve | Connector | Compressor | Piping | Total
Total Leaks | 182 | 117 | 25 | 3 | 768 | 19 | 1 | 1,115
AVO Leaks* | 17 | 51 | 0 | 0 | 56 | 6 | 1 | 131
Fuel Gas Leaks | 23 | 0 | 0 | 0 | 539 | 0 | 0 | 562
Simulated Leaks: Count | 79 | 32 | 13 | 3 | 153 | 5 | - | 285
Simulated Leaks: Median Duration [days] | 183 | 20 | 47 | 53 | 109 | 49 | - | 100
Simulated Leaks: Median Repair Time [days] | 1.9 | 6.2 | 1.1 | 4.2 | 2.0 | 3.1 | - | 2.0
Calendar Year**: 2016 | 15 | 3 | 9 | 1 | 60 | 0 | - | 88
Calendar Year**: 2017 | 34 | 10 | 3 | 1 | 52 | 2 | - | 102
Calendar Year**: 2018 | 22 | 17 | 1 | 1 | 26 | 3 | - | 70
Calendar Year**: 2019 | 8 | 2 | 0 | 0 | 15 | 0 | - | 25
First Failed TVA Measurement: 500 - <6,250 ppmv | 90% | 75% | 100% | 100% | 82% | 80% | - | 84%
First Failed TVA Measurement: 6,250 - <12,500 ppmv | 5.1% | 16% | 0% | 0% | 9.2% | 0% | - | 8%
First Failed TVA Measurement: 12,500 - <25,000 ppmv | 1.3% | 3.1% | 0% | 0% | 1.3% | 0% | - | 1%
First Failed TVA Measurement: 25,000 - <100,000 ppmv | 0% | 3.1% | 0% | 0% | 3.3% | 0% | - | 2%
First Failed TVA Measurement: 100,000 ppmv | 3.8% | 3.1% | 0% | 0% | 4.6% | 20% | - | 4%

* Includes AVOs from 2013-2015, when connectors were not subject to scheduled monitoring
** Based on the first failed detection record for the final simulation leak set
Table 3-6: m-Xylene Unit Leak Summary

Component Class | Valve | Pump | Drain | Relief Valve | Connector | Compressor | Piping | Total
Total Leaks | 374 | 85 | 3 | 1 | 440 | 0 | 1 | 904
AVO Leaks | 1 | 11 | 0 | 0 | 35 | - | 1 | 48
Fuel Gas Leaks | 17 | 0 | 0 | 0 | 0 | - | 0 | 17
Simulated Leaks: Count | 198 | 56 | 1 | 1 | 314 | - | - | 588
Simulated Leaks: Median Duration [days] | 45 | 21 | 49 | 68 | 182 | - | - | 54
Simulated Leaks: Median Repair Time [days] | 19 | 4.1 | 1.1 | 13 | 3.8 | - | - | 3.0
Calendar Year*: 2014 | 45 | 12 | 0 | 0 | 46 | - | - | 103
Calendar Year*: 2015 | 32 | 12 | 0 | 1 | 69 | - | - | 114
Calendar Year*: 2016 | 35 | 11 | 1 | 0 | 81 | - | - | 128
Calendar Year*: 2017 | 32 | 8 | 0 | 0 | 36 | - | - | 76
Calendar Year*: 2018 | 41 | 11 | 0 | 0 | 43 | - | - | 95
Calendar Year*: 2019 | 11 | 2 | 0 | 0 | 39 | - | - | 52
First Failed TVA Measurement: 500 - <2,000 ppmv | 86% | 43% | 100% | 100% | 54% | - | - | 64%
First Failed TVA Measurement: 2,000 - <4,000 ppmv | 8.6% | 18% | 0% | 0% | 13% | - | - | 11%
First Failed TVA Measurement: 4,000 - <8,000 ppmv | 4.1% | 20% | 0% | 0% | 14% | - | - | 11%
First Failed TVA Measurement: 8,000 - <100,000 ppmv | 3.1% | 20% | 0% | 0% | 13% | - | - | 10%
First Failed TVA Measurement: 100,000 ppmv | 0.51% | 0% | 0% | 0% | 5.4% | - | - | 3.2%

* Based on the first failed detection record for the final simulation leak set
4 Monte Carlo Model
The Monte Carlo model developed for this equivalency analysis was designed to satisfy several
different needs. The simulation aims to approximate previous methods presented in the 1999
EPA Approach and the OGI Evaluation while addressing the limitations of those methods and
adjusting the analysis to better evaluate the specific strengths and weaknesses of the LDSN/DRF
concept [U.S. EPA, 1999; Epperson, 2000]. As noted above, this analysis was constrained to a
process unit specific evaluation due to the current stage of LDSN/DRF development. Future
analyses may be able to develop broader equivalency evaluations as continued research
establishes a more thorough understanding of the level of control achievable under different
LDSN/DRF embodiments.
This section discusses the equivalency evaluation approach developed for this report and
presents some emissions comparisons to verify agreement between the EPA ORD and Molex
equivalency determinations.
4.1 Methods
For the purposes of this report, a process unit specific Monte Carlo model was developed for the
two CRADA pilot study units. This model simulates process unit leak profiles for 3-year (Mid-
Crude) and 5-year (m-Xylene) simulation periods using historical data—see Section 3 for a
discussion of leak list development. Four different control regimes were included to compare the
CWP control performance to three different LDSN/DRF control scenarios. Multiple leak control
scenarios were developed for LDSN/DRF because the exact level of control is not fully
understood at the time of this report. The three LDSN/DRF emissions control scenarios are as
follows:
1. Detection Threshold Average (DTA)
This scenario assumes individual leaks for which
SV > DTA_F    [Equation 1]

where DTA_F is the LDSN DTA for process unit F, are detected and repaired. It assumes all leaks
for which SV < DTA_F are not repaired. This is the most conservative and simplistic control
scenario.
2. Detection Threshold (DT)
This scenario assumes individual leaks for which
SV / D² > DTA_F / 1,250    [Equation 2]

where D is the distance from the leaking component to the nearest sensor, are detected and
repaired. It assumes all leaks for which Equation 2 is not true are not repaired. This scenario is
considered more realistic due to its incorporation of distance effects. However, the heuristic used
here should be considered a useful tool rather than a definitive characterization of distance
effects on LDSN detection capabilities. See Appendix F2 for a review of the development of this
scenario approach; a short R sketch of this test, together with the Equation 3 cluster test, follows
the description of scenario 3.
3. Detection Threshold - Cluster (DTC)
This scenario assumes a cluster of leaks triggers an LDSN alert when

max_i [ Σ_a ( SV_a / D_a,i² ) ] ≥ DTA_F / 1,250    [Equation 3]

where the individual DT terms (SV_a / D_a,i²) for the leaks a in a localized cluster are summed for
each sensor i, and the maximum of the per-sensor sums is used to determine whether an LDSN alert occurs.
heuristic should also be considered a useful tool for characterizing a physical phenomenon
observed during the pilot study, rather than a definition (discussed further in Appendix F2).
Unlike the other 2 scenarios, the DTC scenario also incorporates DRF conditions. A randomized
search method within a cluster is used such that the model simulates finding smaller leaks in the
process of searching for larger leaks which trigger LDSN alerts. These smaller leaks are repaired
based on defined DRF conditions. Results presented in this report simulated the following
condition:
•	When an LDSN alert occurs, an investigator must continue searching until either three
leaks with SV < 3,000 ppmv or one leak with SV > 3,000 ppmv is found. Any leaks
found with SV > 500 ppmv are repaired, and the investigation is then closed.
The model assumes that this process will continue to repeat after each round of repairs is
completed until the condition outlined in Equation 3 is no longer true.
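As a compact illustration of the two detection heuristics, the R sketch below applies the
Equation 2 single-leak test and the Equation 3 per-sensor cluster sum; the screening values,
distances, and DTA shown are invented, not pilot study data.

# Equation 2: single leak detected when SV / D^2 exceeds DTA / 1,250
dt_detected <- function(sv, d, dta) {
  (sv / d^2) > (dta / 1250)
}

# Equation 3: per-sensor sums of SV / D^2 over the active leaks in a cluster;
# an alert is assumed when the largest per-sensor sum reaches DTA / 1,250
cluster_alert <- function(sv, dist, dta) {
  max(colSums(sv / dist^2)) >= dta / 1250
}

dt_detected(sv = 20000, d = 30, dta = 12500)          # 22.2 vs 10 -> TRUE
dt_detected(sv = 5000,  d = 30, dta = 12500)          #  5.6 vs 10 -> FALSE

sv   <- c(3000, 2500, 1500)                           # three active leaks in one cluster
dist <- matrix(c(20, 45, 25, 50, 30, 40), nrow = 3)   # distances [ft] to two sensors
cluster_alert(sv, dist, dta = 12500)                  # 11.1 vs 10 -> TRUE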
In order to run the model, initial variables must be defined. Table 4-1 lists the input variables,
acceptable values where applicable, and default values chosen for this analysis. While different
numbers of model iterations were run, the analysis presented in Section 4 of the primary report is
focused on 10,000 iteration simulations. The repair time was conservatively assumed to be 7
days because regulations require that non-DOR leaks be repaired within 15 days [40 CFR
63.168(f)(1); 40 CFR 60.482-7(d)(l)]. In practice, the majority of leaks are repaired within 5
days, and the historical leaks included in the simulation datasets had a median repair time of 2
and 3 days for Mid-Crude and m-Xylene, respectively (see Table 3-5 & Table 3-6). The model
reads in an input file, created using the Identify Leaks code discussed in Section 2, based on
variables 'FACIL' and 'SV metric' and a list of FHR CC sensor IDs and coordinates which are
filtered based on 'FACIL.'
Table 4-1: EPA ORD Model Input Variables

Variable Name | Acceptable Values | Default | Description
SV_metric | "first", "max", "ave" | "first" | Screening value used to characterize each leak. The first leak SV was assumed to best characterize the leak prior to detection and before repair attempts.
FACIL | "META", "MIDCRUDE" | - | FHR CC process unit
LEAK_DEF | 500-100,000 | 4,000 (m-Xylene); 12,500 (Mid-Crude) | LDSN Detection Threshold Average (DTA) [ppmv]
s | >0 | 500 (m-Xylene); 250 (Mid-Crude) | Number of leaks sampled in each model iteration. Default values were ~12% below the total list length to increase the variation of leak profiles tested.
n | >0 | -- | Number of model iterations
t_detect | >0 | 3 | LDSN detection time [days]
t_repair | >0 | 7 | Repair time [days]; applied equally to CWP and LDSN/DRF
SV_DRF* | 500 - LEAK_DEF | 3,000 | DRF threshold for leak investigations [ppmv]
x_lk* | >0 | 3 | DRF condition: number of leaks found below SV_DRF before closing an investigation

* These variables are based on DRF conditions and are only used for the DTC scenario
The model is iterative and follows a 5-step process for each of the n iterations. The steps are as
follows:
Step 1: Simulate leak screening values
A process unit leak profile is created by randomly sampling, with replacement, 's' leaks
from the input file leak list.
Step 2: Randomly generate leak start time and location
Leak start times are generated using a uniform variate to create a list of randomly
distributed times between the simulation start, 1/1/2014 or 1/1/2016 depending on the
process unit, and the end date, 1/1/2019. Each leak location is simulated by separately
generating x and y coordinates within each leak's cluster box range. The x and y
coordinates are each created using a uniform variate between 0 and 1 which is used to
generate a number between the minimum and maximum box values ('xmin' - 'xmax' and
'ymin' - 'ymax'). Note that both duplicate leaks and unique leaks with the same
component tag are each treated as if they are from separate components within the same
cluster. This simplifying assumption (no repeat leakers) is consistent with the 1999 EPA
Approach and the OGI Evaluation. A condensed R sketch of Steps 2, 4, and 5 is provided at
the end of this subsection.
Step 3: Simulate leak detection and repair
Leak detection and repair is simulated for the CWP and the three LDSN/DRF
control scenarios. LDSN/DRF detection and repair follows the methods discussion
above. The CWP scenario applies monitoring intervals (monthly, quarterly, or annual)
based on the component category. Monitoring is assumed to occur on the first of the
month; see Appendix F1 for an analysis of the impact of monitoring interval date
variation. Note that the model also includes an additional CWP scenario which simulates
scheduled M21 without connector monitoring. Results from this scenario are not
presented in Section 4 of the report but are summarized in Section 5, below. The
simulation assumes a constant repair time across all modeled scenarios. An end date of
1/1/2019 is applied for leaks which are not repaired during the modeling period.
Step 4: Calculate leak emissions
Emissions are calculated on an individual leak basis for each of the four control scenarios
using the following equation:

E_i,m = ER_i × (end_i,m − start_i)    [Equation 4]

E_i,m : emissions from leak i under control scenario m [kg]
ER_i : emission rate of leak i [kg/day]
(end_i,m − start_i) : leak duration [days]

Step 5: Sum and compare CWP and LDSN/DRF total emissions
Individual leak emissions are summed to determine the total emissions for each scenario.
The LDSN scenarios are compared to the CWP emissions. An LDSN scenario is defined as
equivalent to the CWP for that iteration if its total leak emissions are less than or equal to the
CWP total leak emissions.
The simulation code outlined above is available for review in Appendix E2 and input files are
included in Appendix E3.
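The condensed R sketch below walks through Steps 2, 4, and 5 for three hypothetical leaks. It is
not the Appendix E2 code; the emission rates, cluster box bounds, and simulated durations are
invented for illustration, and the Step 3 detection logic is assumed to have already produced the
durations.

set.seed(1)
t0 <- as.POSIXct("2016-01-01", tz = "America/Chicago")   # simulation start
t1 <- as.POSIXct("2019-01-01", tz = "America/Chicago")   # simulation end

# Step 2: uniform start times and uniform x/y within each leak's cluster box
start <- t0 + runif(3) * as.numeric(difftime(t1, t0, units = "secs"))
x <- c(10, 40, 80) + runif(3) * c(10, 15, 15)   # xmin + U(0,1) * (xmax - xmin)
y <- c( 5, 30, 60) + runif(3) * c(10, 15, 10)   # ymin + U(0,1) * (ymax - ymin)

# Step 4: Equation 4, emissions = emission rate x leak duration
er       <- c(0.8, 2.5, 0.1)    # emission rates [kg/day]
dur_cwp  <- c(40, 95, 200)      # durations [days] under the CWP
dur_ldsn <- c(40, 10, 200)      # durations [days] under one LDSN/DRF scenario
e_cwp  <- sum(er * dur_cwp)
e_ldsn <- sum(er * dur_ldsn)

# Step 5: the iteration counts as equivalent if LDSN/DRF emissions <= CWP emissions
equivalent <- e_ldsn <= e_cwp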
4.2 Model Comparison
While the EPA ORD model is the principal focus of Section 4 of the primary report, two models
were considered in the overall evaluation. Molex produced an independent analysis using a
separate model developed in-house, presented in Appendix F. The EPA ORD and Molex models
were developed in tandem to ensure the models were comparable, but the two simulations vary
in approach.
Comparison studies between the two simulations provided additional insight and quality
assurance, and mistakes in both codes were identified and corrected during the comparison
process. While there are a variety of smaller variations between the two methods, the most
significant difference is the approach to components with multiple leaks. The EPA ORD model
treats each simulated leak as if it originates from a unique component within a cluster area,
consistent with the OGI Equivalency approach. In contrast, the Molex model does evaluate
components with multiple leaks and includes conditions for "overlapping" leaks when another
simulated leak occurs prior to the repair of a previously simulated leak. Each method has its
limitations. It is unclear whether leaks which occurred after an initial leak should actually be
modeled independently of one another, or whether later leaks can be simulated as occurring prior
to the original leak on the same component. Given this uncertainty, including both approaches
was judged preferable to choosing only one.
Molex created several different versions of its simulation code in order to more closely mimic
the EPA ORD approach. Table 4-2 presents the median emissions from 1,000 Mid-Crude unit
simulations for several model versions. Note that the Molex simulations used the same leak
profiles created using the EPA ORD Code so as to create an exact comparison.
Table 4-2: EPA ORD and Molex Simulation Comparison, Median Emissions [kg]

Model Version | Repeat Leakers? | DRF Model Conditions | M21 | DTA | DT | DTC
EPA ORD | No | EPA | 3,170 | 3,959 | 2,450 | 1,751
Molex* | No | EPA | 3,169 | 3,977 | 2,471 | 2,080
Molex* | No | Molex | 3,169 | 3,977 | 2,471 | 2,049
Molex* | Yes | EPA | 2,874 | 2,285 | 1,524 | 1,543
Molex* | Yes | Molex | 2,874 | 2,285 | 1,520 | 1,543

* Only non-growth results were included in this direct comparison test
In Table 4-2, the EPA ORD model is consistent with the model presented in the primary report.
The Molex models are divided based on whether multiple simulated leaks from the same
component are modeled independently or with "overlap" conditions. Additionally, Molex
provided results from two different cluster algorithms. The "EPA" method does not prioritize
finding larger leaks and uses a purely random search (note that the Molex version of the "EPA"
Cluster is not fully identical to the EPA model). The "Molex" model prioritized larger leaks in its
random search cluster approach. See Appendix F1 for a more thorough explanation of these
differentiations.
The emissions presented in Table 4-2 illustrate that the overall EPA ORD and Molex emissions
methods are similar when the repeat leaks conditions align. Note that there are more significant
differences in the DT and DTC simulations because the two methods simulate distance effects
and the cluster model DRFs differently. Notably, the individual simulation iterations are not as
closely aligned due to smaller variations in approach. For example, an individual simulation may
have significantly different M21 emissions if a pegged valve leak is simulated as occurring on
6/25/2017. The EPA ORD Model would detect it on the first of the month, 7/1, but the Molex
model would detect it based on a set interval of 90 days, finding no leak when testing on 6/24
and detecting the leak on 9/22. Conversely, the Molex model might find a leak earlier due to the
same monitoring interval deviations. Ultimately, the smaller differences which affect individual
iterations were found to have minimal impact on overall results, as demonstrated by the similar
median emissions and the equivalency results (see Table 4-3).
Table 4-3: EPA ORD and Molex Simulation Comparison, Equivalency (% of iterations equivalent)

Model Version | Repeat Leakers? | DRF Model Conditions | DTA | DT | DTC
EPA ORD | No | EPA | 22.6% | 77.1% | 95.0%
Molex* | No | EPA | 22.5% | 76.7% | 88.0%
Molex* | No | Molex | 22.5% | 76.7% | 88.8%
Molex* | Yes | EPA | 76.3% | 96.1% | 95.8%
Molex* | Yes | Molex | 76.3% | 96.5% | 96.1%

* Only non-growth results were included in this direct comparison test
4.3 Model Sensitivity
While comparisons to the Molex model provided quality assurance, it was also important to
understand the impact different variables have on EPA ORD simulations. Three variations of the
original simulation code were created to test the primary variables randomized for the Monte
Carlo simulation:
1.	Leak SV samples
2.	Leak Start Time
3.	Component/Leak Location
For each model version, the tested variable was still randomized while the other two variables
were adjusted to remain constant. For the sample testing, a single set of randomized start times
and leak coordinates was used for all simulation iterations while the SV sample was varied. For
the other two model versions, the original base SV dataset was used, rather than a single
randomized sample. The three models were each run twice, once for each process unit.
Variable-specific modeling results are summarized in Table 4-4. The table presents the
coefficient of variation and skewness for each control scenario across all 6 model runs. The SV
sample test demonstrated the largest coefficient of variation across all scenarios, and CWP
emissions exhibited the largest variation. This is likely due to CWP emissions being related to
both leak start time and the SV distribution. Conversely, LDSN/DRF emissions, particularly
those from larger leaks, are more closely linked to SV size, and start time would only impact
emissions from smaller, less impactful leaks. Location had no impact on CWP and DTA because
distance effects are not incorporated into those control scenarios. Start time did not demonstrate
a strong skew for most scenarios, but a strong skew was found for m-Xylene DTC. Given the
slightly negative skew for Mid-Crude DTC, no specific relationship was demonstrated; the
difference may stem from the different process unit layouts and cluster densities. Both the SV
sample and location tests demonstrate a right-tail skew.
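For reference, the statistics reported in Table 4-4 can be computed as in the short sketch below;
the emissions vector is synthetic, and the moment estimator shown is one common definition of
skewness (the report does not prescribe a particular estimator).

# Coefficient of variation and skewness for one scenario's emissions distribution.
set.seed(2)
e <- rgamma(10000, shape = 25, rate = 0.01)   # synthetic right-skewed emissions [kg]

cv   <- sd(e) / mean(e)                       # coefficient of variation
skew <- mean((e - mean(e))^3) / sd(e)^3       # moment skewness
round(c(cv = cv, skewness = skew), 2)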
Table 4-4: Model Sensitivity, emissions distribution statistics

Variable | Unit | CV CWP | CV DTA | CV DT | CV DTC | Skew CWP | Skew DTA | Skew DT | Skew DTC
SV Sample | m-Xylene | 19% | 6% | 8% | 10% | 0.34 | 0.31 | 0.15 | 0.27
SV Sample | Mid-Crude | 29% | 12% | 10% | 11% | 0.47 | 0.36 | 0.30 | 0.23
Start Time | m-Xylene | 9% | 4% | 5% | 6% | 0.02 | 0.05 | 0.11 | 0.45
Start Time | Mid-Crude | 14% | 6% | 6% | 6% | 0.02 | 0.17 | 0.04 | -0.07
Location | m-Xylene | 0% | 0% | 4% | 4% | N/A | N/A | 0.22 | 0.57
Location | Mid-Crude | 0% | 0% | 6% | 5% | N/A | N/A | 0.55 | 0.21
5 Simulation Results
The data processing and simulation methods discussed in this Appendix were all developed to
serve an overall equivalency evaluation. Because a Monte Carlo approach was used, equivalency
is determined based on the proportion of simulation iterations for which LDSN/DRF simulated
emissions are less than or equal to CWP emissions. An analysis determines the proposed
approach is equivalent if the alternate work practice demonstrates equivalency for at least 67% of
the simulation iterations [Steering Committee for Alternative Leak Detection Work Practices,
1999]. Molex and EPA ORD simulation results from 10,000 iteration model runs are presented
in terms of "% equivalent" in Table 5-1. Both models demonstrated equivalency for the m-
Xylene unit across all three LDSN/DRF scenarios. Equivalency was demonstrated for two of the
three scenarios for the Mid-Crude unit. These results demonstrate the impact the process unit
DTA may have on equivalency determinations because Mid-Crude's DTA is just over three
times larger than that for m-Xylene.
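In code, the criterion reduces to the fraction of iterations in which the LDSN/DRF total is at or
below the CWP total; a minimal sketch with invented per-iteration emissions vectors is shown
below.

set.seed(3)
e_cwp  <- rlnorm(10000, meanlog = 8.0, sdlog = 0.3)   # per-iteration CWP emissions [kg]
e_ldsn <- rlnorm(10000, meanlog = 7.8, sdlog = 0.3)   # per-iteration LDSN/DRF emissions [kg]

pct_equivalent <- 100 * mean(e_ldsn <= e_cwp)
pct_equivalent >= 67    # TRUE would indicate equivalency for this scenario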
Table 5-1: Simulation results (% of iterations equivalent)

Scenario | m-Xylene, EPA | m-Xylene, Molex* | Mid-Crude, EPA | Mid-Crude, Molex*
DTA | 78.0% | 100% | 20.7% | 59.0%
DT | 99.8% | 100% | 72.4% | 93.2%
DTC** | 100% | 100% | 92.5% | 92.8%

* Non-growth emissions model
** EPA and Molex cluster models reflect different DRFs
A secondary CWP scenario was also simulated in which no connectors are repaired. Results
using this CWP option are presented in Table 5-2. Both process units demonstrated robust
equivalency in this scenario because a majority of leaks identified in the LeakDAS Inventory
were connector leaks.
Table 5-2: Simulation results, alternate CWP scenario (% of iterations equivalent)

Scenario | m-Xylene | Mid-Crude
DTA | 100% | 94.6%
DT | 100% | 99.6%
DTC | 100% | 100%
The final analysis presented above focuses on the process unit DTAs of 4,000 ppmv and 12,500
ppmv for m-Xylene and Mid-Crude, respectively. However, using a single DTA is limited given
that detection is better described physically as a "detection threshold band." This was the
justification for incorporating the DT and DTC scenarios. While the DTAs used in this report
were representative of the LDSN systems in place during the pilot study, a system with a
different emissions-sensor response factor or closer sensor spacing would not have the same
DTA. Different DTA model runs were completed to provide a broader understanding of the
impact process unit DTA has on equivalency. Results from these 1,000 iteration runs are
presented in Table 5-3. For both units, the DTA and DT scenarios demonstrate equivalency at
similar LDSN DTAs (up to roughly 5,000 ppmv and 15,000 ppmv, respectively). This is likely
because the leak samples for both units demonstrated similar SV distributions. Conversely,
m-Xylene demonstrates a much higher rate of equivalency at a DTA of 25,000 ppmv under the
cluster (DTC) scenario. This is probably due to the process unit density and larger number of
detectable leaks. While both units demonstrated similar historical leak discovery rates, a large
number of Mid-Crude leaks emit fuel gas and are not detectable by the current LDSN system.
Additionally, m-Xylene is a smaller unit, creating a stronger cluster effect because smaller leaks
are more likely to be located near other active leaks at any given time.
Table 5-3: Simulation results, LDSN DTA sensitivity (% of iterations equivalent)

(a) m-Xylene
LDSN DTA [ppmv] | DTA | DT | DTC
1,000 | 100% | 100% | 100%
2,000 | 99% | 100% | 100%
3,000 | 91% | 100% | 100%
4,000 | 78% | 100% | 100%
6,000 | 27% | 97% | 100%
8,000 | 6% | 90% | 100%
16,000 | 0% | 41% | 99%
25,000 | 0% | 12% | 91%

(b) Mid-Crude
LDSN DTA [ppmv] | DTA | DT | DTC
3,125 | 91% | 99% | 100%
6,250 | 58% | 89% | 99%
9,375 | 28% | 81% | 96%
12,500 | 19% | 71% | 93%
18,750 | 18% | 57% | 84%
25,000 | 13% | 43% | 73%
50,000 | 6% | 18% | 37%
One other variable which was not studied in the primary analysis was LDSN detection time.
Demonstrating an ability to detect large leaks quickly is an important aspect of any LDSN
system's performance, and the value may vary by facility and detection algorithm. The primary
analysis assumed a detection time of 3 days, which was identified as reasonably conservative
based on a qualitative review of pilot study data. Table 5-4 presents m-Xylene simulation results
from 1,000 iteration model runs with varying detection times. All scenarios demonstrate
equivalency with a modeled LDSN detection time of 12 days or less. The DT scenario remains
equivalent for all detection times, though there is significant improvement from 60 to 30 days.
DTA scenario, the most conservative, does not meet equivalency standards when a detection
time of 15 days is assumed.
Table 5-4: Simulation results, detection time (m-Xylene, % of iterations equivalent)

Detection Time [days] | DTA | DT | DTC
1 | 79% | 100% | 100%
3 | 76% | 99.9% | 100%
6 | 75% | 99.9% | 100%
9 | 74% | 99.9% | 100%
12 | 68% | 99.5% | 100%
15 | 66% | 99.6% | 100%
30 | 47% | 98% | 100%
60 | 12% | 81% | 100%
Overall, the variety of simulation results presented in this Appendix and the primary report
suggests that the LDSN/DRF systems installed during the CRADA FHR CC pilot study would be
equivalent to scheduled M21 over multi-year time periods. Moreover, the simulations
demonstrate that a system based on targeted leak repair, rather than scheduled monitoring, may
not only achieve equivalency but also produce significant long-term emissions reductions
relative to the CWP.
Further research is needed to understand exactly what level of control should be modeled for
LDSN/DRF equivalency evaluations. However, the strong equivalency results, particularly for
the m-Xylene unit, demonstrate that the LDSN/DRF concept has the potential to significantly
reduce emissions and improve LDAR program efficiency relative to the CWP.
6 References
American Petroleum Institute, 1995: Emission Factors for Oil and Gas Production Operations,
API Publication Number 4615, prepared by Star Environmental for API.
Epperson, David (prepared by), 2000: Monte Carlo Simulation Evaluation of Gas Imaging
Technology, Columbus, IN 47203.
Steering Committee for Alternative Leak Detection Work Practices, 1999: Demonstrating
Alternative Work Practices for Fugitive Leak Detection and Repair (LDAR) Programs.
Unofficial report.
Texas Commission on Environmental Quality, 2020: 2019 Emissions Inventory Guidelines,
Appendix A. Air Quality Division, RG-360/19. Austin, TX 13087. Available at:
https://www.tceq.texas.gov/assets/public/comm_exec/pubs/rg/rg360/rg360-19/rg360.pdf
U.S. EPA, 1995: 1995 Protocol for Equipment Leak Emission Estimates, Office of Air and
Radiation, Office of Air Quality Planning and Standards, EPA-453/R-95-017. Research
Triangle Park, NC 27711.
U.S. EPA, 1999: Monte Carlo Simulation Approach for Evaluating Alternative Work Practices
	for Equipment Leaks, Office of Air and Radiation, Office of Air Quality Planning and
	Standards, Draft report. Research Triangle Park, NC 27711.
U.S. EPA, 2008: Alternative work practice to detect leaks from equipment. 40 CFR Parts 60, 63,
	and 65; 73 FR 78199, EPA-HQ-OAR-2003-0199.
Appendix E2
EPA ORD Software Code
Table of Contents
Identify_Leaks.R
MonteCarlo.R
Identify_Leaks.R
1	#####################################################################################
2	# 2020/03 HLane
3	# Version: April 1, 2020
4	#
5	# This code uses formatted LeakDAS data archives to identify leak events within a
6	# processing unit.
7	#
8	#####################################################################################
9
10	m(list = Is())
11
12	# Select Processing Unit
13	unit <- "rrX" # rrX, MC
14
15	library(svMisc)
16	library(tidyverse)
17	library(lubridate)
18
19
20	setwd("C:/Users/hlane/Documents/FHR-Molex CRADA/Report/Appendix E/E.3")
21
22	# Load Files
23	{
24	pathl <- c("C:/Users/hlane/Documents/FHR-Molex CRADA/Report/Appendix E/E.3/")
25	start all <- as.POSIXct(ymd(strptime(c("1/1/2013"),"%m/%d/%Y"),
26	tz="America/Chicago"))
27	EF <- read.csv(pasteO(pathl,"Equation FHR.csv"))
28	if (unit=="rrX") {
2	9	LkDAS <- read.csv(
30	pasteO(pathl,"LeakDAS Records/Meta Inspections 2013 thru 2019.csv"))
31	L del <- read.csv(pasteO(pathl,"LeakDAS Records/rrXylene Inactive.csv"))
32	loci <- read.csv(pasteO(pathl,"Location Info/output META.csv"))
33	loc2 <- read.csv(pasteO(pathl,"Location Info/rrXylene leaks3.csv"))
34	EF <- filter(EF,Category=="SOCMI")	~~
35	filename <- "rrXylene"
36	time_all <- as.POSIXct(ymd(strptime(c("1/1/2014","1/1/2019"),"%m/%d/%Y"),
37	tz="America/Chicago"))}
38	if(unit=="MC"){
3	9	LkDAS <- read.csv(
40	pasteO(pathl,"LeakDAS Records/Mid Crude Inspections 2013 thru 2019.csv"))
41	L del <- read.csv(pasteO(pathl,"LeakDAS Records/MidCrude Inactive.csv"))
42	loci <- read.csv(pasteO(pathl,"Location Info/output MIDCRUDE 04012020.csv"))
43	loc2 <- read.csv(pasteO(pathl,"Location Info/midCrude leaks2.csv"))
4	4	EF <- filter(EF,Category=="REFIN")
4 5	filename <- "MidCrude"
46	time_all <- as.POSIXct(ymd(strptime(c("1/1/2016","1/1/2019"),"%m/%d/%Y"),
47	tz="America/Chicago"))}
48	}
49
50	# Clean Files
51	{
52	LkDAS <- LkDAS [,c( "InspectionHistorylD","ComponentID","UnitDescription",
53	"ComponentTag","ComponentClass","InspectionDate",
54	"TypeDescription","NetReading","Pass Fail","LocationDescription",
55	"PlantDescription","ChemicalState","DateAdded","Drawing")]
56	LkDAS[,"Date"]	<- as.POSIXct(
57	ymd hms(strptime(as.character(LkDAS$InspectionDate),"%m/%d/%y %H:%M:%S"),
58	tz="America/Chicago"))
5 9	LkDAS[, "DateAdded"] <- as.POSIXct(
60	ymd( strptime(as.character(LkDAS$DateAdded)	,"%m/%d/%Y") ,
61	tz="America/Chicago"))
62
63	L del <- L del[,c( "Component.Tag","Component.Class"Unit.Description",
64	"Activity Level", "Component.ID","Component.Type","Chemical.State",
65	"Component.Stream", "Location.Description", "Plant.Description",
6	6	"Date.Added","Status.Description","Status.Date",
67	"Last.M21Inspection.Date","Last.Visual.Inspection.Date",
68	"Leak.Definition")]
69	L del[,"Date"] <- as.POSIXct(
70	ymd(strptime(as.character(L del$Status.Date) ,"%m/%d/%Y") ,tz="America/Chicago") )
71	}
72
73	# Initialize data
74	{
75	tags <- unique(
76	filter(LkDAS,Pass Fail=="Failed" & Drawing != "Pre Flash") [,c(
77	"ComponentTag","ComponentClass","TypeDescription",
7	8	"LocationDescription", "ChemicalState")] )
79
8	0	# Leak Data Frame
81	Leak <- data.frame("ComponentTag"	=tags$ComponentTag,
82	"ComponentClass"	=tags$ComponentClass,
83	"TypeDescription" =tags$TypeDescription,
8 4	"LocationDescription"=tags$LocationDescription,
85	"ChemicalState" =tags$ChemicalState,
8 6	MaxSV=c(0),AveSV=c(0),SV0=c(0),SVl=c(0) ,SVr=c(0) ,
87	tO=LkDAS$Date[1],tl=LkDAS$Date[1],tn=LkDAS$Date [ 1],
8 8	tr=LkDAS$Date[1],ta=LkDAS$Date[1],te=LkDAS$Date[1],
8 9	count=c ( 0) )
90	Leak <- Leak[0,names(Leak)]
91	}
92
93	# Review each component
94	k <- 0
95	for(i in 1:length(tags [,1])) {
96	tmp <- filter(LkDAS, ComponentTag==tags$ComponentTag[i] &
97	LocationDescription==tags$LocationDescription[i])
98	tmp <- tmp[order(tmp$Date),]
99
100	row.names(tmp) <- c(1:length(tmp[,1]))
101	x <- as.numeric(row.names(tmp[tmp$Pass Fail=="Failed",]))
102
103	# Identify Leaks
104	if(sum(tmp$Pass Fail=="Failed")>0){
105	for(j in 1:length(x)){
106	# Identify 1st measurement
107	if( (j ==1) isTRUE( (x[j]-x[j-1] )>1) ) {
108	k <- k+1	# leak count
109	m <- 0	# start meas. count
110	Leak[k,"tl"]	<- tmp$Date[x[j]]	# 1st measurement time
111	Leak$ComponentTag[k]	<- tmp$ComponentTag[1]	# component tag
112	Leak$ComponentClass[k]	<- tmp$ComponentClass[1] # component class
113	Leak$TypeDescription[k]	<- tmp$TypeDescription[1] # location description
114	Leak$LocationDescription[k] <- tmp$LocationDescription[1] # type description
115	Leak$ChemicalState[k]	<- tmp$ChemicalState[1]	# chemical state
116	Leak$SVl[k]	<- tmp$NetReading[x[j]] # 1st leak meas.
117	Leak$ta[k]	<- tmp$DateAdded[x[j]]	# date component added
118	if(sum(L del$Component.Tag==as.character(tmp$ComponentTag[1]))>0){
119	Leak$te[k] <- L del$Date[
120	L del$Component.Tag==as.character(tmp$ComponentTag[1])][1]
121	}
122	if(x[j]==1){
123	# Leak$t0[k] <- -1	# start time bounding
124	# Leak$SV0[k] <- -1	# previous measurement
125	}else{
126	Leak$t0[k]<-tmp$Date[x[j]-1]	# start time bound
127	Leak$SV0[k]<-tmp$NetReading[x[j]-1]	# previous passing meas.
128	}
129	}
130
131	# Record measurement
132	{ m <- m + 1 # measurement count
133	if(!is.na(tmp$NetReading[x[j]])){
134	Leak$MaxSV[k] <- max(Leak$MaxSV[k],tmp$NetReading[x[j]],na.rm=TRUE)
135	# max measurement
136	Leak$AveSV[k] <- sum(Leak$AveSV[k],tmp$NetReading[x[j]],na.rm=TRUE)}
137	# ave measurement*m
138	}
139	# Identify last measurement
140	if( (j ==length(x) ) |isTRUE(x[j+1] ! = (x[j]+l))){
141	Leak$count[k] <- m	# measurement count
142	Leak$AveSV[k] <- Leak$AveSV[k]/m	# calculate average SV
143	Leak$tn[k] <- tmp$Date[x[j]]	# last measurement time
144
145	if(x[j]==length(tmp[,1])){
146
147	}else{
148	Leak$tr[k] <- tmp$Date[x[j]+1]	# repair time
149	Leak$SVr[k] <- tmp$NetReading[x[j]+1]} # repaired screening value
150	}
151	}
152	}
153	}
154
155	# Remove Fuel Gas
156	{
157	if(unit=="MC"){
158	r <- (1:length(Leak[,1]))*(
15 9	(lengths(strsplit(as.character(Leak$LocationDescription) ,"4 2BA") )>1)
160	(lengths(strsplit(as.character(Leak$LocationDescription) , "42-BA") )>1)
161	(lengths(strsplit(as.character(Leak$LocationDescription)42 BA"))>1)
162	(lengths(strsplit(as.character(Leak$LocationDescription) , "BLW HTR-3") )>1) )*
163	(Leak$ChemicalState=="GV")
164	}else{
165	r <- (1:length(Leak[,1]))*
166	(lengths(strsplit(as.character(Leak$LocationDescription),"54BA1") )>1)*
167	(Leak$ChemicalState=="GV")
168	}
169	r <- r [r!=0]
170
171	write.csv(Leak[r,],pasteO(filename," leaks FG.csv"))
172	if(length(r)>0){Leak <- Leak[-r,]}
173	}
174
175	# Add Location
176	{
177	Leak[, c("Level","yl","y2","xl","x2")]<- c(0)
178	Leak[1:2,"cluster"] <- as.character(locl$obj[1:2])
179
180	for(i in 1:length(Leak [,1])) {
181	tmp <- filter(loci,tag==as.character(Leak$ComponentTag[i]))
182	if(as.character(Leak$ComponentTag[i])=="AVO"){
183	tmp <- filter(loci,tag==as.character(Leak$ComponentTag[i])&
184	loc==as.character(Leak$LocationDescription[i]))}
185	if(length(tmp[,1])==0){
186	tmp <- filter(loc2, tag==as.character(Leak$ComponentTag[i]) )
187	if(as.character(Leak$ComponentTag[i])=="AVO"){
188	tmp <- filter(loc2,tag==as.character(Leak$ComponentTag[i])&
18 9	loc==as.character(Leak$LocationDescription [i] ) ) } }
190	if(length(tmp[,1])>0){
191	Leak[i,c("Level","yl","y2","xl","x2")] <- tmp[1,c("level","W","E","S","N")]
192	Leak[i,"cluster"] <- as.character(tmp$obj[1])
193	}
194	}
195	}
196
197	# Calculate Emission Rates
198	{
199	Leak[,"ER_ID"] <- c(0)
200	for(r in 1:length(EF[,1])){
201	if(EF$Class[r]==""){r ->Leak[,"ER_ID"]
202	}else{
203	if(EF$Chem.State[r]==""){
204	r -> Leak[c(1:length(Leak [,1]))*
205	(Leak$ComponentClass==as.character(EF$Class[r])),"ER ID"]
2 0 6 }else{
207	r -> Leak[c(1:length(Leak[,1]))*
2 08	(Leak$ComponentClass==as.character(EF$Class[r]))*
209	(Leak$ChemicalState ==as.character(EF$Chem.State[r])),"ER ID"]
210	}
211	}
212	}
213	# Fix Mid-Crude Screwed Connectors
214	if(unit=="MC"){
215	Leak[(str detect(as.character(Leak$TypeDescription),"SCREW"))&
216	(Leak$ComponentClass=="CONNECT"),"ER ID"] <- as.numeric(
217	row.names(EF[EF$Type=="SCREWD",]))
218	}
219	Leak[,c("ER kgpd.max","ER kgpd.ave","ER kgpd.l")
220	] <- EF$constant[Leak$ER ID]*(Leak[,c("MaxSV","AveSV","SV1")]A
221	EF$exponent[Leak$ER ID])*24
222	Leak[(Leak$MaxSV==le5)&(lis.na(Leak$MaxSV==le5)),
223	"ER kgpd.max"] <- EF$P100k[Leak$ER ID[(Leak$MaxSV==le5)&
224	(!is.na(Leak$MaxSV==le5))]]*2 4
225	Leak[(Leak$AveSV==le5)&(!is.na(Leak$AveSV==le5) ) ,
226	"ER kgpd.ave"] <- EF$P100k[Leak$ER ID[(Leak$AveSV==le5)&
227	(!is.na(Leak$AveSV==le5))]]*2 4
228	Leak[(Leak$SVl ==1e5)&(!is.na(Leak$SVl ==le5)),
229	"ER_kgpd.l" ] <- EF$P100k[Leak$ER_ID[(Leak$SVl ==le5)&
230	(!is.na(Leak$SVl ==le5))]]*24
231	}
232
233	# Save all Files
234	write.csv(Leak,pasteO(filename," leaks all.csv"))
235
236	# Filter Data
237	{
238	# create leak start/end time
239	cl <- c(1:length(Leak[,1]))*(!is.na(Leak$te) )
240	cl <- cl[cl>0]
241	Leak[cl*(Leak$te[cl] modeling end date
255	Leak <- filter(Leak,tsCtime all[2] & repair>time all[l])
256
257	# Remove leaks w/out location data
258	Leak <- filter(Leak,yl!=0)
259	# Separate AVOs
260	Leak AVO <- Leak[ ((lengths(strsplit(as.character(Leak$ComponentTag),
261	~~	"AVO"))>1)|(is.na(Leak$SVl))),]
262	Leak M21 <- Leak[!((lengths(strsplit(as.character(Leak$ComponentTag),
263	~~	"AVO"))>1)|(is.na(Leak$SVl))),]
264	}
265
266	# Save Simulation Data
267	{
268	Leak_M21.tmp <- data.frame(
269	  "ComponentClass"       = Leak_M21$ComponentClass,
270	  "ComponentTag"         = Leak_M21$ComponentTag,
271	  "PreviousInspection"   = as.character(Leak_M21$t0),
272	  "StartDate"            = as.character(Leak_M21$start),
273	  "InspectionDate"       = as.character(Leak_M21$t1),
274	  "RepairDate"           = as.character(Leak_M21$repair),
275	  "NetReading"           = Leak_M21$MaxSV,
276	  "LocationDescription"  = Leak_M21$LocationDescription,
277	  "ChemicalState"        = Leak_M21$ChemicalState,
278	  "ComponentType"        = Leak_M21$TypeDescription,
279	  "object"               = Leak_M21$cluster,
280	  "Level"                = Leak_M21$Level,
281	  "y1"                   = Leak_M21$y1,
282	  "y2"                   = Leak_M21$y2,
283	  "x1"                   = Leak_M21$x1,
284	  "x2"                   = Leak_M21$x2,
285	  "cluster"              = Leak_M21$cluster,
286	  "ER_kgpd"              = Leak_M21$ER_kgpd.max
287	  )
288	write.csv(Leak M21.tmp,pasteO(filename," leaks M21.max.csv"))
289
290	Leak M21.tmp[,c("NetReading","ER kgpd")] <- Leak M21[,c("AveSV","ER kgpd.ave")]
291	write.csv(Leak M21.tmp,pasteO(filename," leaks M21.ave.csv"))
292
2 93	Leak M21.tmp[,c("NetReading","ER kgpd")] <- Leak M21[,c("SV1","ER kgpd.l")]
294	write.csv(Leak M21.tmp,pasteO(filename," leaks M21.first.csv"))
295
296	write.csv(Leak AVO,pasteO(filename," leaks AVO.csv"))
297	}
MonteCarlo.R
1	#####################################################################################
2	# 2020/02 HLane
3	# Version: March 25, 2020
4	#
5	# Monte Carlo model which simulates leak emissions control for the m-Xylene and
6	# Mid-Crude units at the FHR CC facility. The model simulates 5 different LDAR
7	# control scenarios:
8	#	1. M21 - CWP, scheduled M21
9	#	2. C21 - CWP, scheduled M21 without connector control
10	#	3. DTA - LDSN/DRF, system DTA
11	#	4. DT - LDSN/DRF, system DTA with distance effects
12	#	5. DTC - LDSN/DRF, system DTA with distance and cluster effects
13	#
14	# See Appendix El for a full description of input variables and methods.
15	#
16	#####################################################################################
17
18	rm(list = Is())
19
20	SV metric <- "first" # max, ave, or first
21
22	# META #######################################
23	#FACIL <- "META" # META or MIDCRUDE
2 4	#LEAK_DEF <- 4e3 # [ppmv]
25	#s	<- 500 # sample size of leaks
26
27	# MIDCRUDE ###################################
28	FACIL <- "MIDCRUDE" # META or MIDCRUDE
2	9	LEAK DEF <- 12.5e3 # [ppmv]
30	s	<- 250 # sample size of leaks
31
32	# Initialize Variables
33	n	<- 1000 # iterations
34	t detect <-3	# [days]
35	t repair <- 7	# [days]
3	6	SV DRF <- 3000 # [ppmv] DRF threshold
37	x lk	<- 3 # No. leaks to be found below DRF threshold before
38	# closing PSL
39	SV_THRES <- LEAK_DEF/1250 # Distance detection
40	# [ppmv/ftA2]
41
42	# Load Libraries
43	{
44	library(tidyverse)
45	library(lubridate)
4	6	library(svMisc)
47	}
48
4 9	# Load Files
50	{
51	setwd("C:/Users/hlane/Documents/FHR-Molex CRADA/Report/Appendix E/E.3")
52	s loc <- read.csv("sensor locations.csv")
53	if (FACIL=="META") {	~~
54	leaks <- read.csv(
55	pasteO("Identify Leaks.R output s/rrXylene leaks M21.",SV metric,".csv"))
56	time_all <- c("2011-01-01 00:00:00"2019-01-01 00:00:00"7 # start and end date
57	}
58	if (FACIL== "MIDCRUDE" ) {
59	leaks <- read.csv(
60	pasteO("Identify Leaks.R outputs/MidCrude leaks M21.",SV metric,".csv"))
61	time_all <- c("2016-01-01 00:00:00","2019-01-01 00:00:00")~# start and end date
62	}
63	setwd("../Monte Carlo")
64	}
65
66	# Set up
67	{
68	# Create function which calculates distance
6	9	distance <- function(x 1,x 2,y 1,y 2){sqrt(((x 2-x 1)A2)+((y 2-y 1)A2) ) }
70
71	# Convert Time to POSIXct
72	leaks[,"start" ] <- as.POSIXct(ymd hms(strptime(as.character(leaks$StartDate ),
73	~~	"%m/%d/%Y %H:%M:%S")))
74	leaks[,"repair"] <- as.POSIXct(ymd hms(strptime(as.character(leaks$RepairDate),
75	~~	"%m/%d/%Y %H:%M:%S") ))
76	time all <- as.POSIXct(ymd hms(time all),tz="EST")
77	~~	~~ ~
7	8	# Set times to min/max for any NAs
79	leaks$start[is.na(leaks$start)] <- time all[l]
80	leaks$repair[is.na(leaks$repair)] <- time all[2]
81
82	# Create unique Cluster ID for each non-cluster leak
83	n.na <- sum(is.na(leaks$cluster))
84	leaks[,"cluster"] <- as.character(leaks$cluster)
85	leaks$cluster[is.na(leaks$cluster)] <- length(
8	6	unique(leaks$cluster,na.rm=TRUE) )+c(1:n.na)
87	leaks[,"cluster"] <- as.factor(leaks$cluster)
88
8 9	# Assume Leaks w/out Level are Level 1 (most corrmon)
90	leaks$Level[is.na(leaks$Level)] <- 1
91
92	# Pull Unit Sensors
93	s loc <- filter(s loc,Unit==FACIL)
94	~~
95	# Create Results data frames
96	E_all <- data.frame("n"=c(l:n),"M21"=c(0),"C21"=c(0),
97	"DTA"=c(0),"DT_"=c(0),"DTC"=c(0)) # surrmary emissions
98	C all <- E all	# surrmary repair counts
99	C all[,"P 100k"] <- c(0)	# pegged value count
100	}
101
102	# Run n i Monte Carlo iterations
103	for(n i in l:n){
104	progress(n i,max.value = n)
105
106	# Initialize
107	{
108	# Randomize sample of s leaks
109	s i <- sample(1:length(leaks[,1]),s,replace=TRUE)
110
111	# Create data frame
112	leak i <- leaks[s i,c("ComponentClass","ComponentTag",
113	"Level"cluster","NetReading","ER kgpd")]
114
115	# Generate leak start times
116	leak i[,"start"] <- runif(length(s i),0,as.numeric(
117	difftime(time all[2],time all[1],units = "sees")))+time all[l]
118
119	# Generate leak location (note yl>y2 and x2>xl)
120	leak i[,"x"] <- leaks$xl[s i]+((leaks$x2[s i]-leaks$xl[s i])*runif(length(s i)))
121	leak i[,"y"] <- leaks$y2[s i]+((leaks$yl[s i]-leaks$y2[s i])*runif(length(s i)))
122
123	# Organize
124	leak i <- leak i[order(leak i$start),]
125	leak~i[,"ID"] <- 1:s
126	}
127
128	# Determine Leak Repair Times
129	{
130	# M21 Detection
131	{
132	leak i[,"M21"] <- leak i$start
133	leak i[(leak i$start <=time all[1]),"M21"] <- time all[l]+l
134	~~	~~
135	leak i$M21[leak i$ComponentClass=="PIJMP"] <- ceiling date(
136	leak i$M21[leak i$ComponentClass=="PIJMP"], unit="l month")
137	leak i$M21[leak i$ComponentClass!="PIJMP"] <- ceiling date(
138	leak i$M21[leak i$ComponentClass!="PIJMP"], unit="3 months")
139	leak i$M21[leak i$ComponentClass=="CONNECT"] <- ceiling date(
140	leak i$M21[leak i$ComponentClass=="CONNECT"],unit="12 months")
141
142	# Federal standard (no connectors)
143	leak_i[,"C21"] <- leak_i$M21
144	leak i$C21[leak i$ComponentClass=="CONNECT"] <- time all[2]
145
14 6	# Add repair time
147	leak_i[,c("M21","C21")] <- leak_i[,c("M21","C21")] + t_repair*24*3600
148	}
149
150	# Distance Effects
151	{
152	for(m in 1:length(s loc [,1] ) ) {
153	leak i[, paste("D",m, sep=" ")] <- leak i$NetReading/(
154	distance(leak i$x, s loc$x[m],
155	leak i$y,s loc$y[m])A2)
156	leak i[leak i$Devel!=s loc$L[m],paste("D",m,sep=" ")] <- 0
157	}
158	leak i[,"D max"] <- apply(leak i[,paste("D",(1:length(s loc[,1])),sep="
159	~~	~~ MARGlN=l,FUN=max)	~~
160	}
161
162	# DTA Scenario
163	{
164	leak i[,"DTA"] <- max(time all)
165	leak i[leak i$NetReading>LEAK DEF,
166	"DTA"] <- pmax(leak i[leak i$NetReading>LEAK DEF,"start"],
167	min(time all)) + (t detect+t repair)*24*3600
168	leak i[,"DTA"] <- pmin(max(time all),leak i[,"DTA"])
169	}
170
171	# DT Scenario
172	{
173	leak i[,"DT "] <- max(time all)
174	leak i[leak i$D max>SV THRES,
175	"DT "] <- pmax(leak i[leak i$D max>SV THRES,"start"],
176	min(time all))+(t detect+t repair)*24*3600
177	leak i[,"DT "] <- pmin(max(time all),leak i[,"DT "])
178	}
179
180	# Cluster Scenario
181	{
182	leak i[,"DTC"] <- max(time all)	# initialize repair date
183	detect.s <- unique(leak i$cluster) # cluster IDs
184
185	# Cluster Doop
186	for(i in 1:length(detect.s)){
187	# Initialize leaks
188	tmp <- filter(leak i,cluster==detect.s[i])
189	tmp <- arrange(tmp,start)
190	tmp$start[tmp$startt B))[,paste(
207	~"D",(1:length(s_loc[,l])),sep="_")]7
208	D s <- sum(filter(tmp,(start<=t B) & (DTC>t B))[,"NetReading"])
209
210	# Apply Cluster Algorithm
211	x<- 0
212	while(max(D_x$B)>=SV_THRES){
213	t B <- t B + t detect*24*3600
214
215	if(x>0){t B <- t B + t repair*24*3600}
216	# Add repair times after 1st round of investigations
217	# (cannot detect until previous leaks repaired)
218
219	r B <- row.names(tmp[(tmp$start<=t B) & (tmp$DTC>t B),])
220	r <- sample(r B,min(length(r B),x lk),replace = FALSE)
221	SV <- 0	~~	~~ ~~
222	k <- 1
223	while((SV<(SV DRF))&(k <= length(r))){
224	tmp$DTC[as.numeric(r[k])] <- t B
225	SV <- tmp$NetReading[as.numeric(r[k])]
226	k=k+l
227	}
228	D x$B <- colSums(filter(tmp,(start<=t B) & (DTC>t B))[,paste(
229	~"D",(l:length(s_loc[,l])),sep="_")]7
230	x<-x+l
231	}
232	}
233	}
234	leak_i$DTC[tmp$ID] <- tmp$DTC
235	}
236	leak i[,"DTC"] <- pmin(max(time all),leak i[,"DTC"]+(t repair)*24*3600)
237	}
238	} # end leak repair times section
239
24 0 # Leak Repair Count
241	{
242	C all[n i,"M21"] <- sum(leak i$M21 < time all[2])
243	C~all[n~i,"C21"] <- sum(leak~i$C21 < time~all[2])
244	C all[n i,"DTA"] <- sum(leak i$DTA < time all[2])
245	C all[n i,"DT "] <- sum(leak i$DT < time all[2])
246	C all[n i,"DTC"] <- sum(leak i$DTC < time all[2])
247	C all[n i,"P 100k"] <- sum(leak i$NetReading==le5)
248	}
249
250	# Calculate Emissions
251	{
252	E all[n i,"M21"] <- sum(leak i$ER kgpd*
253	pmax(0, difftime(
254	pmin(time all[2],leak i$M21),
255	pmax(time all[1],leak i$start),
256	units="days")))
257
258	E all[n i,"C21"] <- sum(leak i$ER kgpd*
259	pmax(0,difftime(
260	pmin(time all[2],leak i$C21),
261	pmax(time all[l],leak i$start),
262	units="days")))
263
264	E all[n i,"DTA"] <- sum(leak i$ER kgpd*
265	pmax(0,difftime(
266	pmin(time all[2], leak i$DTA) ,
267	pmax(time all[l],leak i$start),
268	units="days")))
269
270	E all[n i,"DT "] <- sum(leak i$ER kgpd*
271	pmax(0,difftime(
272	pmin(time all[2], leak i$DT ),
273	pmax(time all[l],leak i$start),
274	units="days")))
275
276	E all[n i,"DTC"] <- sum(leak i$ER kgpd*
277	pmax(0, difftime(
278	pmin(time all[2],leak i$DTC),
279	pmax(time all[l],leak i$start),
280	units="days")))
281	}
282
283	# Save individual iteration files for review
284	if(n_i<=l00){
28 5	write.csv(leak i[,c("ComponentCIass","ComponentTag","NetReading",
286	~~	"Level", "cluster", "ID", "x", "y", "D_max", "ER_kgpd",
287	"start","M21","C21","DTA","DT_","DTC")]
288	, pasteO("ilOO/",FACIL, ".", SV metric,LEAK DEF," n",n," i",n i,".csv")
289
290	}
291
292	} # end Monte Carlo loop
293
294	# Save Surrmary Files
295	{
296	base <- pasteO("n", n,"/",FACIL,SV metric, LEAK DEF," n",n)
297
298	write.csv(E all,pasteO(base,	".csv"))
299	write.csv(C all,pasteO(base," count.csv"))
300
3 01 SIMM <- data.frame("DTA"=c(0,0),"DT_"=c(0, 0) , "DTC"=c(0, 0) )
302	row. names ( SIJMM) <- c ( "M21" , "C21" )
303	SUMM[1,] <- colSums((E all[,c("DTA","DT ","DTC")]-E all$M21)<=0)/n*100 # percentage
304	SIJMM[2,] <- colSums((E_all[,c("DTA","DT_","DTC")]-E_all$C21)<=0)/n*100 # percentage
305	print (SIJMM)
306
307	write . csv ( SIJMM, pasteO (base, " SIJMM. csv"))
308	}
309	#### End ############################################################################
Appendix E3
Simulation Files

List of accompanying LeakDAS and simulation software I/O CSV files; the PDF-attached
compressed file "Appendix E3_Leak Database and IO Files.zip" contains the files.

Filename | Description
Equation_FHR.csv | FHR correlation equation constants for emission factor calculation
sensor_locations.csv | LDSN/DRF sensors active at the FHR CC facility, with location coordinates
LeakDAS Records |
Mid Crude Inspections 2013 thru 2019.csv | LeakDAS TVA and AVO LDAR records for the FHR CC Mid-Crude unit
Meta Inspections 2013 thru 2019.csv | LeakDAS TVA and AVO LDAR records for the FHR CC m-Xylene unit
MidCrude_Inactive.csv | LeakDAS deletion records for the FHR CC Mid-Crude unit
mXylene_Inactive.csv | LeakDAS deletion records for the FHR CC m-Xylene unit
Location Info |
midCrude_leaks2.csv | Mid-Crude component location information provided by Molex
mXylene_leaks3.csv | m-Xylene component location information provided by Molex
output_MIDCRUDE_04012020.csv | Mid-Crude component location information provided by Molex
output_META.csv | m-Xylene component location information provided by Molex
Identify_Leaks.R outputs |
MidCrude_leaks_all.csv | All leaks for the Mid-Crude unit found in the 2013-2019 records
MidCrude_leaks_AVO.csv | AVO leaks for the Mid-Crude unit within the filtering parameters
MidCrude_leaks_FG.csv | All fuel gas area GV leaks for the Mid-Crude unit found in the 2013-2019 records
MidCrude_leaks_M21.first.csv | Simulation leaks for the Mid-Crude unit within the filtering parameters, first SV
mXylene_leaks_all.csv | All leaks for the m-Xylene unit found in the 2013-2019 records
mXylene_leaks_AVO.csv | AVO leaks for the m-Xylene unit within the filtering parameters
mXylene_leaks_FG.csv | All fuel gas area GV leaks for the m-Xylene unit found in the 2013-2019 records
mXylene_leaks_M21.first.csv | Simulation leaks for the m-Xylene unit within the filtering parameters, first SV
Appendix F
Molex Equivalency Simulations

Appendix F1
Molex Simulation Approach
Leak Simulations Based on FHR Corpus Christi LeakDAS™ Data
Executive Summary
Molex investigated the emission control equivalency of the proposed LDSN-DRF concept relative to the
M21-based CWP for the FHR CC Meta-Xylene and Mid-Crude process units. The two LDAR methods
were compared using a series of emission simulations with a randomized historical simulation and a
Monte Carlo approach. The model simulated process unit leaks using historical facility LeakDAS™
LDAR data and calculated emissions based on both non-growth and linear-growth leak rate models and
the assumed control for four different LDSN detection sensitivity performance scenarios, i.e. detection
from single tag leaks represented by DTA(tag), detection from clustered leaks represented by
DTA(cluster), detections from single leaks with actual distances from nearest sensors represented by
DT(tag), and detection from multiple leaks with actual distances from the nearest sensors represented by
DT(cluster). The historical measurement data was transformed into datasets of leaks occurring during
2014-2018 in Meta-Xylene and 2016-2018 in Mid-Crude. These leaks were simulated with randomized
leak start times to compare LDSN-DRF and CWP multi-year cumulative emissions. A Monte Carlo
approach was used to understand variation and to test LDSN-DRF to CWP emission control equivalency,
defined as >67% of 10,000 trials showing benefit.
The average detection threshold value necessary to demonstrate M21 CWP equivalency is the eDTA.
Based on the most conservative approach (i.e. using average DTA(tag)) in the simulation analysis, an
eDTA of 10,625-12,500 ppm is generated from the historical leak event randomization simulation, while
the Monte Carlo method alone suggests an eDTA of 11,250 ppm for the Meta-Xylene unit and an eDTA
of 12,500 ppm for the Mid-Crude unit. These data suggest a DTU as high as 16,875 ppm (DTA of 11,250 ppm)
for equivalency to the M21 CWP. At the eDTA of 12,500 ppm, LDSN-DRF also demonstrated significant
emission reduction potential. For instance, the LDSN-DRF emissions reduction ranged from 29% to 67%
over 5 years for the Meta-Xylene unit and from 29% to 41% over 3 years for the Mid-Crude unit. These
ranges are based on the simulation analysis of the more realistic DT(tag) and DT(cluster) detection
scenarios, in which actual distances between leaking components and sensors were taken into
consideration. These comparisons did not include the emission reduction benefits from early detection
and repair of other LDAR program leaks (such as AVOs) or non-LDAR emissions. Although the model
was limited to the duration of the simulation periods, the relative rates of change in cumulative emissions
suggest the LDSN-DRF system would continue to demonstrate emission reductions relative to scheduled
M21 performance.
This appendix describes the simulations in detail and presents the framework for systematically
determining the eDTA. This approach allows LDSN-DRF design practices to be leveraged for other
process units with similar historical leak profiles and response factors.
1 Introduction
Method 21 is an EPA protocol for detecting and quantifying leaks of volatile organic compounds in LDAR.
Method 21 inspections are conducted on schedules, typically monthly for pumps, quarterly for valves, and
annually for connectors. In the M21 procedure, each component interface is scanned with a calibrated flame
ionization detector (FID) or photoionization detector (PID) and the highest screening value (SV) is recorded. If
the screening value is equal to or greater than EPA's leak definition, this component fails the inspection and is
considered as a leaking component; an effective repair must be completed within a 15-day period unless
classified as a delay-of-repair (DOR) component. All Method 21 data is recorded in a database system such as
LeakDAS™ including the date and time of each monitoring event, the M21 results, repair information, as well
as the component tag number and chemical stream information. The LeakDAS™ software includes an emission
calculation module which utilizes built-in EPA emissions factors and correlation equations to allow total
fugitive emissions to be estimated for use in annual emission inventories and other mandated reports.
A leak detection sensor network (LDSN) combined with the detection response framework (DRF) provides a
totally different approach to LDAR. The LDSN monitors process units by detecting small gas plumes generated
by component leaks. When sensor detection signals reach preset threshold criteria, a detection notification is
issued and the DRF process is initiated. Due to continuous operation, the LDSN-DRF approach is able to
identify large leaks sooner, and therefore perform equally or better in total emission control when designed and
operated with an appropriate average detection sensitivity.
In order to implement an alternative work practice (AWP), the new work practice "must achieve a reduction in
regulated material emissions at least equivalent to the reduction in emissions achieved" under current federal
regulations (40 CFR 65.8(a)). This document presents an emission analysis that seeks to assess equivalency
based on our current understanding of potential LDSN-DRF performance, gained through controlled releases and
actual leaks in laboratory, test range, and complex industrial settings. Leak simulations were conducted
assuming LDSN-DRF had been employed instead of routine scheduled M21 over the historical periods analyzed.
Based on the LDSN-DRF design, operation, and a set of reality-based assumptions, the total fugitive emissions
were calculated at different sensitivity or detection threshold average (DTA) levels. By comparing these results with data
from the scheduled Method 21 current work practice (M21 CWP) required by applicable federal regulations, an
appropriate DTA value, called the eDTA, was determined. The eDTA represents the threshold value that the new
method must achieve in order to demonstrate equivalency to the CWP for each of the two pilot test units at the FHR
Corpus Christi West Refinery.
1.1 Definitions of Leaks and Leak Model Assumptions
• Emission Duration
The emission duration of any particular LDAR component is defined as the period from the leak start
time to the repair finish time.
The fugitive emissions total from each component is the sum of its daily emissions during the
simulation time period.
• Baseline Screening Value (SV)
The Method 21 screening value (SV) of a component prior to any leak, or after any successful repair
event, is defined as 0 ppm.
• Leak Definition
Based on the Method 21 leak definitions in applicable federal regulations, once the SV of a component is
500 ppm or greater, it is considered a leak event for that component. An exception is that the SV
for a leak event on pumps in synthetic organic chemical manufacturing industry (SOCMI) service is
1,000 ppm or greater.
• Time-To-Repair in Method 21 and in LDSN-DRF
For both the M21 CWP and the LDSN-DRF, the model assumes that once a specific leak has been detected
it will be repaired 7 days later. This assumption is based on historical records and the requirement under
both methods to conduct an initial repair attempt within 5 days of a leak being detected and a final
effective repair no later than 15 days after the leak was detected. Thus, in the M21 simulation the model
maintained the leak at a constant emission rate for the 7 days between detection and repair. There is some
anticipated time lapse between the initial LDSN potential source location (PSL) notification and the time
that a specific leak is identified through DRF activities. So, while both the M21 and the LDSN-DRF
simulations assumed 7 days from leak detection until leak repair, the LDSN-DRF simulation
also conservatively included 3 additional days at the elevated rates to account for the time from leak
existence until leak detection.
• Calculation of Historical Emissions
Historical emissions are calculated from LeakDAS™ data by using the daily SV and duration for each
individual component and then converting to an hourly emission rate (kg/hr) and a daily emission
amount (kg).
• Linear Growth Model
This model assumes the SV will grow with time and that the growth rate is constant. This model is
designed to represent progressive leaks, such as corrosion-initiated leaks and leaks related to wear of
seals and seating surfaces from repeated cycling. For each individual component, the growth rate is
defined as

Leak Growth Rate = (first failing SV − previous passing SV) / (time between the first failing M21 scan and the previous passing M21 scan)

The leak start time is defined as the date and time of the last passing M21 monitoring record in the
LeakDAS™ database. For any randomized simulation, the leak event is assumed to have randomly
occurred during the simulation period. (A simplified sketch applying this model together with the
correlation equations follows Table B-2 below.)
• Non-growth Model
This model assumes that the SV for a leaking component remains constant at the measured value above
the leak definition until it is repaired. This model theoretically represents leaks related to single-point
mechanical failures with no progression or cascading effects.
In the non-growth model, the leak start time is defined as the midpoint between the date and time of the
last passing M21 monitoring record and the date and time of the first failing M21 monitoring record in
the LeakDAS™ database. For any randomized simulation, the leak event is assumed to have randomly
occurred during the simulation period.
• Components with Multiple Leaks
In the simulation, if a new leak event occurred before the previous leak event on the same component
was repaired, the larger leak growth rate (or SV for non-growth model) is assumed to supersede the
current simulated leak growth rate (or SV for non-growth model) starting from the new leak event start
time. If the new leak event starts during the previous event's repair time, the current repair is assumed
to correct both leaks. As shown in Figure B-1 (top), the third historical leak (HF3) will be fixed during the
repair of the first historical leak (HF1) and the second historical leak (HF2).
For the Monte Carlo simulation, there are several considerations that justify addressing those
components in the dataset that have overlapping leaks in this manner: (1) each set of leak records is
from a single process unit and each component tag has a distinct physical location assigned; (2) in the
field, it is common for a new leak to be detected on a component during subsequent routine scheduled
monitoring after an earlier leak has been repaired. In common parlance, these are called "repeat leaks"
or "chronic leakers" and should be reflected in the simulation; (3) since M21 results are used to validate
the repair status in the model, all overlapping leaks on a component are considered to have been
corrected after the successful repair.
Figure B-1. Illustration of Simulating Overlapped Leak Events for a Single Component (Top Depiction, Linear Growth Model); Example of Overlapped Leak Events on an Actual Component (Tag #102776 in the Mid-Crude Unit, DTA(tag) = 15,000 ppm, 2016-2018; Bottom Graph)
• Cluster of Multiple Leaks
The LDSN is a chemical plume or emissions monitoring system. When multiple leaks occur from
different leak interfaces on a single component such as a pump, the combined impacts of these leaks
can be anticipated to contribute to the overall size of the plume and the sensor detection signals. A
similar effect can result due to small leaks emanating from different components that are co-located
within a small area of sensor coverage. This leak cluster aggregation effect can be evaluated through a
cluster simulation.
In the cluster simulation, the sum of SVs or distance (D) weighted SVs from a cluster of components is
compared to an assigned threshold to determine if the LDSN has triggered a detection event. Cluster
grouping is determined by both "unit ID" and elevation levels. For examples about leak clusters found
by M21 and detailed analysis, see Appendix C.
• Cluster Repair Threshold
Whenever a detection event has occurred in the cluster simulation, it is assumed that leaks above the
cluster repair threshold will be repaired one by one. If no individual leaks above this threshold are
found, the model assumes that up to three leaks, whose combined SV is assumed to be greater than the
cluster repair threshold, will be repaired under the defined DRF.
• Emission Rate Correlation
The emission factors and correlation equations shown in Tables B-1 and B-2 are used in the emission rate
calculations. These are based on information in the EPA Protocol for Equipment Leak Emission Estimates
and the TCEQ Technical Supplements to Emission Inventory Guidelines. A simplified sketch of how these
factors are applied follows Table B-2 below.
Table B-1. Emission Factors Used in the SOCMI Equations
[Tabulated SOCMI correlation parameters by equipment class (valves, pumps, compressors, connectors, relief valves, and sampling connections) and service type (gas/vapor, light liquid, heavy liquid): pegged 10,000 ppm and 100,000 ppm factors, correlation multipliers and exponents, and default-zero factors.]
Table B-2. Emission Factors Used in the Refining Industry Equations (Report Date: 11/2/2018)
[Tabulated FHR Refining correlation parameters by equipment class (compressors, connectors, pumps, relief valves, and valves) and service type: pegged factors, correlation multipliers and exponents, and default-zero factors.]
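To make the leak growth model and the correlation-equation calculation concrete, the following Python sketch strings a linear-growth SV trajectory together with the default-zero/correlated/pegged emission rate logic. The coefficients and the 150 ppm/day growth rate are placeholders for illustration; the actual parameters come from Tables B-1 and B-2 and from each component's M21 history, and this sketch is not the project's calculation code.

from dataclasses import dataclass

@dataclass
class CorrelationFactors:
    """Placeholder correlation parameters; the real values come from Tables B-1/B-2
    (EPA Protocol for Equipment Leak Emission Estimates / TCEQ supplements)."""
    default_zero_kg_hr: float   # ER assigned when SV = 0 ppm
    multiplier: float           # A in ER = A * SV**B
    exponent: float             # B in ER = A * SV**B
    pegged_kg_hr: float         # ER assigned when the analyzer pegs
    pegged_sv_ppm: float = 100000.0

def emission_rate_kg_hr(sv_ppm: float, f: CorrelationFactors) -> float:
    """Correlation-equation approach: default-zero, correlated, or pegged emission rate."""
    if sv_ppm <= 0:
        return f.default_zero_kg_hr
    if sv_ppm >= f.pegged_sv_ppm:
        return f.pegged_kg_hr
    return f.multiplier * sv_ppm ** f.exponent

def linear_growth_sv(days_since_start: float, growth_rate_ppm_per_day: float) -> float:
    """Linear growth model: the SV grows at a constant rate from 0 ppm at the leak start."""
    return max(0.0, growth_rate_ppm_per_day * days_since_start)

# Example: daily emissions of one simulated leak over 60 days (illustrative numbers only).
factors = CorrelationFactors(default_zero_kg_hr=6.6e-7, multiplier=1.9e-6,
                             exponent=0.87, pegged_kg_hr=0.11)
growth_rate = 150.0  # ppm/day, back-calculated from a component's M21 history (hypothetical)
daily_kg = [emission_rate_kg_hr(linear_growth_sv(day, growth_rate), factors) * 24.0
            for day in range(60)]
total_kg = sum(daily_kg)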
1.2 Definitions of Detection Terminology
• Detection Threshold (DT)
The detectability, or Detection Threshold (DT), of individual sensors depends on the distance of the
leak from the nearest sensor or sensors under similar wind conditions. Leaks closer to a sensor, for
example, can be detected at a smaller size than leaks farther away. For each sensor that is part of an
appropriate sensor placement plan, there is a DT band across the sensor coverage radius. DT is
expressed in ppm based on FID measurements per the EPA Method 21 procedure.
• Detection Threshold Average (DTA)
The Detection Threshold Average (DTA) is the average value of the DT band and is expressed as
DTA = avg(DT1, DT2, ...). The DTA is an average measure of the LDSN-DRF's sensitivity for the process unit.
• Detection Threshold Average required for Equivalency (eDTA)
To differentiate DTA from actual sensor performance, eDTA is used to represent a target sensitivity
level that LDSN-DRF as a whole must achieve in order to demonstrate M21 CWP equivalency in total
fugitive emissions for a given process unit. The eDTA value is generated by simulations.
• Detection Threshold Upper Limit (DTU)
The DTU is the upper limit of the DT band and is expressed as DTU = max(DT1, DT2, ...). The DTU
represents the DT value, or smallest leak, that could be detected by the sensor network at the farthest
distance from a sensor. Figure B-2 illustrates the relationship between DTA and DTU assuming an
even distribution of components within the sensor coverage. DTU is typically greater than 1.5 × DTA
because the smallest value in the DT band can be very small (particularly when a leak occurs right
next to a sensor). A small numerical illustration of DTA and DTU appears at the end of this subsection.
• Emission Rate (ER)
The emission rate (ER) is total emissions per hour, in kg/hr.
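As a small numerical illustration (the DT values below are hypothetical, not measured), the DT band statistics defined above reduce to simple aggregates:

dt_band_ppm = [1600, 2500, 3800, 4700, 6400]   # hypothetical DT values across the coverage radius

dta_ppm = sum(dt_band_ppm) / len(dt_band_ppm)  # DTA = avg(DT1, DT2, ...)
dtu_ppm = max(dt_band_ppm)                     # DTU = max(DT1, DT2, ...)

print(f"DTA = {dta_ppm:.0f} ppm, DTU = {dtu_ppm:.0f} ppm, DTU/DTA = {dtu_ppm / dta_ppm:.2f}")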
1.3 Description of Leak Detection Scenarios for Simulations
• Emission Calculation with DTA(tag)
A detection event is defined as the SV of a single component tag reaching the DTA.
• Emission Calculation with DTA(cluster)
A detection event is defined as the sum of SVs from multiple leaks within one cluster reaching DTA.
• Emission Calculation with DT(tag)
A detection event is defined as the SV of a single component reaching the DT value at the actual
distance of the component from the closest sensor or sensors. In the DT(tag) simulation, a distance-related
threshold on SV/D², where D is the shortest distance to nearby sensors, is used to determine whether the
LDSN has a detection event. In this scenario, leaks farther away from sensors produce smaller detection
signals than leaks of the same size closer to the sensors. For details about the distance relationship,
please refer to Appendix C.
• Emission Calculation with DT(cluster)
A detection event occurs when the sum of SV/D² contributions from a cluster of components reaches the
detection threshold. In this scenario, a leak closer to a sensor has a greater contribution to the total
detection signal than another leak of the same size within the cluster, and their contributions to the
detection signal are mathematically modelled. A brief sketch of these four detection criteria follows.
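The four detection-event criteria can be paraphrased in a few lines of Python. In the sketch below, k_tag and k_cluster are assumed distance-scaled thresholds calibrated so that the resulting DT band averages to the intended DTA; the sketch is illustrative and is not the simulation code.

def detect_dta_tag(sv_ppm: float, dta_ppm: float) -> bool:
    """DTA(tag): a single component's SV reaches the average detection threshold."""
    return sv_ppm >= dta_ppm

def detect_dta_cluster(cluster_svs_ppm: list, dta_ppm: float) -> bool:
    """DTA(cluster): the summed SVs of co-located leaks reach the average threshold."""
    return sum(cluster_svs_ppm) >= dta_ppm

def detect_dt_tag(sv_ppm: float, distance_ft: float, k_tag: float) -> bool:
    """DT(tag): the distance-weighted signal SV/D^2 from one component reaches a
    distance-scaled threshold (k_tag is an assumed calibration constant)."""
    return sv_ppm / distance_ft ** 2 >= k_tag

def detect_dt_cluster(cluster: list, k_cluster: float) -> bool:
    """DT(cluster): the sum of SV/D^2 contributions from a cluster of (SV, distance)
    pairs reaches the distance-scaled threshold."""
    return sum(sv / d ** 2 for sv, d in cluster) >= k_cluster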
2 Simulation of Historical M21 Leaks
The simulations used 2014-2018 data from the LeakDAS™ database to calculate historical emissions in the
Meta-Xylene unit. Figure B-3 shows the daily fugitive emissions of components with leak events in that process
unit. Yearly emission "cycles" can be seen, which correspond to annual scheduled M21 activities. The
Mid-Crude unit simulations used 2016-2018 data; the shorter period was considered more representative than
previous years because routine connector monitoring was added to the Mid-Crude LDAR program in 2016.
The daily fugitive emissions of components with leak events in the Mid-Crude unit are
also shown in Figure B-3. The cumulative emissions of these two process units were plotted to illustrate the
total emission development trend for the corresponding periods (Figure B-4).
Some points of interest identified when developing the simulations and comparing the simulated M21 results to
historical data recorded in LeakDAS™ include:
(1)	In the historical data, a new leak event starts with one failing monitoring result and ends with one passing
monitoring result. Each of the results has corresponding inspection dates and times. Those dates are primarily
based on EPA's mandated inspection schedules. There is still some inherent variation associated with the
inspection dates in the historical data because a component requiring quarterly monitoring might be monitored
earlier in the quarter or later during subsequent quarters. This variation in monitoring schedule can be impacted
by weather ("rain-out" conditions in the field), LDAR components being temporarily out of service, the
availability of monitoring personnel, periodic restrictions on access to process units due to safety considerations
during start-ups, etc. In contrast to periodic scheduled monitoring, a component may experience multiple leak events
in the LDSN simulations before it is repaired. When the simulated inspection date is different from
the historical data, a component's leak events may overlap with each other if the component had multiple leak
events. For larger process units where the number of components is over 10,000, one uniform inspection date
for every component cannot be set. The direct consequence is that the simulated M21 emissions are highly
dependent on the selection of an inspection date because of the potential for overlapping leaks. This
phenomenon is illustrated in Figure B-5. The blue line shows the historical leak events of component number
251537 (a connector in the Meta-Xylene unit) by connecting all the historical M21 SVs with the actual
installation time and passing values. The yellow line (assumes the annual M21 inspection starts on Jan. 1st) and
green line (assumes the annual M21 inspection starts on June 1st) are the simulated M21 SVs without the real
installation time and randomization of leak start time. As expected, the yellow and green lines extend to the
whole simulation period due to the exclusion of actual installation time. All three lines have the same leak start
time, which coordinates with our assumptions on how the leak developed under the linear growth model. The
M21 inspection schedule for connectors is annual, so the historical leak (blue line) was detected during the
annual inspection and repaired shortly after. For the simulated M21, the component was repaired earlier for the
green line than the yellow line because of the different initial M21 inspection dates (the middle vs. the start of a
year). In this case, because the historical leak events were based on an annual schedule, a different simulated
M21 initial date will generate different simulated M21 emissions.
(2)	LeakDAS™ emission calculations include several parameters for actual field conditions, i.e., the component
service time (installation, removal, and "out of service" time for each LDAR component), passing SVs, and
failing SVs. The yearly emissions reported are based on field measurements and component service time during
the given year, which is different from the simulation data. For a simulated leak event, consideration must be
given for how best to model a component that may have been removed from service during the leak event. The
LeakDAS™ emission calculation module is most typically used to calculate emission estimates for annual
reporting (January 1st - December 31st). However, for longer-term models, the simulated leak events can occur
across yearly boundaries. The emissions from a leaking component might start during Oct. 2015 but not be
detected until routine scheduled monitoring was conducted at some time in the next calendar quarter which
would be in 2016. In this example, the LeakDAS™ emission calculation results for CY2015 would not account
for the increased emission trend of a component that was next monitored outside the calculation range.
(3)	It is not uncommon for audio visual olfactory (AVO) leaks on LDAR components to be entered into
databases such as LeakDAS™. This approach allows the facility to follow the normal leak repair protocols and
maintain records to show compliance with applicable regulations. When an AVO leak is found in the field by
operations personnel, an initial repair attempt is conducted. If this corrects the AVO indications, the component
is considered repaired. If the repair does not seem effective, then a monitoring technician is dispatched to
conduct an M21 inspection to provide an M21 result for comparison with applicable leak definitions. As
expected, the raw LeakDAS™ datasets used for these simulations included several AVO leak records without an
M21 measurement. The LeakDAS™ emission calculator module includes an option to "include failed visuals at
10,000 ppm". Unless that option is selected, the AVO leaks are not assigned a numerical SV for inclusion in
emission calculations performed in LeakDAS™. The raw datasets include readings that range from thousands
of ppm to 100,000 ppm for M21 monitoring that occurred shortly after AVO discovery.
(4)	The leak event duration is the time between the initial failing M21 monitoring results and the follow-up
passing M21 results. The leak event duration varies among components with an average duration of 5.6 days
and 11.5 days in the Mid-Crude unit and Meta-Xylene unit respectively. For leaking components that have been
isolated from VOC service or that will require a process unit shutdown to complete an effective repair, there can
be an allowance to go beyond the 15-day repair deadline. If failing records in the dataset that last more than 30
days are excluded, the average duration is 3.5 days and 5.0 days for these process units.
In response to the items of interest listed above, several different methods have been employed to address each
issue and to conservatively estimate M21 results during the modeled simulations. These methods include:
(1)	The worst-case SV is assumed for the overlapped leak events during the simulation. In both non-growth and
linear growth models, this worst case is conservatively assumed, i.e., the largest leak growth slope or the leak
value will replace the smaller ones if overlap occurs. In this way, if there are multiple leak events for a
component, all will be considered in the model. This approach avoids the potential under-estimation of the
emissions from those leak events.
(2) The component installation date and removal date parameters are not included in the simulation: components
are assumed to be in service, and capable of emitting, during the entire simulation period, so components removed
from service in the raw datasets are not modeled as zero emitters. Instead, the leak event start times were
randomized thousands of times, so the resulting simulated average emissions should not be impacted by
variations in M21 scheduling.
(3) It is reasonable to assume that, through the use of a continuous monitoring system such as the LDSN, AVO
leaks can be detected sooner, while they are still smaller than leaks that could be identified by human senses
during general operator or AVO rounds in an operating facility. Earlier detection of large leaks can provide
considerable emission reductions. Due to the complexity of comparing LDSN detections to non-scheduled AVO
detections, the simulations did not include leaks associated with AVOs. Demonstrating M21 CWP equivalency
without the added benefit of early AVO detections of an LDSN is a more conservative approach and likely
understates the overall potential emission reductions available through the LDSN-DRF approach.
(4)	In the M21 simulation, the leak event duration is assumed to be 7 days from the date of leak detection to the
date the leak was repaired. For the LDSN models, a similar 7-day leak duration from leak detection to leak
correction is used. However, because there can be some lapse between the time that an emission anomaly is
detected by the LDSN and the time that a field technician identifies a specific leaking component in the field,
three additional days were added to the detection time in the LDSN-DRF.
The methodology and assumptions described above were applied to the historical LeakDAS™ data in order to
model M21 emissions for comparison to LDSN models using similar assumptions. Understandably, emission
estimates based solely on the raw M21 SVs and correlation equations differ from the simulated M21 results.
However, by applying the simulation assumptions neutrally to all simulations including the M21, DTA(tag),
DTA(cluster), DT(tag) and DT(cluster), the results of the simulations can be fairly used to establish emission
control equivalency. Although it is of interest that traditional M21 emission estimation methods don't provide
identical results to M21 simulations, it is more important for this equivalency discussion to ensure the results of
the M21 simulation and the LDSN simulations provide for a fair and equitable comparison of emission control
effectiveness.
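To illustrate items (1), (2), and (4) above, the following Python sketch randomizes a leak start time within the simulation window and finds the next scheduled M21 inspection, then applies the 7-day repair assumption. It is a simplification of the approach described here (for example, it assumes the leak is already above the leak definition at the next scan), not the simulation code itself.

import random

M21_FREQ_DAYS = {"PUMP": 30, "VALVE": 90, "CONNECTOR": 365, "OTHER": 90}  # per Table B-3
REPAIR_DAYS_M21 = 7  # assumed days from detection to completed repair

def simulated_m21_detection_day(leak_start_day: float, component_type: str,
                                first_inspection_day: float = 0.0) -> float:
    """First scheduled inspection after the leak starts (simplification: the leak is
    assumed to already exceed the leak definition by that scan)."""
    freq = M21_FREQ_DAYS.get(component_type, 90)
    if leak_start_day < first_inspection_day:
        return first_inspection_day
    cycles_elapsed = (leak_start_day - first_inspection_day) // freq
    return first_inspection_day + (cycles_elapsed + 1) * freq

def randomized_leak_start(sim_length_days: int = 5 * 365) -> float:
    """Leak start time randomized uniformly over the simulation period."""
    return random.uniform(0, sim_length_days)

# One trial: a valve leak starting at a random time is found at the next quarterly
# scan and repaired 7 days later.
start_day = randomized_leak_start()
detected_day = simulated_m21_detection_day(start_day, "VALVE")
repaired_day = detected_day + REPAIR_DAYS_M21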
Figure B-3. Daily Emissions of Failed Components in the Meta-Xylene Unit, 2014-2018 (Left, LeakDAS™) and the Mid-Crude Unit, 2016-2018 (Right, LeakDAS™)
Figure B-4. Cumulative Emissions of Failed Components in the Meta-Xylene Unit, 2014-2018 (Left, LeakDAS™) and the Mid-Crude Unit, 2016-2018 (Right, LeakDAS™)
Figure B-5. LeakDAS™ Data vs. Simulated M21 Daily Emissions of Tag 251537 in the Meta-Xylene Unit, 2014-2018 (Linear Growth Model)
3 Emission Simulation of Randomized Historical Leaks
Leak events were randomized for all the historical leaks for three reasons: (1) to minimize the impact of the
variation in the historical M21 schedule on the emission calculation; (2) to determine the detection level that is
equivalent to scheduled M21 emissions under different scenarios; (3) to validate the effectiveness and the
robustness of the LDSN. A randomized leak events profile was generated by randomizing both the leak event
start time and the leak event sequence.
An additional goal was to identify the theoretical detection threshold that would generate equal or less
emissions than CWP. The DTA value that creates equal or less emissions compared to CWP is referred to as the
"eDTA".
3.1 Simulation Basics of Scheduled M21 Emissions
Simulated scheduled M21 emissions were used as the "baseline" for the emission comparison: (1) all historical
leak events were randomized following EPA's Guiding Principles for Monte Carlo Analysis [1]; (2) the typical
EPA M21 schedule (Table B-3) was used to allow the simulation to be readily applicable and leverageable to
other process units. Based on historical data analysis, it is assumed that leaks detected by scheduled M21 will
be repaired 7 days after the leak is identified. A simulation example for the Meta-Xylene unit (Table B-4) shows
that the scheduled M21 emissions of the randomized leaks have much less variance than the historical leaks. A
similar situation occurs in the simulated M21 results of the Mid-Crude unit shown in Table B-5. In the following
simulations, January 1, 2014 (for the Meta-Xylene unit) and January 1, 2016 (for the Mid-Crude unit) are used
as the first inspection date for scheduled M21 inspections.
Table B-3. EPA & Modeled M21 Schedules
Component Type    EPA Monitoring Frequency    Modeled M21 Schedule (days)
Connector         Annual                      365
Pump              Monthly                     30
Valve             Quarterly                   90
Others            Quarterly                   90
Table B-4. Simulated Scheduled M21 Emissions of the Meta-Xylene Unit, 2014-2018, Linear Growth Model
                       Simulated Total M21 Emissions (kg)
Inspection Date    Historical Data    Randomized (10000 trials)
1-1                27816              29551
2-1                32750              29799
3-1                37179              30293
4-1                33597              31288
5-1                39057              32551
6-1                44697              34227
Average            35849.2            31285.0
STD                16.2%              5.8%
Table B-5. Simulated Scheduled M21 Emissions of the Mid-Crude Unit, 2016-2018, Linear Growth Model
                       Simulated Total M21 Emissions (kg)
Inspection Date    Historical Data    Randomized (10000 trials)
1-1                3932               3445
2-1                4676               3483
3-1                4938               3591
4-1                5533               3790
5-1                6314               4047
6-1                5636               4405
Average            5171.4             3793.5
STD                16.2%              9.8%
3.2 Emission Simulation on the Meta-Xylene Unit
The following conditions are used for the Meta-Xylene unit simulation:
Simulation settings:
Simulation period: 2014/1/1 - 2018/12/31
Components: components with failed events in the simulation period
Emission equation: SOCMI
DRF time for LDSN: 10 days
Repair time for M21: 7 days
Allow leak overlap: Yes
Leak overlap handling: as described in Section 1
M21 inspection starting date: 2014-1-1
AVO-visual records: AVO-visual records related failed events are excluded
Cluster repair threshold: 3000 ppm
Component location: randomized within cluster
Component installation/removal time: excluded
Number of trials: 10000
Leak events profile: randomize leak start time and sequence for the same tag
The Molex LDSN solution demonstrates a reduction in the total fugitive emission of the Meta-Xylene unit by
shortening the duration of large leaks. To explore the effectiveness of the LDSN solution, four scenarios were
simulated using two leak growth models (linear growth and non-growth) for the Meta-Xylene unit. The
scenarios include DTA(tag), DTA(cluster), DT(tag), and DT(cluster). In Tables B-6 and B-7, each scenario is
compared to the simulated scheduled M21 results. In these scenarios, DTA(tag) and DTA(cluster) are used to
simulate single-tag leak events and clusters of leaks in local areas, respectively. DT(tag) is used in
simulations based on the distance of the leak from the nearest sensor, and DT(cluster) is used for simulations based
on both the distance and the cluster assumptions. The DTA(tag) simulation is the most straightforward model.
It relies on the simplest assumptions for the detection event definition. The DTA(cluster) model considers the
additive effects associated with multiple leaks within a small area, which are known to contribute to the
development of plumes detected by the LDSN. Both DT(tag) and DT(cluster) consider the actual distances of
components from the nearest sensors, but DT(cluster) also considers the cluster effect while DT(tag) does not.
From the sensor sensitivity testing that FHR, Molex, and EPA ORD conducted using controlled releases of
isobutylene, which has a response factor of 1, the LDSN DT band for the Meta-Xylene unit is 1,600 - 6,400 ppm,
with a DTA of 4,000 ppm. The simulation results indicate that the average emissions for DTA(tag) 4,000 ppm
are much less (43 - 70%) than the scheduled M21 emissions for both linear growth and non-growth models.
After applying the distance-based detection threshold to the simulation (with DTA of 4,000 ppm), the average
total emissions are estimated to be reduced by at least 60% for the non-growth model and 80% for the linear
growth model when compared to the scheduled M21 emissions. From the current simulation results, the
DTA(tag) is ~7,500 - 12,500 ppm for the Meta-Xylene unit, which is higher than the DT band results based on
relative RF. This demonstrates that the DTA of the installed system is lower than the DTA(tag) and is indicative
of an LDSN that is more effective at controlling fugitive emissions than the M21 CWP. The simulation results
(with a DTA level of 12,500 ppm) for DT(tag) and DT(cluster) show that the total fugitive emissions of the
Meta-Xylene unit were reduced by 29 - 66% when compared to the simulated M21 CWP. The simulation
results demonstrate that the LDSN is easily equivalent to or better than the simulated M21 CWP.
Table B-6. Average Total Emissions (2014-2018) of the Meta-Xylene Unit with Randomized Leak Events (Linear Growth Model, 10000 trials)
PPM      DTA(tag), kg   DTA(cluster), kg   DT(tag), kg   DT(cluster), kg   M21, kg
4000     8886           3871               5212          4077
5000     11156          5285               6601          4727
7500     16639          9094               10001         6488
10000    21678          12397              13325         8289
12500    26119          15327              16637         10124
                                                                           29546
15000    30114          18088              20112         11994
17500    33849          20741              23596         13891
20000    37329          23283              26721         15759
22500    40629          25745              29550         17617
25000    43728          28118              32202         19464
Table B-7. Average Total Emissions (2014-2018) of the Meta-Xylene Unit with Randomized Leak Events (Non-growth Model, 10000 trials)
PPM      DTA(tag), kg   DTA(cluster), kg   DT(tag), kg   DT(cluster), kg   M21, kg
4000     14391          8536               9391          8183
5000     17198          9993               10808         8876
7500     22081          14483              13842         10751
10000    26593          17910              16035         12267
12500    27637          20234              17910         13566
                                                                           25224
15000    29376          22611              19418         14647
17500    30344          24107              20686         15577
20000    31594          25326              21768         16323
22500    32362          26963              22685         17008
25000    33323          27811              23509         17641
Figures B-6 and B-7 show comparisons of the daily emission rate and cumulative emission rate for both models.
From Figure B-6, the daily emission rates of the various simulation scenarios are generally more consistent than
the scheduled M21 emission rates. The relatively larger peaks associated with simulated M21 daily emissions
are the result of new leaks accumulating over time until the periodic scheduled M21 identifies the leaks that
developed since the previous monitoring cycle. Thus, the continuous monitoring of the LDSN-DRF solution not
only helps reduce the cumulative emissions over the 5-year period but it also reduces the peak short-term
emissions associated with the maximum daily emission rate. The trend of the cumulative fugitive emission
differences between scheduled M21 and DTA(cluster), DT(tag) and DT(cluster) from Figure B-7 either
becomes constant or expands over time. This suggests that the LDSN solution will continue to result in lower
emissions relative to M21 CWP even beyond the duration of the modelled scenarios.
Figure B-6. Daily Fugitive Emission Rate of the Meta-Xylene Unit, 2014-2018: Linear Growth Model (Left) and Non-growth Model (Right)
Figure B-7. Cumulative Fugitive Emissions of the Meta-Xylene Unit (2014-2018): Linear Growth Model (Left) and Non-growth Model (Right)
The probability of achieving equal or lower total fugitive emissions was examined by comparing different
detection thresholds with the scheduled M21 emissions under the same, randomized leak profiles. Simulations
were repeated numerous times, resulting in a certain percentage of the simulations achieving the desired
outcome (in this case, estimating the same amount or lower fugitive emissions than M21 CWP). Confidence
level (CL) is the percentage of times that the total fugitive emissions resulting from the LDSN would be less
than or equal to the total fugitive emissions of M21 CWP. CL is a measurement of the reliability of the LDSN
solution under different leak profiles. In general, the resulting CL decreased as the values for DTA(tag) (Figure
B-8 blue) and DTA(cluster) (Figure B-8 orange) increased. The values for DT(tag) (Figure B-8 green) and
DT(cluster) (Figure B-8 red) represent distance-based DT bands covering varying DT values represented as a
DTA. As shown in Figure B-8, the CL is over 67% (EPA guideline target CL) when the DTA(tag) is 7500 -
12500 ppm (non-growth and linear growth models). A DTA(cluster) of 17,500 - 25,000 ppm will achieve the
67% target CL. The CL's for DT(tag) and DT(cluster) consistently show over 67% up to 20,000 ppm DTA.
This is the result of the relatively high sensitivity of the LDSN to the gas species (represented by low response
factor) and relatively high sensor density in the Meta-Xylene unit. The simulation results suggest that LDSN-
DRF is a robust and reliable emission monitoring system and is equivalent to or better than the M21 CWP with
the current system design and sensor placement.
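A minimal sketch of how a confidence level and a candidate eDTA could be extracted from paired trial results is shown below; the 67% target comes from the text, while the function names and the sweep structure are our own simplifications.

def confidence_level(ldsn_totals_kg: list, m21_totals_kg: list) -> float:
    """Fraction of paired trials in which the LDSN scenario emits no more than M21 CWP."""
    wins = sum(1 for ldsn, m21 in zip(ldsn_totals_kg, m21_totals_kg) if ldsn <= m21)
    return wins / len(ldsn_totals_kg)

def candidate_edta(results_by_dta: dict, target_cl: float = 0.67) -> float:
    """Largest DTA in a sweep whose CL still meets the 67% target.

    results_by_dta maps a candidate DTA (ppm) to (LDSN trial totals, M21 trial totals);
    the sketch assumes at least one candidate passes.
    """
    passing = [dta for dta, (ldsn, m21) in results_by_dta.items()
               if confidence_level(ldsn, m21) >= target_cl]
    return max(passing)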
Figure B-8. CL with Different Detection Simulation Methods (Meta-Xylene Unit, 2014-2018): Linear Growth Model (Left) and Non-growth Model (Right)
3.3 Mid-Crude Unit
For the Mid-Crude unit, the following assumptions and settings are used:
Simulation settings:
Simulation period: 2016/1/1 - 2018/12/31
Components: components with failed events in the simulation period
Emission equation: Refining industry
DRF time for LDSN: 10 days
Repair time for scheduled M21: 7 days
Allow leak overlap: Yes
Leak overlap handling: as described in Section 1
M21 inspection starting date: 2016-1-1
AVO-visual records: AVO-visual records related failed events are excluded
Cluster repair threshold: 3000 ppm
Component location: randomized within cluster
Number of trials: 10000
Leak events profile: randomize leak start time and sequence for the same tag
Simulations similar to those previously described for the Meta-Xylene unit were performed for the Mid-Crude
unit. Adjustments required for the Mid-Crude model included: (1) two large heater units were excluded from the
emission calculation; and (2) the simulation period is 3 years (2016-2018) instead of 5 years. Since routine
connector monitoring was added to the Mid-Crude LDAR program in 2016, these years were considered more
representative than prior years.
The effectiveness of the LDSN was first validated utilizing an average total fugitive emission simulation. Table
B-8 and Table B-9 summarize the results of this simulation: (1) with a DTA(tag) of between 7,500ppm and
17,500 ppm, the LDSN provides a reduction in total fugitive emissions when compared to the simulated M21.
This is true for both the linear growth and non-growth models; (2) if distance factor is included in the
simulation, the average total fugitive emissions will be further reduced; (3) An LDSN with a DTA(cluster) of
12,500 -17,500 ppm will also reduce the total emissions based on both leak models (linear growth and non-
growth). Fugitive emission reductions were previously shown in the simulation results for the Meta-Xylene unit,
which has an average response factor of ~0.8. Although the response factor for some LDAR streams in the
Mid-Crude unit can be as high as 3 (the range is 0.7-3.0), the LDSN-DRF simulations still demonstrate total fugitive
emission reductions for this unit when compared to the M21 CWP.
Table B-8. Average Total Emissions (2016-2018) of the Mid-Crude Unit with Randomized Leak Events (Linear Growth Model, 10000 trials)
PPM      DTA(tag), kg   DTA(cluster), kg   DT(tag), kg   DT(cluster), kg
5000     1403           1200               1357          1363
7500     1859           1665               1815          1744
10000    2246           2048               2078          1969
12500    2586           2382               2304          2178
15000    2887           2677               2496          2362
17500    3165           2948               2700          2563
20000    3417           3189               2904          2761
22500    3659           3416               3122          2973
Table B-9. Average Total Emissions (2016-2018) of the Mid-Crude Unit with Randomized Leak Events (Non-growth Model, 10000 trials)
PPM      DTA(tag), kg   DTA(cluster), kg   DT(tag), kg   DT(cluster), kg   M21, kg
5000     2270           1980               1698          1809
7500     2597           2292               1965          2008
10000    3144           2673               2147          2140
12500    3308           2897               2287          2234              3222
15000    3309           3103               2418          2331
17500    3365           3160               2518          2415
20000    3367           3264               2599          2491
22500    3391           3304               2688          2575
The trend of the cumulative fugitive emission differences between scheduled M21 in the Mid-Crude unit and
DTA(tag), DTA(cluster) (with corresponding detection thresholds), DT(tag), and DT(cluster) from Figure B-9
either becomes constant or expands over time. This suggests that the LDSN solution will continue to result in
lower emissions relative to M21 CWP during and even beyond the duration of the modelled scenarios.
Figure B-9. Cumulative Emissions of the Mid-Crude Unit (2016-2018): Linear Growth Model (Left) and Non-growth Model (Right)
The CLs of the Mid-Crude unit were calculated for different detection threshold values. To achieve a 67% CL,
the DTA(tag) and DTA(cluster) values are approximately 7,500 - 17,500 ppm from the non-growth model and
12,500 - 17,500 ppm from the linear growth model, respectively. The CLs of DT(tag) and DT(cluster) at a DTA
of 12,500 ppm are over 95% in both the linear growth and non-growth models. The CLs in the Mid-Crude
simulation results indicate that LDSN-DRF can be an effective and reliable method to reduce total fugitive
emissions when compared to M21 CWP.
Figure B-10. CL with Different Detection Thresholds (Mid-Crude Unit, 2016-2018): Linear Growth Model (Left) and Non-growth Model (Right)
4 Emission Simulation by the Monte-Carlo Method
A Monte-Carlo simulation method was developed to further validate the robustness of the LDSN solution. This
method employs random sampling to generate a profile of leak events and then uses this profile to calculate
total fugitive emissions under different emission scenarios. The CL is used to evaluate the performance of
LDSN-DRF. We performed 10,000 trials for both units in the Monte Carlo simulation.
4.1 Basic Assumptions in the Monte-Carlo (MC) Simulation
The original leak profile (all leak events plus the index of the leak events) is used to generate a sample pool
(indexed leak events) based on the input set of components. In each simulation trial, randomized sampling was
performed a set number of times (known as the sampling number), and a randomized leak start time was
assigned to every sampled leak event to build a new leak profile. The new leak profile is used to calculate the
simulated total fugitive emissions.
• Input Set of Components
The clusters to simulate are used as the input set of components.
• Sample Pool of Leak Events
A sample pool contains all the leak events from the input set of components. The leak records in the sample
pool are indexed to back-track the original leak growth rate or leak size.
• Sampling Number
The sampling number is determined by the total number of the leak events within the input set of
components.
• Updated Cluster
The input set of components might be different from the updated cluster because of the randomized
sampling process. From the generated new leak profile, an updated cluster is made up of components with
valid leak events after the sampling process.
• Emission Calculation
A flowchart of the simulation process is shown in Figure B-11. Once a new leak profile is generated, the total
fugitive emissions are calculated using DT(tag), DTA(tag), DT(cluster), and DTA(cluster) in a manner similar
to that described in the previous sections.
Figure B-11. Monte Carlo Simulation Process
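The following Python sketch outlines one Monte Carlo trial under the assumptions of Section 4.1: draw indexed leak events from the sample pool, assign each a random start time, and hand the resulting profile to an emission-calculation routine. Sampling with replacement and the function signatures are our own simplifications, not the project's implementation.

import random

def monte_carlo_trial(sample_pool: list, sampling_number: int,
                      sim_days: int, emission_model) -> float:
    """One MC trial: resample indexed leak events, randomize their start times, and
    compute total fugitive emissions with the chosen detection scenario."""
    drawn = random.choices(sample_pool, k=sampling_number)          # sampling with replacement (assumed)
    profile = [{**event, "start_day": random.uniform(0, sim_days)}  # randomized leak start time
               for event in drawn]
    return emission_model(profile)  # e.g., a DT(tag), DTA(tag), DT(cluster), or DTA(cluster) calculator

def run_trials(sample_pool, sampling_number, sim_days, emission_model, n_trials=10000):
    """Repeat the trial 10,000 times to build the distribution of total emissions."""
    return [monte_carlo_trial(sample_pool, sampling_number, sim_days, emission_model)
            for _ in range(n_trials)]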
4.2 Monte Carlo Simulation Results
The CL is calculated using different thresholds for both process units to validate the effectiveness and reliability
of the LDSN solution (Figure B-12 and Figure B-13). The CL of a DTA(tag) of 10,000 - 12,500 ppm for the
Meta-Xylene unit Monte-Carlo simulation is over 67%. The DTA(cluster) level is approximately 20,000 -
25,000 ppm with 67% as the target CL. The DT(tag) and DT(cluster) scenarios, which take the DT band into
consideration, are more similar to field conditions, and therefore the CL level is more representative of the
anticipated performance of the LDSN. From the MC simulation results of DT(tag) and DT(cluster), the CLs are
mostly over 67% (except for DT(tag) with a DTA(tag) level of 25,000 ppm). This corresponds to the low
response factor and high sensor density in the Meta-Xylene unit. The MC simulation results are similar to the
results of the randomized leak events simulation and provide additional confirmation of the effectiveness and
robustness of LDSN in the Meta-Xylene unit.
The Mid-Crude unit simulation is more complicated because the response factor of some of the LDAR streams
is about three times higher than in the Meta-Xylene unit. The sensor layout in the Mid-Crude unit
is also less dense than in the Meta-Xylene unit. The DTA(tag) that meets the 67% target CL is around
7,500 - 17,500 ppm. The DTA(cluster) level is around 12,500 - 17,500 ppm when using 67% as the target CL. The
average total fugitive emissions of DT(tag) and DT(cluster) at a DTA of 12,500 ppm are estimated to be
approximately 38% less than the simulated M21 CWP emissions. The simulations indicate that the LDSN-DRF
solution can significantly reduce the total fugitive emission of the Mid-Crude unit when compared to routine
scheduled M21 monitoring.
Figure B-12. CL with Different Detection Thresholds, Monte Carlo Simulation (Meta-Xylene Unit, 2014-2018): Linear Growth Model (Left) and Non-growth Model (Right)
Figure B-13. CL with Different Detection Thresholds, Monte Carlo Simulation (Mid-Crude Unit, 2016-2018): Linear Growth Model (Left) and Non-growth Model (Right)
Figure B-14. Monte Carlo Simulated Cumulative Fugitive Emissions of the Meta-Xylene Unit (2014-2018): Linear Growth Model (Left) and Non-growth Model (Right)
Figure B-15. Monte Carlo Simulated Cumulative Fugitive Emissions of the Mid-Crude Unit (2016-2018): Linear Growth Model (Left) and Non-growth Model (Right)
Although the modeling was limited to the duration of the simulation periods, the relative rates of change
in cumulative emissions shown in Figure B-14 and Figure B-15 suggest the LDSN-DRF system would continue
to demonstrate emission reductions relative to scheduled M21 performance.
Figure B-16 shows a comparison between average emissions calculated from M21 and from four different
detection scenarios under the two leak models (linear growth and non-growth). The DTA values are chosen based
on passing simulations, meaning CLs > 67%. Clearly, all the detection scenarios in both leak models
demonstrate significant reductions in total fugitive emissions. This includes a 39 - 49% reduction from the
DT(tag) simulation and a 50 - 67% reduction from the DT(cluster) simulation at the DTA level of 12,500 ppm.
These numbers are comparable to the 29 - 44% reduction from DT(tag) and the 46 - 66% reduction from
DT(cluster) demonstrated during the randomized historical leak simulations.
Figure B-16. Distribution of Total Emissions in the Meta-Xylene Unit, 2014-2018 (Monte Carlo), under the Linear Growth Model (Left) and the Non-growth Model (Right)
The same comparison was made for the Mid-Crude unit at DTA values with successful Monte Carlo
simulations (CLs>67%). As shown in Figure B-17, approximately 36% - 41% reductions in total emissions can
be estimated for the DT(tag) and DT(cluster) simulations at 12,500 ppm. These are very similar to the 29 - 37%
reduction demonstrated using the randomized historical leak simulations at the same DTA level.
Figure B-17. Distribution of Total Emissions in the Mid-Crude Unit, 2016-2018 (Monte Carlo), under the Linear Growth Model (Left) and the Non-growth Model (Right)
DTA simulation results are listed in Table B-10 for the Meta-Xylene unit and in Table B-11 for the Mid-Crude unit.
The Monte Carlo simulation results are very similar to the results obtained from randomized historical
simulations in Section 3 of this appendix. Because of a different sampling process for the simulation methods,
more components are involved in the randomized historical simulation than in the Monte Carlo simulation. For
instance, in the randomized historical simulation, the pegged values will always be included, which is not
always the case for Monte Carlo simulation trials. The leak size distribution of the randomized historical
simulation should be the same as the historical leak distribution. In contrast, the leak distribution of the Monte
Carlo simulation can be different from the historical distribution. If the trial number is large enough however,
the overall leak distribution of all Monte Carlo simulation runs should be reasonably similar to the historical
leak distribution. In other words, the randomized historical simulation can be considered a unique subset of
the Monte Carlo simulation in which all the historical leaks are sampled and simulated.
Table B-10. Summary of Simulation Results of the Meta-Xylene Unit
Method                   Leak Model      DTA(tag)    DTA(cluster)   DT(tag)       DT(cluster)
Randomized Historical    Non-Growth      7500 ppm    17500 ppm      25000 ppm +   25000 ppm +
Randomized Historical    Linear-Growth   12500 ppm   25000 ppm +    20000 ppm     25000 ppm +
Monte-Carlo              Non-Growth      10000 ppm   20000 ppm      25000 ppm +   25000 ppm +
Monte-Carlo              Linear-Growth   12500 ppm   25000 ppm +    22500 ppm     25000 ppm +
Table B-11. Summary of Simulation Results of the Mid-Crude Unit
Method                   Leak Model      DTA(tag)    DTA(cluster)   DT(tag)       DT(cluster)
Randomized Historical    Non-Growth      7500 ppm    12500 ppm      22500 ppm +   22500 ppm +
Randomized Historical    Linear-Growth   17500 ppm   17500 ppm      22500 ppm +   22500 ppm +
Monte-Carlo              Non-Growth      7500 ppm    12500 ppm      22500 ppm +   22500 ppm +
Monte-Carlo              Linear-Growth   17500 ppm   17500 ppm      22500 ppm +   22500 ppm +
5 Summary
Due to the complexity of leaks from a wide variety of components under varying conditions, two leak rate
models and four sensor detection scenarios using the same LDSN-DRF system have been evaluated. Although
the non-growth and linear growth models both represent valid leak evolution patterns, the average
leak rate change for a process unit is likely influenced by both at different times and on varying components.
For example, a leak may increase and then stay unchanged or even decrease due to changes in weather
conditions, or process conditions such as temperature, pressure, and vibration. A representative eDTA value
should be bounded by the simulation results from these two leak models. Likewise, DTA(tag) represents the
most conservative detection scenario while DTA(cluster) represents a more realistic scenario. Since the LDSN
detects gas plumes at a distance, the sensor detection signals are most likely to be the result of contributions
from multiple leaks near each other or within a small area. During the pilot testing, multiple small leaks were
often detected through the DRF activities that had been initiated in response to a single potential source location
(PSL) notification. Thus, it is reasonable to assume that eDTA falls in between the DTA(tag) and DTA(cluster)
values.
Given the actual locations of the leaks and the locations of the sensors in the plant, DT(tag) and DT(cluster)
simulations were added to the analysis at varying DTA values. These additional two scenarios are more site
specific. The most conservative approach is to rely solely on the average DTA(tag) results when determining
the final eDTA value. Together the two methods support an eDTA of 10,625 - 12,500 ppm, while the Monte-
Carlo method alone suggests an eDTA of 11,250 ppm for the Meta-Xylene unit, and a slightly higher eDTA of
12,500 ppm for the Mid-Crude unit. At these eDTA design levels, a substantial reduction potential in total
fugitive emissions can be anticipated.
Although the eDTA is useful when estimating the average detection threshold that must be achieved to
demonstrate equivalence with M21 CWP, the DTU is better suited for leverageable design specifications and an
in-the-field compliance demonstration. DTU represents the smallest leak that is anticipated to be detected by the
sensor network at the farthest distance away from a sensor. DTU is at least 1.5 times DTA. So, with an eDTA
of 11,250 ppm, the corresponding DTU is conservatively assumed to equal 16,875 ppm. An LDSN-DRF system
that detects and locates all LDAR leaks above 16,875 ppm for repair would be equivalent to the fugitive
emission control provided by the M21 CWP.
It should be noted that all the simulations described above utilized historical data from the facility's
LeakDAS™ database. AVO leaks were not included in the simulation because of a lack of actual Method 21
measurement data at the time each leak was found. Several non-LDAR component leaks were detected during
the in-plant pilot testing but were also not included in the simulations. The ability of the LDSN-DRF system to
provide earlier detection of AVOs and leaks from non-LDAR components provides additional emission
reductions even beyond those conservatively modeled.
References:
1. Guiding Principles for Monte Carlo Analysis, EPA/630/R-97/001, March 1997.
Appendix F2
FHR Corpus Christi Test Unit Leak Cluster Simulation and Analysis
Leaks with low mass emissions rates (ERs) and/or high leak to node separation distance can be below the
detection threshold (DT) of the LDSN system. However, as more leaks occur within a localized area, the
signal from these leaks can combine to produce a robust detection event. The set of small leaks within this
local area is called a leak cluster. This appendix analyzes the "cluster" concept using three different
approaches:
1.	by analyzing and visualizing spatial distribution of historical leak events;
2.	by performing computational fluid dynamics (CFD) simulations to demonstrate similar sensor
response from a single leak and from a cluster of leaks; and
3.	by theoretically predicting the number and distribution of leaks to be detected by Molex LDSN
using historical M21 data during 2013-2019.
1. Historical Leak Distribution
Historical M21 leak events on the ground level (2013-2019) are plotted in Figure la (Meta-Xylene unit)
and Figure lb (Mid-Crude unit). Larger blue dots (with the number nearby) represent sensor locations,
smaller colored dots represent historical leak events, the legend describes the association between dot
color and leak size (in ppm, equivalent M21 measurements). Figures laand lb, illustrate that the LDAR
components are clustered around vessels, pumps, compressors, separation columns, and other large pieces
of equipment. The depiction on the map (Figures la and lb) corresponding to a piece of equipment often
includes a unique identification number shown near the object. An example of a typical cluster (object
"54GA4A/B"', Meta-Xylene unit) is indicated by a dashed circle in Figure la. The radial dashed lines
indicate cluster-to-sensor distances (50,000	#
•	s.000 - 50,000	eooJ		
1,000- 5,000
500 - 1,000	maI	—s—	—I-	-J	I ¦ , J i i i ...I 		1. i J		 1 . - I
%10 2770 2730 2690 2650 2610 2570 2530 2490 2450 2410 2370 2330 2290
Figure 1. Leak size and location distribution of historical M21 events in Meta-Xylene (a) and Mid-Crude
(b) units. Blue circles with an adjacent number represent sensor locations. The green, yellow, orange,
and red circles represent leak events.
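The spatial analysis shown in Figure 1 can be reproduced with a simple plotting script. The sketch below is illustrative only: the column names (x_ft, y_ft, m21_ppm) and the sensors list are assumptions, and an actual LeakDAS™ export would need to be mapped onto these fields before plotting.

# Minimal sketch of a Figure 1 style leak map: sensor nodes as large blue
# markers and historical M21 leak events as small markers binned by leak size.
# Field names and the sensor list are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

BINS = [500, 1_000, 5_000, 50_000, float("inf")]   # ppm bin edges from the legend
COLORS = ["green", "gold", "orange", "red"]        # 500-1k, 1k-5k, 5k-50k, >50k ppm

def plot_leak_map(leaks: pd.DataFrame, sensors: list) -> None:
    fig, ax = plt.subplots(figsize=(10, 6))
    # Sensor nodes: large blue dots labeled with an index, as in Figure 1.
    for i, (sx, sy) in enumerate(sensors, start=1):
        ax.scatter(sx, sy, s=200, color="blue")
        ax.annotate(str(i), (sx, sy), textcoords="offset points", xytext=(6, 6))
    # Leak events at or above the smallest legend bin, colored by M21 ppm range.
    shown = leaks[leaks["m21_ppm"] >= BINS[0]]
    colors = pd.cut(shown["m21_ppm"], bins=BINS, labels=COLORS, include_lowest=True)
    ax.scatter(shown["x_ft"], shown["y_ft"], s=15, c=colors.astype(str))
    ax.set_xlabel("Plant coordinate, east-west (ft)")
    ax.set_ylabel("Plant coordinate, north-south (ft)")
    ax.set_title("Historical M21 leak events and LDSN sensor locations")
    plt.show()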
2. CFD Simulation
In order to gain insight into plume detections, a set of CFD simulations was generated for the Meta-
Xylene unit. A single leak of 10⁻⁵ kg/s (equivalent to 50,000 ppm FID) is compared to a "cluster" of 10
leaks with leak sizes randomized in the range from 10⁻⁷ to 10⁻⁶ kg/s (500 to 5,000 ppm M21 measurements)
and leak locations randomized within a 10 ft × 10 ft box centered on the previously defined
single 50,000 ppm leak location. Figure 2a illustrates the 3D model of the Meta-Xylene unit used in the
simulation. The red circles/balloons represent leak locations within the cluster. Wind (air flow) was
modeled using an isothermal k-epsilon (k-ε) turbulence model with standard wall function. Octane
(C8H18) was used as a representative leak gas species for both the Meta-Xylene and Mid-Crude units.
Transient CFD simulations follow a representative 2,000-second wind pattern (wind speed and direction)
shown in Figure 2b. The calculated gas responses at the nearby sensor locations S1, S2, and S3 are shown in
Figures 2c and 2d. The response to the single 50,000 ppm leak (green lines in Figure 2c) is a strong match
for the response to the equivalent "cluster" of leaks (blue lines in Figure 2d) at each sensor. During the
pilot study, 10 sensors were installed in the Meta-Xylene unit, but only 3 sensors (S1, S2, and S3) at the
same elevation (ground level) were close enough to the leak or "cluster" to produce a gas response
detectable by the Molex LDSN (> ~5 ppb). An illustrative sketch of how such a randomized cluster scenario
can be generated is given below.
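The cluster scenario used as input to the CFD comparison can be generated with a few lines of code. The sketch below is only a scenario generator under the stated assumptions (rates drawn between 10⁻⁷ and 10⁻⁶ kg/s, positions within a 10 ft × 10 ft box); the plume transport itself was computed in Ansys CFX and is not reproduced here.

# Minimal sketch of the randomized 10-leak "cluster" input described above.
# The function name and the uniform sampling choice are assumptions; only the
# rate range, box size, and leak count come from the text.
import random

def make_cluster(center_xy_ft=(0.0, 0.0), n_leaks=10, box_ft=10.0, seed=0):
    rng = random.Random(seed)
    cx, cy = center_xy_ft
    cluster = []
    for _ in range(n_leaks):
        cluster.append({
            "rate_kg_s": rng.uniform(1e-7, 1e-6),               # ~500-5,000 ppm M21 equivalent
            "x_ft": cx + rng.uniform(-box_ft / 2, box_ft / 2),  # random position in the box
            "y_ft": cy + rng.uniform(-box_ft / 2, box_ft / 2),
        })
    return cluster

single_leak_rate_kg_s = 1e-5                                    # ~50,000 ppm FID equivalent
cluster = make_cluster()
total_kg_s = sum(leak["rate_kg_s"] for leak in cluster)
print(f"cluster total = {total_kg_s:.2e} kg/s vs single leak = {single_leak_rate_kg_s:.1e} kg/s")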
[Figure 2 panels: (a) 3D model of the Meta-Xylene unit with cluster leak locations; (b) wind speed (m/s)
and direction trace; (c) single-leak gas responses at sensors S1, S2, and S3 over 0-2,000 s; (d) equivalent
"cluster" gas responses at sensors S1, S2, and S3 over 0-2,000 s.]
Figure 2. CFD simulation results comparing a single leak (50,000 ppm FID equivalent) to an equivalent
10-leak "cluster". Insert (a) is the 3D model used for the simulations. Insert (b) illustrates the
representative wind set used in the transient analysis. Inserts (c) and (d) show mass gas concentrations
at sensor locations S1, S2, and S3 for the single-leak and cluster scenarios.
3. LDSN Leak Distribution Simulations
The 2013-2019 historical M21 leak event data from the LeakDAS™ database (Meta-Xylene unit) were
analyzed to determine the leak distribution and the number of leaks that would have been detected by the
LDSN/DRF had it been operating during 2013-2019. Each leak event record contains a "location
description" field referencing a physical piece of equipment and corresponding to an object on the map.
For example, the location description "G/5 NSD 42DA1 TWR" indicates that the leak event occurred on a
component located 5 ft above ground level on the north side of tower number 42DA1. The set of
leak events near ground level associated with the object 42DA1 is treated in the historical LDSN
detection simulation as the cluster object "42DA1 ground", as sketched in the example below.
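A grouping step like the one just described can be sketched as follows. The parsing rule is inferred from the single example in the text ("G/5 NSD 42DA1 TWR"), and the 10 ft ground-level cutoff is an assumed placeholder, so this is illustrative rather than a description of the actual LeakDAS™ processing.

# Minimal sketch of grouping leak records into cluster objects such as
# "42DA1 ground". The parsing rule below is inferred from the one example
# given in the text; real location descriptions vary and would need a more
# robust parser.
import re
from collections import defaultdict

GROUND_CUTOFF_FT = 10.0  # assumed elevation band treated as "ground level"

def cluster_key(location_description: str):
    # Example format: "G/5 NSD 42DA1 TWR" -> elevation 5 ft, north side, tower 42DA1
    match = re.match(r"G/(\d+)\s+\S+\s+(\S+)", location_description)
    if not match:
        return None
    elevation_ft = float(match.group(1))
    equipment_id = match.group(2)
    band = "ground" if elevation_ft <= GROUND_CUTOFF_FT else "elevated"
    return f"{equipment_id} {band}"

def build_clusters(leak_records):
    clusters = defaultdict(list)
    for record in leak_records:
        key = cluster_key(record["location_description"])
        if key is not None:
            clusters[key].append(record)
    return dict(clusters)

print(cluster_key("G/5 NSD 42DA1 TWR"))  # -> "42DA1 ground"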
There are several potential approaches to defining detection event criteria in historical simulations. The
simplest approach is to compare the size of each leak against the average fixed detection threshold (DTA),
independent of the distance between a sensor and the leak location. The fixed-threshold approach is easier
to enforce in the DRF and to implement in historical emissions savings calculations; however, it penalizes
the emissions calculations because a significant number of smaller leaks near the sensors are detected in
practice but ignored by the LDSN/DRF in the simulation. It is therefore necessary to introduce distance as
a variable in the cluster-detection methodology, as illustrated in the sketch that follows.
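The contrast between the two criteria can be expressed as a pair of simple tests. In the sketch below, the reference distance and the use of the eDTA value as the fixed threshold are illustrative assumptions; they are not method constants from the CRADA work.

# Minimal sketch contrasting the two detection criteria discussed above:
# (1) a fixed threshold (DTA) applied regardless of distance, and
# (2) a distance-dependent threshold that scales with D^2, reflecting the
#     C ~ 1/D^2 falloff described in the next subsection.
# The reference values below are hypothetical placeholders.

REFERENCE_DTA_PPM = 11_250.0   # fixed average detection threshold (example value)
REFERENCE_DISTANCE_FT = 30.0   # assumed distance at which the fixed DTA applies

def detected_fixed(leak_ppm: float) -> bool:
    return leak_ppm >= REFERENCE_DTA_PPM

def detected_distance_scaled(leak_ppm: float, distance_ft: float) -> bool:
    # Threshold shrinks for leaks close to a sensor and grows for distant leaks.
    threshold = REFERENCE_DTA_PPM * (distance_ft / REFERENCE_DISTANCE_FT) ** 2
    return leak_ppm >= threshold

# A 2,000 ppm leak 10 ft from a sensor: missed by the fixed criterion,
# caught by the distance-scaled criterion (threshold ~1,250 ppm at 10 ft).
print(detected_fixed(2_000), detected_distance_scaled(2_000, 10.0))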
Three theoretical approaches were used to characterize the experimentally observed dependence, C ∝ 1/D²,
of the sensor peak gas concentration (C) on the distance from the leak (D). These include a source
magnitude estimation using short-range plume equations at each peak [2], computational fluid dynamics
(CFD) calculations, and the analytical Gaussian plume model with dispersion coefficients defined by the
Pasquill-Gifford equations [1, 3, 4].
A source magnitude estimation using short-range plume equations at each peak [2] can be expressed as:

S_ij = π D² C_ij σ_w / ū	(1)

where:
S_ij = source magnitude estimate based on sensor j, peak i, g/s
C_ij = peak concentration at sensor j, peak i, g/m³
D = distance to the source, m
σ_w = variation of wind velocity over the peak interval, m/s
ū = average wind velocity over the peak interval, m/s

The peak concentration at the sensor can be derived from Equation 1; it is proportional to the leak source
magnitude S_ij but inversely proportional to the square of the distance to the sensor (D²):

C_ij = S_ij ū / (π D² σ_w)	(2)

The sketch below gives a direct transcription of these two equations.
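The following minimal sketch assumes the reconstructed forms of Equations 1 and 2 above; the function names and the example numbers are illustrative only.

# Direct transcription of Equations 1 and 2 as reconstructed above.
# Units follow the variable definitions in the text; function names are hypothetical.
import math

def source_magnitude(peak_conc_g_m3: float, distance_m: float,
                     sigma_w_m_s: float, mean_wind_m_s: float) -> float:
    """Equation 1: S_ij = pi * D^2 * C_ij * sigma_w / u_bar, in g/s."""
    return math.pi * distance_m**2 * peak_conc_g_m3 * sigma_w_m_s / mean_wind_m_s

def peak_concentration(source_g_s: float, distance_m: float,
                       sigma_w_m_s: float, mean_wind_m_s: float) -> float:
    """Equation 2 (rearranged Equation 1): C_ij = S_ij * u_bar / (pi * D^2 * sigma_w)."""
    return source_g_s * mean_wind_m_s / (math.pi * distance_m**2 * sigma_w_m_s)

# Doubling the leak-to-sensor distance reduces the predicted peak by a factor of 4.
s = source_magnitude(peak_conc_g_m3=1e-4, distance_m=5.0, sigma_w_m_s=0.5, mean_wind_m_s=2.0)
print(peak_concentration(s, 5.0, 0.5, 2.0) / peak_concentration(s, 10.0, 0.5, 2.0))  # -> 4.0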
CFD simulations were performed using Ansys CFX software. This is a steady-state model with a single leak
source and a wind velocity of u = 1 m/s applied along the long axis (X). Air flow was represented by an
isothermal k-epsilon (k-ε) turbulence model with standard wall function. Octane (C8H18) was used as the
representative gas species for the leak. Figure 3 shows a 2D heatmap of downwind and crosswind gas
concentrations for the plume.
[Figure 3 contour legend: C8H18 mass fraction, spanning approximately 1.0×10⁻⁹ to 1.0×10⁻⁶.]
Figure 3. CFD simulation results: 2D heatmap of downwind and crosswind gas concentration of the plume.
Figure 4 illustrates calculated gas concentrations along the center of the plume as a function of distance
from the leak source for four leak sizes: 3.6 g/hr, 1.8 g/hr, 0.9 g/hr, and 0.45 g/hr. Assuming a realistic
minimum detection level of 60 ppb at the sensor (after background concentration removal), the colored dots
mark the maximum detection distances for the different leak sizes. As anticipated, a 0.45 g/hr leak can be
detected only within a 10 ft radius of the sensor, whereas a 3.6 g/hr leak can be detected as far as 35.6 ft
from the sensor.
Figure 4. CFD simulation results: VOC gas concentration along the plume, as a function of distance (ft)
from the leak source, for different leak sizes (0.45 to 3.6 g/hr).
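For rough planning purposes, the idealized C ∝ 1/D² dependence can be used to extrapolate a detection radius from a single calibration point. The sketch below anchors that scaling to the CFD-derived result that a 0.45 g/hr leak reaches the 60 ppb threshold at roughly 10 ft; the full CFD results (e.g., 35.6 ft for a 3.6 g/hr leak) depart from this idealization, so the outputs should be treated as order-of-magnitude estimates only.

# Minimal sketch: maximum detection distance under an idealized C ~ 1/D^2 scaling,
# calibrated to the CFD point (0.45 g/hr detectable to ~10 ft at a 60 ppb threshold).
import math

CAL_RATE_G_HR = 0.45      # calibration leak rate from the CFD result
CAL_DISTANCE_FT = 10.0    # distance at which that leak reaches the 60 ppb threshold

def max_detection_distance_ft(leak_rate_g_hr: float) -> float:
    # With C ~ S / D^2, the threshold concentration is reached at D proportional to sqrt(S).
    return CAL_DISTANCE_FT * math.sqrt(leak_rate_g_hr / CAL_RATE_G_HR)

for rate in (0.45, 0.9, 1.8, 3.6):
    print(f"{rate:>4} g/hr -> ~{max_detection_distance_ft(rate):4.1f} ft")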
For the Gaussian dispersion model, the concentration of gas downwind of a ground-level point leak
source at the origin (x, y, z) = (0, 0, 0) is predicted by the following Gaussian relationship [1]:

C(x, y, z) = [Q / (π u σ_y σ_z)] · exp[−(y² / (2σ_y²) + z² / (2σ_z²))]	(3)

where:
C is the concentration in kg/m³,
Q is the flow rate in kg/s,