EPA/600/R-14/464 | December 2014 | www.epa.gov/ord
United States
Environmental Protection
Agency
Evaluation of Field-deployed
Low Cost PM Sensors
Office of Research and Development
National Exposure Research Laboratory
-------
EPA/600/R-14/464 | December 2014 | www.epa.gov/ord
Evaluation of Field-deployed Low Cost
PM Sensors
Ron Williams
National Exposure Research Laboratory
Office of Research and Development
U.S. Environmental Protection Agency
Research Triangle Park, NC, USA 27711
Amanda Kaufman
ORISE Participant
Oak Ridge Institute for Science and Education
Oak Ridge, TN, USA 37831
Tim Hanley, Joann Rice
Office of Air Quality Planning & Standards
U.S. Environmental Protection Agency
Research Triangle Park, NC, USA 27711
Sam Garvey
Alion Science and Technology
P.O. Box 12313
Research Triangle Park, NC, USA 27709
-------
Disclaimer
This technical report presents the results of work performed by Alion Science and
Technology under contract EP-D-10-070 for the Human Exposure and Atmospheric Sciences
Division, U.S. Environmental Protection Agency (U.S. EPA), Research Triangle Park, NC. It has
been reviewed by the U.S. EPA and approved for publication. Mention of trade names or
commercial products does not constitute endorsement or recommendation for use.
-------
Acknowledgments
The NERL's Quality Assurance Manager (Sania Tong-Argao) and associated staff
(Monica Nees) are acknowledged for laboratory data audits as well as their excellent
contributions to the development of sophisticated standard operating procedures used in
collection of the data. This research was supported in part by an appointment to the Research
Participation Program for the U.S. Environmental Protection Agency, Office of Research and
Development, administered by the Oak Ridge Institute for Science and Education through an
interagency agreement between the U.S. Department of Energy and EPA (DW 8992298301).
Sam Garvey, Stacey Henkle, and Zora Drake-Richmond, (Alion Science and Technology) are
acknowledged for their contributions in supporting the U.S. EPA in the execution of complex
field data collections and summary analyses. Russell Long, Peter Preuss, Stacey Katz and Gail
Robarge (U.S. EPA) are acknowledged for their efforts to ensure the success of the research
effort reported here.
-------
Table of Contents
List of Tables vii
List of Figures vii
Acronyms and Abbreviations x
Executive Summary xii
1.0 Introduction 1
2.0 Materials and Methods 2
2.1 PM Sensors 3
2.1.1 PM Reference Analyzers 4
2.1.2 AirBase CanarIT 5
2.1.3 CairPol CairClip PM2.5 6
2.1.4 Carnegie Mellon Speck 7
2.1.5 Dylos DC1100 8
2.1.6 Met One Model 831 8
2.1.7 RTI MicroPEM 9
2.1.8 Sensaris Eco PM 10
2.1.9 Shinyei PMS-SYS-1 11
3.0 PM Sensor Results and Discussion 11
3.1 AirBase CanarIT 12
3.1.1 AirBase Results 12
3.1.2 AirBase Discussion 17
3.2 CairPol CairClip PM2.5 17
3.2.1 CairClip PM2.5 Results 17
3.2.2 CairClip PM2.5 Discussion 22
3.3 Carnegie Mellon Speck 23
3.3.1 Speck Results 23
3.3.2 Speck Discussion 27
3.4 Dylos DC1100 27
3.4.1 DC1100 Results 27
3.4.2 DC1100 Discussion 32
3.5 Met One Model 831 33
3.5.1 Met One Model 831 Results 33
3.5.2 Met One Model 831 Discussion 39
-------
3.6 RTI MicroPEM 40
3.6.1 MicroPEM Results 40
3.6.2 MicroPEM Discussion 45
3.7 Sensaris Eco PM 47
3.7.1 Sensaris Eco PM Results 47
3.7.2 Sensaris Eco PM Discussion 49
3.8 Shinyei PMS-SYS-1 50
3.8.1 Shinyei PMS-SYS-1 Results 50
3.8.2 Shinyei PMS-SYS-1 Discussion 54
3.9 General Discussion 54
4.0 Study Limitations 57
4.1 Resource Limitations 57
4.1.1 Intra-sensor Performance Characteristics 57
4.1.2 Test Conditions 58
4.1.3 Sensor Make and Models 58
-------
Tables
Table 1-1. Sensors Acquired for Evaluation 1
Table 2-1. Summary of Sensors Evaluated 4
Table 3.6-1. R2 values for all cohorts of all MicroPEMs versus the Grimm 55
Table 3.9-1. Summary of PM Sensor Performance and Ease of Use Features 56
Figures
Figure 2-1. "Bowl on pole" sensor enclosure in closed (left) and open (right) positions 2
Figure 2.1-1. AIRS sampling platform with all shelters shown 3
Figure 2.1-2. Hi-vol shelter open with laptop displayed (left) and with wiring and
laptop inside (right) 3
Figure 2.1.1-1. Grimm data vs. temperature and RH 5
Figure 2.1.2-1. AirBase CanarIT attached to laboratory stand via baling wire 6
Figure 2.1.2-2. AirBase CanarIT on its laboratory stand perch 6
Figure 2.1.3-1. CairClip PM sensor suspended beneath shelter grating 7
Figure 2.1.4-1. Carnegie Mellon Speck oriented in its shelter with the lid up 7
Figure 2.1.5-1. Dylos DC1100 oriented in its shelter with the lid up 8
Figure 2.1.6-1. Met One model 831 oriented in its shelter with the lid up 9
Figure 2.1.6-2. Met One model 831 oriented in its shelter with the lid down 9
Figure 2.1.7-1. RTI MicroPEM orientation on the plate of a bowl-on-pole shelter 10
Figure 2.1.8-1. Sensaris EcoPM oriented in its shelter with the lid up 11
Figure 2.1.8-2. Sensaris Eco PM sampling location 11
Figure 2.1.9-1. Shinyei in a Hi-Vol shelter. Note that the lid to the Hi-Vol shelter was closed
during sampling 12
Figure 3.1.1-1. Grimm data and AirBase data over time 13
Figure 3.1.1-2. 24-hour time-averaged PM data comparing the Grimm reference
sampler with the AirBase CanarIT PM sensor 14
Figure 3.1.1-3. Temperature vs. AirBase 24-hour averaged data 14
Figure 3.1.1-4. RH vs. AirBase 24-hour averaged data 15
Figure 3.1.1-5. RH vs. AirBase (5-min averages) 15
Figure 3.1.1-6. RH vs. AirBase (5-min averages) with data > 20 µg/m3 removed 16
Figure 3.1.1-7. Grimm vs. AirBase (5-min averages) 16
-------
Figure 3.2.1-1. Grimm data and CairClip data over time 18
Figure 3.2.1-2. 24-hour time-averaged PM data comparing the Grimm reference
sampler with the CairPol CairClip PM sensor 19
Figure 3.2.1-3. Temperature vs. CairClip 24-hour averaged data 19
Figure 3.2.1-4. RH vs. CairClip 24-hour averaged data 20
Figure 3.2.1-5. RH vs. CairClip (5-min averages) 20
Figure 3.2.1-6. Temperature vs. CairClip (5-min averages). All data taken at
humidities > 95% were removed 21
Figure 3.2.1-7. Temperature vs. CairClip (5-min averages). All data taken at
humidities > 95% and temperatures < 19.8 °C were removed 21
Figure 3.2.1-8. Grimm vs. CairClip (5-min averages) 22
Figure 3.3.1-1. Speck data and Grimm data over time 23
Figure 3.3.1-2. 24-hour time-averaged PM data comparing the Grimm reference
sampler with the Speck 24
Figure 3.3.1-3. Temperature vs. Speck 24-hour averaged data 24
Figure 3.3.1-4. RH vs. Speck 24-hour averaged data 25
Figure 3.3.1-5. RH vs. Speck (5-min averages) 25
Figure 3.3.1-6. Temperature vs. Speck (5-min averages) 26
Figure 3.3.1-7. Grimm vs. Speck (5-min averages) 26
Figure 3.4.1-1. Grimm data and Dylos data over time 29
Figure 3.4.1-2. 24-hour time-averaged PM data comparing the Grimm reference
sampler with the Dylos DC1100 PM sensor 29
Figure 3.4.1-3. Temperature vs. Dylos 24-hour averaged data 30
Figure 3.4.1-4. RH vs. Dylos 24-hour averaged data 30
Figure 3.4.1-5. RH vs. Dylos (5-min averages) 31
Figure 3.4.1-6. Grimm vs. Dylos (5-min averages) 31
Figure 3.4.1-7. Grimm and normalized Dylos data (5-min averages) against time 32
Figure 3.4.1-8. Dylos, Grimm, Temperature, and RH from November 27 to
December 2, 2013 33
Figure 3.5.1-1. Grimm vs. Met One Model 831 PM1 and PM2.5 (5-min averages) 34
Figure 3.5.1-2. Grimm data and Met One Model 831 data over time 35
Figure 3.5.1-3. 24-hour time-averaged PM data comparing the Grimm reference
sampler with the Met One Model 831 PM sensor 35
Figure 3.5.1-4. Temperature vs. Met One Model 831 24-hour averaged data 36
Figure 3.5.1-5. RH vs. Met One Model 831 24-hour averaged data 36
-------
Figure 3.5.1-6. RH vs. Met One Model 831 (5-min averages) 37
Figure 3.5.1-7. Grimm vs. Met One Model 831 (5-min averages) 37
Figure 3.5.1-8. Grimm and normalized Met One Model 831 data (5-min averages) against
time 38
Figure 3.5.1-9. Grimm and Met One Model 831 data (5-min averages) with data from 04:00 to
14:00 on December 4 removed 38
Figure 3.5.1-10. Grimm and renormalized Met One model 831 data against time
(5-min averages) 39
Figure 3.6.1-1. A trace of MicroPEM unit 1 and the Grimm over time 41
Figure 3.6.1-2. A trace of MicroPEM unit 2 and the Grimm over time 41
Figure 3.6.1-3. A trace of MicroPEM unit 3 and the Grimm over time 42
Figure 3.6.1-4. Scatterplot of MicroPEM 1 vs Temperature 42
Figure 3.6.1-5. Scatterplot of MicroPEM 1 vs Relative Humidity 43
Figure 3.6.1-6. Scatterplot of MicroPEM 1 vs the Grimm. The data has been divided into three
time periods following zeroing of the unit 43
Figure 3.6.1-7. Scatterplot of MicroPEM 2 vs the Grimm. The data has been divided into three
time periods following zeroing of the unit 44
Figure 3.6.1-8. Scatterplot of MicroPEM 3 vs the Grimm. The data has been divided into three
time periods following zeroing of the unit 44
Figure 3.6.1-9. RTI MicroPEM with zero air filter attached 46
Figure 3.6.1-10. RTI MicroPEM inlet alongside the gasketed cup which serves as an attachment
point for the zero air filter 46
Figure 3.7.1-1. Sensaris EcoPM concentration measurements over time 47
Figure 3.7.1-2. 30-s time-averaged PM data comparing the Grimm reference
sampler with the Eco PM sensor 48
Figure 3.7.1-3. RH vs. EcoPM (30-s averages) 48
Figure 3.7.1-4. Temperature vs. EcoPM (30-s averages) 49
Figure 3.8.1-1. A trace of the Shinyei and the Grimm over time 51
Figure 3.8.1-2. Grimm vs. Shinyei (5-min averages) 51
Figure 3.8.1-3. Scatterplot of the Shinyei vs Temperature 52
Figure 3.8.1-4. Scatterplot of the Shinyei vs Relative Humidity 52
Figure 3.8.1-5. Scatterplot of the Shinyei vs Wind Speed 53
Figure 3.8.1-6. A trace of the Shinyei and the Grimm over time 53
Figure 3.8.1-7. Scatterplot of the fully processed Shinyei data vs the Grimm 54
-------
Acronyms and Abbreviations
AC/DC alternating current/direct current
ACE Air, Climate, and Energy Program
AIRS Ambient Air Innovation Research Site
FEM federal equivalent method
FRM federal reference method
GC-MS gas chromatograph-mass spectrometer
GFCI ground fault circuit interrupter
GMT Greenwich Mean Time
GSM Global System for Mobile Communication
hi-vol high volume
NAAQS national ambient air quality standards
NERL National Exposure Research Laboratory
NO2 nitrogen dioxide
NRMRL National Risk Management Research Laboratory
OAQPS Office of Air Quality Planning and Standards
ORD Office of Research and Development
PID photoionization detector
PM particulate matter
PM2.5 particulate matter of diameter 2.5 microns or less
ppb parts per billion
ppm parts per million
QAPP quality assurance project plan
R2 coefficient of determination
RH relative humidity, i.e., the partial pressure of water vapor in air expressed as a percentage of the
saturation vapor pressure of water at a given temperature and pressure
ROP research operating procedure
RTP Research Triangle Park
SIM subscriber identity module
UTC Coordinated Universal Time
VAC volts alternating current
VDC volts direct current
VOC volatile organic compound
-------
WA work assignment
WA COR WA Contracting Officer's Representative
-------
Executive Summary
Background
Particulate matter (PM) is a pollutant of high public interest regulated by national
ambient air quality standards (NAAQS) using federal reference method (FRM) and federal
equivalent method (FEM) instrumentation identified for environmental monitoring. PM is
present in the atmosphere in concentrations that can vary greatly according to location,
temperature, and a number of circumstances that influence local air quality. Citizen scientists and
other researchers have a desire to monitor this pollutant, and there is a need for increased
accessibility to portable and economical monitoring and sampling equipment. The evolution of
low cost PM sensors has resulted in a number of such instruments becoming commercially
available. However, this evaluation was not conducted to assess the suitability of these PM
sensors to serve as either FRM or FEM sampler instruments. This activity represents the first
step in evaluating some of the commercially available low cost PM sensors and comparing their
data-collection capabilities to that of collocated FEM samplers during field evaluations.
Study Objectives
As part of its Air Climate & Energy (ACE) research program on emerging technologies
(ACE EM-3), the US EPA developed a research effort with the goals of: conducting a world-
wide market survey of low cost PM sensors (<$2500), acquiring such sensors, and then
conducting collocated field evaluations of these sensors in direct comparison with FEM
instrumentation. A total of eight such devices were obtained and sited on the established PM
sensor test platform on the US EPA's RTP, NC campus (AIRS). The collocated PM2.5 FEM
instrumentation with 5-minute time resolution provided the means to investigate both short
duration and daily (24-hr) comparisons between the test devices and the FEM response. Potential
data confounders such as temperature and relative humidity were obtained to aid in the
investigation. The relationship between FEM response and the various sensors was established in
a regression. Ancillary findings related to ease of use, portability, data collection efficiency,
among others, were established based upon our experiences over approximately one month of
continuous operation.
Study Approach
Direct manufacturer contact, as well as internet searches, surfaced eight prospective low
cost sensors meriting incorporation into this study. In some instances, sensor developers
contacted the research team and expressed interest in having their device evaluated. Any device
accepted under such conditions was incorporated without restrictions or direct involvement of
the developer. Despite there being a large number of PM sensors on the market, many appeared
to lack specific properties we required, which discouraged us from incorporating them into the research. We
focused on sensors that demonstrated direct reading, provided either true or estimated size cut
point data (preferably PM2.5), and were amenable to at least some outdoor monitoring. Not
every sensor that was evaluated met these criteria. Recent sensor-related conferences hosted by
EPA1 and other scientific exchanges (including peer-reviewed literature2,3) clearly indicated that
PM sensors reporting only particle number (or counts) were both available at low cost and may
prove comparable to more expensive light scattering (nephelometric) and direct mass measuring
(Tapered Element Oscillating Microbalance-TEOM) instrumentation. A number of these devices
were secured and evaluated in response to the apparently growing rate of use among both research
professionals and citizen scientists.
Concerning outdoor monitoring applications, only one of the sensors evaluated came
fully weather protected, and allowances (shelters) were developed to protect the remaining
devices. In several instances, the sensor developers expressed that their devices were primarily
intended for indoor monitoring. Regardless of how a manufacturer defines the applicability of a
given low cost PM sensor, it is highly likely that citizen scientists and others would try to use
such devices to the greatest extent possible while perhaps ignoring cautions about primary siting
requirements. Outdoor monitoring is a prime example of such a scenario, and was therefore fully
assimilated into the study design. As a result, one might consider the performance characteristics
defined in this report as potentially representing a worst-case scenario. Regardless, we protected
all sensors from weather conditions (ambient temperature, moisture, stray light) to the best of our
ability.
For approximately one month, these collocated low cost sensors were sited on a PM
monitor test platform with a Grimm Model EDM180 PM2.5 (EQPM-0311-195) FEM on the US
EPA's RTP, NC campus. The units operated continuously during this time with the exception of
data recovery, flow checks/calibration, and general servicing as required by the various
manufacturers. Once the monitoring period was completed, data from the FEM, the sensors, and
the meteorological measurements were compared to determine how these variables influenced low cost
sensor response.
Sensor Performance Results
Discrete statistical evaluation of sensor performance was established with respect to
collocated data associated with the Grimm FEM. When possible, resulting regression
characteristics were optimized with respect to data normalization and influence of confounders.
1 EPA Air Sensors Workshop, 2014. Posters, presentation slides, and abstracts.
https://sites.google.com/site/airsensors2014/home
2 Hagler, G., Solomon, P.A., and Hunt, S.W. New Technology for Low-Cost, Real-Time Air Monitoring;
EM January 2014, 6-9.
3 Watkins, T., Snyder, E., Thoma, E., Williams, R., Solomon, P., Hagler, G., Shelow, D., Hindin, D., Kilaru, V., Preuss,
P. Changing the paradigm for air pollution monitoring. Environmental Science and Technology, 47: 11369-11377
(2013).
-------
Ease of Use Features Evaluation
Concerning ease of use features, several key findings were evident. In general, these
included, but were not limited to:
Power Requirements: None of the units tested had the ability to operate for extensive (multi-
day) periods without electrical assistance. Since our goal was to obtain as much collocated data
as possible, we purposefully removed such a variable (battery life) from the research. That
being said, certain sensors required specific power supplies (such as a USB computer
connection), while others simply required a 'step-down' 115 V transformer. On battery
power alone, the sensors would be expected to operate from 8 hours to 3 days, depending upon
sensor type.
Data collection/transmission/storage/recovery: There were numerous data
collection/transmission/storage/recovery approaches observed between the various sensor
devices. Therefore, extensive efforts had to be performed to ensure data recovery to perform
the evaluations. Cellular communication, WiFi hot spots, direct storage via laptops, or
electronic tablet connections had to be established, developed, or in some cases unexpectedly
refined relative to the manufacturer's suggested protocols. Data communication issues had to be
fully vetted to ensure both consistent and reliable data recovery.
Data Schemas: Data schemas were widely variable between the sensors evaluated. This lack of
standardization across manufacturers and the often-unique pattern of their data formatting (and
the types of data being reported) made data recovery and insertion into statistical analysis
schema somewhat difficult. Individual data recovery programs often had to be established for
each sensor so that data could be recovered. In some instances, communication with the
developer was necessary to understand what their output was so that we could correctly
identify variables for analysis.
Installation and WiFi considerations: Almost all of the low cost sensors were easy to install
following our development of weather-shielded assemblies. Their low mass and small sizes
were highly advantageous for siting. Even so, all of the units had to have external power
supplies. Some of the sensors required direct computer connections, which in our opinion
minimizes their capabilities relative to outdoor use. Even so, it should be recognized that
manufacturers are not necessarily trying to market these as outdoor-worthy PM samplers. It
cannot be overstated that when used outdoors, establishment of data communication can
be difficult, especially if cellular communication or a local WiFi hot spot is required. In our
situation, we were able to establish a local WiFi hot spot or other needed communication
requirements. We sometimes had to work directly with a manufacturer to develop digital data
storage internal to the unit or via other means such as transferrable data storage card when
necessary to ensure sufficient data recovery for our purpose.
Sensor Performance Characteristics
With rare exception, most of the low cost PM sensors demonstrated an ability to provide
at least some short duration response variability (some on the order of 1 second). Data clearly
indicated that time-weighted averages of approximately 1 to 5 minutes are more acceptable,
allowing end users to distinguish the general response from the simple noise of the
instrument itself.
-------
Precision: Only the MicroPEM was evaluated for precision capabilities. Three collocated
sensors were operated for a period of approximately one month and their general inter-unit
variability established.
Linearity: The sensors typically provided coefficients of determination (R2) of < 0.8 in comparison
with FEM measures. In a number of situations, there was little or no statistical agreement
(R2 < 0.1). Estimates of either particle count or algorithm-based mass concentrations (µg/m3)
were equally capable of reasonable FEM agreement or an equal lack of agreement. Since all
algorithm-based mass concentration estimates are only as good as the base light scattering
determination itself, it would appear that much of the lack of agreement probably lies with the
latter. As established by the design of the field studies reported here, a reasonable estimation of
mass concentration from particle counts could have been established for one of the sensors
(Dylos DC1100).
Relative Humidity and Temperature Changes: There was wide disparity in the response of
individual sensors to extremes of either RH or temperature challenge. Both minimal impacts as
well as extreme impacts were observed as they relate to the sensors successfully reporting the
challenge concentrations as environmental conditions changed. Some of this was expected due
to the very nature of the sensing mechanism (approach) often employed in low cost sensors.
Considering that all of the sensors tested were based upon light scattering principles where
particle hygroscopic properties are known to be an influencing factor in mass concentration
estimation, it is uncertain why such a wide range in RH influence (as noted by R2 relationships)
was obtained. Likewise, some sensors were highly collinear with respect to changes in
outdoor temperature while others showed no such relationship.
Response Range: Response range of the sensors varied widely. It was not unusual to see
differences of multiple orders of magnitude between sensors in the concentrations they were
reporting. It should be clearly stated here that environmental impacts of relative humidity and
temperature are often a significant influence in sensor response (light scattering). RH was not
accounted for with sensor algorithms, with only one exception (MicroPEM), and therefore a
widespread variety of responses with changing meteorological conditions was to be expected.
Light scattering optics, cell geometry, and other key engineering features are known to be
highly influential relative to nephelometric response and therefore the variability observed here
in the findings reflects not only the physics of light scattering devices in general, but also how
such features have or have not been incorporated into these low cost devices.
Conclusions
While both the discrete performance characteristics and ease of use characteristics for
each device were highly variable, some of the devices appeared to provide reasonable agreement
with the collocated FEM mass concentration estimates. The frequent lack of agreement between
the sensor and the FEM is a clear indication that citizen scientists and others employing such
devices (especially under outdoor monitoring conditions) must remain aware of the uncertainty
surrounding the data being generated. At times, meteorological conditions (temperature, RH) had
a significant impact upon low cost sensor responses and it was necessary to remove some data to
improve the performance statistics. It should be noted that the end users of these devices need to
understand where data exclusion might be necessary, as often few or no instructions on such
matters are clearly defined by the sensor manufacturers. It would appear that collocation in the
general test area would provide a reasonable approach for end users to ascertain the ability of a
low cost sensor to provide usable data. The information provided in this report represents a
first step towards ensuring that the next generation of low cost air quality sensors has even more
capabilities, meeting a wide variety of air quality monitoring needs. The study also provides
potential low cost sensor users with key information regarding sensor performance and the
criteria that must be addressed in order to collect data successfully.
-------
1.0 Introduction
EPA's Office of Research and Development (ORD) recently performed a sensors/
applications challenge in response to an EPA-sponsored new technology workshop4,5. This
challenge is a high priority for EPA and one in which ORD's National Exposure Research
Laboratory (NERL) is taking a leadership role6. Consequently, EPA established as a priority
providing critical feedback to groups or individuals considering the use of citizen science and
community-based data collections. As PM is a pollutant of great interest, the NERL
sought out novel sensor technologies for the measurement of ambient particulates through a
general appeal to inventors and developers of these technologies.
The effort reported here aimed to provide data for identifying which technologies might
prove valuable in measurement of PM for a variety of potential users.
As part of this evaluation, we obtained a total of eight PM sensors costing under $2500.
This is a general cost consideration we anticipate being a ceiling for many citizen scientists. It is
recognized that a sizeable number of potentially more accurate PM sensors exist at higher cost
($3-$6K) but these were purposefully excluded from the testing due to the consideration defined
above. Table 1-1 lists the sensors purchased for evaluation. Research operating procedures
(ROPs) were developed for each sensor prior to testing.
Table 1-1. Sensors Acquired for Evaluation
Sensor           Manufacturer     City/State                   ~Cost   Website
CanarIT          AirBase          Israel                       $1500   http://www.myairbase.com/#!technology
CairClip PM2.5   CairPol          Mejannes les Ales, France    *       http://www.cairpol.com/index.php?lang=en
Speck            Carnegie Mellon  Pittsburgh, PA               $150    http://specksensor.org/
DC1100           Dylos            Riverside, CA                $300    http://www.dylosproducts.com/ornodcairqum.html
831              Met One          Grants Pass, OR              $2050   http://www.metone.com/particulate-831.php
MicroPEM         RTI              Research Triangle Park, NC   $2000   http://www.rti.org/page.cfm/Aerosol_Sensors
EcoPM            Sensaris         Crolles, France              *       http://v2.sensaris.com/store/index.php?route=product/product&product_id=66
PMS-SYS-1        Shinyei          Chuo-ku, Japan               $1000   http://www.shinyei.co.jp/STC/optical/main_pmmonitor_e.html
* Manufacturers had not yet established a consumer-based cost point at the time EPA acquired
these devices for evaluation. These devices were acquired at costs ranging from $500 to $1000.
4 https://sites.google.com/site/airsensors2014/home
5 Vallano, D., Snyder, E., Kilaru, V., Thoma, E., Williams, R., Hagler, G., Watkins, T., Air Pollution Sensors. Highlights
from an EPA workshop on the evolution and revolution in low cost participatory air monitoring. Environmental
Manager. December 2012. 28-33 (2012).
6 http://www.epa.gov/heasd/airsensortoolbox/
-------
2.0 Materials and Methods
"Bowl on pole" sensor shelters were devised and constructed for the field evaluations.
The shelters, shown in Figures 2-1 through 2.1-2, were constructed in-house of aluminum.
Thermostatted heating pads were attached to the tops of the bowls in an attempt to
maintain the shelter interiors, where the sensors were housed, at or above 6 °C. Even
so, it must be recognized that these heaters were purposefully selected to provide for a minimal
degree of general heating and that internal temperatures of the sensors registering at or just
below freezing were sometimes observed. These aforementioned enclosures were constructed to
ensure sensor protection from windblown rain as well as direct sunlight upon the inlets of the
devices. The shelters did not fully protect the inlets of the devices from the effects of any face
velocity issues (wind speed and/or its direction). Even so, the interface of the sensor inlet did
attempt to place a shield between the immediate sensor inlet opening and the ambient
atmosphere. That shield is viewable in Figure 2-1, with the sensor often placed directly above it or with
its inlet in one of the openings to provide unencumbered access to ambient conditions. Effects of
sensor PM starvation or stagnation would not be expected to have occurred under the test
conditions.
Figure 2-1. "Bowl on pole" sensor enclosure in closed (left) and open (right) positions.
-------
2.1 PM Sensors
The on-campus Ambient Air Innovation Research Site (AIRS; RTP, NC) was selected for all PM
sensor testing. The custom-made "bowl on pole" shelters were attached to the railing of the
monitoring platform as shown in Figure 2.1-1. In order from left to right were the Dylos DC1100,
the Met One model 831, the Carnegie Mellon Speck, the RTI MicroPEM, the CairPol CairClip, and
the Sensaris Eco PM. The AirBase CanarIT included its own shelter and was placed to the right of
the Sensaris Eco PM.
Figure 2.1-1. AIRS sampling platform with all shelters shown.
Two aluminum shelters were used to house a laptop computer for data recovery from all
sensors and most of the electrical connections. Any
connections that could not be made inside the aluminum high volume (hi-vol) shelter were
encased in a zip-lock bag that was closed with zip ties to further protect against water. The setup
inside one of the hi-vol shelters is shown in Figure 2.1-2. All power and data lines were secured
in place with zip ties. With the exception of the MicroPEM, primary data collections reported
here were performed during the November-December 2013 time period. The MicroPEM was
operated during July 29-September 2, 2014.
Note that the Sensaris Eco PM and the AirBase CanarIT both transmit their data to
proprietary websites. As such, data recovery for these sensors was performed via an internet
download.
Figure 2.1-2. Hi-vol shelter opened with laptop displayed (left) and with wiring and laptop inside (right).
-------
The previously mentioned operation schedule is intended to provide a general
understanding of the data collection periods for each of the sensors evaluated in this report. It
should be clarified that initial investigation (~ 30 day) collocation trials involving the RTI
MicroPEM were performed in the fall/winter of 2013 and that data were successfully captured.
Data findings from these evaluations were voluntarily provided to the manufacturer. The device's
results indicated generally poor agreement with the collocated FEM. Further discussions
with the manufacturer indicated significant hardware and/or software upgrades had been
performed. To provide the greatest value to the scientific community at large, we obtained
upgraded versions of the device and summarily retested them. Only the retest findings for this
sensor are being reported here. It should be recognized that the retest conditions were conducted
during summer/fall conditions as compared to generally colder conditions for the remaining
sensors. It should also be mentioned here that the AirBase CanarIT is no longer available
under that name following its acquisition by a secondary party (PerkinElmer) and is now
marketed as the ELM7. Discussions with this new vendor indicated significant changes to the
original device we tested have occurred. We have no data findings to report on this upgraded
device at this time.
Table 2-1. Summary of Sensors Evaluated
Sensor                 Method   Size Fraction  Measurement Unit  ~Weight (lb)  Shortest Time Resolution  Base Power Accessory  Data Retrieval Method
AirBase CanarIT        Optical  Undefined      µg/m3                           20 sec                    AC/DC adapter         Proprietary web server
CairClip PM            Optical  PM2.5          µg/m3             ~0.4          1 min                     Battery               Proprietary software
Carnegie Mellon Speck  Optical  Undefined      Particle counts   ~0.5          1 sec                     USB                   Proprietary software
Dylos DC1100           Optical  Undefined      Particle counts   ~4            1 min                     AC/DC adaptor         Proprietary software
Met One 831            Optical  <10 µm         µg/m3             ~4            1 min                     Battery               Proprietary software
RTI MicroPEM           Optical  PM2.5          µg/m3             ~1            10 sec                    Battery               Proprietary software
Sensaris Eco PM        Optical  PM2.5          µg/m3             ~0.5          <1 min                    USB                   Proprietary web server
Shinyei PMS-SYS-1      Optical  PM2.5          µg/m3             ~0.5          1 sec                     Power circuit board   Proprietary software
2.1.1 PM Reference Analyzers
A Grimm Technologies, Inc. (Douglasville, GA) Federal Equivalent Method (FEM)
Model EDM180 PM2.5 (EQPM-0311-195) monitor and an RM Young (Model 41382VC) RH and
temperature sensor were operated by EPA's Office of Air Quality Planning and Standards
(OAQPS) alongside meteorological instrumentation at the AIRS monitoring station on the EPA
campus in Research Triangle Park (RTP), NC. The established reference method operation was
7 http://elm.perkinelmer.com/
-------
covered under a QAPP for that study8,9. Data from the Grimm were available during the data
collection period of the sensor evaluation as 1-min, 5-min, or 60-min averages. Sensors tested in
this study featured time resolutions between 1-s and 5-min. We selected a matched data
integration period (average) of 5 minutes for comparison with the sensors. General relationships
between the Grimm response and environmental conditions are reported in Figure 2.1.1-1.
Figure 2.1.1-1. Grimm data vs. temperature and RH (temperature: y = 0.1213x + 8.0222, R2 = 0.0237; RH: y = 0.084x + 3.0625, R2 = 0.0697).
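As an illustration of the matched 5-minute comparison described above, the following Python sketch averages a sensor record and the Grimm record onto the same 5-minute grid. The file and column names are assumptions made for illustration; this is not the processing code actually used in the study.

import pandas as pd

def to_5min_mean(csv_path, time_col="timestamp", value_col="pm"):
    """Load a time-stamped PM series and average it onto 5-minute intervals."""
    df = pd.read_csv(csv_path, parse_dates=[time_col])
    return df.set_index(time_col)[value_col].resample("5min").mean()

# Average both records onto the same 5-minute grid, then keep only the
# intervals where both instruments reported data.
grimm_5min = to_5min_mean("grimm_edm180.csv")       # FEM reference (µg/m3)
sensor_5min = to_5min_mean("low_cost_sensor.csv")   # sensor under test
matched = pd.concat({"grimm": grimm_5min, "sensor": sensor_5min}, axis=1).dropna()
print(matched.head())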
2.1.2 AirBase CanarIT
Because the AirBase CanarIT was too large for the customized shelters and was
adequately sheltered by its own housing, it was attached to a large laboratory stand as shown in
Figure 2.1.2-1. This laboratory stand was in turn attached to the railing of the AIRS sampling
platform via a C-clamp such that its height matched those of the other sensors. It was oriented so
that its main inlet faced the platform as shown in Figure 2.1.2-2.
8 U.S. Environmental Protection Agency (EPA). July 2013. QAPP. Raleigh Multi-Pollutant Near-Road Site: Measuring
the Impact of Local Traffic on Air Quality. Research Triangle Park, NC.
9 Alion Science and Technology. 2013. Quality Assurance Project Plan: PM and VOC Sensor Evaluation, QAPP-RM-
13-01(1), November 14, 2013. Research Triangle Park, NC.
-------
Figure 2.1.2-1. AirBase CanarIT attached to laboratory stand via baling wire.
Figure 2.1.2-2. AirBase CanarIT on its laboratory stand perch.
2.1.3 CairPol CairClip PM2.5
The CairClip was originally placed on top of the shelter grating with the inlet flush to a
hole in the grating. On December 13, 2013, following a review of the data in hand (relatively low
concentrations being reported), it was suspended underneath the grating with zip ties, as shown
in Figure 2.1.3-1, to maximize airflow. The reason for this was the concern that inadequate
fresh air supply (stagnation) might be the cause of the lack of observed day-to-day PM
concentration variability with this sensor. The repositioning of the sensor to a fully open position
-------
did not subsequently change its basic performance characteristics and all data captured regardless
of positioning were used in the subsequent statistics.
Figure 2.1.3-1. CairClip PM sensor suspended beneath shelter grating.
2.1.4 Carnegie Mellon Speck
Because the Carnegie Mellon Speck's inlet is on its bottom surface, it was simply placed
on the grating as shown in Figure 2.1.4-1. The Speck experienced two interruptions in data
collection, both of which began while the operator was in the field. This suggests that it failed to
restart data collection after a data download was completed. This might be the result of operator
error and not necessarily the fault of the device.
Figure 2.1.4-1. Carnegie Mellon Speck oriented in its shelter with the lid up.
-------
2.1.5 Dylos DC1100
The Dylos DC1100 has all of its vents, inlet, and outlet on its backside. Therefore, it was
placed on its back with the vents resting directly on the grated floor of the shelter, as pictured in
Figure 2.1.5-1. There was one interruption in sampling, the reasons for which remain unknown.
Figure 2.1.5-1. Dylos DC1100 oriented in its shelter with the lid up.
2.1.6 Met One Model 831
The Met One model 831 was positioned upside down so that its inlet protruded beneath
the grating of its shelter as shown in Figures 2.1.6-1 and 2.1.6-2. The Met One experienced one
interruption in sampling, which began while the operator was in the field. This suggests that it
failed to restart data collection after a data download was completed. This might be the result of
operator error and not necessarily the fault of the device.
-------
Figure 2.1.6-1. Met One model 831 oriented in its shelter with the lid up.
Figure 2.1.6-2. Met One model 831 oriented in its shelter with the lid down.
2.1.7 RTI MicroPEM
The RTI MicroPEM is an optical particulate matter sensor that uses a size-selective inlet
to measure PM2.5. Three RTI MicroPEM units were simultaneously tested from July 29 through
September 2, 2014 at the AIRS sampling site. On the advice of the manufacturer, they were
arranged in the bowl-on-pole shelters as shown in Figure 2.1.7-1. As shown, they are placed on
the grating on their side with the opening to the nozzle facing down. Each MicroPEM unit was
-------
assigned a number, 1, 2, or 3, based on its position on the sampling platform. The operator was
kept blind to the serial number of each unit while it was in the field. There was one interruption
in sampling from 8/12/14 to 8/18/14 caused by the tripping of the ground fault circuit interrupter
(GFCI) circuit powering the devices.
Figure 2.1.7-1. RTI MicroPEM orientation on the plate of a bowl-on-pole shelter.
2.1.8 Sensaris Eco PM
The Sensaris Eco PM was placed on its side so that one of its several ventilation holes
would be in contact with the grate. The AIRS platform proved to be too far away from the only
WiFi hotspot at the AIRS monitoring site. As such, the Sensaris Eco PM was relocated first to a
hi-vol shelter and then to a "bowl on pole" shelter on top of the trailer containing the AIRS WiFi
hotspot. This relocation placed it approximately 50 m from the other sensors but still in close
proximity (< 10m) to the collocated Grimm FEM analyzer. Care was taken to place it at
approximately the same altitude as the other sensors. The Sensaris Eco PM orientation and
location are shown in Figures 2.1.8-1 and 2.1.8-2. The Sensaris Eco PM suffered from many
interruptions in overall data collection. Connectivity problems were believed to have influenced
overall data collection rates for this device.
-------
Figure 2.1.8-1. Sensaris Eco PM oriented in its shelter with the lid up.
Figure 2.1.8-2. Sensaris Eco PM sampling location (circled above).
2.1.9 Shinyei PMS-SYS-1
The Shinyei PMS-SYS-1 is an optical PM sensor that uses a size-selective inlet to
measure PM2.5. One unit was tested from July 29 to September 2, 2014 and then again from
September 15 to October 17, 2014 at the AIRS sampling site. The first test was performed with
the Shinyei sensor attached to the bottom of a bowl-on-pole shelter. The intention was to
maximize airflow to the sensor. However, the unit was found to be extremely sensitive to light
interference. Whenever the sun was shining, the unit reported nearly 800 µg/m3. As such, the
initial test was discarded and the unit relocated to a Hi-Vol shelter where it would be better
protected from sunlight. The position and orientation of the unit in the second test is shown in
Figure 2.1.9-1. The unit was attached to the lid of the Hi-Vol shelter via double-sided tape.
Figure 2.1.9-1: Shinyei in a Hi-Vol shelter. Note that the lid to the Hi-Vol shelter was closed
during sampling.
3.0 PM Sensor Results and Discussion
3.1 AirBase CanarIT
3.1.1 AirBase Results
The CanarIT (AirBase) is a multi-sensor unit capable of measuring PM (µg/m3), total
VOCs (ppb), and NO2 (ppb). Several other parameters were measured by the AirBase, but only
the unit's PM response is discussed in this report. Data that might have been affected by the
presence of an operator's vehicle (general disruption of the local air quality) were removed
starting 15 min before the operator's arrival and ending 15 min after departure. Such review was
consistently performed across all data collected for all sensors.
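A minimal sketch of that ±15-minute exclusion is shown below; the visit times and column names are hypothetical, and the time-indexed table is the one built in the earlier 5-min matching sketch. This is an illustration of the screening step, not the study's actual code.

import pandas as pd

# Hypothetical operator visit log (arrival/departure times are illustrative only).
visits = pd.DataFrame({
    "arrival":   pd.to_datetime(["2013-11-22 09:30", "2013-11-26 14:00"]),
    "departure": pd.to_datetime(["2013-11-22 10:05", "2013-11-26 14:40"]),
})

def drop_visit_windows(data, visits, pad=pd.Timedelta(minutes=15)):
    """Drop rows recorded from 15 min before each arrival to 15 min after each departure."""
    keep = pd.Series(True, index=data.index)
    for _, v in visits.iterrows():
        keep &= ~((data.index >= v["arrival"] - pad) & (data.index <= v["departure"] + pad))
    return data[keep]

# cleaned = drop_visit_windows(matched, visits)   # 'matched' as in the earlier sketch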
As seen in the trace (5-min) data shown in Figure 3.1.1-1, the AirBase did not correlate
well with the Grimm. During late November through early December for example, the AirBase
indicated a lower PM load, while the Grimm indicated that this was a period of increased PM
loading. This lack of correlation is quantified in the 24-hour average data scatter plot shown in
Figure 3.1.1-2. In addition, the AirBase showed poor correlation with temperature (Figure
3.1.1-3) and RH (Figure 3.1.1-4) measurements.
-------
Since RH fluctuates constantly over the course of a day, it was important to investigate
the 5-min average RH versus the sensor data even if the 24-hour data indicated some correlation.
The graph of that data in Figure 3.1.1-5 shows that the outliers were not correlated with RH. A
second graph with all AirBase data above 20 µg/m3 removed (Figure 3.1.1-6) also shows no
correlation between the rest of the data and RH.
Given the data detailed above, no basis for any correction factors or removal of outliers
can be found. The final scatter plot of Grimm vs. AirBase data is shown below in Figure 3.1.1-7.
The scale has been chosen manually to better illustrate the bulk of the data.
Figure 3.1.1-1. Grimm and AirBase data over time.
-------
Figure 3.1.1-2. 24-hour time-averaged PM data comparing the Grimm reference sampler with the AirBase CanarIT PM sensor (y = -0.1023x + 6.5448, R2 = 0.0394).
Figure 3.1.1-3. Temperature vs. AirBase 24-hour averaged data (y = -0.0465x + 5.746, R2 = 0.0098).
-------
Figure 3.1.1-4. RH vs. AirBase 24-hour averaged data (y = 0.0305x + 3.2698, R2 = 0.03).
Figure 3.1.1-5. RH vs. AirBase (5-min averages; y = -0.0015x + 5.4442, R2 = 2E-05).
-------
Figure 3.1.1-6. RH vs. AirBase (5-min averages) with data > 20 µg/m3 removed (y = 0.0003x + 4.9753, R2 = 5E-06).
Figure 3.1.1-7. Grimm vs. AirBase (5-min averages; y = -0.1013x + 6.4979, R2 = 0.0044).
-------
3.1.2 AirBase Discussion
The AirBase has several features that are useful for remote sampling operations. The unit
runs on 12V DC power, which is normally supplied by an AC/DC adapter. With minimal wiring,
however, the unit could be modified to work using any number of battery options. The stainless
steel housing of the AirBase, which includes a protective cover over all sampling inlets, allows
the AirBase to perform outdoors without any additional sheltering.
The AirBase transmits all data to a proprietary server where it can be accessed online.
The model tested used a Global System for Mobile Communication (GSM) subscriber identity
module (SIM) card and data plan for this purpose. This design decision eases remote operation,
as the unit requires fewer in-person operator checks. However, it does add a recurring cost of
operation since cellular data plans currently cost approximately $50 per month.
During the evaluation, interruptions in transmission to the server were experienced after
every few days of operation. These interruptions required us to cycle power to the AirBase.
However, it appeared the AirBase still collected and stored data even when it stopped
transmitting. Upon reestablishing a connection to the server, it appeared from the flashing data
transmission indicator lights that the AirBase transmitted its backlog of data at a much higher
rate than during normal operation, which is supported by the fact that no gaps occurred in the
data despite several transmission interruptions.
The trace of the AirBase PM sensor data does not appear to follow that of the Grimm
FEM analyzer. Scatter plots show that the AirBase PM data had minimal correlation with the
Grimm or with any other factors. No speculation can be provided as to why this lack of
agreement was observed.
3.2 CairPol CairClip PM2.5
3.2.1 CairClip PM2.5 Results
The CairPol CairClip PM2.5 sensor is a single sensor unit used for measuring PM in
micrograms per cubic meter (µg/m3). It should be stated that the device tested was a prototype
model kindly released by the manufacturer to accommodate our research needs. Data that might
have been affected by the presence of an operator disturbing the general air quality were
removed starting 15 min before the operator's arrival and ending 15 min after departure.
As seen in the trace (5-min) of the CairClip and Grimm data in Figure 3.2.1-1, the
CairClip appears to have substantial sensitivity issues. It recorded 0 µg/m3 for the vast majority
of the sampling time. This was the justification for reconfiguring the device following an initial
data review. Reorientation did not appear to improve the response. The 24-hour average data
show no correlation between the CairClip and the Grimm (Figure 3.2.1-2), but a strong
correlation with temperature (Figure 3.2.1-3) and a possible correlation with RH (Figure 3.2.1-
4).
-------
RH was examined first because of a known correlation between RH and the presence of
outliers in many optically based PM sensors10. The 5-min averaged RH data clearly show that all
of the highest points detected occurred at greater than 95% RH (Figure 3.2.1-5). These data
points, which are significantly higher than any others, were considered meteorology-impacted
outliers. As such, all data at RH greater than 95% were removed.
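A minimal sketch of that meteorological screening step follows, assuming an RH column has been merged into the matched 5-min table from the earlier sketches (the column name is an assumption, not the study's code).

def drop_high_rh(df, rh_col="rh_percent", threshold=95.0):
    """Keep only the rows collected at or below the stated relative humidity threshold."""
    return df[df[rh_col] <= threshold]

# screened = drop_high_rh(matched, threshold=95.0)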
As shown in Figure 3.2.1-6, the CairClip produced detectable responses only at
temperatures above 19.8 °C. Figure 3.2.1-7 is the same graph using only data at temperatures
above 19.8 °C. This clearly shows correlation between temperature and the CairClip signal.
Figure 3.2.1-8 shows that even with high humidity and low temperature data removed, no clear
correlation is observed between the CairClip and the Grimm FEM data.
Figure 3.2.1-1. Grimm data and CairClip data over time.
-------
Figure 3.2.1-2. 24-hour time-averaged PM data comparing the Grimm reference sampler with the CairPol CairClip PM sensor (y = -0.0053x + 0.1311, R2 = 0.0117).
Figure 3.2.1-3. Temperature vs. CairClip 24-hour averaged data (y = 0.0228x - 0.0951, R2 = 0.3357).
-------
Figure 3.2.1-4. RH vs. CairClip 24-hour averaged data (y = 0.0017x - 0.0425, R2 = 0.0118).
Figure 3.2.1-5. RH vs. CairClip (5-min averages).
-------
Figure 3.2.1-6. Temperature vs. CairClip (5-min averages). All data taken at humidities > 95% were removed.
Figure 3.2.1-7. Temperature vs. CairClip (5-min averages; y = 0.7954x - 16.584, R2 = 0.6568). All data taken at humidities > 95% and temperatures < 19.8 °C were removed.
-------
Figure 3.2.1-8. Grimm vs. CairClip (5-min averages; y = -0.2289x + 2.6107, R2 = 0.0637).
3.2.2 CairClip PM2.5 Discussion
The CairClip sensor operates under battery power for approximately 24 hours at a time,
although it can be (and was for this study) operated continuously using a powered mini-USB
cable connection. The unit is lightweight and very portable, which makes it viable for mobile
applications. Data are collected once per minute and must be downloaded at least every 20 days
or data files are at risk of being overwritten. The device maintained excellent uptime throughout
the study, in part because of the ease of use of both the software and hardware. Upon opening the
software, a warning message in French pertaining to ports intermittently appeared along with an
OK button. This warning message popped up repeatedly when clicking on the OK button, but the
software opened normally after sufficient clicking of the OK button. The same warning was seen
with other models of the CairClip used in other EPA studies and it seems to be a software design
issue rather than a fault of the sensor itself. Aside from the inconvenience of clicking OK
multiple times, there was no evidence that this function impeded operation of the unit in any
way.
Due to the temperature correlations previously discussed, the CairClip PM instrument
would not appear to be useful for monitoring below 20 °C. While no correlation with the Grimm
reference data was established, only three days out of the entire study featured temperatures
above 20 °C, reducing the overall database used for comparison. Additional data are required
before any conclusions can be drawn regarding the CairClip's performance at higher
temperatures.
-------
3.3 Carnegie Mellon Speck
3.3.1 Speck Results
The Carnegie Mellon Speck is an optical PM sensor that measures particle counts once
per second. The raw data included many highly defined response peaks (spikes), but the response
had reasonable characteristics and did not possess sufficient noise features to be viewed as
electronic noise, so those data 'spikes' were not removed from the raw data. Even so, Figure
3.3.1-1 shows that spikes in the data completely obscured any correlation that might be present.
The 24-hour averaged data depicted in Figures 3.3.1-2 through 3.3.1-4 suggest a strong
correlation with humidity that is likely obscuring any correlation that might be present with
temperature and the Grimm. Relative humidity can change rapidly over the course of a day,
necessitating a further examination of the correlation between humidity and sensor response at
the 5-min averaged time resolution, as shown in Figure 3.3.1-5.
The Speck data showed greatly increased variability at high humidity. Consequently, all
data taken at times when RH was greater than 90% were removed. While this removes the
largest spikes, at least two large spikes at low humidity remain. Close inspection of the data
found nothing to suggest these spikes were related to high humidity or rain events. Figure 3.3.1-6
shows Speck particle counts vs. temperature with the high humidity data removed. Some large
outliers remain, suggesting some relationship between the potential range of these outliers and
temperature, but causality has not been defined.
Many attempts were made to associate the remaining spikes to a factor that could be
corrected for or removed, but these attempts were unsuccessful. Taking the square root or the log
of the Speck data was also futile. With no clear method to identify additional outliers, the plot of
Speck vs. Grimm data in Figure 3.3.1-7 shows no correlation.
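The transformation checks mentioned above can be sketched as follows. The sketch assumes the matched 5-min table with 'grimm' and 'speck' columns from the earlier examples; it illustrates the idea only and is not the analysis actually run.

import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    return np.corrcoef(x, y)[0, 1] ** 2

# Compare R2 for the raw, square-root, and log-transformed Speck counts:
# for label, speck in [("raw", matched["speck"]),
#                      ("sqrt", np.sqrt(matched["speck"])),
#                      ("log", np.log1p(matched["speck"]))]:
#     print(label, r_squared(matched["grimm"], speck))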
Figure 3.3.1-1. Speck data and Grimm data over time.
-------
Figure 3.3.1-2. 24-hour time-averaged PM data comparing the Grimm reference sampler with the Speck (y = -6.5434x + 119.11, R2 = 0.0342).
Figure 3.3.1-3. Temperature vs. Speck 24-hour averaged data (y = 3.7986x + 31.565, R2 = 0.0153).
-------
mnn
w 100°
O3 ann
ouu
° finn
i 6°°
3.
"sT 4nn
0
0)
0.
co 200
(
Relative Humidity vs. Speck _
24-hour average PM data y 4p12 - n V^f
r\ U, I ou*r
*
^J^^r^ww^^
) 20 40 60 80 100
Relative Humidity (%)
Figure 3.3.1-4. RH vs. Speck 24-hour averaged data.
Figure 3.3.1-5. RH vs. Speck (5-min averages).
-------
Figure 3.3.1-6. Temperature vs. Speck (5-min averages; y = 1.6775x - 3.9824, R2 = 0.0193).
Figure 3.3.1-7. Grimm vs. Speck (5-min averages).
-------
3.3.2 Speck Discussion
The Speck unit does not contain a battery and therefore requires a constant connection to
power via a mini-USB cable. Data are preset by the manufacturer to be generated once per
second, which causes data to accumulate very quickly. It is important to note that due to the
massive file sizes involved, data must be downloaded at least every 10 days, or the files will
contain too many lines to import into Microsoft Excel without manipulating the output text file.
Finally, it is recommended that Speck Gateway software remain running continuously while the
unit is in operation, as it can take several hours to download a backlog of a few days of data.
Data are time stamped in UTC seconds (9 digits), which is the number of seconds since
midnight, January 1, 1970, GMT. Data are also time stamped in UTC milliseconds (12 digits)
when downloaded. This convention left the raw data for the Speck impossible for operators to
scan visually as 9- and 12-digit numbers are not easily mentally converted to dates and times.
Thus, making sure the correct data were downloaded required exporting the data to Excel and
converting the time stamps into an easily readable format.
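For readers who encounter the same time stamps, a short Python sketch of that conversion is given below; the example values are illustrative only.

from datetime import datetime, timezone

def epoch_to_utc(value):
    """Convert an epoch time stamp in seconds (or milliseconds) to a UTC datetime."""
    seconds = value / 1000.0 if value >= 1e11 else float(value)
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

print(epoch_to_utc(1385467200))      # -> 2013-11-26 12:00:00+00:00
print(epoch_to_utc(1385467200000))   # the same instant, expressed in milliseconds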
The data contained large groupings of very small values interspersed with very large
spikes; not all of these spikes could be explained. No correlation could be found with the Grimm
FEM analyzer.
It should be mentioned here that, based on post-analysis summarization of the Speck data
and information on the development of a more advanced Speck, a second round of testing
was performed during the early fall of 2014 using the newest version available from the
developer. Unfortunately, the device we obtained suffered a mechanical issue, which resulted in
its failure, and no updated findings can be shared here. Resource limitations prevent us from
conducting a third data collection attempt with this sensor. We encourage readers to review
information provided by the manufacturer that indicated the device now reports output in units of
µg/m3 and with a response algorithm developed versus collocated reference monitoring
(www.specksensor.org). Based upon the information shared by the manufacturer, the device has
been upgraded substantially. Even so, we have no data relative to the upgraded model.
3.4 Dylos DC1100
3.4.1 DC1100 Results
The Dylos DC1100 measures PM in particle counts at two size cutoffs. "Large" particles
are defined by the manufacturer as particles 2.5 µm in diameter or larger. "Small" particles are
defined by the manufacturer as particles 0.5 µm in diameter or larger. By subtracting the count of
large particles from the count of small particles, PM2.5 particle counts can be approximated. It is
important to note that particles less than 0.5 µm in diameter were not measured. In addition, any
conversion factor between particle counts and mass concentration (µg/m3) would depend on the
particle density profile remaining constant. The manufacturer provided no conversion between
counts and mass concentration.
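A minimal sketch of that particle-count arithmetic is shown below; the file and column names are assumptions, and this is not the study's processing code.

import pandas as pd

def dylos_pm25_counts(df, small_col="small", large_col="large"):
    """Approximate PM2.5 particle counts by subtracting the large channel from the small channel."""
    out = df.copy()
    out["pm25_counts"] = out[small_col] - out[large_col]
    return out

# dylos = pd.read_csv("dylos_dc1100.csv", parse_dates=["timestamp"]).set_index("timestamp")
# dylos_5min = dylos_pm25_counts(dylos)["pm25_counts"].resample("5min").mean()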
For comparison with the Grimm reference data, 5-min averages were calculated for all
data from the Dylos DC 1100. The 5-min averaged large particle counts were then subtracted
from the 5-min averaged small particle counts to yield data defined as 5-min averaged
difference. Figure 3.4.1-1 shows that the Grimm and the Dylos data compare well despite using
different units on dramatically different scales. This comparison is further explored
quantitatively with the DC 1100 24-hour averaged data plotted against the Grimm reference data
(Figure 3.4.1-2) as well as temperature (Figure 3.4.1-3) and RH (Figure 3.4.1-4). The 24-hour
average data suggest a strong correlation with the Grimm reference data. No correlation with
temperature was observed while a potential correlation with humidity was evident.
RH fluctuates over the course of a day, necessitating a further look at the correlation
between RH and sensor response at a 5-min averaged time resolution (Figure 3.4.1-5). The Dylos
signal showed increased variability at high humidity. The upper bound of this variability appears
to increase exponentially with RH. The production of artificially high results in the presence of
high RH is a well-documented phenomenon with optically based particulate monitors11. As such,
all data at RH greater than 95% were removed.
A comparison of the 5-min averaged data for the Grimm and the Dylos yielded an R2
value that was sufficiently high to warrant normalization of the Dylos data. The best-fit line
shown in Figure 3.4.1-6 was used to normalize the Dylos data against the Grimm, producing the
trace in Figure 3.4.1-7.
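For readers interested in reproducing this type of processing, the following Python sketch outlines the steps described above (5-min block averages, the small-minus-large count difference, removal of data at RH above 95%, and normalization of the Dylos against the Grimm with an ordinary least-squares fit). The file and column names are illustrative assumptions only and do not reflect the instruments' native output formats.

    import numpy as np
    import pandas as pd

    # Hypothetical inputs: 1-min Dylos counts and collocated Grimm/RH data,
    # each with a parseable 'time' column (names are assumptions for illustration).
    dylos = pd.read_csv("dylos_raw.csv", parse_dates=["time"]).set_index("time")
    ref = pd.read_csv("grimm_met.csv", parse_dates=["time"]).set_index("time")

    # 5-min block averages, then "difference" = small counts minus large counts.
    dylos_5min = dylos[["small", "large"]].resample("5min").mean()
    dylos_5min["difference"] = dylos_5min["small"] - dylos_5min["large"]

    # Join with the reference PM2.5 (µg/m3) and RH (%), then drop RH > 95% periods.
    merged = dylos_5min.join(ref[["grimm_pm25", "rh"]], how="inner").dropna()
    merged = merged[merged["rh"] <= 95]

    # Ordinary least-squares fit, then normalize the Dylos trace to Grimm units.
    slope, intercept = np.polyfit(merged["grimm_pm25"], merged["difference"], 1)
    merged["dylos_normalized"] = (merged["difference"] - intercept) / slope
    r2 = np.corrcoef(merged["grimm_pm25"], merged["difference"])[0, 1] ** 2
    print(f"slope = {slope:.0f}, intercept = {intercept:.0f}, R2 = {r2:.3f}")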
" Chakrabarti, B., Fine, P.M., Delfino, R., and Sioutas, C. 2004. Performance evaluation of the active-flow personal
DataRam PM2.s mass monitor (Thermo Andersen pDR-1200) designed for continuous personal exposure
measurements. Atmospheric Environment 38:3329-3340.
Figure 3.4.1-1. Grimm data and Dylos data over time.
Figure 3.4.1-2. 24-hour time-averaged PM data comparing the Grimm reference sampler with the Dylos DC1100 PM sensor (y = 21562x + 86531, R2 = 0.533).
Figure 3.4.1-3. Temperature vs. Dylos 24-hour averaged data (y = -3881.8x + 331399, R2 = 0.0265).
Figure 3.4.1-4. RH vs. Dylos 24-hour averaged data (y = 4564.4x - 26798, R2 = 0.1829).
Figure 3.4.1-5. RH vs. Dylos (5-min averages; R2 = 0.3087).
Figure 3.4.1-6. Grimm vs. Dylos (5-min averages; y = 21368x + 51547, R2 = 0.5483).
Figure 3.4.1-7. Grimm and normalized Dylos data (5-min averages) against time.
3.4.2 DC1100 Discussion
The Dylos DC 1100 does not contain a battery and must be connected to AC power to
operate. In addition, only data recorded directly to a computer via the Dylos Logger software contain time stamps. Consequently, the Dylos should be considered for stationary applications
only. When preparing to operate a Dylos DC 1100, it is important to note that an RS-232
connection to a computer is required.
Raw data are produced once per minute. Visual inspection of the raw data showed them to be smooth and devoid of fast-time-resolution spikes, indicating that no obvious malfunctions, electrical noise, or other errors occurred during operation. The device showed no correlation
with temperature and minimal correlation with humidity. Removing data taken at 95% RH and
above was sufficient to bring the R2 value to 0.55 when compared with the Grimm reference
monitor. Analysis of the differences between the normalized Dylos data and the Grimm data as a function of temperature and humidity suggested that additionally removing data above 90% RH and data obtained at temperatures below 0 °C might improve R2 further. However, this represented removal of a large volume of data while only increasing R2 to 0.6.
A closer look at the data reveals discrepancies between the Dylos (normalized) and the
Grimm FEM data (Figure 3.4.1-8). On the afternoons of November 28, November 29, and
December 1, 2013, the Dylos showed significant and protracted spikes in particulates, whereas
the Grimm indicated only very modest increases. The three spikes appear to correlate with a
sudden increase in temperature and a drop in humidity, but this pattern was not consistently
repeated in the rest of the data. These spikes might be related to meteorological phenomena that
were not tracked in this experiment, but which feature sudden temperature and humidity
changes. It is also possible that these spikes indicate a localized combustion event (e.g., idling
diesel engine) that produced large numbers of low-density particles affecting the device. Even so, we have no record of such an event occurring, and this remains only speculation about one possible explanation.
Figure 3.4.1-8. Dylos, Grimm, Temperature, and RH from November 27 to December 2, 2013.
3.5 Met One Model 831
3.5.1 Met One Model 831 Results
The Met One Model 831 is an optical PM sensor that uses a proprietary algorithm to calculate particle mass concentration in micrograms per cubic meter (µg/m3) from particle counts at four different size fractions (PM1, PM2.5, PM4, and PM10).
Early attempts to interpret the Met One data focused on the PM2.5 channel, as it was hypothesized that data from this channel would provide the best match with the Grimm PM2.5 data. The PM2.5 channel was found to contain many outliers in the form of sharp spikes an order of magnitude or more above the adjacent data. Many attempts were made to identify and remove outliers from the PM2.5 data prior to calculating 5-min averages. Despite these efforts, 5-min averages of raw PM1 data were found to have a coefficient of determination relative to the Grimm reference data more than three times greater than that of the PM2.5 data. Figure 3.5.1-1 clearly shows that compared to the PM1 channel (which had no outliers removed), the PM2.5 channel (which had many outliers removed) displayed significantly more spikes. For these reasons, only data for the PM1 channel are reported in the remainder of this section as a best-case scenario.
Figure 3.5.1-2 shows that the responses from the Grimm and the Met One compare well.
This comparison is further illustrated using Met One 24-hour averaged data plotted against the
Grimm reference data (Figure 3.5.1-3) as well as temperature (Figure 3.5.1-4) and RH (Figure
3.5.1-5). The 24-hour averaged data suggest a correlation with the Grimm reference data, no correlation with temperature, and a strong correlation with humidity.
As RH naturally fluctuated over the course of any given day, further investigation into
the correlation between humidity and sensor response at the 5-min averaged time resolution was
necessary. These results are shown in Figure 3.5.1-6. The Met One signal showed increased
variability at high humidity. The upper bound of this variability appears to increase exponentially
with rising relative humidity. As a result, all data taken at times when the relative humidity was
greater than 90% were removed.
The 5-min averaged data scatter plot comparing the Grimm to the Met One yielded an R2
value sufficient to warrant its normalization to examine potential improvement. The best-fit line
of Figure 3.5.1-7 was used to normalize the Met One data against the Grimm, producing the
trace in Figure 3.5.1-8.
The spike seen on December 4, 2013 straddles data that were removed because they were
taken at greater than 90% RH. It is possible there was an unrecorded drizzle or light rain event
during this time that might have caused the spike. Consequently, all data collected between 04:00
and 14:00 on December 4, 2013, were removed. The scatter plot of the Met One data vs. the
Grimm was remade in Figure 3.5.1-9 and renormalized in Figure 3.5.1-10.
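A minimal Python sketch of this screening sequence is shown below; the file layout and column names are assumptions for illustration, not the actual study data files.

    import numpy as np
    import pandas as pd

    # Hypothetical 5-min averaged file containing Met One PM1, Grimm PM2.5, and RH.
    df = pd.read_csv("metone_5min.csv", parse_dates=["time"]).set_index("time")

    # Remove high-humidity periods (RH > 90%).
    df = df[df["rh"] <= 90]

    # Remove the suspect window from 04:00 to 14:00 on December 4, 2013.
    suspect = (df.index >= "2013-12-04 04:00") & (df.index <= "2013-12-04 14:00")
    df = df[~suspect]

    # Refit the PM1 channel against the Grimm and renormalize.
    slope, intercept = np.polyfit(df["grimm_pm25"], df["metone_pm1"], 1)
    df["metone_normalized"] = (df["metone_pm1"] - intercept) / slope
    r2 = np.corrcoef(df["grimm_pm25"], df["metone_pm1"])[0, 1] ** 2
    print(f"slope = {slope:.3f}, R2 = {r2:.3f}")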
Figure 3.5.1-1. Grimm vs. Met One Model 831 PM1 and PM2.5 (5-min averages; PM1: y = 0.1269x - 0.1516, R2 = 0.166; PM2.5: y = 0.3604x + 0.2432, R2 = 0.0504).
Figure 3.5.1-2. Grimm data and Met One Model 831 data over time.
Figure 3.5.1-3. 24-hour time-averaged PM data comparing the Grimm reference sampler with the Met One Model 831 PM sensor (y = 0.1301x - 0.1285, R2 = 0.2307).
Figure 3.5.1-4. Temperature vs. Met One Model 831 24-hour averaged data (y = 0.0325x + 0.8165, R2 = 0.0238).
Figure 3.5.1-5. RH vs. Met One Model 831 24-hour averaged data (y = 0.0466x - 2.1905, R2 = 0.3182).
Figure 3.5.1-6. RH vs. Met One Model 831 (5-min averages).
Figure 3.5.1-7. Grimm vs. Met One Model 831 (5-min averages; y = 0.057x - 0.0562, R2 = 0.6368).
Figure 3.5.1-8. Grimm and normalized Met One Model 831 data (5-min averages) against time.
Figure 3.5.1-9. Grimm and Met One Model 831 data (5-min averages) with data from 04:00 to 14:00 on December 4 removed (y = 0.0493x - 0.008, R2 = 0.7729).
Figure 3.5.1-10. Grimm and renormalized Met One Model 831 data against time (5-min averages).
3.5.2 Met One Model 831 Discussion
While the Met One Model 831 does contain a battery, the operational duration of that
battery was not tested as part of this study and remains unverified. The device was easy to
operate and ran smoothly with only one section of missing data (11/27/13 through 12/2/13).
Because this gap spans exactly from one operator visit to the next, the failure was likely a result
of operator error. The Met One does require flow checks and zero checks, but neither required
any adjustment during the evaluation. The only caveat is that flow rate checks and zero checks require an unusually small hex key, which is awkward to use and hard to replace if misplaced or lost.
Raw data are produced once per minute. The PM2.5 and larger channels featured many abnormally high spikes, while the PM1 channel was comparatively smooth. Further analysis showed that the PM1 channel matched the reference analyzer to a far greater degree than the others. As such, this report focused on the PM1 channel only.
The device showed no correlation with temperature but a significant correlation with RH.
Removal of data taken at RH greater than 90% improved the coefficient of determination
between the Met One and the Grimm to 0.64. Several outlier spikes remained, however. Closer
examination of these spikes reveals they were immediately before or after time periods
associated with high humidity. Even so, they are not present in the majority of such periods. In
addition, there are multiple periods of high humidity in which the Met One data is devoid of
spikes and matches the Grimm data extremely well. It is possible that light mist or drizzle might
have influenced the Met One response but with rainfall accumulation too small to be adequately
measured.
3.6 RTI MicroPEM
3.6.1 MicroPEM Results
The RTI MicroPEM is an optical particulate matter (nephelometer) sensor that uses a size-selective inlet to measure PM2.5. The device as originally received produced data of poor quality during the November to December 2013 testing, including many outliers; one of the more obvious and prevalent features was a frequent negative spike to approximately -600 µg/m3. Subsequent discussions with RTI International on the findings
indicated a recent upgrade on the device was available that should resolve the issues we were
observing (poor peak trends versus the Grimm, high degree of temperature and RH influence in
concentration response). Based upon this information, the MicroPEM was upgraded to meet the
latest component configuration and then a new round of testing was performed. It is data from
that round of testing that we report.
It should be clearly stated here that the MicroPEM is not designated by RTI as a device
intended for 24-hr outdoor monitoring. Therefore, the evaluation performed involves factors
beyond its general scope of use (personal and/or indoor monitoring). Even so, the evaluation
performed here should be viewed as one that should provide practical guidelines on the use of
this device, which the authors of this report consider as one of the more advanced PM2.5 sensors
relative to its potential for meeting a variety of monitoring needs. We protected the device from
stray light as much as practically possible by operating it within the aluminum shelters
previously mentioned.
Raw data were inspected visually for large outliers. Fewer than ten outliers were found, and these were removed manually. The outliers were highly fluctuating positive and negative signal responses that appeared possibly to be electrical noise in nature. Data were then compiled into 5-minute block averages. Traces of each MicroPEM response over time overlaid with a trace of the Grimm over time are shown in Figure 3.6.1-1, Figure 3.6.1-2, and Figure 3.6.1-3.
All three MicroPEMs appear to track the Grimm well. There are, however, frequent
spikes during which the MicroPEM signal greatly exceeds the Grimm's signal. Most of these
spikes occur in all three MicroPEM units simultaneously and as previously mentioned may have
been related to a common electrical spike at the site. This suggests they are systemic to the
design. All three units were re-zeroed on 8/12/14 and 8/25/14. All three units show significant
baseline shifts at these times. Based upon our observations, more frequent zeroing (e.g., every 24 hours) might have benefited the comparison performed here. Temperature
and humidity are examined as possible confounding factors for MicroPEM 1 in Figure 3.6.1-4
and Figure 3.6.1-5.
Figure 3.6.1-4 demonstrates that there is no correlation between the performance of
MicroPEM 1 and temperature. This is in sharp contrast to the experiments conducted in the
winter of 2013-2014 during which strong correlations were reported. Figure 3.6.1-5 demonstrates
that relative humidity has no effect on the MicroPEM's signal below 90% RH. There is a
significant cluster of aberrantly high data points when RH > 94%.
All data with RH > 94% were removed. The remaining data were compiled into one-hour rolling averages for smoothing. Finally, the data were divided into three cohorts (7/29/14 to 8/12/14, 8/12/14 to 8/25/14, and 8/25/14 to 9/1/14) to account for the significant baseline shifts that occurred when the MicroPEMs were re-zeroed. Figures 3.6.1-6, 3.6.1-7, and 3.6.1-8 are scatterplots of these data for each unit vs. the Grimm. Table 3.6.1 compiles the R2 values for each unit and cohort.
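The cohort analysis can be sketched in Python as follows; this is a simplified illustration assuming a 5-min averaged input file, and the file and column names are hypothetical.

    import numpy as np
    import pandas as pd

    # Hypothetical 5-min file for one MicroPEM unit with collocated Grimm and RH data.
    df = (pd.read_csv("micropem1_5min.csv", parse_dates=["time"])
            .set_index("time")
            .sort_index())

    # Drop high-humidity periods, then smooth with 1-hour rolling averages.
    df = df[df["rh"] <= 94]
    smoothed = df[["micropem", "grimm_pm25"]].rolling("1h").mean().dropna()

    # Split into cohorts bounded by the re-zeroing dates and report R2 per cohort.
    cohorts = [("2014-07-29", "2014-08-12"),
               ("2014-08-12", "2014-08-25"),
               ("2014-08-25", "2014-09-01")]
    for start, end in cohorts:
        chunk = smoothed.loc[start:end]
        r2 = np.corrcoef(chunk["grimm_pm25"], chunk["micropem"])[0, 1] ** 2
        print(f"{start} to {end}: R2 = {r2:.2f}")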
Figure 3.6.1-1. A trace of MicroPEM unit 1 and the Grimm over time.
Figure 3.6.1-2. A trace of MicroPEM unit 2 and the Grimm over time.
Figure 3.6.1-3. A trace of MicroPEM unit 3 and the Grimm over time.
Figure 3.6.1-4. Scatterplot of MicroPEM 1 vs. temperature (y = -0.5574x + 25.606, R2 = 0.0351).
Figure 3.6.1-5. Scatterplot of MicroPEM 1 vs. relative humidity (y = 0.1887x - 3.1912, R2 = 0.0669).
Figure 3.6.1-6. Scatterplot of MicroPEM 1 vs. the Grimm. The data have been divided into three time periods (7/29 to 8/12, 8/12 to 8/25, and 8/25 to 9/1) following zeroing of the unit (best-fit lines: y = 1.1198x + 3.778, R2 = 0.573; y = 1.2038x - 0.0068, R2 = 0.7951; y = 1.1094x - 8.4691, R2 = 0.7186).
Figure 3.6.1-7. Scatterplot of MicroPEM 2 vs. the Grimm. The data have been divided into three time periods following zeroing of the unit (best-fit lines: y = 1.4196x - 7.5874, R2 = 0.8713; y = 1.3772x - 2.1549, R2 = 0.8758; y = 1.1638x - 1.4041, R2 = 0.8036).
Figure 3.6.1-8. Scatterplot of MicroPEM 3 vs. the Grimm. The data have been divided into three time periods following zeroing of the unit (best-fit lines: y = 1.4734x - 9.3819, R2 = 0.7294; y = 1.619x - 2.1884, R2 = 0.6246; y = 1.2934x - 2.8132, R2 = 0.572).
                 MicroPEM 1    MicroPEM 2    MicroPEM 3    All Units
7/29 to 8/12        0.61          0.88          0.76
8/12 to 8/25        0.80          0.87          0.62
8/25 to 9/1         0.59          0.78          0.54
Average             0.67          0.84          0.64          0.72
Std. Dev.           0.11          0.06          0.11          0.13
Table 3.6.1. R2 values for all cohorts of all MicroPEMs versus the Grimm
3.6.2 MicroPEM Discussion
The MicroPEM is a relatively simple unit to use, although it does require significantly more maintenance than any of the other sensors. Filters must be changed multiple times a week
depending on particulate loading, and the nephelometer should be zeroed frequently (daily if
possible) to take full advantage of its capabilities. The flow rate requires calibrating/auditing at
regular (e.g., twice weekly) intervals.
The MicroPEM is capable of running on either AC power or battery power, although using AC power is recommended. Even when running on AC power, a functioning coin cell battery must be in place to record accurate time stamps. If the coin cell has run down, the device is
capable of running on AA batteries instead; however, the operators found that the lifespan of a
set of AA batteries in the absence of a coin cell battery was a few days at best. In the event the
device has no battery power but is running on AC power, time stamps will revert to a "default"
time and begin counting from there. In all instances of running on default time, the amount of
time recorded on default time corresponded almost exactly with the amount of time missing from
the accurate time stamps. This allowed operators to use the default time stamped data with less
than 5-min uncertainty of when the data were taken. Finally, the software delivers the same
battery warning regardless of which battery system has failed.
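As an illustration of how such default-time records could be realigned in post-processing, the sketch below assumes the default-time block begins immediately after the last accurate time stamp plus one logging interval; the data frames, default epoch, and logging interval shown are hypothetical and do not represent the MicroPEM's actual output format.

    import pandas as pd

    # Hypothetical frames: 'good' rows carry accurate time stamps, while 'fallback'
    # rows carry time stamps counted from an arbitrary default epoch.
    good = pd.DataFrame({"time": pd.to_datetime(["2014-08-20 10:00", "2014-08-20 10:10"]),
                         "pm25": [8.2, 7.9]})
    fallback = pd.DataFrame({"time": pd.to_datetime(["2000-01-01 00:00", "2000-01-01 00:10"]),
                             "pm25": [9.1, 9.4]})

    # Shift the default-time block so it starts where the accurate record left off,
    # accepting the <5-min uncertainty noted above.
    logging_interval = pd.Timedelta(minutes=10)
    offset = (good["time"].iloc[-1] + logging_interval) - fallback["time"].iloc[0]
    fallback["time"] = fallback["time"] + offset

    realigned = pd.concat([good, fallback], ignore_index=True)
    print(realigned)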
An interesting effect that stands out in the operation of this device is the difficulty in
properly zeroing the instrument. Since each of the three units was re-zeroed three times, there are
a total of 9 zeroing events to evaluate. The degree of error of each zeroing is equal to the Y
intercept of the scatterplot between the unit and the Grimm. In only one of the nine zeroings was
the zero set too low, resulting in a positive baseline shift error. In seven of the nine, the zero was
set too high resulting in a negative baseline shift error. In three instances, this error was greater
than 5 |ig/m3. A zeroing which is set too high might be the result of particles slipping into the
system past the zero air filter. The variability in the observed severity of this error suggests an
operator error component rather than simple equipment failure. It is likely that the seal between
the zero air filter assembly and the MicroPEM inlet was to blame. The gasketed cup which
connects the MicroPEM inlet to the zero air filter is not much deeper than the opening of the
MicroPEM inlet. Slight errors in seating this cup may result in outside air, laden with particles,
leaking into the MicroPEM during zeroing. This would cause the observed abnormally high
zeroes. The problem may be solved by fabricating a deeper cup to more easily provide a seal
between the MicroPEM and the zero air filter. Figure 3.6.1-9 illustrates how the zero air filter
attaches to the MicroPEM; Figure 3.6.1-10 demonstrates the relatively shallow nature of the
gasketed cup compared to the inlet of the MicroPEM.
Finally, a look at the response factors for each of our scatterplots shows that the
MicroPEM is between 10% and 60% more sensitive to PM load than the Grimm. Some of this
excessive response is in the form of spikes that form in rapidly changing high humidity
conditions.
Figure 3.6.1-9. RTI MicroPEM with zero air filter attached.
Figure 3.6.1-10. RTI MicroPEM inlet alongside the gasketed cup which serves as an attachment
point for the zero air filter.
3.7 Sensaris Eco PM
3.7.1 Sensaris Eco PM Results
The Sensaris Eco PM produces data in 1-second and 30-second averages for PM1 and
PM2. The data were highly discontinuous and large portions were missing. These problems were
so great as to make a comparison of the trace of the Eco PM sensor and the Grimm reference
sampler of no value. The 24-hour averages were similarly inappropriate because of this sporadic
data. All four channels are plotted against time in Figure 3.7.1-1. It should be recognized that this device was a prototype kindly provided by Sensaris; therefore, the results observed here may not reflect the capability of the developer's final version.
Most of the data recorded on both PM2 channels were 0.00 µg/m3; therefore, the remainder of the analysis effort focused on the PM1 30-second averaged data. The Eco PM sensor and the Grimm sampler are compared in a scatter plot in Figure 3.7.1-2. The R2 value of 0.3153
suggests some correlation, but there are other significant factors at work. Relative humidity and
temperature were both checked as potential confounding factors in Figures 3.7.1-3 and 3.7.1-4,
respectively. There is no clear evidence of a trend with humidity. The temperature graph (Figure
3.7.1-4) shows an R2 of 0.3133, indicating a possible correlation. However, the Grimm displays
higher measurements at the same points where the Eco PM measurements are higher, suggesting
that the correlation with temperature might be coincidental. Thus, more data are required before
a case can be made for a temperature correction factor.
Figure 3.7.1-1. Sensaris Eco PM concentration measurements over time.
Figure 3.7.1-2. 30-s time-averaged PM data comparing the Grimm reference sampler with the Eco PM sensor (y = 0.034x + 0.0037, R2 = 0.3153).
Figure 3.7.1-3. RH vs. Eco PM (30-s averages; y = 0.0005x + 0.251, R2 = 0.0031).
Figure 3.7.1-4. Temperature vs. Eco PM (30-s averages; y = 0.0248x + 0.0757, R2 = 0.3133).
3.7.2 Sensaris Eco PM Discussion
The Sensaris Eco PM must communicate with an Android device via Bluetooth, which in
turn must have WiFi access. Data are transmitted to Sensdots.com and are not stored locally. An
attempt was made by the vendor to provide a version of the software that would allow local
storage of data, but this new version did not work after a full day's experimentation and
troubleshooting. Time and budgetary restrictions prevented further attempts at troubleshooting.
Perhaps the single-most interesting problem encountered in the entire study occurred
while initially configuring the Eco PM sensor. Early testing attempts were made at a coffee shop
near the EPA-RTP office in order to take advantage of available WiFi. These efforts met with no
success. During the troubleshooting process, we were informed that the Eco PM, upon activation
of its Bluetooth antenna, immediately attempts to pair with the first Bluetooth-capable iOS
device it detects. In this case, the first discovered iOS device was unrelated to this study (likely located in a bystander's pocket), ran the wrong operating system, and did not have the Android app required to operate the Eco PM. Such an erroneous pairing is a problem because it can only be cleared by powering down the Eco PM. It is, therefore, mandatory that there be no iOS
devices within Bluetooth range while the Eco PM is initializing. This particular feature in the
system might limit urban applications of the Eco PM.
The Eco PM also struggled to maintain uptime. Despite all attempts to correct the issue
by ensuring all transmitters and receivers were close to one another and shutting off
sleep/hibernation modes for all devices involved, the Eco PM was frequently found to have
ceased recording within 24 hours of being reset. In addition, recorded data were highly discontinuous; at no point were data recorded continuously for 5 consecutive minutes. Only 328
data points were recorded, and they were so spread out that this became 239 5-min "averages."
Many of these averages are only a single data point.
The Sensaris Eco PM supposedly reports PM1 and PM2 data at two different averaging times; however, the data reported for PM1 are consistently greater than the data reported for PM2. This should not be possible, since all of the PM1 data should be contained within PM2. All of the channels recorded very low values. The 1.3 µg/m3 recorded on the PM1 channel at the 30-second averaging time was the largest concentration recorded.
3.8 Shinyei PMS-SYS-1
3.8.1 Shinyei PMS-SYS-1 Results
The Shinyei was set to collect 5-minute average data. The trace data from the Shinyei
compared to the Grimm FEM data is shown in Figure 3.8.1-1.
The Shinyei appears to track the Grimm, but with significant deviations. Figure 3.8.1-2
shows that these deviations are significant enough to cause the coefficient of determination (r2)
between the Shinyei and the Grimm to be extremely poor.
Temperature and relative humidity were explored as possible sources of these deviations
in Figures 3.8.1-3 and 3.8.1-4. Temperature was found to have no correlation, while relative
humidity had no correlation below 95%. Above 95% RH there was a significant cluster of
aberrantly high data points.
Data in which RH > 95% were removed, but significant spikes remained. Daily rainfall totals from NOAA were found to correlate highly with the remaining spikes. Rainfall data from the OAQPS Triple Oaks near-road monitoring station (35°51'54.53"N, 78°49'10.80"W) were gathered to provide a more nuanced view of the rainfall record. All data collected within one hour of detected rainfall were removed. Significant spikes remained, however.
It was discovered that many of these spikes occurred several hours before rain was
detected. A detailed evaluation of the wind data recorded at the Triple Oaks site found that the
Shinyei was much more likely to report particulate concentrations higher than the Grimm FEM
analyzer when the one-hour average wind speed was greater than 1.7 m/s. In addition, when the wind speed was greater than 1.7 m/s, there was a positive correlation (r2 = 0.3144) between wind speed and the difference between the Shinyei and the Grimm. At wind speeds less than 1.7 m/s there was no correlation. This is detailed in Figure 3.8.1-5. Data collected when the 1-hr average wind speed was greater than 1.7 m/s were removed.
Figure 3.8.1-6 is a trace of the Shinyei data with high humidity, high wind, and rain removed
alongside the Grimm data over time. Figure 3.8.1-7 is a scatterplot of the Shinyei vs the Grimm.
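A Python sketch of this screening sequence is given below; the file and column names are assumptions for illustration, and the rain and wind columns are presumed to have been merged from the Triple Oaks records beforehand.

    import numpy as np
    import pandas as pd

    # Hypothetical 5-min file with Shinyei, Grimm, RH, rainfall, and 1-hr wind columns.
    df = (pd.read_csv("shinyei_5min.csv", parse_dates=["time"])
            .set_index("time")
            .sort_index())

    # 1. Remove periods with RH > 95%.
    df = df[df["rh"] <= 95]

    # 2. Remove data within one hour of detected rainfall.
    rain_times = df.index[df["rain_mm"] > 0]
    near_rain = pd.Series(False, index=df.index)
    for t in rain_times:
        near_rain |= (df.index >= t - pd.Timedelta(hours=1)) & (df.index <= t + pd.Timedelta(hours=1))
    df = df[~near_rain]

    # 3. Remove periods when the 1-hr average wind speed exceeded 1.7 m/s.
    df = df[df["wind_1hr_avg"] <= 1.7]

    r2 = np.corrcoef(df["grimm_pm25"], df["shinyei"])[0, 1] ** 2
    print(f"R2 after screening = {r2:.4f}")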
Figure 3.8.1-1: A trace of the Shinyei and the Grimm over time.
Figure 3.8.1-2: Grimm vs. Shinyei (5-min averages; y = 0.0659x + 5.2302, R2 = 0.0033).
Figure 3.8.1-3: Scatterplot of the Shinyei vs. temperature.
Figure 3.8.1-4: Scatterplot of the Shinyei vs. relative humidity.
Figure 3.8.1-5: Scatterplot of the Shinyei vs. wind speed. The graph is broken into two parts to illustrate the change in correlation at 1.7 m/s wind speed.
Figure 3.8.1-6: A trace of the Shinyei and the Grimm over time. Data affected by high humidity (>95%), winds (>1.7 m/s in a one-hour average), or occurring within one hour of measured rainfall have been removed.
Figure 3.8.1-7: Scatterplot of the fully processed Shinyei data vs. the Grimm (y = 0.2917x + 2.756, R2 = 0.1516).
3.8.2 Shinyei PMS-SYS-1 Discussion
The Shinyei is unusually sensitive to light and wind; therefore, the device would need to
be housed in a well-designed enclosure to improve sensor performance. The need for an
enclosure is compounded by the fact that most of the circuitry for the device is in the form of a
plain circuit board with no housing whatsoever. It is up to the end user to not only house the unit
in such a way that it will be well shielded from light, moisture and wind while preventing air
stagnation, but also to protect the circuitry from electrical shorts.
The Shinyei is incapable of recording data without a constant connection to a computer
via Ethernet crossover cable. As a result, the mobility of this device can be limited. In addition,
Ethernet crossover cables can be difficult to acquire but can be made with an Ethernet cable
crimper. The requirement of a specialized cable or a specialized tool to make the cable may be
challenging for citizen science user groups/applications.
Finally, even after accounting for light intrusion, humidity, rain, and wind speed, the
coefficient of determination was poor for quantitative measurements (r2= 0.1516).
3.9 General Discussion
The performance of all PM sensors tested is summarized in Table 3.9-1. The terms used
in the table are defined as follows:
R2: coefficient of determination of the final scatter plot of 5-min averaged data against the Grimm FEM data. This column indicates the linearity of the sensor response.
Response: slope of the best-fit line of the final scatter plot of 5-min averaged data against the Grimm FEM data. This column can be used as a calibration factor for the sensor (see the sketch following this list). Calculated as either particle counts or µg/m3 (sensor reporting units), as appropriate, divided by µg/m3 (reference analyzer reporting units).
RH limit: the highest relative humidity at which the sensor can produce reliable data.
Temp Effects: if a direct relationship exists between temperature and the sensor's
signal, the R2 of that relationship is displayed.
Time Resolution: the measure of how frequently the sensor produces a PM data point.
Uptime: qualitative assessment by the operator about the frequency of data loss.
Ease of Installation: qualitative assessment by the operator about the level of effort
required to bring the sensor to operational status in the field.
Ease of Operation: qualitative assessment by the operator about the level of effort
required to operate the sensor, take data, and process the data.
Mobility: qualitative assessment by the operator about the level of infrastructure
required to operate the sensor in the field using the current ROP. Other procedures
might have different requirements.
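The following short example (hypothetical values; the intercept is ignored for simplicity) illustrates how the Response column could be used as a rough calibration factor, converting a sensor reading into a reference-equivalent estimate.

    # Hypothetical example using the Dylos entries from Table 3.9-1: dividing a
    # 5-min averaged particle count by the Response slope (counts per µg/m3)
    # yields a rough Grimm-equivalent concentration estimate.
    dylos_counts = 300000      # illustrative 5-min averaged "difference" count
    response = 21368           # Response slope from Table 3.9-1
    estimate_ugm3 = dylos_counts / response
    print(f"~{estimate_ugm3:.1f} ug/m3 Grimm-equivalent")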
It should be recognized that the uptime, ease of installation, ease of operation, and mobility descriptors provided here are somewhat arbitrary, as no definitive criteria exist for their quantitation. As reported here, they define what we observed when trained technical staff
attempted to operate the device in an outdoor environment. As examples, uptime rating was
highly dependent upon the ability of the device to maintain data collection operations for an
extended period of time. An excellent rating would indicate near flawless data collection
capability. Ease of installation was influenced by how quickly the device could be placed
outdoors as provided directly from the manufacturer. A poor rating is indicative of the need to
work well beyond the primary directions provided by the manufacturer to establish basic data
collection operations. Ease of operations was defined as how easy it was to start, complete and
recover data collections. A fair rating was indicative of the fact that such operations were
eventually completed but with some effort needed to make this a repetitive process. Lastly,
mobility was defined as how easy it would be to move the device from one location to another.
A poor rating would equate to a sensor that had to be hard wired to a computer, an AC/DC power
supply, or other features (e.g., weather shielding, WiFi hotspot) that would limit the ease of
movement with respect to successful data collections.
Sensor (reporting units)              R2      Response      RH Limit   Major temp   Time         Uptime      Ease of        Ease of      Mobility
                                                                       effects      resolution               installation   operation
AirBase CanarIT (µg/m3)               0.004   -0.101        100%       None         20 s         Excellent   Good           Excellent    Very good
CairClip PM (µg/m3)                   0.064   -0.229        95%        0.657        1 min        Excellent   Good           Very good    Excellent
Carnegie Mellon Speck (particle
  counts)                             0       0.06          90%        None         1 s          Very good   Good           Fair         Good
Dylos DC1100 (particle counts)        0.548   21368         95%        None         1 min        Very good   Good           Good         Poor
Met One 831 (µg/m3)                   0.773   0.049         90%        None         1 min        Excellent   Good           Good         Good
RTI MicroPEM (µg/m3)                  0.720   1.35 ± 0.12   95%        0.588        10 s         Very good   Good           Fair         Fair
Sensaris Eco PM (µg/m3)               0.315   0.034         100%       0.313        Unknown      Bad         Poor           Bad          Poor
Shinyei PMS-SYS-1 (µg/m3)             0.152   0.292         95%        None         1 s          Good        Fair           Good         Fair
Table 3.9-1. Summary of PM Sensor Performance and Ease of Use Features
A summary of each sensor and ease of use is reported below:
AirBase CanarIT: Once the AirBase CanarIT has been set up, all it requires is power
and the occasional reboot when it loses connection to the server. Even in the event connection is
lost, the sensor continues recording and saving data for transmission once connection has been
reestablished. The requirement that it be furnished with a GSM SIM card and data plan adds a
recurring expense to operations.
CairPol CairClip PM2.5: The prototype CairClip sensor does not appear to function at
temperatures below 19 °C. As a result, there is very limited data with which to draw any further
conclusions. The operator urges further testing in a warmer environment. The long battery life,
simple software, and simple operation contribute to its high scores in uptime, ease of use, and
mobility.
Carnegie Mellon Speck: The 1-second time resolution causes file sizes to get large and
cumbersome very quickly. As a result, download times are very long. While the use of UTC
seconds for time stamps likely saves memory, it also inhibits the ability of the operator to verify
correct operation in the field.
Dylos DC1100: The low mobility score is because the Dylos DC 1100 does not record
time stamps internally. It must therefore be connected to a computer at all times via RS-232 to
collect meaningful data.
Met One Model 831: All calculations were performed on the PM1 size fraction. Larger
size fractions are highly prone to outliers and do not match Grimm FEM data nearly as well even
after concerted efforts have been made to remove those outliers.
RTI MicroPEM: The MicroPEM is a comparatively high-maintenance instrument. Time
stamps sometimes malfunction when battery power is low, even if the device is operating on
external power.
Sensaris Eco PM: The sensor-to-tablet (Bluetooth) and tablet-to-website (WiFi) method of data recovery is a highly questionable design decision. The Bluetooth connection clearly has problems, and it is possible the WiFi connection does as well. Data are apparently not stored at any point until they reach the server. As such, dropped packets at any point in the process will
result in lost data. This sensor required a substantially greater level of effort than any of the other
PM sensors at every stage of the project and yielded the least amount of data. Given the small
volume of data collected, the quantitative measurements reported above should be considered
highly suspect. The mobility score is low because it requires a location that has WiFi and no iOS
devices present.
Shinyei PMS SYS-1: This sensor required extensive waterproofing in preparation for
field placement and modest electrical knowledge in making the necessary signal/power
connections. It was observed to be extremely light sensitive, and extraordinary precautions had to be taken to collect data usable for the evaluation.
4.0 Study Limitations
It must be recognized that the scope of this low cost sensor performance evaluation was limited with respect to a number of primary parameters:
The resources available to the U.S. EPA to conduct the extensive field tests defined herein, and
The scope of the performance testing, which, while extensive, was not meant to fully compare the devices against FEM standards.
4.1 Resource Limitations
4.1.1 Intra-sensor Performance Characteristics
Resource limitations routinely permitted only a single sensor from a given manufacturer
to be examined. Therefore, this report provides very limited findings on intra-sensor performance
characteristics. As with any examination of data precision, a sufficient amount of information
from multiple instruments is necessary to truly assess the ability of a monitoring device to
accurately measure the challenge concentration and to do so in a repeatable manner. Likewise, it
has been our experience that low cost sensors sometimes fail without any obvious warning and
therefore the findings being reported here may reflect comparisons not truly representative of the
device's normal performance characteristics. We can only assume that the devices operating here
were functioning properly based upon their normal operating guidelines and lack of fault
indicators (if such warnings were available). If a fault warning was observed, all such data
collection periods were parsed from the resulting analyses. When possible, a substitute device
was obtained and testing continued with the replacement.
4.1.2 Test Conditions
Resources also prevented the U.S. EPA from examining the sensors under a wide variety
of environmental and interfering agent conditions. While the research effort was initially anticipated to begin in the late summer of 2013 and continue into the early winter of that same year, a government-wide furlough during 2013, along with limited availability of FEM comparison data because of previously scheduled instrument maintenance outside of our planned monitoring dates, curtailed such plans. Ultimately, the field effort associated with nearly all the sensors was
condensed to just a two-month period (November to December 2013). This limited the range of temperature conditions one would have liked to have available for challenging the sensors.
It also resulted in the sensors being challenged with more extreme cold temperature conditions, under which many of the sensors were clearly not built to function without significant modifications. It should be clearly stated here that with the exception of the AirBase
CanarIT device, none of the other sensors tested were manufactured to meet environmental
conditions associated with outdoor monitoring, and our test results need to be considered
possibly worst-case scenarios relative to their performance. Even so, we weather protected
devices to safeguard their operation, provided modest warming to their enclosures, protected
them from stray light and excessive wind, and provided ancillary power supplies as needed. We
anticipate citizens attempting to use low cost PM sensors in outdoor circumstances regardless of
what manufacturers have suggested as operating conditions, and the tests conducted here
attempted to mimic the anticipated use patterns of citizens.
Data findings associated with the MicroPEM and Speck were limited to the summer/fall
of 2014 following a repeat of testing for these devices once it was established they had
undergone a significant upgrade in both hardware and software relative to the initial round of
testing. New testing was subsequently performed to ensure that the data reported here provided
the most positive conditions for performance challenge with respect to the manufacturer's latest
design specifications.
4.1.3 Sensor Make and Models
This work represented a limited examination of low cost PM sensors costing under $2500
per unit. Other more expensive devices are commercially available but were considered outside
the scope of what most citizen scientists might be willing to purchase relative to cost. Based on
an extensive market survey conducted prior to initiating this effort, other low cost devices existed that often used the same sensing system (a light scattering sensor) but were housed in different packaging by other manufacturers. Our eventual study design was defined by trying to select a cross section
of many of those available while trying to ensure various features specific to each of the units
offered interesting aspects not redundant in the others. Even so, it must be recognized that even
the same sensing system engineered differently by various manufacturers could offer
significantly different performance characteristics. Therefore, the summary analysis provided
here does not in any way try to 'define' low cost PM sensor capabilities. It does not serve as the
primary guide one might use in selecting a sensor for various applications. The summary of the
evaluation activity is solely intended to provide the reader with an understanding of EPA's
experiences with the various sensors and how well their data compared with that from a
collocated FEM under the conditions of this field study.