As Figure 4 shows, both cluster- and subcluster-level attrition fall within the acceptably low range when plotted on the attrition standards graph.
Note: According to current HHS evidence standards, cluster RCTs with low attrition at the cluster level but high attrition at the subcluster level are assigned the moderate study rating. Cluster RCTs also receive a moderate rating if sample members were added during the intervention period (for example, if a study of a multiyear pregnancy prevention program for high school students included, in the impact analysis, new students who transferred into the school the year after the program began).
Quasi-experimental designs
Attrition standards are not applied to quasi-experimental studies. This is because these studies are reviewed based on the baseline equivalence of their final analytic samples, from which there is no attrition.
Strategies for limiting attrition in TPP evaluations
Attrition is driven by the loss of sample members who were initially randomized but were not included in the ultimate impact analysis. Common sources of attrition in TPP evaluations include nonconsent after random assignment, dropping out of a study, and item or full survey nonresponse at the focal follow-up period used to estimate intervention impacts.
As described earlier, the attrition calculations are based on two key sets of numbers: (1) the number of youth (and clusters, if applicable) assigned to each condition; and (2) the number of youth (and clusters, if applicable) observed at follow-up. Therefore, researchers must track these numbers carefully at the design and analysis phases and understand what to do if their study is likely to fail the attrition standard. The following strategies can help limit the threat of sample attrition:
● Collect follow-up data from all people assigned to condition, even if they do not complete the program or receive only a low dose of it.
● Plan to conduct follow-up assessment using several modes to
allow for multiple opportunities to gather data from respondents.
Consider mailing the assessments to youth who move or providing
assessments online for those absent for in-person data collection.
● Plan several days of in-person data collection at each location,
to the extent possible.
● Collect extensive contact information at baseline and update
this information throughout the study to enable the study
team to locate follow-up nonresponders.
● When possible, conduct consent before random assignment,
because nonconsent after random assignment is considered
a form of attrition.
● When possible, use incentives to obtain higher response rates.
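The attrition bookkeeping described above, based on the counts assigned to each condition and the counts observed at follow-up, can be sketched in code. This is an illustrative calculation only: the function name and example counts are hypothetical, and the pass/fail judgment against the WWC attrition boundary is not reproduced here.

```python
def attrition_rates(n_assigned_tx, n_observed_tx, n_assigned_cx, n_observed_cx):
    """Compute overall and differential attrition from the two key sets of
    numbers: youth assigned to each condition and youth observed at follow-up."""
    rate_tx = 1 - n_observed_tx / n_assigned_tx   # attrition in treatment group
    rate_cx = 1 - n_observed_cx / n_assigned_cx   # attrition in comparison group
    overall = 1 - (n_observed_tx + n_observed_cx) / (n_assigned_tx + n_assigned_cx)
    differential = abs(rate_tx - rate_cx)
    return overall, differential

# Hypothetical example: 200 youth assigned per condition; 170 treatment and
# 180 comparison youth observed at the focal follow-up.
overall, differential = attrition_rates(200, 170, 200, 180)
# overall = 1 - 350/400 = 0.125; differential = |0.15 - 0.10| = 0.05
```

For a cluster RCT, the same arithmetic would be applied twice, once to cluster counts and once to subcluster (youth) counts, since both levels are assessed against the standards.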
Finally, although this does not address attrition, it is good practice
to collect baseline assessments of the outcome of interest, because
they can be used to (1) improve precision of the impact estimate,
and (2) establish baseline equivalence for the study to receive a
moderate evidence rating (if the study does have high attrition).
Reviews of studies with high levels
of sample attrition
If a study has problematic levels of sample attrition, that study will not be eligible to achieve the highest rating under HHS evidence standards. However, if the study establishes that the final analytic sample is equivalent at baseline on key variables that influence the outcome of interest, the study will still be eligible for a moderate rating. See the TPP Eval TA brief on matching techniques for recommended approaches to creating comparison groups that are equivalent on observable characteristics.
Endnotes
1. When there are multiple outcomes to be examined and some item nonresponse across the outcomes, the TPP Eval TA team recommends identifying a single, common analytic sample that does not have missing data across the outcomes of interest, and using that common sample for the purposes of analysis and attrition calculations. Using a common analytic sample will produce an easy-to-follow and understandable presentation of the analyses across multiple outcome measures. If, however, there is substantial item nonresponse across two or more outcomes, then it is recommended to consider each outcome as requiring its own, unique analytic sample, which will require multiple attrition scenarios for the various outcomes examined.
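The common-analytic-sample approach recommended in this endnote can be sketched as follows. The record structure and field names are hypothetical; the point is simply that one sample, restricted to members with non-missing data on every outcome, yields a single set of follow-up counts for all attrition calculations.

```python
def common_analytic_sample(records, outcomes):
    """Keep only sample members with non-missing data on every outcome of
    interest, producing one common sample for analysis and attrition counts."""
    return [r for r in records if all(r.get(o) is not None for o in outcomes)]

# Hypothetical follow-up records with two outcome measures.
records = [
    {"id": 1, "outcome_a": 0, "outcome_b": 1},
    {"id": 2, "outcome_a": None, "outcome_b": 0},  # missing outcome_a: dropped
    {"id": 3, "outcome_a": 1, "outcome_b": None},  # missing outcome_b: dropped
    {"id": 4, "outcome_a": 1, "outcome_b": 1},
]
sample = common_analytic_sample(records, ["outcome_a", "outcome_b"])
# sample retains records 1 and 4 only
```

If item nonresponse were substantial, the same helper could instead be called once per outcome to produce outcome-specific samples, matching the endnote's alternative of multiple attrition scenarios.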
2. The WWC has two attrition thresholds. Selection of the threshold for a particular topic is contingent on the likelihood of attrition being related to the outcome. Because many TPP programs are voluntary, the HHS evidence review selected the WWC's conservative attrition threshold, which accounts for the fact that attrition might be related to the outcomes when estimating the potential bias due to attrition. For more information on the WWC attrition standards, see the "Assessing Attrition Bias" white paper on the WWC website.
References
Mathematica Policy Research. “Identifying Programs That Impact Teen
Pregnancy, Sexually Transmitted Infections, and Associated Sexual
Risk Behaviors Review Protocol Version 3.0.” Retrieved from
http://tppevidencereview.aspe.hhs.gov/pdfs/Review_protocol_v3.pdf.
U.S. Department of Education, Institute of Education Sciences, What
Works Clearinghouse. “Procedures and Standards Handbook Version
3.0.” Retrieved from http://ies.ed.gov/ncee/wwc/pdf/reference_
resources/wwc_procedures_v3_0_standards_handbook.pdf.
This brief was written by Russell Cole and Seth Chizeck from Mathematica Policy Research for the HHS Office of Adolescent Health under contract #HHSP233201300416G.