Coming Soon!

Scrutinizer Logo

  • Sucralose needs a safety re-evaluation following metabolite discovery, say scientists
  • Study: OxyContin Reformulation Led to Rise in Hepatitis C Rates
  • Açaí fruit can transmit Chagas disease
  • How Safe Is Your Kids’ Food? A new report highlights food additives that may be harmful to kids
  • Why Your Workplace Might Be Killing You
  • That New Organic Study Doesn’t Really Show Lower Pesticide Levels
  • Push-ups and men's health at 40
  • Pumped breast milk has higher levels of potentially harmful bacteria than nursing, according to a new study
  • Mental Health and Gun Violence

Upcoming presentations: American Public Health Association (APHA) Annual Meeting, November 2019

Workplace Stressors and Negative Health Outcomes

What is the headline saying?

Why Your Workplace Might Be Killing You – Stanford scholars identify 10 work stressors that are destroying your health.

What is the news article saying?

The workplace environment an organization creates affects the mental and overall health of employees which, in turn, contributes to rising health care costs. Stressors such as job insecurity, long working hours, lack of health insurance, and increased demands at work contribute to poor health, and to mortality comparable to the fourth- or fifth-leading causes of death in the U.S. Workplace wellness programs implemented in work environments with management practices that create stress are not effective because, overall, "It costs more to remediate the effects of toxic workplaces than it does to prevent their ill effects in the first place." The article points out the main limitations provided by the researchers: because the studies are observational, they can claim only association, not causation, and the researchers examined only 10 stressors that could be attributed to management practices.

Does the headline ultimately support claims made by the news article? Does it truly summarize the key points of the news article?

Yes and no. The article focuses mostly on management practices that affect employee wellbeing. The headline also makes it seem as if the 10 stressors definitively lead to poor health and increased mortality, claims the researchers say their study cannot make. However, in the video at the end of the article, one of the researchers does say that the workplace is killing employees as a result of long hours and stress.

What are the implications of this headline?

Workplaces may be contributing to ill health in America as a result of specific stressors.

What are the implications of this news article?

Physical AND psychological effects of the workplace harm employees and the nation as a whole. Workplace wellness programs cannot overlook the negative psychological aspects of the workplace that influence the decisions employees make regarding their health (such as eating healthy and exercising), especially in the face of various workplace stressors.

What evidence currently exists to counter or support these implications?
Countering views:

Supporting views:

Work stress and risk of death in men and women with and without cardiometabolic disease: a multicohort study

State of the American Workplace

Burn-out an “occupational phenomenon”: International Classification of Diseases

Employee Burnout, Part 1: The 5 Main Causes

Workplace Mental Health: Data, Statistics, and Solutions

Are there similar and/or opposing headlines from other news outlets? Do the news outlets only link back to other news outlets?

There are similar headlines from other news outlets. The majority of these news articles link to different data sources (see above).

What are the data sources (i.e. memo, official statement, official document, research study, validated surveillance system, official report, etc.) supporting the article?

The Relationship Between Workplace Stressors and Mortality and Health Costs in the United States

Agency for Healthcare Research and Quality (2011a) Medical
expenditure panel survey. Accessed November 7, 2011.

Agency for Healthcare Research and Quality (2011b) Total health services—Mean and median expenses per person with expense and distribution of expenses by source of payment: United States, 2008. Medical Expenditure Panel Survey, Household Component data. Generated interactively. Accessed November 7, 2011.

National Opinion Research Center (2011) General Social Survey. Accessed November 7, 2011.

Are these data sources credible when applied to the news story? Why or why not?

Yes. The researchers apply relevant methodologies and incorporate national-level datasets in their analyses. Thorough justification is provided for each analysis performed as well.


228 articles were included in the meta-analysis that was conducted (115 studies used longitudinal data, including panel studies; 115 used cross-sectional data; and 2 used both types of data). Both a multiplicative model and a conservative model were used to derive change in cost and change in mortality:

The model focuses on the U.S. civilian labor force in 2010 and divides the analysis according to four subpopulations: (men, women) × (employed, unemployed). The unemployed are assumed to be exposed to only two stressors: unemployment and no insurance. The employed are exposed to all the stressors except unemployment. The model estimates the increased prevalence of four categories of poor health (henceforth termed outcomes) associated with the 10 workplace stressors (henceforth termed exposures) and then combines them with separate estimates of the increase in health spending associated with each of the categories of poor health. The four outcomes that we consider are those that are commonly measured in the medical literature: poor self-rated physical health, poor self-rated mental health, presence of physician-diagnosed health conditions, and mortality. Self-rated health measures are included because (a) they have been shown to be excellent proxies for actual health and mortality (e.g., Marmot et al 1995, Idler and Benyamini 1997); (b) they are easy to assess in surveys, including surveys of healthcare costs; and (c) they are commonly used in epidemiological studies.

First, we assume each of the four subpopulations considered to be statistically homogeneous. This allows us to focus our analysis on a characteristic individual within each subpopulation and estimate for that individual the annual healthcare spending and the probability of mortality associated with workplace stressors. The corresponding population-level estimates are obtained by scaling the individual-level estimates (by the subpopulation’s size) and summing across the four subpopulations. Second, we assume that exposures to the 10 stressors and outcomes are binary; that is, we do not account for a more nuanced interaction between stressors and outcomes that takes into account the duration of the exposure to a stressor. This is because most of the studies used to obtain the parameters in our model also employ a binary model of exposures and outcomes.
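The structure described above — binary exposures acting on a characteristic individual, then scaled up by subpopulation size — can be sketched as follows. All numbers (relative risks, the exposure pattern, baseline probability, and subpopulation size) are invented for illustration and are not the paper's actual inputs:

```python
import numpy as np

# Illustrative (made-up) inputs for one characteristic individual:
# relative risks of a poor-health outcome for three stressors, and the
# individual's binary exposure status for each, as in the paper's model.
relative_risk = np.array([1.35, 1.20, 1.50])
exposed       = np.array([1, 0, 1])

baseline_prob = 0.10  # status-quo probability of the poor-health outcome

# Multiplicative model: the joint effect of the exposures the individual
# actually faces is the product of their relative risks.
joint_rr = np.prod(np.where(exposed == 1, relative_risk, 1.0))
prob_with_exposures = min(baseline_prob * joint_rr, 1.0)

excess_prob = prob_with_exposures - baseline_prob

# Population-level estimate: scale the individual-level excess by the
# subpopulation's size (one of the four subpopulations shown here),
# then sum across subpopulations.
subpopulation_size = 70_000_000
excess_cases = excess_prob * subpopulation_size
print(f"joint RR = {joint_rr:.3f}, excess cases ≈ {excess_cases:,.0f}")
```

A conservative variant would cap or discount the joint relative risk rather than multiplying freely; the paper derives both kinds of estimate.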

Input Parameters

-Joint probability distribution of exposures – Average prevalence of (and correlation between) stressors faced by workers in the United States. Data from the General Social Survey (GSS) and from the health insurance component of the Current Population Survey (CPS) were used to calculate the joint probability distribution of exposures.

-Relative risk for each exposure-outcome pair – Incremental probability/relative risk of a given outcome among individuals exposed to a certain stressor compared with those who were not exposed (based on data from a meta-analysis of relevant epidemiological studies)

-Observed prevalence of each category of poor health in the United States – Status quo prevalence of each outcome (based on data from the Medical Expenditure Panel Survey (MEPS) and mortality data from the Centers for Disease Control and Prevention)

-Excess healthcare spending per year associated with each category of poor health in the United States, i.e. the average increase in healthcare spending for those with a certain outcome compared to those without it – incremental cost of each outcome per year (based on data from MEPS; also controls for overlapping healthcare cost contributions from multiple health outcomes)

*Confidence intervals for cost and mortality were estimated using Monte Carlo simulations and/or mathematical characterizations along with standard distributional assumptions
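A minimal sketch of the Monte Carlo step, assuming standard distributional forms (normal on the log relative risk, as a meta-analysis would yield, and beta on the exposure prevalence). All parameter values are invented for illustration, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical uncertainty in two model inputs.
n_sims = 100_000
log_rr     = rng.normal(np.log(1.3), 0.08, n_sims)  # pooled relative risk
prevalence = rng.beta(30, 70, n_sims)               # exposure prevalence

baseline_deaths = 50_000  # annual deaths in the subgroup, fixed here

# Excess mortality attributable to the exposure in each simulated draw.
rr = np.exp(log_rr)
excess = baseline_deaths * prevalence * (rr - 1.0)

# Percentiles of the simulated distribution give the confidence interval.
lo, hi = np.percentile(excess, [2.5, 97.5])
print(f"median ≈ {np.median(excess):,.0f}, 95% CI ≈ ({lo:,.0f}, {hi:,.0f})")
```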


The model was designed to overcome limitations with using inputs from multiple data sources. Specifically, the model separately derives optimistic and conservative estimates of the effect of multiple workplace exposures on health, and uses optimization to calculate upper and lower bounds around each estimate, which accounts for the correlation between exposures.

Sensitivity Analyses to address some limitations

1. The meta-analysis computed pooled relative risks by combining study populations from various countries. The assumption is that these estimates are relevant to our U.S.-based target population. To test this assumption, we performed sensitivity analyses in which we restricted the studies for the meta-analysis calculations to populations drawn from G8 countries and high-income Organisation for Economic Co-operation and Development (OECD) countries (Sensitivity Analysis 2).

2. To generate relative risk estimates for the mortality outcome in our meta-analysis, we pooled studies that estimated the risks of all-cause mortality and cause-specific mortality. To test the effect of this assumption, we repeated our analysis but excluded studies with cause-specific mortality (Sensitivity Analysis 3).

3. We pooled studies using longitudinal and cross-sectional data to estimate the relative risks in the base model. Because cross-sectional data have the limitations outlined before, we conducted a sensitivity analysis (Sensitivity Analysis 4) to study how our final estimates change if only studies that use longitudinal data are included in the meta-analytic sample.

4. In our base model, the meta-analytic sample contains studies that use either logistic regressions or Cox regressions. We test this assumption in Sensitivity Analysis 5, where we excluded studies that use Cox regressions.

5. To derive the relative risk estimates for NOINSURE, we included studies that group respondents with public insurance (Medicaid) together with the uninsured. In Sensitivity Analysis 6, we excluded studies that performed this pooling.

6. Un-insurance was assumed to be independent of the remaining exposures for the employed subgroup. In Sensitivity Analysis 7, we extended the robustness analysis to allow correlation between these exposures and no insurance.

7. Our definition of physician-diagnosed medical condition included any respondent in the MEPS data who had one or more health conditions within a list of conditions. To test sensitivity to that assumption, we repeated our estimation, but varied the threshold of conditions present needed to determine whether someone had a physician-diagnosed medical condition (Sensitivity Analysis 8).

8. We pooled exposure prevalence data from 2002, 2006, and 2010 in our base model, which assumes that the exposures were similar in those years. We tested this assumption in Sensitivity Analysis 9, where we repeated the analysis separately for each year 2002, 2006, and 2010 by using exposure data that were specific to that year.
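Each of the sensitivity analyses above amounts to restricting the meta-analytic sample by some criterion and re-pooling. A toy sketch using fixed-effect (inverse-variance) pooling of log relative risks; all study values are invented, and the restriction shown (longitudinal-only, as in Sensitivity Analysis 4 above) is just one example:

```python
import numpy as np

# Toy meta-analytic sample: per-study log relative risk, its standard
# error, and whether the study used longitudinal data. Invented values.
studies = [
    {"log_rr": 0.25, "se": 0.10, "longitudinal": True},
    {"log_rr": 0.35, "se": 0.15, "longitudinal": False},
    {"log_rr": 0.20, "se": 0.12, "longitudinal": True},
    {"log_rr": 0.40, "se": 0.20, "longitudinal": False},
]

def pooled_rr(sample):
    """Fixed-effect (inverse-variance) pooled relative risk."""
    w = np.array([1 / s["se"] ** 2 for s in sample])  # precision weights
    y = np.array([s["log_rr"] for s in sample])
    return float(np.exp(np.sum(w * y) / np.sum(w)))

base        = pooled_rr(studies)                                    # base model
sensitivity = pooled_rr([s for s in studies if s["longitudinal"]])  # restricted

print(f"base RR = {base:.3f}, longitudinal-only RR = {sensitivity:.3f}")
```

Comparing the restricted estimate against the base-model estimate shows how sensitive the final numbers are to the pooling assumption.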

What are the data sources saying? Are they being interpreted correctly in the article and are limitations provided? Are there multiple ways to interpret the data or various conclusions that may be drawn from the data?

We find that more than 120,000 deaths per year and approximately 5%–8% of annual healthcare costs are associated with and may be attributable to how U.S. companies manage their work forces. Our results suggest that more attention should be paid to management practices as important contributors to health outcomes and costs in the United States.

The data sources conclude that stressors in the work environment can be associated with health outcomes and costs in the U.S., based on various estimates generated from the models. The researchers report that, in all instances (using all models/estimates), there were more than 120,000 excess deaths each year associated with key workplace stressors. Incremental costs related to the workplace comprised 5–8% of total national healthcare expenditure in 2008.

Observations made by researchers from the results:

1. Estimates generated by our model are consistent with estimates reported previously in the literature. In particular, our results show that not having insurance is associated with about 50,000 excess deaths per year, a number quite close to the 45,000 reported by Wilper et al. (2009). This provides some confidence that our other estimates, derived and presented here for the first time, are likely to be reliable.

2. Absence of insurance contributes the most toward excess mortality, followed closely by unemployment. Low job control is, however, also an important factor contributing an estimated 31,000 excess deaths annually.

3. Not having health insurance, being in jobs with high demands, and work–family conflict are the major exposures that contribute to healthcare expenditures.

4. The exposures that contribute the most to healthcare expenditures differ from the highest contributors to mortality. This is because incremental costs stop when someone dies, so exposures with higher deaths are not necessarily associated with higher costs.

5. Although each of the exposures contributes to healthcare expenditure, not all of them contribute, at least by our estimates, to incremental deaths. This is partly due to data limitations: our analysis excluded relative risk estimates that were generated by two or fewer studies. From Table 3, we observe that several exposures for mortality fall into this category.

6. Because of the nonlinear manner in which each workplace exposure contributes to the final estimate of either expenditure or mortality, the sum of the marginal contributions from each exposure does not add up to the totals reported in Table 6.
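Point 6 can be illustrated with the multiplicative structure: when exposures combine nonlinearly, the marginal contribution of each exposure (the change from removing it from the joint estimate) overlaps with the others, so the marginal contributions need not sum to the total. All numbers are invented:

```python
baseline = 0.10        # illustrative baseline outcome probability
rr_a, rr_b = 1.4, 1.3  # invented relative risks for two exposures

total_excess = baseline * (rr_a * rr_b - 1)              # both exposures together
marginal_a   = baseline * rr_a * rr_b - baseline * rr_b  # effect of removing A
marginal_b   = baseline * rr_a * rr_b - baseline * rr_a  # effect of removing B

# Under a multiplicative model the marginal contributions overlap,
# so their sum exceeds the total excess.
print(total_excess, marginal_a + marginal_b)
```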

Our model estimates significantly lower workplace-associated expenditures and mortality for 2006, when the U.S. economy was doing well, relative to estimates in the base model and for 2010, when the U.S. economy was reeling from the global financial crisis. The estimates for 2002, around the time of an economic recession in many developed countries, were moderately lower. Overall, these results corroborate the intuition that people experience greater workplace stressors during times of economic turbulence, and that these stressors can have a significant impact on health costs and outcomes. This suggests that workplace exposures could be used to better understand how the economic climate affects health, an interesting direction for future research.


…The estimated effect of these workplace stressors is substantial, with the number of deaths associated with such stressors exceeding the number of deaths from diabetes, for instance, and with a reasonable estimate of the total costs incurred in excess of $180 billion. Our analysis suggests that these stressors could potentially be fruitful avenues for policy attention to improve health outcomes and costs.

The results reported in this paper suggest that the association between employer actions and healthcare outcomes and costs is strong. Although we stop short of claiming that employer decisions have a definite effect on these outcomes and costs, denying the possibility of an effect is not prudent either. Analyzing how employers affect health outcomes and costs through the workplace decisions they make is incredibly important if we are to more fully understand the landscape of health and well-being.

What does this mean for the general public?

Poor management practices can create stressful workplace environments that contribute to negative health outcomes among employees in the United States. Although workplaces cannot be free of all stressors, management practices can be improved. Organizations should take into consideration the workplace environment created by management practices, and not just individual-level interventions such as exercise, smoking cessation, and healthy eating, the activities usually promoted through workplace wellness programs. Employees should also consider how their workplace environment might be contributing to poor health decisions they may be making as a result of stress.

Scrutinizer Product


Summary of Scrutinizer Challenge Initiative Preconference Workshop (Pre-Assessment Data)

64 public health professionals registered for the Texas Public Health Association's (TPHA) preconference session: Scrutinizer Challenge Initiative – Using Strategic Thinking to Examine Headlines that Impact the Public's Health. The session was sponsored by the TPHA Epidemiology Section. The largest group of professionals attending the session was epidemiologists/those with an epidemiology background (38%), followed by professionals working in community health/health promotion roles (18%). Students and nurses each accounted for about 8% of registrants (a combined total of 16%).

31 of the 64 public health professionals who signed up for the preconference session completed a pre-assessment. Assumptions made when completing the pre-assessment included:

  • Headlines are generally attached to some type of news article
  • News articles generally quote or reference specific data sources
  • Research and/or surveillance studies generally serve as a data source for health-related news articles
  • The methodology/methods section of a research study frames the limitations of the data that has been collected and analyzed. It also frames what conclusions can/cannot or may/may not be made based on a given data source.

The pre-assessment asked questions to gauge the extent of exposure to health-related news headlines, the emotions resulting from that exposure, and the skills/general practice of public health professionals when it comes to responding to, sharing, and interpreting health-related news headlines. 97% of respondents reported being exposed to headlines on a routine basis (i.e. daily or a few times a week). Regarding emotions resulting from exposure to health-related news headlines, respondents were asked to select all emotions that applied (i.e. selections were not mutually exclusive). 87% of survey respondents reported experiencing positive emotions after reflecting on a health-related news headline they had read, compared to 68% who experienced positive emotions after reflecting on a headline that was shared with them by someone they knew. The top positive emotions experienced were affirmation, enthusiasm, clarity on an issue or topic, and empowerment (52–70% of respondents). 100% of survey respondents reported experiencing negative emotions after reflecting on a health-related news headline they had read, compared to 77% who experienced negative emotions after reflecting on a headline that was shared with them by someone they knew. The top negative emotions experienced were anger, sadness, confusion, and shock (48–67% of respondents). Peace and despair were reported by the fewest public health professionals: less than 5% of respondents reported experiencing peace, while 20–25% reported experiencing despair.

The perceived skills of public health professionals in reviewing health-related headlines, the articles associated with these headlines, and the data sources referenced in those articles were also assessed. 84% of survey respondents reported having the skills necessary to accurately break down the information presented in health-related headlines. 94% reported having the skills necessary to accurately break down the information presented in articles attached to health-related headlines. 81% reported having the skills necessary to accurately analyze the data sources quoted and/or referenced in articles attached to health-related headlines. 62% reported having the skills necessary to identify whether an appropriate methodology was used to generate the data quoted and/or referenced in articles tied to health-related headlines. All except one respondent reported having access to other professionals with the skills necessary to accurately break down the information presented in health-related headlines and in the articles attached to them. Two respondents reported not having access to other professionals with the skills necessary to accurately analyze the data sources quoted and/or referenced in articles attached to health-related headlines. 87% of respondents reported having access to other professionals with the skills necessary to identify whether an appropriate methodology was used to generate the data quoted and/or referenced in articles tied to health-related headlines. Table 1 below shows how respondents were applying their skills and engaging with others in their spheres of influence prior to participating in the Scrutinizer Challenge preconference session.

Table 1.

Although the majority of respondents read the articles attached to health-related headlines (77%), this percentage decreases to slightly more than half when it comes to digging deeper to review the methodology behind the studies referenced in articles (55%). A similar trend is seen when headlines are shared with others: Although 65% of respondents reported encouraging others to read the articles attached to headlines, only 19% encourage others to review the methodology behind the studies referenced in articles. Overall, 55% of respondents agree with the statement that, in general, health-related headlines empower the public to take action. However, only 23% of respondents agree that health-related headlines (in general) present information accurately and 16% believe that health-related headlines provide solutions to complex problems.


The premise behind the Scrutinizer Challenge initiative is that epidemiologists and other public health professionals have the tools/skills needed to examine hype in health-related headlines and deliver reliable messages to communities by:

  • Identifying and determining the credibility of data sources tied to headlines
  • Comparing the findings of data sources to claims being made
  • Assessing the overall implications of news stories or study headlines and
  • Creating analyses and short, actionable summaries to disseminate findings to the general public as well as specific populations

Scrutinizing headlines and contributing to reliable messages for communities is important because these headlines have implications for the public, leading to hype or awareness. Hype may result in fear, anxiety, confusion, despair, or even contribute to a false sense of security. Public health professionals can counter the hype by contributing to efforts which inspire action and awareness through informed discussions, changes in policy, changes in practice, political action, and empowerment.

The data collected from the pre-assessment support the assumption that epidemiologists and public health professionals are equipped to scrutinize headlines, given that the majority of respondents reported having the skills needed to break down the information in health-related headlines, accurately analyze data sources, and determine whether an appropriate methodology was used to generate data that are quoted or referenced in articles (or having access to others with these skills). The data also indicate that public health professionals are routinely exposed to and can be personally affected by health-related headlines. 100% of respondents reported experiencing negative emotions when reflecting on health-related headlines, compared to 87% reporting positive emotions. 77% of respondents reported experiencing negative emotions after reflecting on a news headline that was shared with them by someone they knew, compared to 68% reporting positive emotions. In both cases, negative emotions were experienced by more respondents than positive emotions, and sharing headlines with others may also have some impact.

Now What, So What?

Although there are public health professionals who see the value of the Scrutinizer Challenge initiative, and specific individuals (administrators, public health professionals, students, as well as individuals from other fields) have taken the time to voice their appreciation of the initiative, it has been difficult to spark active involvement. The TPHA preconference session was no exception, although requests were made for workshops to be delivered to teams and smaller groups of professionals at public health agencies. Attempts to assess the reason for the lack of involvement among TPHA Epidemiology Section members fell short due to a very low response rate to an electronic survey sent out months before the preconference session. The response rate for the post-assessment after the preconference was also low. Further exploration should be done to see what is preventing public health professionals, particularly epidemiologists, from actively participating in the initiative either through an organizational framework or individually. Reviewing feedback from evaluations submitted after the session may also shed some light on what is preventing active participation. It is possible that some individuals believe the efforts they are currently engaging in are sufficient, or that there are other, more pressing needs to which they should apply their skills. For now, the initiative has been shared this year at the International Society for Disease Surveillance (ISDS) conference and the Texas Public Health Association annual meeting. An abstract has been submitted to the American Public Health Association annual meeting as well. In the future, an assessment may be conducted to see if there has been any change in how preconference attendees engage with and share health-related news headlines.

OxyContin Reformulation and the Rise in Hepatitis C

What is the headline saying?
Link to article Study: OxyContin Reformulation Led to Rise in Hepatitis C Rates

Reformulating OxyContin in order to make it more difficult to abuse has inadvertently led to an increase in acute Hepatitis C cases.

What is the news article saying?
Although reformulating OxyContin may lead to a decrease in OxyContin abuse, it has encouraged people to seek out other, cheaper drugs like heroin. Hepatitis C is on the rise: after falling to stable rates prior to 2010, it has since tripled. This rise is associated with the reformulation of OxyContin and the subsequent use of heroin/injectable drugs.

Does the headline ultimately support claims made by the news article? Does it truly summarize the key points of the news article?

What are the implications of this headline?
Measures that were taken to curb the opioid crisis have had unexpected and detrimental public health effects.

What are the implications of this news article?
Efforts to curb opioid abuse have led desperate addicts to switch to cheaper, readily available alternatives such as heroin. This has resulted in an increase in hepatitis C cases that are being reported throughout the United States based on data from the CDC.

What evidence currently exists to counter or support these implications?
Countering views:
An influx of cheap heroin from Mexico may have contributed to the rise in hepatitis C

Efforts to limit the availability of prescription opioids may have contributed to the rise in hepatitis C

Supporting views:

Research published by the National Bureau of Economic Research last year found that heroin deaths began climbing just a month after the new version of OxyContin hit the market in August 2010, and that “each prevented opioid death was replaced with a heroin death.”

CDC Press Release

The changing face of heroin use in the United States: a retrospective analysis of the past 50 years. 

Abuse-Deterrent Formulations and the Prescription Opioid Abuse Epidemic in the United States: Lessons Learned From OxyContin

Changes in Prevalence of Prescription Opioid Abuse after Introduction of an Abuse-Deterrent Opioid Formulation

Changes in US Lifetime Heroin Use and Heroin Use Disorder Prevalence From 2001-2002 to 2012-2013 National Epidemiologic Survey on Alcohol and Related Conditions

Are there similar and/or opposing headlines from other news outlets? Do the news outlets only link back to other news outlets?
The news articles tie back to the same study.

What are the data sources (i.e. memo, official statement, official document, research study, validated surveillance system, official report, etc.) supporting the article?

Are these data sources credible when applied to the news story? Why or why not?
Yes. These are some of the most comprehensive and representative data that are available.


To measure each state’s initial nonmedical use of OxyContin and other pain relievers, we used self-reported data from the public-use National Survey on Drug Use and Health, which provided aggregated state-level data in two-year waves. This nationally representative household survey of people ages twelve and older is administered by the Substance Abuse and Mental Health Services Administration and is the largest US survey on substance use disorder. The survey asks about “nonmedical OxyContin use” within the past year as well as about “nonmedical prescription pain reliever use.”

We computed the rate of OxyContin misuse in each state before the reformulation from 2004 (the first year for which data were available) through 2009. We pooled the pre-reformulation years to reduce measurement error. There is substantial geographic variation in this measure, as shown in online appendix A. The National Survey on Drug Use and Health is the only data source to specify both OxyContin (the exact drug product affected by the reformulation) and nonmedical use in the survey question. This measure has been shown to be strongly correlated with administrative measures of state oxycodone supply and state OxyContin prescriptions in verified claims data.

Analyses performed at the state level

Divided states into two groups and tested whether there was differential growth in hepatitis C infection rates after reformulation of OxyContin:

  1. States with above-median initial rates of OxyContin misuse were compared to
  2. States with below-median initial rates of OxyContin misuse

Also used a falsification exercise focused on the misuse of prescription pain relievers other than OxyContin

Estimated a difference-in-differences model, studying the relationship between changes in a state's hepatitis C infection rate before vs. after the reformulation

Testable assumption: OxyContin misuse rates were not predictive of hepatitis C infection trends before the reformulation

Fit a multivariate regression of state’s hepatitis C infection rates from 2004-2015 as a function of state and year indicators, and interactions of the state’s initial rate of OxyContin misuse with year indicators

Control variables: state unemployment rate, demographic composition variables (age, race, education), and state policy variables that may independently affect OxyContin and heroin use

Weighted regressions by state population size

Used Huber-White robust standard errors clustered at the state level to account for serial correlation
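The core state-level comparison described above can be sketched in a few lines of Python. The rates below are invented for illustration (they are not the study’s data); the point is only the difference-in-differences arithmetic.

```python
# Minimal sketch of the difference-in-differences (DiD) logic described above.
# The rates are invented illustration values (per 100,000), NOT the study's data.
# "high" = states with above-median initial OxyContin misuse; "low" = below-median.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical average hepatitis C infection rates before (pooled 2004-2009)
# and after (2010 onward) the reformulation, for each group of states.
high_pre, high_post = [0.30, 0.32, 0.31], [0.55, 0.70, 0.85]
low_pre, low_post = [0.29, 0.30, 0.31], [0.38, 0.42, 0.46]

# DiD estimate: the change in the high-misuse group minus the change in the
# low-misuse group. Under the parallel-trends assumption tested by the authors,
# this difference isolates the change associated with the reformulation.
did = (mean(high_post) - mean(high_pre)) - (mean(low_post) - mean(low_pre))
print(round(did, 3))
```

In the actual study, this estimate comes from a weighted multivariate regression with state and year indicators and clustered standard errors, not from raw group means.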

Limitations

  1. Hepatitis C infection rates from the National Notifiable Diseases Surveillance System are known to understate true infection rates
  2. Some states, especially those with increasing rates of hepatitis C infection, may have improved their reporting practices over time
  3. Misuse rates were based on self-reported data (always a possible source of bias in studies)
  4. Other events that occurred around the time of the reformulation have been hypothesized to drive hepatitis C infection rates; however, the OxyContin reformulation occurred before these events, and they would not be correlated with a state’s initial OxyContin misuse rate

What are the data sources saying? Are they being interpreted correctly in the article and are limitations provided? Are there multiple ways to interpret the data or various conclusions that may be drawn from the data?

This study shows that the introduction of abuse-deterrent OxyContin played a leading role in the rapid increase in hepatitis C infections in the United States. The infections increased three times faster in states that were most affected by the reformulation—states with above-median rates of initial OxyContin misuse—than in states with below-median rates, and this differential increase began immediately after the reformulation in 2010. Before the reformulation, there was almost no difference in hepatitis C infection rates across the two groups of states.

In contrast, growth in hepatitis C infection rates was not associated with initial misuse rates of other pain relievers, which suggests that the source of the differential rise in the infection rates found in our analysis was unique to OxyContin. These patterns point to the OxyContin reformulation, and not to other policies that broadly affected opioids, as the primary driver of the differential growth. Finally, the results were not sensitive to controlling for other opioid policies such as the adoption of PDMPs and pain clinic regulations, or excluding Florida—which experienced a significant crackdown on pill mills around the time of reformulation.


The shift to injection drug use affects more than just overdose risk; it also raises the risk of spreading highly lethal diseases that will place an enormous burden on the health care system in the future.

What does this mean for the general public?

Making OxyContin more difficult to abuse inadvertently led more people to look for alternatives and has caused a new public health crisis: a rise in acute hepatitis C cases. Overall, the medical and law enforcement communities must recognize the critical transition from prescription drugs to other drugs that may be injected. Additional interventions must be considered: perhaps a safer intermediate drug, or policies that alleviate the harms associated with illicit drug use.

Scrutinizer Product

sc hep c

Prenatal Fluoride Exposure and Attention Deficit Hyperactivity Disorder (ADHD) in Children

What is the headline saying?
Link to article Prenatal Fluoride Exposure Linked to ADHD in Kids

What is the article saying?

Prenatal exposure to higher levels of fluoride not only impairs cognitive development but also significantly increases the incidence of attention-deficit/hyperactivity disorder (ADHD) in children, new research shows.

…it is the first [study] to find an increased incidence of ADHD with prenatal fluoride exposure.

We observed a positive association between higher prenatal fluoride exposure and more behavioral symptoms of inattention, which provide further evidence suggesting neurotoxicity of early-life exposure to fluoride.

Does the headline ultimately support claims made by the article? Does it summarize key points of the article?
Yes. However, the lead author makes sure to say that this study alone does not settle the debate about whether or not fluoride should be added to water sources. See below:

For the past 50 years, the medical establishment has claimed that fluoride is safe and effective; should the official position on fluoridation change? I do not believe our study alone can be used to answer this question.

What are the implications of the headline and article?
Countries and geographical areas that artificially add fluoride to water, have it naturally occurring in the environment, or add it to salt are putting fetuses and, consequently, kids at risk for ADHD.

They fuel the debate about whether or not fluoride should be removed from the drinking water in countries that have implemented this public health intervention.

What evidence currently exists to counter or support these implications?

Fluoridation and attention deficit hyperactivity disorder – a critique of Malin and Till (2015)

Fluoride exposure and reported learning disability diagnosis among Canadian children: Implications for community water fluoridation

Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association

Are there similar and/or opposing headlines from other news outlets? Do the news outlets only link back to other news outlets?
There are similar headlines all tied to the same study – Google Search

What are the data sources (i.e. memo, official statement, official document, research study,  validated surveillance system, official report, etc.) supporting the article?
Research Study: Prenatal fluoride exposure and attention deficit hyperactivity disorder (ADHD) symptoms in children at 6-12 years of age in Mexico City.

Are these data sources credible when applied to the news story? Why or why not?
Data source is not credible due to limitations of the study. Please continue reading for more details. Also, please read the limitations section of the study:

Study participants

Three different cohorts of women in the Early Life Exposures to Environmental Toxicants (ELEMENT) birth cohort study with available maternal urinary samples during pregnancy, along with child assessments of ADHD-like behaviors at age 6-12.


Screening for ADHD

  • Mothers completed Conners’ Rating Scale-Revised (CRS-R)
  • Conners’ Continuous Performance Test (CPT II) was administered to children 6-12 years of age
  • Conners Scale for Assessing ADHD
    • As with all psychological evaluation tools, the Conners CBRS has its limitations. Those who use the scale as a diagnostic tool for ADHD run the risk of incorrectly diagnosing the disorder or failing to diagnose the disorder. Experts recommend using the Conners CBRS with other diagnostic measures, such as ADHD symptom checklists and attention-span tests.

To measure fluoride levels in urine:

The 24-hour urine collection should be used wherever possible…but 14–16 or even 8-hour collection can be used if necessary. Where 24-hour or continuous supervised collection periods are not possible, spot samples of urine can sometimes provide valuable information…

A spot urine sample is defined as an un-timed “single-void” urine sample. This method is the least informative method for studying fluoride exposure, because the amount of fluoride excreted per day or per hour cannot be calculated from the concentration alone.

If spot samples are collected, it is best to take them at several times within a day. Urine that has accumulated in the bladder over a short period may reflect a short-lived peak level of the fluoride concentration. Hence, the longer the urine is retained in the bladder, the more representative it is of 24-hour results. For each spot sample, the hour when it was obtained should be recorded. When spot samples are collected in a follow-up assessment of urinary fluoride, the time of day at which the urine is passed should be approximately equal to the collection times in the initial excretion study. In programmes where fluoride is given once or twice per day, spot urine samples are not useful unless they are scheduled in such a way as to be directly associated with the fluoride intake. 

Attention outcomes of interest:

  • Diagnostic and Statistical Manual of Mental Disorders – 4th edition (DSM-IV) criteria for ADHD, Conners’ ADHD Indices (CRS-R), Conners’ Continuous Performance Test (CPT-II)
    • DSM-IV Inattention Index
    • DSM-IV Hyperactive-Impulsive Index
    • DSM-IV Total Index (Inattentive, Hyperactive-Impulsive)
    • Cognitive Problem/Inattention and Hyperactivity Index
    • Conners’ ADHD Index and CGI: Restless-Impulsive


Covariates selected a priori, based on theoretical relevance or observed associations with fluoride exposure and/or the analyzed neurobehavioral outcomes

Questionnaires collected information on maternal factors at the first pregnancy visit and infant demographics during pregnancy. Mothers completed a socioeconomic status questionnaire during the visit at which the psychometric tests were conducted. The Home Observation for Measurement of the Environment (HOME) Inventory was administered to a subset of participants at the same time as the neurobehavioral tests.

Data analysis

After univariate and bivariate analysis:

  • Initial fully adjusted linear regression
    • Outcomes = skewed residuals
  • Corrected with a generalized linear model (GLM) with a Gamma-distributed dependent variable and identity link
    • Used to examine the adjusted association between prenatal fluoride and each neurobehavioral outcome
  • Model adjustments
    • Models adjusted for maternal factors; for infant-specific factors and socioeconomic status; and for potential cohort and calcium-intervention effects
    • Potential confounders – sensitivity analyses involving subset
      • HOME Inventory
      • Child contemporaneous fluoride exposure measured by child urinary fluoride adjusted for specific gravity
      • Maternal blood mercury
      • Maternal bone lead
  • Cook’s D
  • Generalized Additive Models (estimated using cross validation in R)
    • Visualize adjusted association between fluoride exposure and measures of attention to examine non-linearity (tested using the inclusion of a quadratic term in the model)
  • Applied the Benjamini-Hochberg false discovery rate procedure to correct for multiple testing (Q = .5, m = 10 tests)
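As a rough illustration of the Benjamini-Hochberg step the authors describe, here is a minimal Python sketch. The p-values and the FDR level q = 0.05 are invented for the example; they are not taken from the study.

```python
# Benjamini-Hochberg false discovery rate procedure (illustrative sketch).

def benjamini_hochberg(p_values, q):
    """Return the indices of hypotheses rejected at FDR level q."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p-values
    # Find the largest rank k with p_(k) <= (k/m) * q; reject the k smallest.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= (rank / m) * q:
            k_max = rank
    return sorted(order[:k_max])

pvals = [0.001, 0.004, 0.019, 0.03, 0.05, 0.12, 0.21, 0.35, 0.6, 0.9]  # m = 10 tests
print(benjamini_hochberg(pvals, q=0.05))  # indices of the tests that survive correction
```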

Results
Only 10% of mother-child pairs fell within clinically significant range for CRS-R and MUFcr (average of all urinary samples)

  • Inattention was based on CRS-R; not hyperactivity or CPT-II outcomes (CPT-II fell within average range)
  • Higher concentration of MUFcr associated with parent-endorsed symptoms (statistically significant even after multiple corrections)
    • DSM-IV Inattention
    • DSM-IV Total ADHD
    • Cognitive Problem/Inattention Index
    • ADHD Index


Sensitivity analyses did not change CRS-R scores

Observations of MUFcr and CRS-R suggest that higher levels of urinary fluoride concentration did not increase ADHD-like symptoms

Limitations

From article:

  • Cohort study not initially designed to look at fluoride exposure
  • Did not take routine samples for the majority of participants for each trimester
  • Cannot relate how intake of fluoride relates to concentration in pregnant women
  • No family history or genetic markers collected
  • No clinical diagnosis of ADHD
  • No teacher reports of ADHD using CRS-R
  • No functional consequences of symptoms characterized to clinically diagnose the disorder

My additions:

  • Convenience sample
  • Focused on women in Mexico who consume water that naturally has fluoride (it is not artificially added) as well as salt that also has fluoride; not broadly generalizable
  • Limited opportunity for public health intervention regarding the source of naturally-occurring fluoride in water
  • Spot samples not adequate to make strong conclusions, need to record the time samples were taken (and also take multiple samples)


From authors of the study:

In summary, we observed a positive association between higher prenatal fluoride exposure and more behavioral symptoms of inattention, but not hyperactivity or impulse control, in a larger Mexican cohort of children aged 6 to 12 years. The current findings provide further evidence suggesting neurotoxicity of early-life exposure to fluoride. Replication of these findings is warranted in other population-based studies employing biomarkers of prenatal and postnatal exposure to fluoride.


What does this mean for the general public?

The headline and news article mirror the conclusions made by the authors; however, the authors’ claim that their findings provide evidence suggesting neurotoxicity of early-life exposure to fluoride is not valid given the limitations of this study (many of which the authors point out themselves).


Scrutinizer Product


Scrutinizer Analysis_actionable summary1_Fluoride


Supplemental Vitamins and Minerals for Disease Prevention and Treatment

What is the headline saying or claiming?
Link to article: There’s even more evidence to suggest popular vitamin supplements are essentially useless

What is the article saying?

Popular vitamin supplements such as vitamin C and calcium don’t have any major health benefits…

Folic acid and B vitamins with folic acid could reduce the risk of cardiovascular disease and stroke…

Niacin and antioxidants could actually cause harm…

Multivitamin use has increased although there is little to no evidence to show that this prevents disease and mortality and the U.S. Dietary Guidelines Advisory Committee recommends that people meet nutritional requirements by eating a healthy diet that is largely plant-based.

What are the implications of this headline?
Everyone should stop taking multivitamins because they are useless.

Are there similar and/or opposing headlines from other news outlets?
Do the news outlets only link back to other news outlets?

Similar article that looks at the same study:
New Evidence Your Daily Multivitamin Doesn’t Help Heart Health or Help You Live Longer

What are the data sources (i.e. memo, official statement, official document, research study,  validated surveillance system, official report, etc.) supporting the article?
Supplemental Vitamins and Minerals for CVD Prevention and Treatment

Vitamin, Mineral, and Multivitamin Supplements for Primary Prevention of Cardiovascular Disease and Cancer: U.S. Preventive Services Task Force Recommendation Statement

Are these data sources credible when applied to the article? Why or why not?
Yes. The sources review multiple studies as well as previous recommendations made by the U.S. Dietary Guidelines Advisory Committee.

What are the data sources saying?
A systematic review of data and trials published over the past 5 years (Jan 2012 – Oct 2017) shows that even if multivitamins do not harm people, they do not benefit them either (particularly, when evaluating whether they can reduce the risk of cardiovascular disease, heart attack, stroke, or early death). However, folic acid and B vitamins may actually reduce the risk of cardiovascular disease and stroke, according to the 2013 U.S. Preventive Services Task Force.

Are the data sources being interpreted correctly?

Results from the systematic reviews and meta-analyses revealed generally moderate- or low-quality evidence for preventive benefits (folic acid for total cardiovascular disease, folic acid and B-vitamins for stroke), no effect (multivitamins, vitamins C, D, β-carotene, calcium, and selenium), or increased risk (antioxidant mixtures and niacin [with a statin] for all-cause mortality). Conclusive evidence for the benefit of any supplement across all dietary backgrounds (including deficiency and sufficiency) was not demonstrated; therefore, any benefits seen must be balanced against possible risks.

What is the study design?
Researchers conducted a review and meta-analysis of existing systematic reviews and meta-analyses and randomized controlled trials published in English (using Cochrane Library, MEDLINE, and PubMed). They also conducted searches for individual supplements of the vitamins and minerals in the USPSTF report of 2013 for CVD outcomes and total mortality.

Data analysis:

  • Researchers identified relevant articles and used 2 independent investigators to review full papers and perform data abstraction. The information gathered included the number of cases and participants in the intervention and control groups.
  • Where both supplements and dietary intakes of nutrients in foods were combined as total intakes, data were not used unless supplement data were also presented separately

  • Assessed multivitamins that include the majority of vitamins and minerals, as well as B-complex vitamins and antioxidant mixtures, as composite entities (>10 RCTs, with all-cause mortality data available for both types of supplements).
  • Summary plots were also undertaken as summaries of pooled effect estimates to include all cardiovascular outcomes, and cumulative plots were undertaken to illustrate what was already significant or had become significant since the USPSTF 2013 assessment.
  • Using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) tool, evidence was graded as high-, moderate-, low-, or very low-quality. By default, RCTs were graded as high-quality evidence. Criteria used to downgrade evidence included: study limitations (as assessed by the Cochrane Risk of Bias Tool); inconsistency (substantial unexplained interstudy heterogeneity, I2 > 50%, p < 0.10); indirectness (presence of factors that limited the generalizability of the results); imprecision (the 95% confidence interval [CI] for effect estimates crossed a minimally important difference of 5% [risk ratio (RR): 0.95 to 1.05] from the line of unity); and publication bias (significant evidence of small study effects).
  • Attention was drawn to outcomes of meta-analyses that showed significance with moderate- to high-quality evidence (with >1 RCT). In this way, [they] reduced the risk of type 1 errors in the multiple comparison undertaken and avoided the use of corrections, such as the Bonferroni correction, which might have been too conservative.
  • Review Manager (RevMan)
  • Stata (publication bias analysis)
    • Mantel-Haenszel method (used to obtain summary statistics, data presented for random effect models only)
    • Cochran Q statistic: p < 0.1 (assess heterogeneity); I2 statistic (used to quantify the Q statistic; greater than or equal to 50% = high heterogeneity)
  • Funnel plots and quantitative assessment using Begg’s and Egger’s tests (p < .05 = small study effects, publication bias, only conducted when >10 trials available in meta-analysis)
  • Number needed to treat (NNT) and number needed to harm (NNH), each computed as the inverse of the absolute risk reduction (or increase)
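Two of the quantities in the list above are simple enough to sketch directly. The inputs below are made-up numbers, not values from the reviewed trials.

```python
# Illustrative sketch of I^2 and number-needed-to-treat, as described above.

def i_squared(q_stat, df):
    """I^2: the share of variability across studies attributable to
    heterogeneity rather than chance; >= 50% is treated as high above."""
    return max(0.0, (q_stat - df) / q_stat) * 100.0

def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat: the inverse of the absolute risk reduction."""
    return 1.0 / (control_event_rate - treated_event_rate)

# Made-up example: Cochran Q of 18.0 across 10 trials (df = 9) -> I^2 = 50%.
print(round(i_squared(18.0, 9), 1))
# Made-up example: event rate 10% in controls vs. 6% treated -> ARR = 0.04, NNT = 25.
print(round(nnt(0.10, 0.06), 1))
```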



What does this mean for the public?

Otherwise healthy individuals should meet nutritional requirements by eating a healthy diet that is largely plant-based in order to prevent cardiovascular disease, heart attack, stroke, or early death, instead of depending on supplements and multivitamins.


Scrutinizer Product

Scrutinizer Analysis_actionable summary1_Multivitamins.png

Using Social Media to Change Health Behavior/Service Utilization

What is the headline saying or claiming?
Link to article: Using Social Media for Public Health, Patient Behavior Change

Social media campaigns can be useful for sparking conversation about public health issues and driving patient behavior change and education

What is the research article saying?

Social media can be an effective tool for disseminating public health messages and support better patient access to mental healthcare… (example given is the “Bell Let’s Talk” campaign which introduced Twitter as the main platform in 2012)

More awareness about mental health treatment and reducing the stigma often associated with mental health treatment access may help encourage some patients to utilize treatments when they otherwise would not have done so…

There were temporal increases in care access during the Bell Let’s Talk Twitter campaigns.

What are the implications of this headline?
Social media campaigns can drive behavior change when it comes to health issues

Are there similar and/or opposing headlines from other outlets?
N/A. Social media campaigns are often used to raise awareness about an issue.

What are the data sources?
Research study that assessed the Bell Let’s Talk Campaign to see if the social media campaign impacted youth outpatient mental health services in the province of Ontario, Canada. Researchers studied the impacts of the campaign on rates of monthly outpatient mental health visits between 2006 and 2015: Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis

What is the study design?
The researchers used a cross-sectional time-series analysis of youth who accessed outpatient mental health services during the time period mentioned previously.

Additional data source that I referred to:
Interrupted time series regression for the evaluation of public health interventions: a tutorial

Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time…

It is particularly suited to interventions introduced at a population level over a clearly defined time period and that target population-level health outcomes…

A time series is a continuous series of observations on a population, taken repeatedly (normally at equal intervals) over time. In an ITS study, a time series of a particular outcome of interest is used to establish an underlying trend, which is ‘interrupted’ by an intervention at a known point in time…

There is an expected or counterfactual trend/scenario established for comparison purposes (includes data collected prior to the intervention)…

Does the headline support claims made by the article and summarize its key points?
Yes. The headline supports claims made by and summarizes the key points of the article.

A priori information/key information needed for study design (based on the article above):

Appropriate design?

1. Clear differentiation between pre-intervention and post-intervention periods

2. Outcome should be short-term, with the possibility of changing quickly after an intervention has been implemented

Appropriate data?

1. There are no fixed limits regarding data points (amount needed); inspect pre-intervention data points using descriptive statistics (visualize)

2. Routine data (usually administrative), gathered over a long period of time/long time series

3. Understand potential bias in results related to changes in recording or data collection methods

Where is change expected?

1. Gradient of the Trend

2. Change in the Level

3. Both

When should change occur?

1. Immediately after

2. After some lag

What should be taken into account?

1. Time-varying confounders

Control for seasonality (leads to autocorrelation and over-dispersion)

Adjust for residual autocorrelation using ARIMA (autoregressive integrated moving average modeling)

Control for infectious diseases that are prone to outbreaks (use sensitivity analysis)
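The segmented-regression form behind an ITS analysis can be sketched as follows. This assumes the standard parameterization from the tutorial above (a pre-intervention level and trend, plus a level change and a slope change at the interruption); the coefficients and months are invented for illustration.

```python
# Segmented regression for an interrupted time series (illustrative sketch):
# outcome = b0 + b1*time + b2*level + b3*trend, where `level` flags
# post-intervention periods and `trend` counts periods since the interruption.

def its_prediction(t, t0, b0, b1, b2, b3):
    """Predicted outcome at month t for an intervention starting at month t0."""
    level = 1 if t >= t0 else 0           # immediate change in the level
    trend = (t - t0) if t >= t0 else 0    # gradual change in the slope
    return b0 + b1 * t + b2 * level + b3 * trend

# Invented coefficients: the counterfactual extends the pre-intervention trend
# (b2 = b3 = 0); the fitted model adds a level jump and a steeper slope.
t0 = 12  # intervention at month 12
counterfactual = its_prediction(18, t0, b0=100, b1=0.5, b2=0, b3=0)
fitted = its_prediction(18, t0, b0=100, b1=0.5, b2=4.0, b3=0.8)
print(round(counterfactual, 1), round(fitted, 1))
```

The gap between the fitted and counterfactual values at a given month is the estimated intervention effect; in practice the coefficients are estimated from the full series, with adjustment for seasonality and autocorrelation as noted above.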

Are these data sources credible when applied to the article?
Yes. The source is credible since the study design was followed/implemented as intended.

What are the data sources saying?

There was an increase between 2006 and 2015 in the rates (monthly mental health visit rates) of outpatient mental health (primary healthcare and psychiatric visits) use by youth aged 10 to 24 years old in the province of Ontario for males and females. The 2012 Bell Let’s Talk campaign was temporally associated with increases in the trends of outpatient mental health visits, especially within the adolescent female cohort. Although no discernible difference in the immediate change in the rate of mental health visits (magnitude/level change) was observed among the adolescent groups, young adults exhibited a slight drop in most outpatient mental health visits, followed by a moderate increase or plateauing of rates…

Results broken down:

1. Over 10-year period (2 time points, 2006 & 2015)

Adolescents (10-17) saw an increase in the monthly mental health visit rate for primary care and psychiatric services.
Young Adults (18-24) saw an increase in monthly mental health visit rates for primary care and psychiatric services.

2. Immediate change associated with intervention:

Adolescents (10-17)

There was no discernible difference in the immediate change in the rate of mental health visits observed that could be attributed to the campaign.

Young Adults (18-24)

There was an immediate drop in rates of mental health visits after the campaign; this group experienced a decrease and plateau in the slope of all psychiatric service visits after 2012.

3. Both female age cohorts saw increases in accessing primary health care for mental health services after the 2012 intervention.

Are the data sources being interpreted correctly?
The article makes the claim that “each year during which the campaign ran, mental healthcare access saw a spike amongst adolescent and young adult patients.” However, since only two data points were compared (the 2006 and 2015 data points) the statement about seeing a spike “each year” does not appear to be accurate. This statement also appears to contradict the one made right after it: “Following the month-long campaigns, visit rates decreased or plateaued, researchers found.” The article also advocates for a more targeted campaign with specific calls to action, to see if this may lead to more health behavior change.

Overall, the researchers discuss how the “lack of substantive step change in health care utilization from normal levels is not surprising,” since the goal of the campaign was to increase awareness of mental health and stigma. At most, the data from this study may suggest that the campaign contributed to a gradual rather than immediate change in behavior as it relates to youth in Ontario, Canada accessing mental health services. The researchers call for further exploration of the increase in female mental health service utilization over the 10-year period (possibly with an “emphasis on gender and sex within health sciences research”) and further research on “more precise modeling techniques to measure the effect of social media on population and public health.”

Are limitations provided?
The research study provides the following limitations:

1. Administrative data was used, so illness severity could not be measured. The study also could not analyze individual presentations/usage of mental health services.

2. Emergency department visits for mental health services were not included in the study (this was so that the study could focus on planned mental health activities that could possibly be attributed to the campaign).

3. Specific sub-populations could not be studied to see how the campaign may have impacted homogeneous populations/smaller groups.

4. Although there was a temporal change associated with the campaign, other factors could have contributed to this change.

5. The cumulative effect of the campaign on people over time was not explored.

I would add that caution should be taken when trying to generalize the results from this ecological study of youth in Ontario, Canada to other populations.


What does this mean for the general public and public health professionals?
Although mental health awareness can be increased using social media outlets and campaigns, more research needs to be done to see if these campaigns can also influence behavior change that leads to an increase in the utilization of mental health services on a population-level (or in specific sub-populations).

The impact of social media campaigns on population health should be evaluated using an appropriate study design.


Scrutinizer Product