Coming Soon!


  • Sucralose needs a safety re-evaluation following metabolite discovery, say scientists
  • Study: OxyContin Reformulation Led to Rise in Hepatitis C Rates
  • Açaí fruit can transmit Chagas disease
  • How Safe Is Your Kids’ Food? A new report highlights food additives that may be harmful to kids
  • Why Your Workplace Might Be Killing You
  • That New Organic Study Doesn’t Really Show Lower Pesticide Levels
  • Push-ups and men’s health at 40
  • Pumped breast milk has higher levels of potentially harmful bacteria than nursing, according to a new study
  • Mental Health and Gun Violence

Scrutinizer Preconference Topics

Prenatal Fluoride Exposure and Attention Deficit Hyperactivity Disorder (ADHD) in Children

What is the headline saying?
Link to article: Prenatal Fluoride Exposure Linked to ADHD in Kids

What is the article saying?

Prenatal exposure to higher levels of fluoride not only impairs cognitive development but also significantly increases the incidence of attention-deficit/hyperactivity disorder (ADHD) in children, new research shows.

…it is the first [study] to find an increased incidence of ADHD with prenatal fluoride exposure.

We observed a positive association between higher prenatal fluoride exposure and more behavioral symptoms of inattention, which provide further evidence suggesting neurotoxicity of early-life exposure to fluoride.

Does the headline ultimately support claims made by the article? Does it summarize key points of the article?
Yes. However, the lead author is careful to note that this study alone does not settle the debate about whether or not fluoride should be added to water sources. See below:

For the past 50 years, the medical establishment has claimed that fluoride is safe and effective; should the official position on fluoridation change? I do not believe our study alone can be used to answer this question.

What are the implications of the headline and article?
Countries and geographical areas that artificially add fluoride to water, have it naturally occurring in the environment, or add it to salt are putting fetuses and, consequently, kids at risk for ADHD.

They fuel the debate about whether or not fluoride should be removed from the drinking water in countries that have implemented this public health intervention.

What evidence currently exists to counter or support these implications?

Fluoridation and attention deficit hyperactivity disorder – a critique of Malin and Till (2015) https://www.nature.com/articles/sj.bdj.2017.988

Fluoride exposure and reported learning disability diagnosis among Canadian children: Implications for community water fluoridation https://www.ncbi.nlm.nih.gov/pubmed/28910243

Exposure to fluoridated water and attention deficit hyperactivity disorder prevalence among children and adolescents in the United States: an ecological association https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4389999/

Are there similar and/or opposing headlines from other news outlets? Do the news outlets only link back to other news outlets?
There are similar headlines all tied to the same study – Google Search

What are the data sources (e.g., memo, official statement, official document, research study, validated surveillance system, official report, etc.) supporting the article?
Research Study: Prenatal fluoride exposure and attention deficit hyperactivity disorder (ADHD) symptoms in children at 6-12 years of age in Mexico City.

Are these data sources credible when applied to the news story? Why or why not?
The data source is not credible due to limitations of the study. Please continue reading for more details, and also read the limitations section of the study:

Study participants

Participants were drawn from three different cohorts of women in the Early Life Exposures to Environmental Toxicants (ELEMENT) birth cohort study who had maternal urinary samples available during pregnancy, along with child assessments of ADHD-like behaviors at ages 6-12.

Methods

Screening for ADHD

  • Mothers completed Conners’ Rating Scale-Revised (CRS-R)
  • Conners’ Continuous Performance Test (CPT II) was administered to children 6-12 years of age
  • Conners Scale for Assessing ADHD
    • As with all psychological evaluation tools, the Conners CBRS has its limitations. Those who use the scale as a diagnostic tool for ADHD run the risk of incorrectly diagnosing the disorder or failing to diagnose the disorder. Experts recommend using the Conners CBRS with other diagnostic measures, such as ADHD symptom checklists and attention-span tests.

To measure fluoride levels in urine:

The 24-hour urine collection should be used wherever possible…but 14–16 or even 8-hour collection can be used if necessary. Where 24-hour or continuous supervised collection periods are not possible, spot samples of urine can sometimes provide valuable information…

A spot urine sample is defined as an un-timed “single-void” urine sample. This method is the least informative method for studying fluoride exposure, because the amount of fluoride excreted per day or per hour cannot be calculated from the concentration alone.

If spot samples are collected, it is best to take them at several times within a day. Urine that has accumulated in the bladder over a short period may reflect a short-lived peak level of the fluoride concentration. Hence, the longer the urine is retained in the bladder, the more representative it is of 24-hour results. For each spot sample, the hour when it was obtained should be recorded. When spot samples are collected in a follow-up assessment of urinary fluoride, the time of day at which the urine is passed should be approximately equal to the collection times in the initial excretion study. In programmes where fluoride is given once or twice per day, spot urine samples are not useful unless they are scheduled in such a way as to be directly associated with the fluoride intake. 

Attention outcomes of interest:

  • Diagnostic and Statistical Manual of Mental Disorders – 4th edition (DSM-IV) criteria for ADHD, Conners’ ADHD Indices (CRS-R), Conners’ Continuous Performance Test (CPT-II)
    • DSM-IV Inattention Index
    • DSM-IV Hyperactive-Impulsive Index
    • DSM-IV Total Index (Inattentive, Hyperactive-Impulsive)
    • Cognitive Problem/Inattention and Hyperactivity Index
    • Conners’ ADHD Index and CGI: Restless-Impulsive

Covariates

Selected a priori, based on theoretical relevance or observed associations with fluoride exposure and/or the analyzed neurobehavioral outcomes

A questionnaire administered at the first pregnancy visit collected information on maternal factors, and a questionnaire was also used to collect infant demographics during pregnancy. Mothers completed a socioeconomic status questionnaire during the visit at which the psychometric tests were conducted. The Home Observation for Measurement of the Environment (HOME) Inventory was administered to a subset of participants at the same time as the neurobehavioral tests.

Data analysis

After univariate and bivariate analysis:

  • Initial fully adjusted linear regression
    • Outcomes had skewed residuals
  • Corrected by fitting a generalized linear model (GLM) with a Gamma-distributed dependent variable and an identity link
    • Used to examine the adjusted association between prenatal fluoride and each neurobehavioral outcome
  • Model adjustments
    • Models were adjusted for maternal factors, for infant-specific factors and socioeconomic status, and for potential cohort and calcium supplementation intervention effects
    • Potential confounders – sensitivity analyses involving subset
      • HOME Inventory
      • Child contemporaneous fluoride exposure measured by child urinary fluoride adjusted for specific gravity
      • Maternal blood mercury
      • Maternal bone lead
  • Cook’s D
  • Generalized Additive Models (estimated using cross validation in R)
    • Visualize adjusted association between fluoride exposure and measures of attention to examine non-linearity (tested using the inclusion of a quadratic term in the model)
  • Applied the Benjamini-Hochberg false discovery rate procedure to correct for multiple testing (Q = .5, m = 10 tests); see the code sketch after this list
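
To make the modeling steps above concrete, here is a minimal sketch of a gamma regression with an identity link followed by a Benjamini-Hochberg false discovery rate correction. It is written in Python with statsmodels rather than the authors' actual code, and the file name, outcome columns, and covariates are placeholder assumptions, not the study's variables.

```python
# Hypothetical sketch of the modeling steps described above (not the study's code).
# Assumes a pandas DataFrame with CRS-R outcome columns, a prenatal fluoride column
# (mufcr), and covariate columns; every name here is a placeholder.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("element_cohort.csv")  # placeholder file name

outcomes = ["dsm_inattention", "dsm_hyperactive", "dsm_total",
            "cog_problem_inattention", "adhd_index"]  # placeholder outcome columns
covariates = "mufcr + maternal_age + maternal_education + smoking + child_sex + cohort"

pvals = []
for outcome in outcomes:
    # GLM with a Gamma-distributed dependent variable and an identity link,
    # mirroring the correction for skewed residuals described above.
    # (Newer statsmodels versions spell the link class links.Identity().)
    model = smf.glm(f"{outcome} ~ {covariates}", data=df,
                    family=sm.families.Gamma(link=sm.families.links.identity()))
    result = model.fit()
    print(outcome, result.params["mufcr"], result.pvalues["mufcr"])
    pvals.append(result.pvalues["mufcr"])

# Benjamini-Hochberg false discovery rate adjustment across the tested outcomes
# (the 0.05 threshold is illustrative, not the study's stated value).
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(dict(zip(outcomes, p_adj)))
```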

Results

Only 10% of mother-child pairs fell within the clinically significant range on the CRS-R; MUFcr (maternal urinary fluoride adjusted for creatinine) was calculated as the average of all urinary samples

  • Associations were seen for inattention based on the CRS-R, but not for hyperactivity or CPT-II outcomes (CPT-II scores fell within the average range)
  • Higher MUFcr concentrations were associated with more parent-endorsed symptoms (statistically significant even after correction for multiple testing):
    • DSM-IV Inattention
    • DSM-IV Total ADHD
    • Cognitive Problem/Inattention Index
    • ADHD Index

Furthermore:

Sensitivity analyses did not meaningfully change the results for the CRS-R outcomes

Observations of MUFcr and CRS-R suggest that higher levels of urinary fluoride concentration did not increase ADHD-like symptoms

Limitations

From article:

  • Cohort study not initially designed to look at fluoride exposure
  • Did not take routine samples for the majority of participants for each trimester
  • Cannot relate fluoride intake to urinary fluoride concentration in pregnant women
  • No family history or genetic markers collected
  • No clinical diagnosis of ADHD
  • No teacher reports of ADHD using CRS-R
  • Functional consequences of symptoms were not characterized, so the disorder could not be clinically diagnosed

My additions:

  • Convenience sample
  • Focused on women in Mexico who consume water that naturally has fluoride (it is not artificially added) as well as salt that also has fluoride; not broadly generalizable
  • Limited opportunity for public health intervention regarding the source of naturally-occurring fluoride in water
  • Spot samples are not adequate for making strong conclusions; the time each sample was taken should be recorded (and multiple samples should be collected)

Conclusion

From authors of the study:

In summary, we observed a positive association between higher prenatal fluoride exposure and more behavioral symptoms of inattention, but not hyperactivity or impulse control, in a larger Mexican cohort of children aged 6 to 12 years. The current findings provide further evidence suggesting neurotoxicity of early-life exposure to fluoride. Replication of these findings is warranted in other population-based studies employing biomarkers of prenatal and postnatal exposure to fluoride.

 

What does this mean for the general public?

The headline and news article mirror the conclusions made by the authors; however, the authors’ claim that their findings provide evidence suggesting neurotoxicity of early-life exposure to fluoride is not valid given the limitations of this study (many of which the authors point out themselves).

 

Scrutinizer Product

 

Scrutinizer Analysis_actionable summary1_Fluoride

 

Supplemental Vitamins and Minerals for Disease Prevention and Treatment

What is the headline saying or claiming?
Link to article: There’s even more evidence to suggest popular vitamin supplements are essentially useless

What is the article saying?

Popular vitamin supplements such as vitamin C and calcium don’t have any major health benefits…

Folic acid and B vitamins with folic acid could reduce the risk of cardiovascular disease and stroke…

Niacin and antioxidants could actually cause harm…

Multivitamin use has increased, although there is little to no evidence that it prevents disease or mortality. The U.S. Dietary Guidelines Advisory Committee recommends that people meet nutritional requirements by eating a healthy diet that is largely plant-based.

What are the implications of this headline?
Everyone should stop taking multivitamins because they are useless.

Are there similar and/or opposing headlines from other news outlets?
Do the news outlets only link back to other news outlets?

Similar article that looks at the same study:
New Evidence Your Daily Multivitamin Doesn’t Help Heart Health or Help You Live Longer

What are the data sources (e.g., memo, official statement, official document, research study, validated surveillance system, official report, etc.) supporting the article?
Supplemental Vitamins and Minerals for CVD Prevention and Treatment

Vitamin, Mineral, and Multivitamin Supplements for Primary Prevention of Cardiovascular Disease and Cancer: U.S. Preventive Services Task Force Recommendation Statement

Are these data sources credible when applied to the article? Why or why not?
Yes. The sources review multiple studies as well as previous recommendations made by the U.S. Dietary Guidelines Advisory Committee.

What are the data sources saying?
A systematic review of data and trials published over the past 5 years (Jan 2012 – Oct 2017) shows that even if multivitamins do not harm people, they do not benefit them either (particularly, when evaluating whether they can reduce the risk of cardiovascular disease, heart attack, stroke, or early death). However, folic acid and B vitamins may actually reduce the risk of cardiovascular disease and stroke, according to the 2013 U.S. Preventive Services Task Force.

Are the data sources being interpreted correctly?
Yes.

Results from the systematic reviews and meta-analyses revealed generally moderate- or low-quality evidence for preventive benefits (folic acid for total cardiovascular disease, folic acid and B-vitamins for stroke), no effect (multivitamins, vitamins C, D, β-carotene, calcium, and selenium), or increased risk (antioxidant mixtures and niacin [with a statin] for all-cause mortality). Conclusive evidence for the benefit of any supplement across all dietary backgrounds (including deficiency and sufficiency) was not demonstrated; therefore, any benefits seen must be balanced against possible risks.

What is the study design?
Researchers conducted a review and meta-analysis of existing systematic reviews and meta-analyses and randomized controlled trials published in English (using Cochrane Library, MEDLINE, and PubMed). They also searched for studies of the individual vitamin and mineral supplements covered in the 2013 USPSTF report, looking at CVD outcomes and total mortality.

Data analysis:

  • Researchers used forest plots, and 2 independent investigators reviewed full papers and performed data abstraction. The information gathered included the number of cases and participants in the intervention and control groups.
  • Where both supplements and dietary intakes of nutrients in foods were combined as total intakes, data were not used unless supplement data were also presented separately

  • Multivitamins that include the majority of vitamins and minerals, as well as B-complex vitamins and antioxidant mixtures, were assessed as composite entities (>10 RCTs; all-cause mortality data available for both types of supplements).
  • Summary plots were also undertaken as summaries of pooled effect estimates to include all cardiovascular outcomes, and cumulative plots were undertaken to illustrate what was already significant or had become significant since the USPSTF 2013 assessment.
  • Using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) tool, evidence was graded as high-, moderate-, low-, or very low-quality. By default, RCTs were graded as high-quality evidence. Criteria used to downgrade evidence included: study limitations (as assessed by the Cochrane Risk of Bias Tool); inconsistency (substantial unexplained interstudy heterogeneity, I2 > 50%, and p < 0.10); indirectness (presence of factors that limited the generalizability of the results); imprecision (the 95% confidence interval [CI] for effect estimates crossed a minimally important difference of 5% [risk ratio (RR): 0.95 to 1.05] from the line of unity); and publication bias (significant evidence of small study effects).
  • Attention was drawn to outcomes of meta-analyses that showed significance with moderate- to high-quality evidence (with >1 RCT). In this way, [they] reduced the risk of type 1 errors in the multiple comparisons undertaken and avoided the use of corrections, such as the Bonferroni correction, which might have been too conservative.
  • Review Manager (RevMan)
  • Stata (publication bias analysis)
    • Mantel-Haenszel method (used to obtain summary statistics, data presented for random effect models only)
    • Cochran Q Statistic: p < 0.1 (assess heterogeneity); I2 statistic (used to quantify the Q statistic; greater than or equal to 50% = high heterogeneity)
  • Funnel plots and quantitative assessment using Begg’s and Egger’s tests (p < .05 = small study effects, publication bias, only conducted when >10 trials available in meta-analysis)
  • Number needed to treat (NNT) and number needed to harm (NNH) (NNT is the inverse of the absolute risk reduction; NNH is the inverse of the absolute risk increase); see the worked sketch after this list
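
To illustrate the pooling and heterogeneity statistics listed above, here is a minimal sketch of random-effects pooling of risk ratios (DerSimonian-Laird), Cochran's Q, I2, and NNT/NNH calculations. It is written in Python rather than the RevMan and Stata tools the review actually used, and the trial counts are invented placeholders, not data from the review.

```python
# Illustrative sketch of pooling risk ratios, Cochran's Q, I^2, and NNT/NNH.
# The counts below are made-up placeholders, not data from the review.
import numpy as np

# (events_treatment, n_treatment, events_control, n_control) for hypothetical trials
trials = [(30, 500, 45, 500),
          (12, 250, 20, 250),
          (55, 1000, 60, 1000)]

log_rr, var = [], []
for a, n1, c, n2 in trials:
    rr = (a / n1) / (c / n2)
    log_rr.append(np.log(rr))
    var.append(1 / a - 1 / n1 + 1 / c - 1 / n2)   # variance of the log risk ratio
log_rr, var = np.array(log_rr), np.array(var)

# Fixed-effect (inverse-variance) pooling, then Cochran's Q and I^2 heterogeneity
w = 1 / var
pooled_fixed = np.sum(w * log_rr) / np.sum(w)
Q = np.sum(w * (log_rr - pooled_fixed) ** 2)
k = len(trials)
I2 = max(0.0, (Q - (k - 1)) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance and the random-effects pooled risk ratio
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
print(f"Pooled RR (random effects) = {np.exp(pooled_re):.2f}, Q = {Q:.2f}, I^2 = {I2:.0f}%")

# NNT (or NNH when risk increases) = 1 / absolute risk difference, shown per trial
for a, n1, c, n2 in trials:
    arr = c / n2 - a / n1   # absolute risk reduction
    print("NNT" if arr > 0 else "NNH", round(abs(1 / arr)))
```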

 

 

What does this mean for the public?

Otherwise healthy individuals should meet nutritional requirements by eating a healthy diet that is largely plant-based in order to prevent cardiovascular disease, heart attack, stroke, or early death, instead of depending on supplements and multivitamins.

 

Scrutinizer Product

Scrutinizer Analysis_actionable summary1_Multivitamins.png

Using Social Media to Change Health Behavior/Service Utilization

What is the headline saying or claiming?
Link to article: Using Social Media for Public Health, Patient Behavior Change

Social media campaigns can be useful for sparking conversation about public health issues and driving patient behavior change and education

What is the research article saying?

Social media can be an effective tool for disseminating public health messages and support better patient access to mental healthcare… (example given is the “Bell Let’s Talk” campaign which introduced Twitter as the main platform in 2012)

More awareness about mental health treatment and reducing the stigma often associated with mental health treatment access may help encourage some patients to utilize treatments when they otherwise would not have done so…

There were temporal increases in care access during the Bell Let’s Talk Twitter campaigns.

What are the implications of this headline?
Social media campaigns can drive behavior change when it comes to health issues

Are there similar and/or opposing headlines from other outlets?
N/A. Social media campaigns are often used to raise awareness about an issue.

What are the data sources?
Research study that assessed the Bell Let’s Talk Campaign to see if the social media campaign impacted youth outpatient mental health services in the province of Ontario, Canada. Researchers studied the impacts of the campaign on rates of monthly outpatient mental health visits between 2006 and 2015: Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis

What is the study design?
The researchers used a cross-sectional time series analysis of youth who accessed outpatient mental health services during the time period mentioned previously.

Additional data source that I referred to:
Interrupted time series regression for the evaluation of public health interventions: a tutorial

Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time…

It is particularly suited to interventions introduced at a population level over a clearly defined time period and that target population-level health outcomes…

A time series is a continuous series of observations on a population, taken repeatedly (normally at equal intervals) over time. In an ITS study, a time series of a particular outcome of interest is used to establish an underlying trend, which is ‘interrupted’ by an intervention at a known point in time…

There is an expected or counterfactual trend/scenario established for comparison purposes (includes data collected prior to the intervention)…

Does the headline support claims made by the news article and summarize its key points?
Yes. The headline supports claims made by and summarizes the key points of the article.

A priori information/key information needed for study design (based on the tutorial article above; a segmented-regression code sketch follows this checklist):

Appropriate design?

1. Clear differentiation between pre-intervention and post-intervention periods

2. Outcome should be short-term, with the possibility of changing quickly after an intervention has been implemented

Appropriate data?

1. There are no fixed limits on the number of data points needed; inspect pre-intervention data points using descriptive statistics (visualization)

2. Routine data (usually administrative), gathered over a long period of time/long time series

3. Understand potential bias in results related to changes in recording or data collection methods

Where is change expected?

1. Gradient of the Trend

2. Change in the Level

3. Both

When should change occur?

1. Immediately after

2. After some lag

What should be taken into account?

1. Time-varying confounders

Control for seasonality (which can lead to autocorrelation and over-dispersion)

Adjust for residual autocorrelation using ARIMA (autoregressive integrated moving average modeling)

Control for infectious diseases that are prone to outbreaks (use sensitivity analysis)
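
To show how an interrupted time series (segmented regression) model of the kind described in this checklist could be set up, here is a minimal sketch in Python with statsmodels. The data file, column names, and intervention date are illustrative assumptions only; the study authors' actual modeling approach may differ.

```python
# Hypothetical interrupted time series (segmented regression) sketch.
# Assumes a monthly outcome series (e.g., a mental health visit rate) in a CSV
# with a "month" date column and a "visits" column; all names are placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_visits.csv", parse_dates=["month"])  # placeholder file
df = df.sort_values("month").reset_index(drop=True)

intervention = pd.Timestamp("2012-02-01")                  # placeholder campaign date
df["time"] = np.arange(len(df))                            # underlying secular trend
df["level"] = (df["month"] >= intervention).astype(int)    # step (level) change
df["trend_change"] = df["level"] * (df["time"] - df.loc[df["level"] == 1, "time"].min())

# Simple harmonic terms to control for seasonality
df["sin12"] = np.sin(2 * np.pi * df["time"] / 12)
df["cos12"] = np.cos(2 * np.pi * df["time"] / 12)

# Segmented OLS regression; in practice residual autocorrelation should be checked,
# with ARIMA or heteroskedasticity/autocorrelation-robust (HAC) errors if needed.
model = smf.ols("visits ~ time + level + trend_change + sin12 + cos12", data=df)
result = model.fit(cov_type="HAC", cov_kwds={"maxlags": 12})
print(result.summary())
```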


Are these data sources credible when applied to the article?
Yes. The source is credible since the study design was followed/implemented as intended.

What are the data sources saying?

There was an increase between 2006 and 2015 in monthly outpatient mental health visit rates (primary healthcare and psychiatric visits) among youth aged 10 to 24 years in the province of Ontario, for both males and females. The 2012 Bell Let’s Talk campaign was temporally associated with increases in the trends of outpatient mental health visits, especially within the adolescent female cohort. Although no discernible difference in the immediate change in the rate of mental health visits (magnitude/level change) was observed among the adolescent groups, young adults exhibited a slight drop in most outpatient mental health visits, followed by a moderate increase or plateauing of rates…

Results broken down:

1. Over the 10-year period (two time points: 2006 and 2015)

Adolescents (10-17) saw an increase in the monthly mental health visit rate for primary care and psychiatric services.
Young Adults (18-24) saw an increase in monthly mental health visit rates for primary care and psychiatric services.

2. Immediate change associated with intervention:

Adolescents (10-17)

There was no discernible difference in the immediate change in the rate of mental health visits observed that could be attributed to the campaign.

Young Adults (18-24)

There was an immediate drop in rates of mental health visits after the campaign; this group experienced a decrease and then a plateau in the slope of all psychiatric service visits after 2012.

3. Both female age cohorts saw increases in accessing primary health care for mental health services after the 2012 intervention.


Are the data sources being interpreted correctly?
The article makes the claim that “each year during which the campaign ran, mental healthcare access saw a spike amongst adolescent and young adult patients.” However, since only two data points were compared (the 2006 and 2015 data points), the statement about seeing a spike “each year” does not appear to be accurate. This statement also appears to contradict the one made right after it: “Following the month-long campaigns, visit rates decreased or plateaued, researchers found.” The article also advocates for a more targeted campaign with specific calls to action, to see if this may lead to more health behavior change.

Overall, the researchers discuss how the “lack of substantive step change in health care utilization from normal levels is not surprising,” since the goal of the campaign was to increase awareness of mental health and stigma. At most, the data from this study may suggest that the campaign contributed to a gradual rather than immediate change in behavior as it relates to youth in Ontario, Canada accessing mental health services. The researchers call for further exploration of the increase in female mental health service utilization over the 10-year period (possibly with an “emphasis on gender and sex within health sciences research”) and further research on “more precise modeling techniques to measure the effect of social media on population and public health.”

Are limitations provided?
The research study provides the following limitations:

1. Administrative data was used, so illness severity could not be measured. The study also could not analyze individual presentations/usage of mental health services.

2. Emergency department visits for mental health services were not included in the study (this was so that the study could focus on planned mental health activities that could possibly be attributed to the campaign).

3. Specific sub-populations could not be studied to see how the campaign may have impacted homogeneous populations/smaller groups.

4. Although there was a temporal change associated with the campaign, other factors could have contributed to this change.

5. The cumulative effect of the campaign on people over time was not explored.

I would add that caution should be taken when trying to generalize the results from this ecological study of youth in Ontario, Canada to other populations.

 


What does this mean for the general public and public health professionals?
Although mental health awareness can be increased using social media outlets and campaigns, more research needs to be done to see if these campaigns can also influence behavior change that leads to an increase in the utilization of mental health services on a population-level (or in specific sub-populations).

The impact of social media campaigns on population health should be evaluated using an appropriate study design.

 

Scrutinizer Product

SM SC

 

 

Maternal Morbidity and Mortality in the U.S.

What is the headline saying or claiming?
Link to article: Severe Complications for Women During Childbirth Are Skyrocketing – And Could Often Be Prevented

What is the news article saying?
“The rate of life-threatening complications for new mothers in the U.S. has more than doubled in two decades due to pre-existing conditions, medical errors and unequal access to care. The U.S. has the highest rate of maternal mortality in the industrialized world.”

Does the headline ultimately support claims made by the news article? Does it truly summarize the key points of the news article?
Yes. The headline supports claims made by and summarizes the key points of the article.

What are the implications of this headline?
Despite living in a developed country, an increasing number of women in the U.S. experience severe complications during pregnancy that are preventable.

What are the implications of this news article?
Severe complications during pregnancy in the U.S., although preventable, are not rare and impact women from all walks of life.

Maternal mortality has not improved in the U.S. over the past few decades and is getting worse.

https://www.npr.org/2017/05/12/528098789/u-s-has-the-worst-rate-of-maternal-deathsin-the-developed-world

http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(16)31470-2.pdf

What evidence currently exists to counter or support these implications?
Severe maternal morbidity (SMM) is increasing; however, the causes are unclear and may be related to changes in the population of women giving birth in the U.S., which may place women at higher risk for complications.

https://www.cdc.gov/reproductivehealth/maternalinfanthealth/severematernalmorbidity.html

The article linked in support of the claim that the U.S. has the worst maternal mortality in the industrialized world also notes that the U.S. has improved its surveillance to identify more potential cases (whereas some of the countries it is being compared to have not).

The U.S. captures deaths occurring within 1 year of the end of pregnancy. This differs from other countries, which only capture deaths within 42 days postpartum.

https://www.cdc.gov/reproductivehealth/maternalinfanthealth/pmss.html

https://www.cdc.gov/cdcgrandrounds/archives/2017/november2017.html

According to the CDC’s most recent Public Health Grand Rounds on maternal mortality and morbidity surveillance, maternal mortality within 42 days postpartum has remained relatively flat over the past few years. Data collected between 1987 and 2013 also show that there was a decrease in maternal deaths due to hemorrhage and hypertension as well as an increase in maternal deaths due to heart conditions. These data indicate that there have been improvements in maternal mortality related to previously identified factors, and that different factors are driving more recent increases in maternal mortality.

https://www.cdc.gov/cdcgrandrounds/archives/2017/november2017.html

Are there similar and/or opposing headlines from other news outlets? Do the news outlets only link back to other news outlets?
Similar articles link back to ProPublica/NPR.

What are the data sources (e.g., memo, official statement, official document, research study, validated surveillance system, official report, etc.) supporting the article?


Are these data sources credible when applied to the news story? Why or why not?
These are credible sources because they are based on available data. However, the sources do not necessarily support the claims made in the article.

What are the data sources saying? Are they being interpreted correctly in the article and are limitations provided? Are there multiple ways to interpret the data or various conclusions that may be drawn from the data?
There are limitations that are not discussed in the article, especially for maternal mortality comparisons. However, the coverage for SMM seems to be relatively accurate.

 

What does this mean for the general public?
Those who are pregnant or trying to become pregnant (as well as their providers) should know that there is a risk for complications during pregnancy and should understand their specific risk factors. Patients and providers should also identify best practices for preventing complications before, during, and after pregnancy.

 

Maternal Mortality Considerations.png

Scrutinizer Product

MM SC.png

 

The Scrutinizer Challenge Initiative: A Charge for Epidemiologists and Partners

Watch my introduction video!

Scrutinizer Challenge Video

It can be difficult to distinguish between truth, fiction, half-truth, and misinformation as we watch the news, read headlines, and scroll through various social media feeds. Fortunately, epidemiologists have the tools needed to serve as a practical resource for colleagues, partners, and communities in these situations. The Scrutinizer Challenge initiative is an opportunity for epidemiologists to tackle at least one headline or news story a month that is relevant to public health. The goal is for all of us to understand how we can serve as a practical resource by doing the research needed to examine data sources and implications of news stories and research articles. This process can help us deliver consistent and reliable messages to share with colleagues, partners, and communities. It also provides an opportunity for public health practitioners to consolidate resources and develop working relationships between practice and academia.

The outline below provides guidance on how to approach The Scrutinizer Challenge initiative after identifying a headline/news story or research article of interest:

SC Guidance

 

Maternal Mortality Considerations

Scrutinizer Challenge initiative end products include a list of sources and a short explanation about how each source truly contributes to a research article/news story and its implications, as well as one of the following: 1) an actionable summary that could be shared with colleagues or 2) a summary that could be shared with a local partner/the general public.

Pathways for Utilizing the Scrutinizer Challenge


Scrutinizer Challenge initiative end products should be emailed to sophia.anyatonwu@gmail.com. Submissions may be highlighted in public health newsletters, shared as a separate report, or used as content in a round table discussion.

Join the movement, join the network #IAmAnEpidemiologist #EpidemiologyScrutinizer

Sophia Anyatonwu, MPH, CPH, CIC
sophia.anyatonwu@gmail.com
about.me/sanyatonwu