Supplemental Vitamins and Minerals for Disease Prevention and Treatment

What is the headline saying or claiming?
Link to article: There’s even more evidence to suggest popular vitamin supplements are essentially useless

What is the article saying?

Popular vitamin supplements such as vitamin C and calcium don’t have any major health benefits…

Folic acid and B vitamins with folic acid could reduce the risk of cardiovascular disease and stroke…

Niacin and antioxidants could actually cause harm…

Multivitamin use has increased, although there is little to no evidence that it prevents disease or mortality; the U.S. Dietary Guidelines Advisory Committee recommends that people meet nutritional requirements by eating a healthy diet that is largely plant-based.

What are the implications of this headline?
Everyone should stop taking multivitamins because they are useless.

Are there similar and/or opposing headlines from other news outlets?
Do the news outlets only link back to other news outlets?

Similar article that looks at the same study:
New Evidence Your Daily Multivitamin Doesn’t Help Heart Health or Help You Live Longer

What are the data sources (i.e. memo, official statement, official document, research study,  validated surveillance system, official report, etc.) supporting the article?
Supplemental Vitamins and Minerals for CVD Prevention and Treatment

Vitamin, Mineral, and Multivitamin Supplements for Primary Prevention of Cardiovascular Disease and Cancer: U.S. Preventive Services Task Force Recommendation Statement

Are these data sources credible when applied to the article? Why or why not.
Yes. The sources review multiple studies as well as previous recommendations made by the U.S. Dietary Guidelines Advisory Committee.

What are the data sources saying?
A systematic review of data and trials published over the past 5 years (Jan 2012 – Oct 2017) shows that even if multivitamins do not harm people, they do not benefit them either (particularly when evaluating whether they can reduce the risk of cardiovascular disease, heart attack, stroke, or early death). However, folic acid and B vitamins may actually reduce the risk of cardiovascular disease and stroke, according to the 2013 U.S. Preventive Services Task Force.

Are the data sources being interpreted correctly?
Yes.

Results from the systematic reviews and meta-analyses revealed generally moderate- or low-quality evidence for preventive benefits (folic acid for total cardiovascular disease, folic acid and B-vitamins for stroke), no effect (multivitamins, vitamins C, D, β-carotene, calcium, and selenium), or increased risk (antioxidant mixtures and niacin [with a statin] for all-cause mortality). Conclusive evidence for the benefit of any supplement across all dietary backgrounds (including deficiency and sufficiency) was not demonstrated; therefore, any benefits seen must be balanced against possible risks.

What is the study design?
Researchers conducted a review and meta-analysis of existing systematic reviews, meta-analyses, and randomized controlled trials published in English (using Cochrane Library, MEDLINE, and PubMed). They also conducted searches for the individual vitamin and mineral supplements covered in the 2013 USPSTF report, focusing on CVD outcomes and total mortality.

Data analysis:

  • Researchers used forest plots from the existing systematic reviews and meta-analyses to identify relevant articles, and 2 independent investigators reviewed full papers and performed data abstraction. The information gathered included the number of cases and participants in the intervention and control groups.
  • Where both supplements and dietary intakes of nutrients in foods were combined as total intakes, data were not used unless supplement data were also presented separately.

  • Multivitamins that included the majority of vitamins and minerals, as well as B-complex vitamins and antioxidant mixtures, were assessed as composite entities (>10 RCTs and all-cause mortality data were available for both types of supplements).
  • Summary plots were also undertaken as summaries of pooled effect estimates to include all cardiovascular outcomes, and cumulative plots were undertaken to illustrate what was already significant or had become significant since the USPSTF 2013 assessment.
  • Using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) tool, evidence was graded as high-quality, moderate-quality, low-quality, or very low-quality. By default, RCTs were graded as high-quality evidence. Criteria used to downgrade evidence included: study limitations (as assessed by the Cochrane Risk of Bias Tool); inconsistency (substantial unexplained interstudy heterogeneity, I² > 50%, p < 0.10); indirectness (presence of factors that limited the generalizability of the results); imprecision (the 95% confidence interval [CI] for effect estimates crossed a minimally important difference of 5% [risk ratio (RR): 0.95 to 1.05] from the line of unity); and publication bias (significant evidence of small study effects).
  • Attention was drawn to outcomes of meta-analyses that showed significance with moderate- to high-quality evidence (with >1 RCT). In this way, the researchers reduced the risk of type 1 errors in the multiple comparisons undertaken and avoided the use of corrections, such as the Bonferroni correction, which might have been too conservative.
  • Review Manager (RevMan)
  • Stata (publication bias analysis)
    • Mantel-Haenszel method (used to obtain summary statistics; data presented for random-effects models only)
    • Cochran Q statistic: p < 0.1 used to assess heterogeneity; I² statistic used to quantify heterogeneity (I² ≥ 50% = high heterogeneity)
  • Funnel plots and quantitative assessment using Begg’s and Egger’s tests (p < .05 = small study effects/publication bias; only conducted when >10 trials were available in a meta-analysis)
  • Number needed to treat (NNT) and number needed to harm (NNH), calculated as the inverse of the absolute risk difference (see the sketch after this list)
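
To make the pooling, heterogeneity, and NNT/NNH calculations listed above concrete, here is a minimal sketch in Python rather than the RevMan/Stata workflow the authors used. The trial counts are invented placeholders, and a DerSimonian-Laird estimator stands in for the Mantel-Haenszel random-effects pooling; it is meant only to illustrate the quantities being reported (pooled risk ratio, Cochran's Q, I², NNT/NNH), not to reproduce the study's results.

```python
import numpy as np

# Hypothetical 2x2 counts (events/totals) for a few placeholder RCTs of one supplement.
events_tx  = np.array([30, 45, 12])
total_tx   = np.array([500, 700, 250])
events_ctl = np.array([35, 50, 15])
total_ctl  = np.array([500, 700, 250])

# Per-trial risk ratios on the log scale and their variances.
rr = (events_tx / total_tx) / (events_ctl / total_ctl)
log_rr = np.log(rr)
var_log_rr = (1 / events_tx - 1 / total_tx) + (1 / events_ctl - 1 / total_ctl)

# Fixed-effect (inverse-variance) weights, Cochran's Q, and I².
w = 1 / var_log_rr
pooled_fixed = np.sum(w * log_rr) / np.sum(w)
q = np.sum(w * (log_rr - pooled_fixed) ** 2)
df = len(rr) - 1
i2 = max(0.0, (q - df) / q) * 100  # I² >= 50% flagged as high heterogeneity

# DerSimonian-Laird between-trial variance and the random-effects pooled RR.
tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var_log_rr + tau2)
pooled_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = np.exp(pooled_re + np.array([-1.96, 1.96]) * se_re)

# Crude NNT/NNH as the inverse of the absolute risk difference between arms.
risk_tx, risk_ctl = (events_tx / total_tx).mean(), (events_ctl / total_ctl).mean()
nnt = 1 / abs(risk_ctl - risk_tx)

print(f"Pooled RR (random effects): {np.exp(pooled_re):.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
print(f"Cochran's Q: {q:.2f}, I²: {i2:.1f}%")
print(f"NNT/NNH (crude): {nnt:.0f}")
```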


What does this mean for the public?

Otherwise healthy individuals should meet nutritional requirements by eating a healthy diet that is largely plant-based in order to prevent cardiovascular disease, heart attack, stroke, or early death, rather than depending on supplements and multivitamins.


Scrutinizer Product

Scrutinizer Analysis_actionable summary1_Multivitamins.png

Using Social Media to Change Health Behavior/Service Utilization

What is the headline saying or claiming?
Link to article: Using Social Media for Public Health, Patient Behavior Change

Social media campaigns can be useful for sparking conversation about public health issues and driving patient behavior change and education

What is the research article saying?

Social media can be an effective tool for disseminating public health messages and supporting better patient access to mental healthcare… (the example given is the “Bell Let’s Talk” campaign, which adopted Twitter as its main platform in 2012)

Greater awareness of mental health treatment, along with reduced stigma around accessing it, may encourage some patients to utilize treatment when they otherwise would not have done so…

There were temporal increases in care access during the Bell Let’s Talk Twitter campaigns.

What are the implications of this headline?
Social media campaigns can drive behavior change when it comes to health issues

Are there similar and/or opposing headlines from other outlets?
N/A. Social media campaigns are often used to raise awareness about an issue.

What are the data sources?
Research study that assessed the Bell Let’s Talk Campaign to see if the social media campaign impacted youth outpatient mental health services in the province of Ontario, Canada. Researchers studied the impacts of the campaign on rates of monthly outpatient mental health visits between 2006 and 2015: Youth Mental Health Services Utilization Rates After a Large-Scale Social Media Campaign: Population-Based Interrupted Time-Series Analysis

What is the study design?
The researchers used a cross-sectional (interrupted) time-series analysis of youth who accessed outpatient mental health services during the time period mentioned previously.

Additional data source that I referred to:
Interrupted time series regression for the evaluation of public health interventions: a tutorial

Interrupted time series (ITS) analysis is a valuable study design for evaluating the effectiveness of population-level health interventions that have been implemented at a clearly defined point in time…

It is particularly suited to interventions introduced at a population level over a clearly defined time period and that target population-level health outcomes…

A time series is a continuous series of observations on a population, taken repeatedly (normally at equal intervals) over time. In an ITS study, a time series of a particular outcome of interest is used to establish an underlying trend, which is ‘interrupted’ by an intervention at a known point in time…

There is an expected or counterfactual trend/scenario established for comparison purposes (includes data collected prior to the intervention)…
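
To illustrate the ITS logic described in the tutorial, here is a minimal segmented-regression sketch in Python using statsmodels. The monthly series is simulated placeholder data, not the study data; the model estimates a pre-intervention trend, a level change at the intervention, and a post-intervention slope change, and the pre-intervention trend can be projected forward as the counterfactual scenario.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated monthly visit rates: 60 months before and 36 months after a campaign.
n_pre, n_post = 60, 36
time = np.arange(n_pre + n_post)                       # months since the series began
post = (time >= n_pre).astype(int)                     # 1 for months after the intervention
time_since = np.where(post == 1, time - n_pre + 1, 0)  # months elapsed since the intervention

# Underlying pre-intervention trend plus a small level jump and slope change afterwards.
rate = 50 + 0.2 * time + 3 * post + 0.15 * time_since + rng.normal(0, 2, time.size)
df = pd.DataFrame({"rate": rate, "time": time, "post": post, "time_since": time_since})

# Segmented regression: 'post' estimates the level change, 'time_since' the trend change.
model = smf.ols("rate ~ time + post + time_since", data=df).fit()
print(model.params)

# Counterfactual scenario: project the pre-intervention trend through the post period.
counterfactual = model.params["Intercept"] + model.params["time"] * df["time"]
print(counterfactual.tail())
```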

Does the headline ultimately support claims made by the article? Does it truly summarize the key points of the article?
Yes. The headline supports claims made by and summarizes the key points of the article.

A priori information/key information needed for study design (based on the article above):

Appropriate design?

1. Clear differentiation between pre-intervention and post-intervention periods

2. Outcome should be short-term, with the possibility of changing quickly after an intervention has been implemented

Appropriate data?

1. There are no fixed limits regarding data points (amount needed); inspect pre-intervention data points using descriptive statistics (visualize)

2. Routine data (usually administrative), gathered over a long period of time/long time series

3. Understand potential bias in results related to changes in recording or data collection methods

Where is change expected?

1. Gradient of the Trend

2. Change in the Level

3. Both

When should change occur?

1. Immediately after

2. After some lag

What should be taken into account?

1. Time-varying confounders

Control for seasonality (leads to autocorrelation and over-dispersion)

Adjust for residual autocorrelation, for example using autoregressive integrated moving average (ARIMA) modeling (one simpler approach is sketched after this list)

Control for infectious diseases that are prone to outbreaks (use sensitivity analysis)
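
Building on the checklist above, the sketch below (again with simulated placeholder data and hypothetical variable names) shows one simple way to handle two of these considerations: seasonality via harmonic (sine/cosine) terms, and residual autocorrelation via Newey-West (HAC) standard errors. This is a lighter-weight alternative to the full ARIMA adjustment mentioned in the tutorial.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated monthly data with a 12-month seasonal cycle (placeholder values, not study data).
n_pre, n_post = 60, 36
time = np.arange(n_pre + n_post)
post = (time >= n_pre).astype(int)
time_since = np.where(post == 1, time - n_pre + 1, 0)
season = 4 * np.sin(2 * np.pi * time / 12)
rate = 50 + 0.2 * time + 3 * post + 0.15 * time_since + season + rng.normal(0, 2, time.size)

df = pd.DataFrame({
    "rate": rate, "time": time, "post": post, "time_since": time_since,
    "sin12": np.sin(2 * np.pi * time / 12),   # harmonic terms capture the seasonal pattern
    "cos12": np.cos(2 * np.pi * time / 12),
})

# Segmented regression with harmonic seasonal terms; HAC (Newey-West) standard errors
# allow for residual autocorrelation up to 12 lags.
model = smf.ols("rate ~ time + post + time_since + sin12 + cos12", data=df).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12}
)
print(model.summary())
```

A Poisson or negative binomial model with a population offset would be a closer match to count-based visit-rate data, but the linear sketch keeps the level-change and slope-change terms easy to read.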


Are these data sources credible when applied to the article?
Yes. The source is credible because the study design is appropriate for the research question and was implemented as intended.

What are the data sources saying?

Between 2006 and 2015, monthly outpatient mental health visit rates (primary healthcare and psychiatric visits) increased among youth aged 10 to 24 years in the province of Ontario, for both males and females. The 2012 Bell Let’s Talk campaign was temporally associated with increases in the trends of outpatient mental health visits, especially within the adolescent female cohort. Although no discernible difference in the immediate change in the rate of mental health visits (magnitude/level change) was observed among the adolescent groups, young adults exhibited a slight drop in most outpatient mental health visits, followed by a moderate increase or plateauing of rates…

Results broken down:

1. Over 10-year period (2 time points, 2006 & 2015)

Adolescents (10-17) saw an increase in the monthly mental health visit rate for primary care and psychiatric services.
Young Adults (18-24) saw an increase in monthly mental health visit rates for primary care and psychiatric services.

2. Immediate change associated with intervention:

Adolescents (10-17)

There was no discernible difference in the immediate change in the rate of mental health visits observed that could be attributed to the campaign.

Young Adults (18-24)

There was an immediate drop in rates of mental health visits after the campaign; this group experienced a decrease and plateau in the slope of all psychiatric service visits after 2012.

3. Both female age cohorts saw increases in accessing primary health care for mental health services after the 2012 intervention.


Are the data sources being interpreted correctly?
The article makes the claim that “each year during which the campaign ran, mental healthcare access saw a spike amongst adolescent and young adult patients.” However, since only two data points were compared (the 2006 and 2015 data points) the statement about seeing a spike “each year” does not appear to be accurate. This statement also appears to contradict the one made right after it: “Following the month-long campaigns, visit rates decreased or plateaued, researchers found.” The article also advocates for a more targeted campaign with specific calls to action, to see if this may lead to more health behavior change.

Overall, the researchers discuss how the “lack of substantive step change in health care utilization from normal levels is not surprising,” since the goal of the campaign was to increase awareness of mental health and stigma. At most, the data from this study may suggest that the campaign contributed to a gradual rather than immediate change in behavior as it relates to youth in Ontario, Canada accessing mental health services. The researchers call for further exploration of the increase in female mental health service utilization over the 10-year period (possibly with an “emphasis on gender and sex within health sciences research”) and further research on “more precise modeling techniques to measure the effect of social media on population and public health.”

Are limitations provided?
The research study provides the following limitations:

1. Administrative data was used, so illness severity could not be measured. The study also could not analyze individual presentations/usage of mental health services.

2. Emergency department visits for mental health services were not included in the study (this was so that the study could focus on planned mental health activities that could possibly be attributed to the campaign).

3. Specific sub-populations could not be studied to see how the campaign may have impacted homogeneous populations/smaller groups.

4. Although there was a temporal change associated with the campaign, other factors could have contributed to this change.

5. The cumulative effect of the campaign on people over time was not explored.

I would add that caution should be taken when trying to generalize the results from this ecological study of youth in Ontario, Canada to other populations.



What does this mean for the general public and public health professionals?
Although mental health awareness can be increased using social media outlets and campaigns, more research needs to be done to see if these campaigns can also influence behavior change that leads to an increase in the utilization of mental health services on a population-level (or in specific sub-populations).

The impact of social media campaigns on population health should be evaluated using an appropriate study design.


Scrutinizer Product

SM SC


Maternal Morbidity and Mortality in the U.S.

What is the headline saying or claiming?
Link to article: Severe Complications for Women During Childbirth Are Skyrocketing – And Could Often Be Prevented

What is the news article saying?
“The rate of life-threatening complications for new mothers in the U.S. has more than doubled in two decades due to pre-existing conditions, medical errors and unequal access to care. The U.S. has the highest rate of maternal mortality in the industrialized world.”

Does the headline ultimately support claims made by the news article? Does it truly summarize the key points of the news article?
Yes. The headline supports claims made by and summarizes the key points of the article.

What are the implications of this headline?
Despite living in a developed country, there are an increasing number of women in the U.S. who have severe complications during pregnancy that are preventable.

What are the implications of this news article?
Severe complications during pregnancy in the U.S., although preventable, are not rare and impact women from all walks of life.

Maternal mortality has not improved in the U.S. over the past few decades and is getting worse.

https://www.npr.org/2017/05/12/528098789/u-s-has-the-worst-rate-of-maternal-deathsin-the-developed-world

http://www.thelancet.com/pdfs/journals/lancet/PIIS0140-6736(16)31470-2.pdf

What evidence currently exists to counter or support these implications?
Severe maternal morbidity (SMM) is increasing; however, the causes are unclear and may be related to changes in the population of women giving birth in the U.S. that place more women at higher risk for complications.

https://www.cdc.gov/reproductivehealth/maternalinfanthealth/severematernalmorbidity.html

The article linked in support of the claim that the U.S. has the worst maternal mortality in the industrialized world also notes that the U.S. has improved its surveillance to identify more potential cases (whereas some of the countries it is being compared to have not).

The U.S. captures deaths occurring within 1 year of the end of pregnancy, whereas other countries only capture deaths occurring within 42 days postpartum.

https://www.cdc.gov/reproductivehealth/maternalinfanthealth/pmss.html

https://www.cdc.gov/cdcgrandrounds/archives/2017/november2017.html

According to the CDC’s most recent Public Health Grand Rounds on maternal mortality and morbidity surveillance, maternal mortality within 42 days postpartum has remained relatively flat over the past few years. Data collected between 1987 and 2013 also show that there was a decrease in maternal deaths due to hemorrhage and hypertension as well as an increase in maternal deaths due to heart conditions. These data indicate that there have been improvements in maternal mortality related to previously identified factors, and that different factors are driving more recent increases in maternal mortality.

https://www.cdc.gov/cdcgrandrounds/archives/2017/november2017.html

Are there similar and/or opposing headlines from other news outlets? Do the news outlets only link back to other news outlets?
Similar articles link back to ProPublica/NPR.

What are the data sources (i.e. memo, official statement, official document, research study,  validated surveillance system, official report, etc.) supporting the article?


Are these data sources credible when applied to the news story? Why or why not?
These are credible sources because they are based on data that is available. However, the sources do not necessarily support the claims made in the article.

What are the data sources saying? Are they being interpreted correctly in the article and are limitations provided? Are there multiple ways to interpret the data or various conclusions that may be drawn from the data?
There are limitations that are not discussed in the article, especially for maternal mortality comparisons. However, the coverage for SMM seems to be relatively accurate.


What does this mean for the general public?
Those who are pregnant or trying to become pregnant (as well as their providers) should know that there is a risk of complications during pregnancy and should understand their specific risk factors for those complications. Patients and providers should also identify best practices to prevent complications before, during, and after pregnancy.


Maternal Mortality Considerations.png

Scrutinizer Product

MM SC.png


The Scrutinizer Challenge Initiative: A Charge for Epidemiologists and Partners

Watch my introduction video!

Scrutinizer Challenge Video

It can be difficult to distinguish between truth, fiction, half-truth, and misinformation as we watch the news, read headlines, and scroll through various social media feeds. Fortunately, epidemiologists have the tools needed to serve as a practical resource for colleagues, partners, and communities in these situations. The Scrutinizer Challenge initiative is an opportunity for epidemiologists to tackle at least one headline or news story a month that is relevant to public health. The goal is for all of us to understand how we can serve as a practical resource by doing the research needed to examine data sources and implications of news stories and research articles. This process can help us deliver consistent and reliable messages to share with colleagues, partners, and communities. It also provides an opportunity for public health practitioners to consolidate resources and develop working relationships between practice and academia.

The outline below provides guidance on how to approach The Scrutinizer Challenge initiative after identifying a headline/news story or research article of interest:

SC Guidance


Maternal Mortality Considerations

Scrutinizer Challenge initiative end products include a list of sources and a short explanation about how each source truly contributes to a research article/news story and its implications, as well as one of the following: 1) an actionable summary that could be shared with colleagues or 2) a summary that could be shared with a local partner/the general public.

Pathways for Utilizing the Scrutinizer Challenge


Scrutinizer Challenge initiative end products should be emailed to sophia.anyatonwu@gmail.com. Submissions may be highlighted in public health newsletters, shared as a separate report, or used as content in a round table discussion.

Join the movement, join the network #IAmAnEpidemiologist #EpidemiologyScrutinizer

Sophia Anyatonwu, MPH, CPH, CIC
sophia.anyatonwu@gmail.com
about.me/sanyatonwu