KFF/Washington Post Poll Looks at Parents’ Trust in Children’s Health Content on Social Media, And Unfounded Claims About Abortion Pill Safety Follow FDA Approval of Generic Version – The Monitor


VOLUME 33


Summary

This volume highlights the latest release from the KFF/The Washington Post Survey of Parents, which finds that most parents report seeing children’s health content on social media, but many are unsure how to evaluate the trustworthiness of advice shared by health and wellness influencers. It also reviews misleading claims about the safety of medication abortion following the approval of a generic version of the abortion pill mifepristone, and it explores reports that federal officials are considering adding autism to the list of conditions covered by the Vaccine Injury Compensation Program. Lastly, it examines the use of AI chatbots by patients seeking to interpret lab results.


Featured: KFF/Washington Post Survey of Parents Finds Most Parents See Children’s Health Content on Social Media and Many Are Unsure What to Trust

The latest release from the KFF/Washington Post Survey of Parents finds that eight in ten parents say they see information or advice about children’s health at least occasionally on social media, including about three in ten who say they see this information daily or weekly. About one-third (36%) of parents ages 18 to 34 say they see this content at least weekly, compared to smaller shares of parents ages 35 to 49 (28%) and those 50 and older (22%).

Three in Ten Parents See Information About Children's Health On Social Media At Least Weekly, Including Larger Shares of Younger Parents

Even though most parents say they see children’s health content on social media, very few can name a specific influencer they trust for this content. One in seven (15%) parents say they trust a particular influencer for information and advice about children’s health, though just 4% can name that influencer. Parents are also divided over how easy it is to tell whether content from influencers is trustworthy: about a third (35%) say it is easy to know what advice to trust when it comes from health and wellness influencers on social media, about four in ten (38%) say it is difficult, and another quarter (27%) say they don’t see such content.

Parents Are Split on Whether It Is Easy or Difficult To Know What Advice To Trust From Health Influencers

When it comes to vaccine-related content specifically, about one-third of parents say they have ever seen information or advice about children’s vaccines on social media, including similar shares who say most of the content they see is “pro-vaccine” (8%) or “anti-vaccine” (7%), and one in five (19%) who say they see a mix of both.


Recent Developments

Misleading Claims About Abortion Pill Safety Follow FDA Approval of Generic Version of Mifepristone

Photo of a person in a yellow sweater holding a capsule in their hand (MementoJpeg / Getty Images)
What’s Happening?

Claims questioning the safety of the abortion drug mifepristone have circulated online following the Food and Drug Administration’s (FDA) approval of a second generic version of the drug. Mifepristone has been approved by the FDA for 25 years with an extensive safety record supported by peer-reviewed research, but the approval drew renewed attention to misleading narratives about its safety spread by lawmakers and officials who oppose abortion. The surprise decision to approve a second generic version followed a September 19 letter in which Health and Human Services (HHS) Secretary Robert F. Kennedy Jr. and FDA Commissioner Marty Makary pledged to conduct a full safety review of the drug.

What Are Common Themes in Online Conversations?
  • Posts about abortion pills increased on X, Reddit, and Bluesky on October 2, the day news of the approval was first reported. KFF’s monitoring of social media identified 8,959 posts, reposts, and comments mentioning terms relating to medication abortion on that day, up from a daily average of 2,460 over the 30 days prior.
  • Although posts about safety concerns represented a relatively small share of posts about medication abortion, the narrative was amplified by some prominent health officials and elected representatives who oppose abortion, despite mifepristone’s long record of safety and effectiveness. The most-engaged-with post about the safety of medication abortion that day came from Senator Josh Hawley, who has more than 2 million followers on X. The post claimed that evidence shows abortion medications are dangerous and potentially fatal for the mother, though it did not cite any evidence to support the claim.
Why This Matters

Unsupported claims questioning the safety of mifepristone, despite extensive data demonstrating that the drug is safe, may influence decisions about medication abortion and create unnecessary concern and confusion among people seeking care. Mifepristone has been taken by millions of women, and medication abortion currently accounts for nearly two-thirds of U.S. abortions. High-profile statements from government officials raising unfounded safety concerns may also create hesitancy among patients and providers and fuel opposition to the drug when there is no evidence of harm.

What Does The Evidence Say?

Mifepristone has been approved by the FDA for 25 years, and FDA prescribing information notes that serious adverse events have been shown to occur in fewer than 0.5% of patients. Other studies have found similar rates, and major medical organizations, including the American College of Obstetricians and Gynecologists (ACOG) and the Society for Maternal-Fetal Medicine (SMFM), maintain that the drug is safe.

Reports Suggest HHS May Add Autism to List of Conditions Covered by Vaccine Injury Compensation Program

Photo of a toddler being administered a vaccination (Karl Tapales / Getty Images)
What’s Happening?

Reports that federal health officials are considering expanding vaccine injury compensation to include autism may have contributed to increases in online conversations linking vaccines to the condition, despite decades of research showing no association. HHS Secretary Robert F. Kennedy Jr. has reportedly suggested both directly adding autism to the Vaccine Injury Compensation Program (VICP) and broadening definitions of some brain conditions covered by the program.

What Is VICP?
  • The National Vaccine Injury Compensation Program is a no-fault alternative to the traditional court system for vaccine injury claims, designed to compensate families for rare vaccine injuries. The program, which covers most routine vaccines and is funded by a small tax on doses administered, manages a trust fund of about $4 billion. The program maintains a table of specific injuries that can be caused by each vaccine, which are presumed to be vaccine-related if they occur within the timeframes described in that table. Autism was excluded from this list after extensive litigation in the early 2000s, when judges appointed to handle vaccine injury cases reviewed test cases representing thousands of claims and found no link between vaccines and autism.
  • Compensation decisions through the program do not always indicate causation. Since 1988, about 60% of compensated cases involved negotiated settlements in which HHS drew no direct conclusions about the cause of injuries.
Why This Matters
  • Physicians and legal scholars have warned that adding autism to the program could lead to a wave of injury claims, potentially bankrupting the program and reinforcing false narratives linking vaccines to autism despite decades of research showing no association.
  • The myth that vaccines cause autism is a long-standing false claim, and despite frequent debunking, KFF polling has found that many parents continue to express uncertainty about whether it is true. Research suggests autism begins early in pregnancy, not in toddlerhood when most vaccines are given. Adding autism to the list of covered conditions could nonetheless be used to suggest that vaccines cause autism despite scientific evidence to the contrary, further eroding trust in vaccines and federal health authorities. KFF and The Washington Post’s recent release from the Survey of Parents found that at least one-third of parents said there had been too little research into the causes of autism (44%) or into whether there is a link between vaccines and autism spectrum disorder (33%).
What Are People Saying?

News coverage and social media posts, reposts, and comments mentioning both autism and VICP spiked in late September and early October, according to KFF’s monitoring. KFF identified 1,647 posts, reposts, and comments published on X, Reddit, and Bluesky on September 27, up from a daily average of only 12 over the 90 days prior. Many of the posts with the most engagement were reposts from an account with more than 400,000 followers that falsely claimed VICP had conceded that vaccines cause autism. Similarly, the number of news stories mentioning both autism and VICP reached its highest point of the year thus far on October 8, with 86 news stories published that day, compared to a daily average of less than one for the year prior to that date.
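To illustrate the kind of comparison described above, the minimal sketch below shows how a single day’s post count can be measured against a trailing daily average to flag a spike. It is not KFF’s actual monitoring pipeline; the `posts` data, function names, and window length are hypothetical placeholders.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical input: one date per post, repost, or comment that matched the
# monitoring terms (e.g., mentions of both "autism" and "VICP").
posts = [date(2025, 9, 27)] * 1647 + [date(2025, 9, 1)] * 12  # illustrative only

def daily_counts(post_dates):
    """Count matching posts per calendar day."""
    return Counter(post_dates)

def compare_to_baseline(counts, day, baseline_days=90):
    """Return the day's count and the average daily count over the prior window."""
    window = [day - timedelta(days=i) for i in range(1, baseline_days + 1)]
    baseline_avg = sum(counts.get(d, 0) for d in window) / baseline_days
    return counts.get(day, 0), baseline_avg

counts = daily_counts(posts)
day_count, baseline = compare_to_baseline(counts, date(2025, 9, 27))
print(f"{day_count} posts vs. a baseline of about {baseline:.1f} per day")
```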

What Does The Evidence Say?

Decades of research have shown no causal relationship between vaccines and autism, and medical organizations like the American Academy of Pediatrics (AAP) have concluded there is no link. A 2013 CDC study, for example, showed that the amount of antigens received from vaccines was the same between children with and without autism, and a 2019 cohort study of over 650,000 children in Denmark found no increased autism risk from receiving the measles, mumps, and rubella (MMR) vaccine.


AI & Emerging Technology

Some Patients Turn to AI Chatbots to Interpret Lab Results

Photo of a vial of blood on top of printed lab results (peepo / Getty Images)
What’s Happening?

A recent KFF Health News article detailed the growing trend of patients using artificial intelligence (AI) chatbots like ChatGPT, Claude, and Gemini to interpret their medical test results and records when they cannot quickly reach their doctors for answers. Some patients are uploading lab results, imaging reports, and other medical records to these chatbots to get medical explanations while waiting for physician callbacks or appointments.

How Widespread Is This Practice?

Data on how often patients specifically upload test results is not readily available, but KFF’s August 2024 Tracking Poll found that about one in six adults (17%) reported using AI chatbots at least once a month to find health information and advice, rising to one in four (25%) among adults under age 30. Most adults (63%) said they were “not too confident” or “not at all confident” that health information from AI chatbots is accurate, while about a third said they were “very” (5%) or “somewhat confident” (31%) in the accuracy of this information.

Why This Matters

While AI chatbots may help patients understand results and reduce anxiety, physicians and researchers have identified risks in using this technology. One concern is AI “hallucinations,” instances where chatbots generate plausible but factually inaccurate information. Chatbots can present false information in the same confident tone as accurate information, making errors difficult to detect for non-medical users and even for medical professionals. A March study published in BMC Medical Education, for example, found that general practice trainees had a mean accuracy of only 55% in detecting AI-generated medical hallucinations.

What Does The Evidence Say?

Research indicates that improved prompting strategies can improve the accuracy of AI-generated responses. An April study in JAMIA Open, for example, found that instructing a chatbot to take on the persona of a clinician improved accuracy, and an August Communications Medicine study showed that including additional safeguards in prompts, like asking the AI to use only clinically validated information, reduced the rate of hallucinations. AI education efforts focusing on how to tailor prompts to receive the most accurate information may improve the usefulness of these tools. Still, these strategies did not eliminate errors entirely, and researchers have recommended that chatbots should be supplementary tools rather than primary sources of health information.
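As a rough illustration of the prompting strategies described above, the sketch below assembles a prompt that assigns a clinician persona and adds a safeguard instruction to rely only on clinically validated information. These are not the exact prompts used in the cited studies, and the `call_chatbot` function is a placeholder for whichever chatbot service a reader might use.

```python
def build_messages(lab_report_text: str) -> list[dict]:
    """Assemble chat messages using a clinician persona plus a safeguard instruction."""
    system_prompt = (
        "You are an experienced primary care clinician explaining lab results "   # persona strategy
        "to a patient in plain language. Use only clinically validated reference " # safeguard strategy
        "ranges and guidelines. If you are unsure or the data is incomplete, say "
        "so and recommend that the patient confirm the interpretation with their own clinician."
    )
    user_prompt = f"Please explain these lab results:\n\n{lab_report_text}"
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

def call_chatbot(messages: list[dict]) -> str:
    """Placeholder for a real chatbot API call (OpenAI-, Anthropic-, or Gemini-style client)."""
    raise NotImplementedError("Wire this up to the chatbot service of your choice.")

if __name__ == "__main__":
    report = "Hemoglobin A1c: 6.1% (reference range 4.0-5.6%)"
    messages = build_messages(report)
    # print(call_chatbot(messages))  # uncomment once call_chatbot is implemented
```

Even with persona and safeguard instructions like these, the research above suggests errors are reduced but not eliminated, which is why chatbot explanations of lab results are best treated as a supplement to, not a substitute for, a clinician’s interpretation.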



