Health communicators face the persistent challenge of balancing the correction of misinformation against the risk of amplifying claims that audiences might not have noticed otherwise. As KFF President and CEO Drew Altman noted in a previous column, reporters and communicators “likely [have] no choice when [politicians] spread false information but to cover it and correct the lies in the process, but there are choices to be made about how it’s done.” This tension was recently illustrated when President Donald Trump briefly shared, and later deleted, an AI-generated video on Truth Social that falsely alleged “medbeds,” a fake technology featured in a long-standing conspiracy theory, could cure all diseases and reverse aging. Although the video was deleted and widely debunked by mainstream outlets, including ABC, CNN, MSNBC, and Forbes, coverage and discussion briefly amplified the claim, exposing new audiences and sustaining its circulation. In this example, most of the coverage and online posts criticized the claim or provided background on it, but attention intended to correct or criticize misinformation can still extend a claim’s reach and persistence by exposing audiences who were previously unaware of it.
Amplifying Misinformation Can Increase its Reach and Persistence
Several mechanisms contribute to amplification risks when reporting on or criticizing a claim:
- Engagement-driven dissemination: Content that provokes a reaction, whether agreement or criticism, generates engagement like clicks, comments, likes, and shares. Social media algorithms are designed to detect this potential for attention and amplify it further. This allows posts that criticize or mock false claims to travel just as far as, or farther than, posts promoting them. A similar pattern occurs in news media, as coverage that elicits reactions like fear, disgust, and surprise is more likely to be shared across networks.
- Repetition and familiarity: Repeated exposure to a false claim, whether through multiple news stories or recurring social media posts, increases the perception of plausibility (the “illusory truth effect”). Some studies have even shown that neither fact-checking nor media literacy interventions can fully mitigate the effects of exposure to repeated misinformation.
- Extended lifespan: Any reporting, corrective or not, keeps a claim in public view longer than if it were ignored, allowing it to resurface and influence perceptions over time.
Exposure to Misinformation Can Erode Trust
Misinformation spreads more easily during periods of uncertainty and low institutional trust. In these contexts, even when a specific claim is debunked, narratives like medbeds can persist because they reinforce doubts about official institutions and align with audiences’ existing ideologies. These dynamics occur in a landscape of fractured trust. KFF’s Tracking Poll on Health Information and Trust finds that many Republicans report more trust in health information from President Trump than from the CDC, FDA, or their local health department. In this environment, even unsubstantiated claims can have indirect effects. Exposure to these claims and their corrections can erode trust in health authorities and government by reinforcing skepticism, even among those who do not believe the specific conspiracy. People may reject the literal claim that medbeds exist but accept the broader idea that powerful institutions hide cures, or they may have difficulty believing accurate information in the future.
Proactive and Strategic Communication
While reporting on misinformation too early can have unintended effects, ignoring it entirely can create information voids. Reporting on misinformation requires balancing transparency with amplification risks. Focusing on verified facts, limiting repetition of false claims, and avoiding sensationalizing or mocking narratives can reduce unintended spread. Research and practice offer additional strategies:
- Build Audience Resilience through Proactive Prebunking: Prebunking exposes audiences to fact-based information and explains how misinformation spreads before false claims appear. It strengthens resistance, fills knowledge gaps, emphasizes accurate facts without repeating false claims, and highlights manipulative tactics. Prebunking is especially useful for low- or medium-risk narratives and can complement later debunking if a claim gains traction.
- Debunk Strategically: If there is reason to believe that misinformation has reached the large swaths of Americans who are unsure about health information, or “the muddled middle,” debunking may be necessary. The Public Health Communication Collaborative suggests beginning with a clear fact, providing context on the misinformation and the tactics used, and ending with a reinforcing fact. This helps audiences retain accurate information while limiting amplification of the false claim.
- Build Trust to Make Corrections Stick: Misinformation persists partly because of underlying distrust. Strengthening community relationships, amplifying trusted messengers, and communicating consistently reduce the appeal of conspiracies more effectively than corrections alone.
Misinformation is not just about what is said, but how it spreads. By anticipating false narratives, using prebunking and debunking strategically, reporting responsibly, and prioritizing trust-building, communicators can limit the reach of false claims while supporting informed, resilient audiences.