Informed i’s Weekly Business Insights
Extractive summaries and key takeaways from the articles carefully curated from TOP TEN BUSINESS MAGAZINES to promote informed business decision-making | Since 2017 | Week 413 | August 8-14, 2025 | Archive

How to Counter Fake News
By Michael Etter et al. | Harvard Business Review Magazine | September–October 2025
Extractive Summary of the Article
3 key takeaways from the article
- Fake news is a specific type of disinformation: It is deliberately fabricated as a “news” story with an intent to deceive, transmitted by social media, and characterized by virality. Fake news isn’t a problem just for the corporate sector; it has also proven disruptive in politics, eroding trust in democratic institutions, deepening polarization, influencing voting perceptions, and swaying results in closely contested elections.
- Conventional methods fail to address the full scope of the challenge, for several reasons.
- To effectively combat disinformation, organizations must spread the perception that fake news lacks widespread credibility not only because it is demonstrably false but also because others don’t believe it. Social proof tactics are not a replacement for information-based strategies such as corrections; rather, they enhance them. By integrating both approaches, companies can develop a more robust and effective response to fake news. Drawing on their research, the authors offer three ways to leverage social proof to counter disinformation: monitoring social resonance, ensuring transparency, and activating allies.
(Copyright lies with the publisher)
Topics: Countering Fake News, Business Strategy
Fake news is a specific type of disinformation: It is deliberately fabricated as a “news” story with an intent to deceive, transmitted by social media, and characterized by virality.
Fake news isn’t a problem just for the corporate sector; it has also proven disruptive in politics, eroding trust in democratic institutions, deepening polarization, influencing voting perceptions, and swaying results in closely contested elections.
Unfortunately, fake news is a stubborn problem that is unlikely to go away. While many U.S. adults feel confident in their ability to detect fake news, 38% of U.S. social media users have accidentally shared a fake news story, according to the Trusted Web Foundation. Complicating matters, fake news spreads significantly faster than real news—it is up to 70% more likely to be shared, according to a study of news stories on Twitter by MIT researchers. And with each share, it gains momentum: The more it gets shared, the more it looks true, and the more people share it. Fake news is likely to be an even bigger problem in the coming years as trust in legacy media continues to erode, advances in AI and video-editing technologies make fabricated content nearly indistinguishable from reality, and social media companies abdicate their content-moderation responsibilities, as Meta did in early 2025 when it announced that Facebook would stop fact-checking content on its platforms.
Conventional methods fail to address the full scope of the challenge, for several reasons. First, ignoring fake news is rarely effective. Second, asking news outlets and social media platforms to swiftly remove or correct harmful content is an important first step, but it is insufficient to stanch the flow. Third, and more frustrating still, correcting the record with facts does little to change the narrative.
What’s happening here? Many executives don’t understand the viral nature of the problem and can inadvertently amplify fake news simply by discussing it. This phenomenon is known as the “Streisand effect,” named after the singer and actress Barbra Streisand, whose unsuccessful attempt to suppress information about her home inadvertently made it far more widely known. Once a piece of fake news reaches a certain threshold, efforts to suppress it will be largely in vain.
Fact-based efforts to combat fake news ignore a key insight into human psychology. People’s opinions are often shaped less by their own point of view than by their perception of what others believe—a phenomenon that behavioral psychologists call “social proof.” We tend to be overly and subconsciously influenced by what we think is the majority opinion. That’s a problem when it comes to fake news, because people believe that others are more affected by fake news than they are themselves.
To effectively combat disinformation, organizations must spread the perception that fake news lacks widespread credibility not only because it is demonstrably false but also because others don’t believe it. That means crafting a multichannel strategy that addresses the falsehoods while also signaling that experts, peers, and other key stakeholders recognize the disinformation. The goal isn’t just to correct the record; it’s to show that the company’s reputation remains intact in the eyes of those who matter. When a company demonstrates that stakeholders’ opinions are unaffected by the fake news, it can effectively neutralize—or at least soften—the reputational damage.
Social proof tactics are not a replacement for information-based strategies such as corrections; rather, they enhance them. By integrating both approaches, companies can develop a more robust and effective response to fake news. Drawing on their research, the authors offer three ways to leverage social proof to counter disinformation: monitoring social resonance, ensuring transparency, and activating allies.