To celebrate 10 years of Creator Weekly, I’m sharing tech highlights from
2015 that still resonate 10 years later. This update was for the week ending January 25, 2015.
If you have used Facebook, or really any social media, you have seen how easily hoaxes, rumors and fake news spread. This isn't a new problem. As the old adage says, "A lie can run around the world while the truth is pulling on its boots".
Watch the Short version below, or
check out my full discussion from the January 26 Creator News live stream.
On January 20, 2015 Facebook announced they had added an option to report false news, because "We’ve heard from people that they want to see fewer stories that are hoaxes, or misleading news." They noted that many people delete a post if their friends comment telling them it's a hoax.

2015 Facebook post reporting options include "it's a false news story"
This update made several changes:
- There was a new option in the report post menu for "It's a false news story".
- If many people report an item as a hoax or false news, and/or many people delete their post with a particular link, posts sharing that link would get reduced distribution in the News Feed.
- If there were a lot of reports, Facebook would also add a label: "Many people on Facebook have reported this story contains false information."
- Facebook made clear that they were not making a determination whether a post or link had false information and that they would not remove posts with false information.
This approach has some obvious limitations:
- People have to realize that the post is a hoax or false news.
- They have to report it as false news (and there need to be many reports).
- By the time there are many reports, the post may already have been seen and shared by many people.
- It doesn't take into account organized flagging of real news as false, in an attempt to get certain content demoted in the News Feed.
I said in the video that this made Facebook seem naive, but I'm sure they
thought of those limitations, and this was their choice.
Since 2015, Facebook has vacillated between having human fact checkers and
not, recommending political content and not, removing misinformation and
not.
Changes especially cluster around US Presidential elections, as Facebook and Meta are criticized both for allowing the spread of misinformation and for demoting or removing posts with misinformation.
A few milestones over the past decade:
May 2016: There was an outcry when it was reported that Facebook's human editors had supposedly "suppressed news stories of interest to conservative readers" from the trending list. In response, Facebook removed the human editors, and immediately a fake news story about a Fox News anchor started trending.
December 2016: Facebook simplified reporting of fake news stories and started working with third-party fact-checking organizations. If the fact checkers determined a story was indeed fake, related posts would get a big red "!" and a "Disputed by 3rd Party Fact-Checkers" label. Disputed stories would "appear lower" in the News Feed, show a warning if people tried to reshare them, and would be ineligible for paid promotion. See Facebook's demo of how the system is (or was) designed to work.
December 2017: Facebook
removed the "Disputed" flag from false news identified by fact checkers, replacing it with linked Related
Articles to add context.
May 2018: Facebook gave their approach a catchy name: Remove, Reduce, Inform. Remove content that violates the Community Standards, reduce the spread of misleading or problematic content, and inform by adding additional context. In June 2018 Facebook expanded their fact-checking program to more countries and more types of content, and started working with academics to better understand the spread of misinformation.
April 2019: Facebook started adding the News Feed Context Button to images, reduced the reach of Groups that
repeatedly share false information and announced expansion of the
fact-checking program.
April 2020: Facebook
started labeling and removing
some COVID-19 related misinformation. In December 2020 Facebook started
removing false claims about COVID-19 vaccines.
This process was adjusted several times through 2022.
May 2021: If an individual repeatedly shares content rated false by
fact-checkers,
all of their posts get reduced distribution
in the News Feed. Also, when someone "likes" a Page that has repeatedly shared
false information, they see a warning.
In 2021 Facebook vastly reduced the amount of political content recommended in people's News Feeds. Recommendations of political content from non-friends were eventually made opt-in for Meta's Threads and Instagram. Meta never shared clear details about what it considered "political", and it seems likely that shifted over time.
Oddly, there were no misinformation-related updates at all between November 2022 and December 2024.
What's happening in 2025?
So that brings us to 2025.
Just a few weeks ago Meta announced a major change in policy around hate
speech and misinformation. Regarding misinformation specifically:
- They are getting rid of the 3rd party fact checkers, starting in the United States, due to perceived bias and "censorship". Over the next few months, Meta will "get rid of our fact-checking control, stop demoting fact checked content and, instead of overlaying full screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it."
- They are launching a Community Notes program, similar to X. The idea is that users can add factual notes to content, and then if (and only if) there is a consensus across users with different political orientations that it's a helpful note, it will display publicly. (A rough sketch of that gating idea follows this list.)
- They are adding more political content to the feeds on Facebook, Instagram and Threads, with controls that let you opt out.
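Meta hasn't published the details of how that consensus will be measured, and X's actual rating system (which Meta says its program will resemble) is more elaborate. Purely as an illustration of the gating idea described above, here is a deliberately simplified Python sketch with hypothetical names and thresholds: a note only goes public if enough raters from more than one political orientation found it helpful.

```python
# Toy illustration only -- not Meta's or X's actual algorithm.
# A note displays publicly only if enough raters from at least two
# different political orientations marked it "helpful".
from collections import defaultdict

def note_should_display(ratings, min_helpful_per_side=5):
    """ratings: list of (rater_orientation, found_helpful) tuples."""
    helpful_by_side = defaultdict(int)
    for orientation, found_helpful in ratings:
        if found_helpful:
            helpful_by_side[orientation] += 1
    sides_in_agreement = [side for side, count in helpful_by_side.items()
                          if count >= min_helpful_per_side]
    return len(sides_in_agreement) >= 2

# A note rated helpful by only one "side" stays hidden...
print(note_should_display([("left", True)] * 10 + [("right", False)] * 10))  # False
# ...while cross-orientation agreement makes it public.
print(note_should_display([("left", True)] * 6 + [("right", True)] * 6))     # True
```

That gate is also why a note can take a while to appear, or may never appear on highly partisan posts: it waits for agreement that may never come.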
You can still
flag a Facebook post as "false news", but it's not clear to me if that actually does anything.
Community Notes seem like an ineffective way to prevent the spread of
misinformation, as the notes won't appear unless there is broad agreement that
the note should be published. That takes time, and posts can be seen by
millions of people before that happens. And highly partisan posts may never
have notes appear on them, because supporters of the poster aren't likely to
find any note to be helpful.
But Community Notes likely do stop people who post false information from
complaining that Facebook is being biased or unfair. It's the "community"
saying a post is false, not Facebook, and not "biased" fact-checkers.
Read more about the changes,
including the new "hateful conduct" policy.
This is clearly a difficult problem, and I'm not sure any platform will be
able to solve it. It just feels like Meta is more concerned about the people complaining than about making sure objectively false information doesn't spread.
Resources
Meta Newsroom: Combating Misinformation
Meta Community Standards Transparency Report: Misinformation Policy Changes
Related Information
Buzzfeed News, 26 October 2016:
Here's why Facebook's trending algorithm keeps promoting fake news.
NPR's All Tech Considered, 11 November 2016:
Zuckerberg denies fake news on Facebook had impact on the election.
Washington Post, 15 November 2016:
Why Facebook and Google are struggling to purge fake news.
NBC News, 7 August 2020:
Sensitive to claims of bias, Facebook relaxed misinformation rules for conservative pages.
Poynter, 22 April 2024:
Let's say it plainly: Fact-checking is not censorship.
MIT Sloan, 2 September 2024: Warning labels from fact checkers work — even if you don’t trust them.
Washington Post, 30 October 2024:
Elon Musk says X users fight falsehoods. The falsehoods are winning.
Nature, 10 January 2025:
Does fact-checking work? What the science says.