Sharing Our Actions on Stopping Hate

Facebook is committed to making sure everyone using our platforms can stay safe and informed. As part of our ongoing effort, we’re making changes to our policies, investing in system transparency and sharing work we’re doing to address the nine recommendations outlined by the Stop Hate For Profit boycott organizers.

Latest News

Originally published July 1, 2020 at 7:30 AM PT on AdAge

Facebook Does Not Benefit From Hate

Nick Clegg, Facebook’s VP of Global Affairs and Communications, penned an article about the progress we’ve made in moderating harmful content and taking down hate speech both before and after someone reports it.

“A recent European Commission report found that Facebook assessed 95.7% of hate speech reports in less than 24 hours, faster than YouTube and Twitter,” says Nick. “Last month, we reported that we find nearly 90% of the hate speech we remove before someone reports it – up from 24% a little over two years ago. We took action against 9.6 million pieces of content in the first quarter of 2020 – up from 5.7 million in the previous quarter. And 99% of the ISIS & Al Qaeda content we remove is taken down before anyone reports it to us.”

Read the full article.

Update on July 1, 2020 at 7:30 AM PT

Addressing the Stop Hate For Profit Recommendations

The Stop Hate For Profit boycott organizers outlined nine overall recommendations that fall into three categories: provide more support to people who are targets of racism, antisemitism and hate; increase transparency and control around hate speech or misinformation; and improve the safety of private Groups on Facebook.

Below, we’re addressing the recommendations, describing the work that is underway and sharing areas where we’re exploring further changes.

Provide More Support to People Who Are Targets of Racism, Antisemitism and Hate

The boycott organizers have asked for three things within this category:

  1. Create a separate moderation pipeline staffed by experts on identity-based hate for users who express they have been targeted because of specific identity characteristics.
    Today, hate speech reports on Facebook are already automatically funneled to a set of reviewers with specific training in our identity-based hate policies in 50 markets covering 30 languages. In addition, we consult with experts on identity-based hate in developing and evolving the policies that these trained reviewers enforce.
  2. Put targets of hate and harassment in touch with Facebook.
    Our approach, developed in consultation with experts, is to follow up with people who report hate speech and tell them about the actions we’ve taken. We also provide user controls that allow people to moderate comments on their posts, block other users and control the visibility of their posts by creating a restricted list. We’re exploring ways we can connect people with additional resources.
  3. Provide more information about hate speech in reports.
    We are committed to continuing to improve transparency about our Community Standards enforcement. We intend to include the prevalence of hate speech in future Community Standards Enforcement Reports (CSER), pending no further complications from COVID-19.

Increase Transparency and Control Around Hate Speech or Misinformation

The next set of recommendations focuses on increasing transparency and controls around hate speech or misinformation, including preventing ads from appearing near content labeled as hate or misinformation, telling advertisers how often their ads have appeared next to this content, providing refunds to those advertisers and providing an audited transparency report.

Below are some of our ongoing efforts as well as areas we’re continuing to explore:

  • Third-Party Fact Checking: Helps identify misinformation. When content is rated false or partly false on our platform, we apply prominent labels, reduce the content’s distribution and disapprove related ads.
  • Brand Safety Hub: Advertisers can review publishers, individual in-stream videos and instant articles in which their ads were embedded. While there are substantial technical challenges to extending the offerings in the Brand Safety Hub more broadly, we are exploring what is possible.
  • Advertiser Refunds: Issued when ads run in videos or in instant articles that are determined to violate our Network Policies.
  • Community Standards Enforcement Reports: Provide extensive information about our efforts to keep our community safe. We will continue investing in this work and will commit whatever resources are necessary to improve our enforcement.
  • Certification From Independent Groups: Groups like the Digital Trading Standards Group have examined our advertising processes against JICWEBS’ Good Practice Principles.
  • Auditing Our Brand Safety Tools and Practices: We will continue to work with advertising industry bodies like the Global Alliance for Responsible Media and the Media Rating Council on these audits.

Improve the Safety of Private Groups on Facebook

Our team of 35,000 safety and security professionals actively reviews potentially violating content today, including content in private Groups. In addition:

  • Our proactive artificial intelligence-based detection tools are also used to identify hateful content and Groups that aren’t reported to us.
  • If moderators post or permit the posting of violating content, the Group incurs penalties that can result in the Group being removed from Facebook.
  • We are exploring providing moderators with even better tools for moderating content and membership.
  • We are exploring ways to make moderators more accountable for the content in the Groups they moderate, such as providing more education on our Community Standards and increasing the requirements for moderating potential bad actors.

This isn’t work that ever finishes. We recognize our responsibility to help change the trajectory of hate speech.

Originally published on June 29, 2020 at 11:00 AM PT

Our Continued Investment in System Transparency

We gave an update on how we’re investing in system transparency, which includes:

  • An audit run by the Media Rating Council (MRC), in which we plan to evaluate our partner and content monetization policies and the brand safety controls we make available to advertisers.
  • A commitment to include hate speech prevalence in our quarterly Community Standards Enforcement Report (CSER).
  • Participation in the World Federation of Advertisers’ Global Alliance for Responsible Media (GARM) to align on brand safety standards and definitions, scaling education, common tools and systems, and independent oversight for the industry.

Originally published on June 26, 2020 at 11:25 AM PT

CEO Mark Zuckerberg announced policy changes based on feedback from the civil rights community and our civil rights auditors. These include:

  • Changing our policies to better protect immigrants, migrants, refugees and asylum seekers. We’ve already prohibited dehumanizing and violent speech targeted at these groups — now we’re also banning ads that suggest these groups are inferior or that express contempt, dismissal or disgust directed at them.
  • Banning posts that make false claims about ICE agents checking for immigration papers at polling places, along with other threats meant to discourage voting. We will use our Election Operations Center to work with state election authorities to remove false claims about polling conditions in the 72 hours leading up to Election Day.
  • Labeling content that we leave up because it is deemed newsworthy, so people can know when this is the case. We'll allow people to share this content to condemn it, just like we do with other problematic content, because this is an important part of how we discuss what's acceptable in our society — but we'll add a prompt to tell people that the content they're sharing may violate our policies.

