Good Questions, Real Answers: Protecting Your Brand on Facebook


Check out our most recent Good Questions, Real Answers blog post on anti-counterfeiting tools and policies here.

One of our goals is for Facebook to be a platform that gives people a voice while keeping them, and businesses like yours, safe. That’s why we’re focusing on brand safety in our latest Good Questions, Real Answers blog post. The Interactive Advertising Bureau (IAB) defines brand safety as “keeping a brand’s reputation safe when they advertise online.” At Facebook, we work to create transparent policies and relevant controls so you feel informed and in control of your brand’s reputation. Today, we’ll address questions about how content is removed from our platforms to maintain a safe space, how we work with brand safety leaders in the industry, and a new control we’re testing: White Lists.

How much content does AI remove automatically?

Starting in Q2 2019, we began removing some posts automatically, but only when content is either identical or near-identical to text or images previously removed by our content review team as violating our policy, or where content very closely matches common attacks that violate our policy. We only do this in select instances, and it has only been possible because our automated systems have been trained on hundreds of thousands, if not millions, of different examples of violating content and common attacks. In all other cases, the content is still sent to our review teams to make a final determination. While our systems’ abilities to correctly detect violations continue to progress, people will continue to play an important part in keeping our platform safe: both the people who report content to us, and the people on our team who review that content.

We use automation and artificial intelligence to stop spam attacks, remove fake accounts, and identify additional instances of content we've already removed for violating policies, including child nudity, sexual activity, terrorism and, now, hate speech. But a lot of content is highly contextual and nuanced, like determining whether a particular comment is bullying. That's why we have people look at those reports and make decisions based on our Community Standards.
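To make that flow concrete, here is a minimal, illustrative sketch of the kind of routing described above. It is not Facebook’s actual enforcement system; the `fingerprint` and `similarity` functions and the threshold are simplified stand-ins we’ve assumed for illustration.

```python
# Illustrative sketch only -- not Facebook's actual enforcement system.
# Content that is identical or near-identical to previously removed
# violations is removed automatically; everything else is routed to
# human reviewers for a final determination.

from typing import List

def fingerprint(content: str) -> int:
    """Stand-in for a hash of the text or image."""
    return hash(content.strip().lower())

def similarity(a: str, b: str) -> float:
    """Stand-in for a learned similarity score in [0, 1] (token Jaccard here)."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(len(ta | tb), 1)

def route(content: str, removed_violations: List[str], threshold: float = 0.9) -> str:
    fingerprints = {fingerprint(v) for v in removed_violations}
    if fingerprint(content) in fingerprints:
        return "auto-remove"      # identical to previously removed content
    if any(similarity(content, v) >= threshold for v in removed_violations):
        return "auto-remove"      # near-identical match
    return "human-review"         # all other cases get a human decision

print(route("buy fake followers now", ["buy fake followers now"]))  # auto-remove
print(route("a nuanced comment", ["buy fake followers now"]))       # human-review
```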

Do you release any information about the types of content that are removed?

The Community Standards Enforcement Report (CSER) holds us accountable for showing progress in removing harmful content from our services. Each CSER details how we’re doing on enforcing our policies by providing metrics across a number of policy areas, including the prevalence of harmful content, the amount of content we took action on, and how effectively we proactively detected harmful content.

In November, Instagram was included in the report for the first time, and we released metrics on how well we’re enforcing our policies in four areas: child nudity and child exploitative imagery, drug and gun sales, terrorism, and suicide and self-injury content. Facebook also shared metrics for these areas, among others. You can see the report and metrics here.

Why don’t you include how long it takes to take down content?

We measure how often violating content is seen on Facebook (views) rather than how quickly we removed it (time), because one post that we pull down in 2 hours could have been seen by 1 million people, while another post that took us 24 hours to remove may have been seen by only 100 people. The prevalence number is based on how often violating content is seen on Facebook relative to how often any content is seen: we estimate the views of violating content, not the amount of violating content, and divide that by the views of all content at a given moment. So while we work to ensure violating content is up for as little time as possible, what really matters is how many people could have seen the post. We believe prevalence is a meaningful measurement of people’s experience on Facebook.
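As a rough illustration of the idea, prevalence can be thought of as a ratio of views. The numbers below are hypothetical, not actual figures or Facebook’s sampling methodology:

```python
# Hypothetical illustration of prevalence as a share of views.
# Prevalence compares views of violating content to views of all content,
# which is why a fast takedown of a widely seen post can matter more
# than a slow takedown of a post almost nobody saw.

views_of_violating_content = 1_000_000   # e.g., a post removed after 2 hours
views_of_all_content = 10_000_000_000    # all views in the same window (hypothetical)

prevalence = views_of_violating_content / views_of_all_content
print(f"Prevalence: {prevalence:.4%}")   # -> 0.0100% of views were of violating content
```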

Watch the video below to learn more about how we measure prevalence.

What’s your approach to collaborating with brand safety industry bodies?

We collaborate with industry partners to share knowledge, build consensus and work towards making all online platforms safer for businesses.

In addition to our work with the Global Alliance for Responsible Media, we recently completed JICWEBS’ Digital Trading Standards Group’s Brand Safety audit, receiving the IAB UK Gold Standard. Industry partners are a valuable source of feedback for us. Working with these industry bodies allows us to share knowledge industry-wide and collaboratively make Facebook and all online platforms safer for people and businesses.

When will Facebook be rolling out White Lists?

We recently announced that we’re starting with a small test for select advertisers. We plan to learn from this test before rolling White Lists out more broadly next year. The test will apply to all sites and apps for Audience Network and to Facebook pages for in-stream ads. Advertisers are responsible for building their own White Lists, and each advertiser has access only to their own list.
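For intuition, the control works like an allow list: an ad is eligible for a placement only if that publisher appears on the advertiser’s own list. Here is a minimal sketch under those assumptions; the data structures and names are our own, not Facebook’s API:

```python
# Illustrative sketch of a White List (allow list) check -- not Facebook's API.
# Each advertiser maintains its own list; an ad is eligible for a placement
# (an Audience Network site/app or a Facebook page for in-stream ads)
# only if the publisher appears on that advertiser's list.

white_lists = {
    "advertiser_123": {"news-site.example", "sports-app.example"},
}

def is_eligible(advertiser_id: str, publisher: str) -> bool:
    allowed = white_lists.get(advertiser_id, set())
    return publisher in allowed

print(is_eligible("advertiser_123", "news-site.example"))    # True
print(is_eligible("advertiser_123", "random-blog.example"))  # False
```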

We understand that for advertisers, the work of protecting your brand’s integrity is never done. We’ll keep finding new ways to ensure our platforms continue to safely give people a voice while helping brands thrive. Learn more about how we’re making our platforms better for people and businesses, straight from the people who build them, on our Good Questions, Real Answers site.

Announcements · December 12, 2019
