Updated Friday, Sept. 15, 2017, 11:15 a.m. EDT: In response to a report that its ad system let advertisers target people who expressed interest in topics such as “Jew haters” and other anti-Semitic keywords, Facebook issued the following response late Thursday night:
“Facebook equips businesses with powerful ways to reach the right people with the right message. But there are restrictions on how audience targeting can be used on Facebook. Hate speech and discriminatory advertising have no place on our platform. Our community standards strictly prohibit attacking people based on their protected characteristics, including religion, and we prohibit advertisers from discriminating against people based on religion and other attributes.
As people fill in their education or employer on their profile, we have found a small percentage of people who have entered offensive responses, in violation of our policies. ProPublica surfaced that these offensive education and employer fields were showing up in our ads interface as targetable audiences for campaigns. We immediately removed them. Given that the number of people in these segments was incredibly low, an extremely small number of people were targeted in these campaigns.
Keeping our community safe is critical to our mission. And to help ensure that targeting is not used for discriminatory purposes, we are removing these self-reported targeting fields until we have the right processes in place to help prevent this issue. We want Facebook to be a safe place for people and businesses, and we’ll continue to do everything we can to keep hate off Facebook.”
Facebook is encouraging any advertisers who encounter inappropriate targeting fields to report them immediately.
Updated Thursday, Sept. 14, 2017, 9:38 p.m. EDT: We have received information that provides further insight into how those ad categories got into the Facebook ad system.
The categories in question were, in fact, created based on how people filled out their profiles, not generated by an algorithm. In other words, users filling out their profiles may have entered descriptions like “Jew hater,” which then appeared to advertisers as potential categories of users to whom ads could be directed; no algorithm was involved.
In addition, Facebook is working to implement new measures to keep offensive self-reported profile traits from being used in campaigns.
A new report reveals that social media giant Facebook has allowed advertisers to target people who have expressed interest in topics such as “Jew hater,” “How to burn Jews” and “History of ‘why Jews ruin the world.’”
ProPublica recently paid $30 to target members of the above groups with three “promoted posts”—which placed a ProPublica article or post in the newsfeeds of the people included in those groups—to test whether the ad categories were real. As was the case in its previous test of the Facebook ad system, all three of its ads were approved within 15 minutes.
The ad categories appeared to have been created by an algorithm rather than by people. When ProPublica contacted Facebook about them, the categories were removed, and the company said that it would explore ways to fix the problem.
“There are times where content is surfaced on our platform that violates our standards,” Rob Leathern, product management director at Facebook, told ProPublica. “In this case, we’ve removed the associated targeting fields in question. We know we have more work to do, so we’re also building new guardrails in our product and review processes to prevent other issues like this from happening in the future.”
Last week, Facebook disclosed that it had discovered that $100,000 worth of ads placed during the 2016 presidential election were put there by “inauthentic” accounts that appeared to be affiliated with Russia.
This is not the first time Facebook’s ad system has come under scrutiny. Last year, ProPublica exposed Facebook’s ad system for allowing targeted exclusion by race. ProPublica was able to purchase an ad targeting Facebook members who were house-hunting that excluded African-American, Hispanic and Asian-American users, which runs contrary to rules established by the Fair Housing Act of 1968 as well as the Civil Rights Act of 1964.
At the time, Steve Satterfield, privacy and public policy manager at Facebook, told ProPublica: “We take a strong stand against advertisers misusing our platform: Our policies prohibit using our targeting options to discriminate, and they require compliance with the law. We take prompt enforcement action when we determine that ads violate our policies.”
Perhaps anti-Semitism does not violate Facebook’s policies?
ProPublica says that at the time it investigated the ethnic-exclusion ads, it did not find any anti-Semitic categories. But after receiving a tip last week, it logged in to see whether “Jew hater” was a real category and found that it was.
With only 2,274 people in the category, however, it was considered too small for the publication to buy an ad targeted only at Jew haters.
Instead, the system suggested “Second Amendment” as an additional category that would boost the audience size to 119,000 people—because clearly the algorithm thinks that anti-Semites and guns go hand in hand.
ProPublica says that since it contacted Facebook about the anti-Semitic ad categories, most of them have disappeared.
Read more at ProPublica.