About Face
Yeah, it’s going to be one of those weeks. The Wall Street Journal has released five pieces in the Facebook Files, and more scandals have dropped elsewhere, so let’s dive in and cover as much as we can before my brain melts out of my ears.
XCheck
Facebook content moderation rules generally prohibit users from posting harassment, incitement to violence, or “doxxing” private individuals. However, the company maintains a “whitelist” for celebrities, politicians, and other public figures that can exempt them from these rules. We all know that President Trump and many of his cronies were allowed to spew hatred and conspiracy theories unchecked on Facebook for years. There were other cases, though:
In 2019, it allowed international soccer star Neymar to show nude photos of a woman, who had accused him of rape, to tens of millions of his fans before the content was removed by Facebook.
Facebook claimed its XCheck system was for a “select few members” of its website, but that wasn’t true:
Despite attempts to rein it in, XCheck grew to include at least 5.8 million users in 2020, documents show. In its struggle to accurately moderate a torrent of content and avoid negative attention, Facebook created invisible elite tiers within the social network.
The world is a big place, but 5.8 million seems like an awful lot. Much has been written about Facebook’s moderation battles - many of its content reviewers are contractors paid by outsourcing firms like Accenture rather than full-time Facebook workers.
When a normal Facebook user has a post flagged for inappropriate content, there’s no guarantee a human being ever reviews it before their account is suspended or banned. It’s such a common problem that people have resorted to buying Facebook’s VR headsets just to reach customer support and get their accounts reactivated. If you’re a VIP, however, it’s a different story:
Users designated for XCheck review, however, are treated more deferentially. Facebook designed the system to minimize what its employees have described in the documents as “PR fires”—negative media attention that comes from botched enforcement actions taken against VIPs.
If Facebook’s systems conclude that one of those accounts might have broken its rules, they don’t remove the content—at least not right away, the documents indicate. They route the complaint into a separate system, staffed by better-trained, full-time employees, for additional layers of review.
Framing this as a PR issue reveals a lot about Facebook’s broader strategy - it’s a recurring theme. Often, the company only changes bad practices after a critical mass of bad press. Let’s see what else Facebook has been catching flak for!
Instagram
Instagram is the butt of many jokes about how it encourages people to present a false reality. The platform rewards aspirational, filtered, perfect-seeming photos of people’s lives; users complain about it amplifying harmful body-image trends like “fitspo”, and many keep “finsta” accounts to meme with friends or take a break from needing to be literally picture perfect on social media.
Facebook has done internal research into how Instagram is affecting teen girls, a group particularly susceptible to body issues and insecurity. The results were alarming:
“Thirty-two percent of teen girls said that when they felt bad about their bodies, Instagram made them feel worse,” the researchers said in a March 2020 slide presentation posted to Facebook’s internal message board, reviewed by The Wall Street Journal. “Comparisons on Instagram can change how young women view and describe themselves.”
Other presentations did not mince words:
“We make body image issues worse for one in three teen girls,” said one slide from 2019, summarizing research about teen girls who experience the issues.
“Teens blame Instagram for increases in the rate of anxiety and depression,” said another slide. “This reaction was unprompted and consistent across all groups.”
Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram, one presentation showed.
So, for years people within the company had been sounding alarms about the negative mental health consequences of Instagram. Facebook’s executives, however, were saying different things in public:
“The research that we’ve seen is that using social apps to connect with other people can have positive mental-health benefits,” CEO Mark Zuckerberg said at a congressional hearing in March 2021 when asked about children and mental health.
In May, Instagram head Adam Mosseri told reporters that research he had seen suggests the app’s effects on teen well-being is likely “quite small.”
To its credit, Facebook has done quite a bit of in-depth research on the subject. Very little of it appears to have led to any changes on Instagram, which is the company’s most popular product with younger demographics, few of whom use Facebook proper anymore. Senior executives pursuing a growth-at-all-costs strategy have little time to consider the wide-ranging negative societal impacts of their products.
News Feed
So, while Instagram was busy giving a generation of youth insecurity and depression, what was Facebook’s “blue” platform doing to adults? Making people angry:
Company researchers discovered that publishers and political parties were reorienting their posts toward outrage and sensationalism. That tactic produced high levels of comments and reactions that translated into success on Facebook.
“Our approach has had unhealthy side effects on important slices of public content, such as politics and news,” wrote a team of data scientists, flagging Mr. Peretti’s complaints, in a memo reviewed by the Journal. “This is an increasing liability,” one of them wrote in a later memo.
The shift toward outrage and sensationalism was so effective it was literally changing political party platforms:
They concluded that the new algorithm’s heavy weighting of reshared material in its News Feed made the angry voices louder. “Misinformation, toxicity, and violent content are inordinately prevalent among reshares,” researchers noted in internal memos.
Some political parties in Europe told Facebook the algorithm had made them shift their policy positions so they resonated more on the platform, according to the documents.
“Many parties, including those that have shifted to the negative, worry about the long term effects on democracy,” read one internal Facebook report, which didn’t name specific parties.
It’s easy to see the effect of Facebook’s algorithmic amplification of negative, toxic misinformation in the US. Much ink was spilled after Trump’s 2016 victory about how his campaign’s inflammatory social media posts and ads simply performed better on Facebook, and thus were seen by more people.
So, Facebook’s internal research teams found the company’s news feed changes had made the platform angrier and more toxic. What did Zuckerberg do?
Mr. Zuckerberg resisted some of the proposed fixes, the documents show, because he was worried they might hurt the company’s other objective—making users engage more with Facebook.
Anna Stepanov, who led a team addressing those issues, presented Mr. Zuckerberg with several proposed changes meant to address the proliferation of false and divisive content on the platform, according to an April 2020 internal memo she wrote about the briefing. One such change would have taken away a boost the algorithm gave to content most likely to be reshared by long chains of users.
“Mark doesn’t think we could go broad” with the change, she wrote to colleagues after the meeting. Mr. Zuckerberg said he was open to testing the approach, she said, but “We wouldn’t launch if there was a material tradeoff with MSI impact.”
MSI stands for “meaningful social interactions,” the metric Facebook uses to measure user engagement on the platform. Ironically, optimizing for “meaningful” interactions made the problem worse:
Under an internal point system used to measure its success, a “like” was worth one point; a reaction, reshare without text or reply to an invite was worth five points; and a significant comment, message, reshare or RSVP, 30 points. Additional multipliers were added depending on whether the interaction was between members of a group, friends or strangers.
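To make the weighting concrete, here’s a rough sketch of how a score like that might be tallied. Only the base point values come from the leaked documents; the function, the interaction labels, and the multiplier values are my own guesses, since Facebook has never published the actual formula.

```python
# A toy version of the MSI point system described above. The base point
# values are the ones reported in the documents; everything else (the
# interaction labels, the multiplier values, the function itself) is a
# guess for illustration only.

BASE_POINTS = {
    "like": 1,
    "reaction": 5,
    "reshare_without_text": 5,
    "reply_to_invite": 5,
    "significant_comment": 30,
    "message": 30,
    "significant_reshare": 30,
    "rsvp": 30,
}

# The documents say multipliers existed for group members, friends, and
# strangers, but not what they were -- these numbers are invented.
RELATIONSHIP_MULTIPLIER = {
    "group_member": 1.5,
    "friend": 1.0,
    "stranger": 0.5,
}

def msi_score(interactions):
    """Sum weighted points for a post, given (interaction_type, relationship) pairs."""
    return sum(
        BASE_POINTS.get(kind, 0) * RELATIONSHIP_MULTIPLIER.get(rel, 1.0)
        for kind, rel in interactions
    )

# A heated political argument easily outscores a post that quietly
# collects hundreds of likes.
argument = [("significant_comment", "stranger")] * 40 + [("significant_reshare", "friend")] * 10
vacation_photos = [("like", "friend")] * 300
print(msi_score(argument), msi_score(vacation_photos))  # 900.0 vs 300.0
```

Even with made-up multipliers, the takeaway is the same: one heated comment thread is worth more to the algorithm than a small mountain of likes.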
As you can imagine, heavily weighting comments and reshares encouraged users on the platform to engage, and not always in healthy ways. The algorithm would treat highly charged political arguments as gold mines of user engagement and show them to more people.
As an added bonus, the changes dramatically cut traffic to news publishers, whose professionally produced content was less likely to spark the desired types of engagement:
As Facebook had warned was likely, the change hurt many online publishers. In the first half of 2018, BuzzFeed suffered a 13% decline in traffic compared with the prior six months, Breitbart lost 46% and ABC News lost 12%, according to online data firm Comscore.
Facebook eventually rolled out changes meant to curb the spread of false information on “civic and health” topics, driven in part by the pervasiveness of COVID-19 misinformation on the platform and, later, the January 6th Capitol riot. But when researchers asked Zuckerberg to roll the same changes out across other areas of Facebook, he refused:
When Ms. Stepanov presented Mr. Zuckerberg with the integrity team’s proposal to expand that change beyond civic and health content—and a few countries such as Ethiopia and Myanmar where changes were already being made—Mr. Zuckerberg said he didn’t want to pursue it if it reduced user engagement, according to the documents.
Again, we see Zuckerberg’s refusal to sacrifice user engagement for the good of society.
Vaccinations
In March of this year, Zuckerberg announced he wanted to use Facebook to convince 50 million Americans to get the COVID-19 vaccine. There was a small problem - his platform was already the largest source of vaccine misinformation:
In the weeks before Mr. Zuckerberg made his announcement, another memo said initial testing concluded that roughly 41% of comments on English-language vaccine-related posts risked discouraging vaccinations. Users were seeing comments on vaccine-related posts 775 million times a day, the memo said, and Facebook researchers worried the large proportion of negative comments could influence perceptions of the vaccines’ safety.
Even authoritative sources of vaccine information were becoming “cesspools of anti-vaccine comments,” the authors wrote. “That’s a huge problem and we need to fix it,” they said.
Facebook’s relentless pursuit of “meaningful” user engagement was driving huge amounts of vaccine skepticism and disinformation in the comments of millions of vaccine-related posts. Very good!
Facebook belatedly attempted to change its ranking models for posts it believed were spreading “health misinfo”, but the changes didn’t make much of an impact:
In August 2020, a report by advocacy group Avaaz concluded that the top 10 producers of what the group called “health misinformation” were garnering almost four times as many estimated views on Facebook as the top 10 sources of authoritative information. Facebook needed to take harsher measures to beat back “prolific” networks of Covid misinformation purveyors, Avaaz warned.
Again, Facebook staff were warning senior leaders about how big the problem was:
A Facebook employee also warned that antivaccine forces might be dominating comments on posts, possibly giving users a false impression that such views were widespread.
“I randomly sampled all English-language comments from the past two weeks containing Covid-19-related and vaccine-related phrases,” the researcher wrote early this year, adding that based on his assessment of 110 comments, about two-thirds “were anti-vax.” The memo compared that figure to a poll showing the prevalence of antivaccine sentiment in the U.S. to be 40 points lower.
Facebook’s attempts to use its tech to throttle vaccine misinfo sometimes suffered comical failures:
An integrity staffer circulated a memo about a post that had 53,000 reshares and three million views. It said vaccines “are all experimental & you are in the experiment.” The staffer called it “a bad miss for misinfo”—noting that Facebook’s systems mistakenly thought it was written in Romanian, which is why it wasn’t demoted.
And, once again, the comments the company’s algorithms favored were a problem:
Even when they worked as intended, the systems used to detect vaccine posts for removal or demotion weren’t built to work on comments, the documents show.
Employees improvised, and by late February, two Facebook data scientists came up with a rough way to scan for what they called “vaccine hesitant” comments. They wrote in memos that “vaccine hesitancy in comments is rampant”—twice as prevalent as in posts. One of the scientists pointed out the company’s ability to detect the content in comments was “bad in English, and basically non-existent elsewhere.”
Facebook was admitting, essentially, that it had no ability to police anti-vaccine content in comments throughout most of the world, and could barely do it in the US. In late March of this year, Facebook rolled out a “fix” which…allowed users to turn off comments on their posts. Problem solved! Right?
Troll Farms
While we’re talking about internal Facebook reports, here’s one the MIT Technology Review got its hands on that details how widespread foreign-run troll farm pages were on Facebook:
In the run-up to the 2020 election, the most highly contested in US history, Facebook’s most popular pages for Christian and Black American content were being run by Eastern European troll farms. These pages were part of a larger network that collectively reached nearly half of all Americans, according to an internal company report, and achieved that reach not through user choice but primarily as a result of Facebook’s own platform design and engagement-hungry algorithm.
Not great! So, what is a troll farm?
Troll farms—professionalized groups that work in a coordinated fashion to post provocative content, often propaganda, to social networks—were still building massive audiences by running networks of Facebook pages. Their content was reaching 140 million US users per month—75% of whom had never followed any of the pages. They were seeing the content because Facebook’s content-recommendation system had pushed it into their news feeds.
Ahhhh, very good. The troll farms running many of these pages operated out of Macedonia and Kosovo, two places flagged as centers of misinformation during the 2016 election. Rather than better policing pages run out of these places, Facebook tailored its algorithm to amplify exactly the type of content they had become experts at producing:
In the report, [Jeff] Allen identifies three reasons why these pages are able to gain such large audiences. First, Facebook doesn’t penalize pages for posting completely unoriginal content. If something has previously gone viral, it will likely go viral again when posted a second time.
[…]
Second, Facebook pushes engaging content on pages to people who don’t follow them. When users’ friends comment on or reshare posts on one of these pages, those users will see it in their newsfeeds too. The more a page’s content is commented on or shared, the more it travels beyond its followers. This means troll farms, whose strategy centers on reposting the most engaging content, have an outsize ability to reach new audiences.
Third, Facebook’s ranking system pushes more engaging content higher up in users’ newsfeeds. For the most part, the people who run troll farms have financial rather than political motives; they post whatever receives the most engagement, with little regard to the actual content. But because misinformation, clickbait, and politically divisive content is more likely to receive high engagement (as Facebook’s own internal analyses acknowledge), troll farms gravitate to posting more of it over time, the report says.
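Put those three ingredients together and the gaming strategy writes itself. Here’s a toy sketch of a ranker that ignores originality, surfaces a page’s posts to non-followers the moment a friend engages, and sorts purely on predicted engagement. Every name, number, and function in it is invented for illustration; it is not Facebook’s actual ranking code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    page: str
    text: str
    is_original: bool            # failure mode #1: the ranker never looks at this
    prior_viral_engagement: int  # engagement the content earned the last time it went viral

def predicted_engagement(post):
    # Content that already went viral once is the safest bet to go viral again,
    # and nothing here penalizes the fact that it's recycled.
    return float(post.prior_viral_engagement)

def feed_for(user_follows, friends_engaged_with, candidates):
    eligible = []
    for post in candidates:
        followed = post.page in user_follows
        # Failure mode #2: a single friend commenting or resharing is enough
        # to put an unfollowed page's post into your feed.
        surfaced_by_friends = post.page in friends_engaged_with
        if followed or surfaced_by_friends:
            eligible.append(post)
    # Failure mode #3: sort purely on predicted engagement, so the most
    # provocative (and most-recycled) content floats to the top.
    return sorted(eligible, key=predicted_engagement, reverse=True)

original = Post("Local News", "City council budget explained", True, 40)
recycled = Post("Troll Farm Page", "Stolen viral meme", False, 50_000)

feed = feed_for({"Local News"}, {"Troll Farm Page"}, [original, recycled])
print([p.page for p in feed])  # ['Troll Farm Page', 'Local News']
```

In a setup like that, reposting something that already went viral is strictly better than producing anything new, which is exactly the troll farm business model.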
So, these troll farms were exploiting Facebook’s focus on good old MSI metrics to reach huge audiences with inflammatory content. How big is the problem?
As a result, in October 2019, all 15 of the top pages targeting Christian Americans, 10 of the top 15 Facebook pages targeting Black Americans, and four of the top 12 Facebook pages targeting Native Americans were being run by troll farms.
Holy moly. The researcher who wrote the report proposed a fix, which Facebook has not implemented. The troll farms were so good at duplicating and resharing stolen content that some of them even made it into Instant Articles, Facebook’s monetized article program intended for publishers:
At one point, thanks to a lack of basic quality checks, as many as 60% of Instant Article reads were going to content that had been plagiarized from elsewhere. This made it easy for troll farms to mix in unnoticed, and even receive payments from Facebook.
Remember when Facebook fired all of its human news editors? Good times. Facebook claims they’ve taken “aggressive” enforcement action against troll farms, but at the time of MIT’s publication five of the pages were still active - two years after the report was written.
One page in question, called - I am sorry - “My Baby Daddy Ain’t Shit”, has since been taken down by Facebook, but a quick search shows a new page has taken its place, claiming the original was hacked. This page has a US-based page manager, which may be a ruse to avoid scrutiny from Facebook - it’s quite easy to get access to a US-based Facebook account, either through spoofing or hacking. Maybe the Macedonians read the MIT Technology Review, who knows. The “new” page may have been a backup for the main one, kept in case it got taken down. It already has almost 250,000 followers. The associated Instagram account has been posting stolen memes for years and has 40,000 followers and cash payment links in its bio.
The ease with which unsophisticated troll farms in Macedonia were able to steal viral content, repost it, and become the biggest pages in major categories on Facebook should have been alarming to the same executives who were promising us that after 2016 they’d learned a lesson and were serious about policing their platform. Instead, they ignored reports from researchers until a publication got a leaked copy and contacted them for comment.
Project Amplify
One thing Facebook has been able to control in its news feed is stories designed to paint Facebook and Zuckerberg in a positive light:
The effort, which was hatched at an internal meeting in January, had a specific purpose: to use Facebook’s News Feed, the site’s most important digital real estate, to show people positive stories about the social network.
Hahaha, I just, whatever. I’m tired. Facebook apparently decided that, rather than constantly apologizing for the scandal du jour, they’d go on the “offensive” and use their platform to convince everyone they are good, actually:
So in January, executives held a virtual meeting and broached the idea of a more aggressive defense, one attendee said. The group discussed using the News Feed to promote positive news about the company, as well as running ads that linked to favorable articles about Facebook. They also debated how to define a pro-Facebook story, two participants said.
That same month, the communications team discussed ways for executives to be less conciliatory when responding to crises and decided there would be less apologizing, said two people with knowledge of the plan.
They also spent a lot of effort on Zuck’s public image:
In January, the communications team circulated a document with a strategy for distancing Mr. Zuckerberg from scandals, partly by focusing his Facebook posts and media appearances on new products, they said.
The company also reassigned many of the workers behind CrowdTangle, a tool Facebook used to share engagement data with researchers and journalists. Concerned that the data showed right-wing conspiracy theorists had more engagement than mainstream news outlets, Facebook decided to release its own report to prove it wasn’t just a network of troll farms, vaccine misinfo, and amplified racism. Their first attempt went well:
So in June, the company compiled a report on Facebook’s most-viewed posts for the first three months of 2021.
But Facebook did not release the report. After the policy communications team discovered that the top-viewed link for the period was a news story with a headline that suggested a doctor had died after receiving the Covid-19 vaccine, they feared the company would be chastised for contributing to vaccine hesitancy, according to internal emails reviewed by The New York Times.
Uh huh. Anyhow, in August Facebook tested out seeding positive news stories about itself in three cities. We don’t know much more about the initiative, whether it was expanded, or whether it had any material impact on opinions about Facebook, because, well, Facebook doesn’t share any data with the general public. How about that.
Marketplace
Did you think we were done? Hahaha, of course not. Facebook Marketplace, Facebook’s online classifieds platform, now has over 1 billion users. In a truly uncharacteristic move for Facebook, the platform is largely unmonitored and has a big scammer problem:
Facebook says it protects users through a mix of automated systems and human reviews. But a ProPublica investigation based on internal corporate documents, interviews and law enforcement records reveals how those safeguards fail to protect buyers and sellers from scam listings, fake accounts and violent crime.
Violent…crime? What?
Since the start of the pandemic, criminals across America have exploited Marketplace to commit armed robberies and, in 13 instances identified by ProPublica, homicide. In one high-profile case, a woman was allegedly murdered by a man who was selling a cheap refrigerator on Marketplace. The alleged killer’s profile remained online with active listings until ProPublica contacted Facebook.
Jesus Christ. ProPublica got a leaked set of internal documents - lots of those going around these days! wonder why! - detailing the many ways Facebook’s attempts to police Marketplace have failed:
Marketplace’s first line of defense consists of software that scans each listing for signs of fraud or other suspicious signals before it goes live. But Marketplace workers said these detection services frequently fail to ban obvious scams and listings that violate Facebook’s commerce policies. The automated systems also block some legitimate consumers from using the platform.
[…]
As a backstop to its automated systems, Facebook Marketplace relies upon roughly 400 workers employed by consulting firm Accenture to respond to user complaints and to review listings flagged by the software.
Four hundred! Workers! For a platform with one billion people on it! Could this get any worse?
Until recently, Facebook Marketplace allowed these low-paid contract workers to police its site by giving them largely unfettered access to Facebook Messenger inboxes, ProPublica has learned. This broad access resulted in workers spying on romantic partners and other privacy violations, according to current and former Accenture employees. The employees said the efforts they made were rarely successful in preventing fraud.
Good Lord. Also, while Facebook has rules about what you can and cannot advertise on its platform, Marketplace gave scammers a handy workaround to sell all sorts of stuff:
…law enforcement bulletins from multiple countries and media reports describe frauds involving lottery numbers, puppies, apartment rentals, PlayStation 5 and Xbox gaming consoles, work visas, sports betting, loans, outdoor pools, Bitcoin, auto insurance, event tickets, vaccine cards, male enhancement products, miracle beauty creams, vehicle sales, furniture, tools, shipping containers, Brazilian rainforest land and even egg farms, among other enterprises.
It’s important to verify the Brazilian rainforest you’re buying on Facebook Marketplace comes from a trusted vendor, obviously. Facebook does not handle payments for items sold on Marketplace, so unlike eBay it can’t hold funds in escrow or give buyers any protection if they’re ripped off. The company has openly acknowledged that it doesn’t see protecting them as its job:
But [Brian] Pan also emphasized that the company was not responsible for safeguarding transactions. “We see our role as just connecting buyers and sellers,” he told TechCrunch.
Yeah! What could go wrong with this? Facebook inadvertently handed scammers access to a huge audience: Marketplace uses Facebook’s Messenger service to let buyers and sellers communicate - which is why Accenture moderators were given Messenger access to try to resolve complaints. Scammers on Craigslist have to be creative to get a mark’s personal details, but by connecting buyers and sellers on Messenger, which is typically tied to a personal Facebook profile, the company was opening the door wide.
The ease of obtaining hacked Facebook accounts makes it possible for criminals not only to defraud customers using someone else’s identity, but also to commit violent crime:
In April, Houston police issued a public alert identifying a local man they said had used at least four different profiles to set up bogus Facebook Marketplace deals and then rob the people who showed up to meet him. According to police, the man used a string of different names but kept the same profile picture.
So, what does Facebook get out of all this negative press and the literal murder of its users? Ad money! Facebook has sold ads on Marketplace since 2017, and since 2018 sellers have been able to pay to “boost” a listing. I recently sold a bike on Marketplace, and I received daily push notifications from Facebook encouraging me to spend a few bucks to make sure more people saw my listing. So, by accepting ad dollars for scam listings, Facebook profits from criminal behavior while providing its users with no protection against the same crooks. Very cool!
Short Cons
The City - “In the past year, both Wimbledon and the French Open were marked by suspicions of match-fixing flagged by monitors hired to insure the integrity of the sport.”
Vice - “There are no stats, no game to equip the items within, just a list; on top of that, Loot lists are free to generate, aside from network fees.”
EuroNews - “The sting targeted an Italian mafia gang on the Spanish island of Tenerife, who are accused of laundering more than €10 million.”
The Guardian - “The men were arrested after allegedly trying to flee from police near the Auckland border. When their car was searched, police said they found a large quantity of KFC, as well as the cash and a number of empty ounce bags.”
Tips, thoughts, or large quantities of KFC to scammerdarkly@gmail.com