Facebook unfriending itself will not make its problems go away
Facebook should focus on content moderation to target the harmful content on its site before building a new one that will deliver the same old problems
Facebook Inc. is planning to change its name to something related to the metaverse, a new digital network for communicating through augmented and virtual reality, according to a report in The Verge, which cites a source with direct knowledge.
Over the weekend, the company also said that as part of its metaverse-building efforts, it would add 10,000 high-skilled jobs in Europe.
Putting aside the prospects of financial success with this big new platform, which do not look good, Facebook's hyperfocus on the metaverse right now reflects poor judgment by its management, and by Mark Zuckerberg in particular.
Evidence is mounting that Facebook pushes older people toward conspiracy theories and teens toward body issues. Zuckerberg should be focused instead on carrying out the mother of all cleanup jobs: hiring thousands more staff, especially content moderators, to help target the harmful content on its site before building a new one that will deliver the same old problems.
Content moderators are contractors who scour Facebook and Instagram for potentially harmful content, and they are much cheaper than engineers. An entry-level engineer at Facebook in the U.K. earns about $125,000 a year, according to levels.fyi, which tracks Big Tech engineering salaries.
Meanwhile, content moderators who work for Accenture Plc, one of the biggest agencies doing Facebook's cleanup work, earn about $37,000 a year, according to Glassdoor.
Facebook relies on roughly 15,000 content moderators to keep its site clean, and with the hiring budget it announced for the metaverse, it could more than double that number. This is exactly what a recent New York University study said Facebook should do to weed out harmful content.
In a separate blog post on Sunday, the company said its "improved and expanded AI systems" had led to a drop in hate speech, which now made up just 0.05% of content viewed on the site. (Facebook got that number by selecting a sample of content and then labeling how much of it violated its hate-speech policies.)
The company seems to be implying that it does not need many more moderators because its technology is getting better at cleaning things up.
But these stats about harmful content, which Facebook shares in quarterly publications known as transparency reports, have a problem.
Researchers have long been skeptical of such reports from Big Tech, according to Ben Wagner, an assistant professor at Delft University of Technology in the Netherlands, who co-wrote a study in February about their limitations.
He pointed out that the German government sued Facebook in 2019 for misleading regulators by, among other things, recording only certain categories of user complaints in data it was required to share with them.
Facebook, which the government ordered to pay a 2 million-euro ($2.3 million) fine, said it had complied with Germany's law on transparency and that some aspects of the law "lacked clarity." It reserved the right to appeal.
Facebook faces other allegations of fudging its transparency report numbers. According to a Wall Street Journal story on Sunday, which cited internal documents leaked by Facebook whistle-blower Frances Haugen, Facebook changed its complaints process in 2019 by making it more difficult for people to flag content.
Facebook told the Journal that this "friction" was intended to make its systems more efficient, and that it had since rolled some of that friction back.
With no common standards for measuring harm, social media transparency reports end up confusing and hard to compare. For instance, Facebook's 2018 transparency report cited 1,048 complaints from users, while Twitter Inc. and Alphabet Inc.'s YouTube each reported more than 250,000, according to the German lawsuit against Facebook. That is a huge discrepancy in tracking.
And such reports are not properly audited. Facebook has set up a data transparency advisory panel of seven academics to make an "independent" assessment of its transparency reports, it said Sunday.
Like other scientific advisory boards, the panel is paid a fixed honorarium by Facebook before conducting its assessment, which undercuts its claim to independence.
Still, this is one area where Facebook seems to have moved in the right direction. It recently hired Ernst & Young Global Ltd., one of the Big Four accounting firms, to assess how it measures harm, saying EY would start its audit sometime this year.
Set up correctly, that could create a more reputable chain of accountability than exists today. Facebook declined to answer questions about when the audit would be published, which criteria EY would apply or which arm of EY would do the audit.
In the meantime, Facebook has to do more to improve its policing of harmful content. That is why it and other social media sites should be pushed to hire more moderators — thousands more — to help clean up their sites.
That would be a better investment than rushing to build an entirely new digital-reality platform like the metaverse, which is destined to have the same messes as the old platforms.
Parmy Olson is a Bloomberg Opinion columnist covering technology.
Disclaimer: This article first appeared on Bloomberg, and is published by special syndication arrangement.