Nick Clegg, Facebook’s vice president of global affairs, appeared on several television shows Sunday to defend the company’s algorithms and security measures, and to detail steps the company says it will take to protect users.
The big picture: Clegg told ABC’s “This Week” that doing away with Facebook and Instagram’s algorithms for displaying content would lead to “more, not less” hate speech, misinformation and harmful content on users’ feeds.
- “[T]hose algorithmic systems precisely are designed like a great, giant spam filter to identify and deprecate and downgrade bad content,” Clegg said.
Why it matters: The Facebook executive’s media tour comes after whistleblower Frances Haugen last week urged lawmakers to regulate the company, saying Facebook knows its algorithms can lead teens to pro-anorexia content and boost extreme content more likely to elicit a reaction from users.
On CNN’s “State of the Union,” Clegg said Instagram is a positive experience “for the overwhelming majority of teenagers,” and is only potentially harmful to a minority of them.
- Asked what Facebook is doing to mitigate harm to those users, Clegg cited the company’s pause on developing Instagram Kids, and said it plans to introduce options to let adults supervise their children, algorithmic “nudges” when teens are looking at harmful content, and a “take a break” prompt urging kids to get away from the platform.
Rebutting Haugen’s claim that Facebook is primarily motivated by profit, Clegg said the company has invested more than $13 billion in recent years in safety research on its platform — “more than the total revenue of Twitter over the last four years.”
On NBC’s “Meet the Press,” Clegg addressed Haugen’s testimony that Facebook rolled back some of its security provisions following the 2020 election.
- Clegg said some of these provisions were “very, very blunt tools which were basically scooping up a lot of entirely innocent, legitimate, legal, playful, enjoyable content.”