Section 230 reform: Flawed arguments and unintended consequences

Daniel Lyons

The bipartisan drumbeat against Big Tech continues. Over the past few months, the House Energy and Commerce Committee has considered bills designed to address harmful online behavior by limiting or repealing Section 230. We have discussed the problems with these proposals before; this post explores in further detail why they are unlikely to solve the problems they target and why they may create unintended consequences of their own.

Incentives to police online behavior

Critics often allege that companies protected by Section 230 lack incentives to police user behavior. Because the law protects these companies from being treated as the publisher or speaker of user-generated messages, the threat of legal liability does not compel them to remove problematic third-party content. Instead, the argument goes, these companies’ drive to increase profits leads them to disseminate even the most offensive material in the relentless pursuit of clicks.

This argument fails to consider basic economic principles about the relationship between self-interest and the common good. Adam Smith, the father of modern economics, was under no delusion about the altruism of the businessman: “People of the same trade seldom meet together, even for merriment and diversion,” he wrote, “but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices.” Despite this clear-eyed view of self-interest, Smith retained faith that competition would discipline misbehavior.

Online platforms compete for users’ time, as memorably captured by Biden White House adviser Tim Wu’s moniker of “attention merchants.” And they compete not only against each other but also against other pulls on our attention, such as television, video games, and magazines. To attract and hold user attention, these platforms voluntarily remove a wide range of socially unacceptable content that would drive users away. Facebook, for example, is relatively free of pornographic or violent imagery. This is not because its users behave themselves but because the company employs a variety of automated and manual moderation tools to remove problematic content, despite being under no legal compulsion to do so.

Over- and under-moderation

Right-leaning critics accuse platforms of misusing the moderator’s privilege by disproportionately targeting conservative content for removal. It’s not clear how valid this concern is. Certainly, high-profile moderation missteps (such as the suppression of the Hunter Biden laptop story) reinforce that narrative anecdotally. On the other hand, content by right-leaning commentators such as Ben Shapiro routinely dominates lists of the most-shared content online.

But even assuming bias is a problem, Section 230 reform is inapposite. Section 230 was designed to promote a diversity of views online. As Professor Daphne Keller notes, we may need more competition among platforms (or hosting services) to make all views heard, but Section 230 does not contribute to that problem, nor will repealing it solve the problem. After all, the New York Times is not protected by Section 230, yet its editorial pages are hardly representative of all viewpoints.

Left-leaning critics lob the opposite concern: Companies do too little to moderate misinformation. But particularly in an evolving environment (such as a pandemic), it is risky to appoint one institution as the arbiter of truth. (Just ask Galileo.) Traditionally, the solution to bad speech is more speech, not censorship.

Unintended consequences

Attempts to control the flow of information online can also create unintended consequences. As students of broadcast history know, the Federal Communications Commission once enforced the Fairness Doctrine, which required broadcasters that presented one side of a controversial issue to air contrasting viewpoints as well. The idea was to ensure public opinion was not swayed by broadcasters’ control of information. In practice, however, broadcasters often steered clear of controversial but important topics entirely for fear of triggering the doctrine’s obligations.

Similar consequences may flow from bills that remove Section 230 protection for claims under landmark statutes, such as civil rights laws. If a platform lacks Section 230 protection for communication about or involving protected groups, it may reduce services to those groups to limit its overall liability. That result would be a loss to society and particularly harmful to the very people those laws are designed to protect.

Frivolous litigation

Repeal also makes Big Tech a target for trial lawyers in pursuit of deep pockets, lawyers who are experts at interpreting statutes in ways Congress never intended. This is the lesson of the Telephone Consumer Protection Act, a 1991 anti-robocall statute that found new life in the 2010s as a tool to target conduct Congress neither intended nor contemplated. These claims may ultimately fail, but they still impose significant litigation costs, which fall disproportionately on startups seeking to disrupt today’s tech giants.

Big Tech’s critics are correct that platforms can be misused to disseminate socially undesirable content online. But the remedy is to address the source of the problem — harmful users and the social conditions that make their messages attractive to others. Contemporary efforts at Section 230 reform avoid addressing these root causes.

Courtesy: AEI.org