In October, a confrontation erupted between one of the leading Democratic candidates for the US presidency, Sen. Elizabeth Warren, and Facebook CEO Mark Zuckerberg. Warren had called for a breakup of Facebook and Zuckerberg said, in an internal speech, that this represented an “existential” threat to his company.
Facebook was then criticized for running an ad by President Donald Trump’s re-election campaign that carried a manifestly false claim charging former Vice President Joe Biden, another leading Democratic contender, with corruption. Warren trolled the company by placing her own deliberately false ad.
This dustup reflects the acute problems social media poses for American democracy — indeed, for all democracies. The internet has, in many respects, displaced legacy media like newspapers and television as the leading source of information about public events, and the place where they are discussed.
But social media has enormously greater power to amplify certain voices and to be weaponized by forces hostile to democracy, from Russian trolls to American conspiracy theorists. This has led, in turn, to calls for the government to regulate internet platforms in order to preserve democratic discourse itself.
But what forms of regulation are constitutional and feasible? The US Constitution’s First Amendment contains very strong free speech protections. While many conservatives have accused Facebook and Google of “censoring” voices on the right, the First Amendment applies only to government restrictions on speech; law and precedent protect the ability of private parties like the internet platforms to moderate their own content. In addition, Section 230 of the 1996 Communications Decency Act shields them from liability for user-posted content and for good-faith moderation decisions, liability that would otherwise deter them from curating content.
The US government, by contrast, faces strong restrictions on its ability to censor content on the internet in the direct way that, say, China does. But the US and other developed democracies have nonetheless regulated speech in less intrusive ways.
This is particularly true with regard to legacy broadcast media, where governments have shaped public discourse through their ability to license broadcast channels, to prohibit certain forms of speech (like terrorist incitement) or to establish public broadcasters with a mandate to provide reliable and politically balanced information.
The original mandate of the Federal Communications Commission (FCC) was not simply to regulate private broadcasters, but to support a broad “public interest.” This evolved into the FCC’s “Fairness Doctrine,” which enjoined TV and radio broadcasters to carry politically balanced coverage and opinion.
The constitutionality of this intrusion into private speech was challenged in the 1969 case Red Lion Broadcasting Co. v. FCC, in which the Supreme Court upheld the commission’s authority to compel a radio station to carry replies to a conservative commentator. The justification for the decision was based on the oligopolistic control over public discourse held by the three major TV networks at the time.
The Red Lion decision did not settle the matter, however, as conservatives continued to contest the Fairness Doctrine. Republican presidents repeatedly vetoed Democratic attempts to turn it into a statute, and the FCC itself rescinded the doctrine in 1987 through an administrative decision.
The rise and fall of the Fairness Doctrine shows how hard it would be to create an internet-age equivalent. There are many parallels between then and now, having to do with scale. Today, Facebook, Google and Twitter host the vast majority of internet speech and are in the same oligopolistic position as the three big TV networks were in the 1960s. Yet it is impossible to imagine today’s FCC articulating a modern equivalent of the Fairness Doctrine.
Our politics are far more polarized; reaching agreement on what constitutes unacceptable speech (for example, the various conspiracy theories offered up by Alex Jones, including that the 2012 school massacre in Newtown, Connecticut, was a sham) would be impossible. A regulatory approach to content moderation is therefore a dead end, not in principle but as a matter of practice.
This is why we need to consider antitrust as an alternative to regulation. The right of private parties to self-regulate content has been jealously protected in the US: We don’t complain that the New York Times refuses to publish Jones because the newspaper market is decentralized and competitive. A decision by Facebook or YouTube not to carry him is much more consequential because of their monopolistic control over internet discourse. Given the power a private company like Facebook wields, it will rarely be seen as legitimate for it to make such decisions.
On the other hand, we would be much less concerned with Facebook’s content moderation decisions if it were simply one of several competitive internet platforms with differing views on what constitutes acceptable speech. This points to the need for a massive rethink of the foundations of antitrust law.
The framework under which regulators and judges today look at antitrust was established during the 1970s and 1980s as a byproduct of the rise of the Chicago School of free market economics. As chronicled in Binyamin Appelbaum’s recent book “The Economists’ Hour,” figures like George Stigler, Aaron Director and Robert Bork launched a sustained critique of overzealous antitrust enforcement. The major part of their case was economic: Antitrust law was being used against companies that had grown large because they were innovative and efficient.
They argued that the only legitimate measure of economic harm caused by large corporations was lower consumer welfare, as measured by prices or quality. And they believed that competition would ultimately discipline even the largest companies. For example, IBM’s fortunes faded not because of government antitrust action, but because of the rise of the personal computer.
The Chicago School critique made a further argument, however: The original framers of the 1890 Sherman Antitrust Act were interested only in the economic impact of large scale, and not in the political effects of monopoly. With consumer welfare the only standard for bringing a government action, it was hard to make a case against companies like Google and Facebook that give away their main products for free.
We are in the midst of a major rethinking of that inherited body of law in light of the changes wrought by digital technology. Economists and legal scholars are beginning to recognize that consumers are hurt by things like lost privacy and foregone innovation, as Facebook and Google sell users’ data and buy up startups that might challenge them.
But the political harms caused by large scale are critical issues as well and ought to be considered in antitrust enforcement. Social media has been weaponized to undermine democracy by deliberately accelerating the flow of bad information, conspiracy theories and slander. Only the internet platforms have the capacity to filter this garbage out of the system. But the government cannot delegate to a single private company (largely controlled by a single individual) the task of deciding what is acceptable political speech. We would worry much less about this problem if Facebook were part of a more decentralized, competitive platform ecosystem.
Remedies will be very difficult to implement: It is the nature of networks to reward scale, and it is not clear how a company like Facebook could be broken up. But we need to recognize that, while digital discourse must be curated by the private companies that host it, such power cannot be exercised safely unless it is dispersed in a competitive marketplace.
Francis Fukuyama is a senior fellow at Stanford University and co-director of its Program on Democracy and the Internet. www.project-syndicate.org