What If Regulating Facebook Fails?
What if nothing works? What if, after years of scholarship and journalism exposing the dominance, abrogations, duplicity, arrogance, and incompetence of Facebook, none of the policy tools we have come to rely on to rein in corporations make any difference at all?
We have to be prepared for just such an outcome.
On Tuesday a federal court tossed out federal and state cases against Facebook for violating US antitrust laws. The judge ruled that, because antitrust law has precise definitions of concepts like “monopoly” and high burdens of proof for actions in restraint of fair competition, the governments had not come close to justifying why these cases should proceed now. After all, the judge pointed out, the US government had raised no objections in 2012 when Facebook bought Instagram, or in 2014 when it bought WhatsApp. Why should it swoop in and object now? The judge was not wrong to rule that way. But we have been very wrong to allow our defenses against corporate power to shrink over the past 40 years.
Within hours of that court decision, Facebook’s market capitalization rose above $1 trillion. It joined only Microsoft, Amazon, Apple, and Alphabet (the company that owns Google) in reaching that valuation, making 17-year-old Facebook the youngest company to do so. And as it fast approaches 3 billion users worldwide, Facebook seems as unstoppable as it does unmanageable, at least in the short term.
Nothing before in human history—with the possible exception of Google—has reached into the lives of 3 billion humans who speak more than 100 languages. Facebook derives its power from that scale and the degree to which we depend on the Blue App, Instagram, and WhatsApp for our social interactions and virtual identities. While its products have proven to be bad for democracy and other living things, they’re remarkably useful and justifiably popular. Facebook may be terrible for us in the aggregate, but it serves each of its nearly 3 billion users well enough individually that the vast majority are unlikely to walk away from it. At the same time, the US government, often charged with the task of protecting us from our own bad habits, doesn't actually have a good set of tools to address the damage that Facebook (or Google, for that matter) does.
So what can we, as citizens, do? What should we demand from our governments?
There are currently three major areas of regulatory intervention in play: antitrust, content regulation, and data rights. For the most part, Facebook’s critics and those pushing for change have focused their attention on the first two. But while those avenues should be studied, considered, and pursued, neither is likely to solve our Facebook problem.
Antitrust is the angle that’s received the most attention recently, but we shouldn’t pin our hopes on it. That’s not only because Republicans have spent 40 years gutting antitrust regulation to benefit their corporate patrons, and not only because an Obama-appointed federal judge tossed the latest cases out of court. It’s because believing that competition could limit Facebook’s behavior, alter its design, or even stunt its growth demands a naive faith in markets and competition. Even in highly competitive markets like retail and groceries, big firms remain big. And big and small firms alike pollute, dodge taxes, exploit and mistreat labor, and distract people from civic and family commitments. That’s why we need other forms of regulation to impose costs or restrictions that can enrich our lives in ways that for-profit ventures cannot. Antitrust is at best a minor annoyance to companies and a minor boon to human beings.
Content regulation, meanwhile, is an even clumsier and less effective tool. Many countries around the world explicitly ban Facebook, Google, and other platforms from distributing content that governments consider noxious. Despite all the flowery words about free speech that flow from Mark Zuckerberg, he and his company have never really championed—or even understood—what free speech means.
Of course, such direct state censorship is not on the table in the US, Canada, or other countries with long-standing liberal traditions. Another approach to content regulation in the US would be to limit the protection from liability that Facebook and other digital service providers currently enjoy for malicious content posted by users. That legal immunity is provided by Section 230 of the Communications Decency Act, and a number of critics have advocated changing it so that the platforms would be more concerned about the nasty things we post to and about one another.
But despite the flurry of recent commentary on Section 230, removing or reforming it is not likely to make Facebook significantly better for us. The provision gets too much credit for enabling the vibrant creativity that fueled the digital revolution, and too much blame for the damage the digital revolution has caused. Although some countries offer their own forms of liability shield, most of the world does not offer one as broad as Section 230—and yet platforms continue to grow and thrive globally. India, for instance, just rolled back its limited protection from liability, yet there is no sign—nor possibility—of Facebook abandoning its largest market and greatest source of growth.
In the US, the rationale for providing this protection was actually that it would create a market incentive for platforms to keep themselves clean. That hasn’t exactly worked out as planned. Companies like Facebook and Google do try, lightly and inconsistently, to police the content on their platforms. But the scale and scope of those platforms undermine any effort to do so in ways subtle and effective enough to protect users.
Perhaps, then, you might assume that making these companies smaller would help. And that’s a reasonable assumption. Significantly smaller services like Reddit, with only about 430 million users, do a much better job these days (at least since 2020, when Reddit abandoned its naive commitment to radical free speech and encouraged strong community content-moderation techniques, banning some major subreddits that encouraged hatred). But it’s too late to reverse Facebook’s global reach. There is no way for one country’s policy to slow its growth.
Unlike the leaders of most other companies, Mark Zuckerberg has treated growth and user engagement—not revenue, profit, or even market capitalization—as his driving metrics since the beginning. Any attempt to limit Facebook’s reach would have to contend with those core values. But we currently lack any methods to do so.
If all the external forces of regulation have failed, some say, maybe we can count on a “greenwashing” operation intended to assure regulators that the company can police itself: the Facebook Oversight Board. But this collection of civic leaders, all of whom seem sincere in their commitment to improving Facebook, is unable to do anything about Facebook’s core problems. The board considers only decisions to remove or retain content and accounts, as if those decisions were the reason Facebook threatens democracy and human rights around the world. The board pays no attention to algorithmic amplification of content. It does not concern itself with linguistic bias or limitations within the company. It does not question the commitment to growth and engagement. It does not examine the problems with Facebook’s commitment to artificial intelligence or virtual reality. The more seriously we take the impotent Oversight Board, the less likely we are to take Facebook as a whole seriously. The Oversight Board is mostly useless, and “self-regulation” is an oxymoron. Yet for some reason, many smart people continue to take it seriously, allowing Facebook itself to structure the public debate and avoid real accountability.
What about us? We are the 3 billion, after all. What if every Facebook user decided to be a better person, to think harder, to know more, to be kinder, more patient, and more tolerant? Well, we’ve been working on improving humanity for at least 2,000 years, and it’s not going that well. There is no reason to believe, even with “media education” or “media literacy” efforts aimed at young people in a few wealthy countries, that we can count on human improvement—especially when Facebook is designed to exploit our tendency to favor the shallow, emotional, and extreme expressions that our better angels eschew.
Facebook was designed for better animals than humans. It was designed for beings that don’t hate, exploit, harass, or terrorize each other—like golden retrievers. But we humans are nasty beasts. So we have to regulate and design our technologies to correct for our weaknesses. The challenge is figuring out how.
First, we must recognize that the threat of Facebook is not in some marginal aspect of its products or even in the nature of the content it distributes. It’s in those core values that Zuckerberg has embedded in every aspect of his company: a commitment to unrelenting growth and engagement. It’s enabled by the pervasive surveillance that Facebook exploits to target advertisements and content.
Mostly, it’s in the overall, deleterious effect of Facebook on our ability to think collectively.
That means we can’t organize a political movement around the mere fact that Donald Trump exploited Facebook to his benefit in 2016, or that he got tossed off the platform in 2021, or even that Facebook contributed directly to the mass expulsion and murder of the Rohingya people in Myanmar. We can’t rally people around the idea that Facebook is dominant and coercive in the online advertising market around the world. We can’t explain the nuances of Section 230 and expect any sort of consensus on what to do about it (or even on whether reforming the law would make a difference to Facebook). None of that is sufficient.
Facebook is dangerous because of the collective impact of 3 billion people being surveilled constantly, then having their social connections, cultural stimuli, and political awareness managed by predictive algorithms that are biased toward constant, increasing, immersive engagement. The problem is not that some crank or president is popular on Facebook in one corner of the world. The problem with Facebook is Facebook.
Facebook is likely to be this powerful, perhaps even more powerful, for many decades. So while we strive to live better with it (and with each other), we must all spend the next few years imagining a more radical reform program. We must strike at the root of Facebook—and, while we are at it, Google. More specifically, there is one recent regulatory intervention, modest though it is, that could serve as a good first step.
In 2018 the European Union began insisting that all companies that collect data respect certain basic rights of citizens. The resulting General Data Protection Regulation grants users some autonomy over the data they generate, and it insists on minimal transparency when that data is used. While enforcement has been spotty, and the most visible sign of the GDPR has been the extra warnings we must click through to accept terms, the law offers some potential to limit the power of big data vacuums like Facebook and Google. It should be studied closely, strengthened, and spread around the world. If the US Congress—and the parliaments of Canada, Australia, and India—would take citizens’ data rights more seriously than they do content regulation, there might be some hope.
Beyond the GDPR, an even more radical and useful approach would be to throttle Facebook’s (or any company’s) ability to track everything we do and say, and limit the ways it can use our data to influence our social connections and political activities. We could limit the reach and power of Facebook without infringing speech rights. We could make Facebook matter less.
Imagine if we kept our focus on how Facebook actually works and why it’s as rich and powerful as it is. If we did that, instead of letting our attention flutter to the latest example of bad content flowing across the platform and reaching some small fraction of users, we might have a chance. As Marshall McLuhan taught us more than half a century ago, it’s the medium, not the message, that ultimately matters.