Internet Polarization: Reform and Section 230
I ask whether the internet's largest barrier to solving the problem of online polarization could itself be leveraged as a solution.
Polarization seems to be at the forefront of political thought. Since the early 2010s, many people have developed the intuition that Americans are increasingly entrenched in one set of political opinions or another. While the causes and effects of this drift toward both left and right extremism in our country are hotly debated, a common culprit in many of our narratives is social media.
As an emblem of the ethos of the digital age, social media was said to represent a way of unifying people with disparate ideas, thoughts, and visions of the world around a common purpose. Over time, though, we have found that social media has in many ways done the opposite. Echo chambers on Tumblr giving rise to left extremism, and similar dynamics producing QAnon out of 8Chan, have shown that our internet has not done what we thought it would.
In a recent piece for The Hungarian Conservative’s print edition on Conservatism and Innovation, I wrote on the specific role that recommendation algorithms play in making our world a more polarized place. My theory was that by shortening the internet’s path lengths, thereby bringing people together, and by centralizing it, thereby removing our ability to sort into fractured cliques, the internet has exposed us to more information but fewer ideas. In a sense, although it is true that people are engaging in more discourse than ever online, the diversity of things over which they disagree is decreasing rapidly, meaning that fewer and fewer novel discussions are happening on the internet. This, in turn, has potentially helped give rise to polarization by forcing us to aggregate complex positions into as few parts as possible, leading us toward fewer and fewer camps.
This is not a permanent state: the pre-2010s internet reflects a very different structure from the post-2010s internet (an argument I think was correctly intuited most recently by Jon Haidt). And while there have been many ideas for improving our internet by reforming our recommendation algorithms (I proposed several in a recent roundtable discussion for Quillette), the problem is that social media platforms don’t have to reform their recommendation algorithms. They don’t have to improve the health of the internet, and we can’t make them do so. To improve the internet, we simply have to hope that some other innovator comes along, builds a social media website with an equally addicting recommendation system that is somehow less polarizing, and then outcompetes Facebook, Twitter, and TikTok.
This expectation is totally unrealistic. Others have recognized this “solution” as anything but, yet those who wish to reform the system face a critical impediment. That impediment is known as Section 230. Stated simply, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Section 230 broadly prevents social media websites from being held culpable for damages resulting from content posted on their platforms. As such, this short clause is commonly referred to as “The 26 Words That Created the Internet.”
In this post, I’m going to describe my vision of what the problem is with polarization and the internet, how we can fix it using our current recommendation algorithms, and how we can leverage our primary barrier to reform (Section 230) as a tool of reform itself in this space.
Part I: Decreasing Polarization by Increasing Polarization
Polarization has arguably increased in the United States. I won’t cover all the literature arguing that it has, but I do think a couple of clarifications are necessary to start getting at the root of what we are supposed to do about it. First and foremost, we might ask what it is we actually mean by polarization. In a colloquial sense, the image of polarization might be of people falling on one side of a barrier or another rather than meeting in the middle. While this is right, and it is one form of polarization, it is also one-dimensional. For our purposes here, and broadly speaking, we’ll use the term to refer to a sort of global extremism, such that in a two-dimensional space people might sort into the corners of four extreme categories, in a three-dimensional space into the corners of eight categories, in a four-dimensional space into the corners of sixteen categories, and so on. In the United States, though, our polarization is much simpler. It is, seemingly, one-dimensional: left and right.
While this polarization has increased, the role that the internet has played in this increase is hotly debated; and in some countries, in fact, internet access has been shown to have decreased polarization. Nevertheless, the internet is an obvious factor to point to, in part because it is the place where polarization is the most obvious, and in any case, while we can argue over whether or not the internet has caused our current politically polarized climate, we can definitely say that it has not helped and actively is not helping. But what is it about the internet, and social media in particular, that causes this issue to be so obvious and so exacerbated?
In her book Frenemies, political psychologist Jaime Settle makes the compelling argument that, unlike the real world, the internet is a world of weak ties. In sociological terms, your social world tends to be split between your strong ties (friends, including internet friends, and family with whom you frequently interact) and weak ties (acquaintances, friends of friends, distant family, and common strangers with whom you only intermittently interact). We tend, more often than not, to agree more with our strong ties than with our weak ties, and in the real world, that is inconsequential. It is not often that we ask strangers on the street, friends of friends, or our distant cousins their political opinions. But on the internet, such information is nevertheless forced upon us. Not only that, but our recommendation algorithms boost the content that keeps us online - without intentionally seeking it out, we receive valuable social information about our weak ties, form coalitions with and against them, and turn the world of simple social gossip into one of the most interactive games of informational arbitrage the world has ever seen. This world of weak ties and exposure to information we never actually sought out is, in essence, what has made the online world more polarized.
The wrinkle, as many have argued, is that polarization may have always been with us, and that some polarization may even be good. I tend to align with this view, and depart by saying that perhaps the problem with polarization is that it’s too broad and we actually don’t have enough of it. Instead, as I argued in my recent piece, what we need to do is increase polarization over specific issues and diversify the amount of polarization overall. Rather than splitting into only one or two camps, we need to split into an increasing number of camps and allow polarization and extremism to take place within those. Rather than combining all the diverse issues we could be debating into a single package, we need to have these debates apart from one another. In the spirit of Mao, what the internet needs to do is, “Let a hundred flowers bloom; let a hundred schools of thought contend.”
In a 2011 paper, Flache & Macy examined the role that network structure - in the form of strong and weak ties - plays in creating cultural polarization. In their model, individuals are placed in social networks and hold opinions between -1 and 1 across K issues. Individuals are tied to other individuals with a strength related to how similar their opinions are: if two individuals are similar, this weight is positive, and if they are different, it is negative. As individuals in this model interact, they change their opinions over time based on those around them, becoming more similar to those with whom they share positive ties and less similar to those with whom they share negative ties.
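The core dynamic can be sketched in a few lines of Python. This is a simplified toy version, not a line-by-line reproduction of Flache & Macy's specification: the exact weight formula and the update rate here are my assumptions.

```python
import numpy as np

def weights(opinions):
    # Tie strength between every pair of agents: 1 minus their mean
    # absolute opinion distance. Similar agents get positive weights,
    # dissimilar agents negative ones (range [-1, 1] when opinions
    # live in [-1, 1]).
    dist = np.abs(opinions[:, None, :] - opinions[None, :, :]).mean(axis=2)
    return 1.0 - dist

def step(opinions, adjacency, rate=0.5):
    # One round of updating: each agent moves toward neighbors it
    # agrees with (positive weight) and away from neighbors it
    # disagrees with (negative weight).
    w = weights(opinions) * adjacency  # zero out non-neighbors
    pull = w @ opinions - w.sum(axis=1, keepdims=True) * opinions
    degree = np.maximum(adjacency.sum(axis=1, keepdims=True), 1)
    return np.clip(opinions + rate * pull / degree, -1.0, 1.0)
```

Iterating `step` on a single clique drives its members toward consensus, which is the within-clique homogenization the paper describes.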
The authors then try two manipulations. In the first, they evolve independent groups (called cliques) in a network known as the disconnected caveman graph. After these cliques evolve to a steady state, they add random long-range ties to other cliques. In the second, they evolve semi-independent cliques in a network known as the connected caveman graph, and then similarly add long-range ties to distant cliques (figure below). What they find in both cases is that while individuals within cliques grow more similar to each other, overall polarization in the population before adding links is not very high. Yet when these long-distance weak ties are added, polarization dramatically increases, doubling in many cases.
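A minimal sketch of that setup, assuming dense within-clique ties and randomly placed cross-clique weak ties (the function name and parameters are mine, not the paper's):

```python
import numpy as np

def caveman_adjacency(n_cliques, clique_size, n_long_ties=0, seed=None):
    # Disconnected caveman graph: fully connected cliques with no
    # edges between them, plus an optional number of random
    # long-range ties bridging different cliques.
    rng = np.random.default_rng(seed)
    n = n_cliques * clique_size
    adj = np.zeros((n, n))
    for c in range(n_cliques):
        lo, hi = c * clique_size, (c + 1) * clique_size
        adj[lo:hi, lo:hi] = 1.0
    np.fill_diagonal(adj, 0.0)
    added = 0
    while added < n_long_ties:
        i, j = rng.integers(n, size=2)
        if i // clique_size != j // clique_size and adj[i, j] == 0.0:
            adj[i, j] = adj[j, i] = 1.0  # undirected weak tie
            added += 1
    return adj
```

Running opinion dynamics to a steady state on `caveman_adjacency(10, 5)` and then rerunning with `n_long_ties` set above zero mirrors the paper's two-phase design: first local convergence, then the shock of long-distance weak ties.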
What this model seemingly shows us is that the weak ties effect is real, at least in principle. But it doesn’t stop there. Flache and Macy then alter the parameter K, the number of opinions individuals may hold. Critically, they find that as K - opinion complexity - increases, the amount of polarization goes down, dramatically.
Years later, my advisor and his student Matt Turner published an extension of this model in which they also altered K, the number of opinions in the opinion space the agents could take on (Turner & Smaldino, 2018). While they recovered the same findings as Flache & Macy, they further examined the role that the size of the opinion space plays in reducing polarization, doing so by artificially placing agent opinions in the corners of K-dimensional hypercubes (representing the multi-dimensional polarization I discussed above). What they found is that this placement (the black line below) “reduces” polarization to the same extent that adding opinions in the traditional caveman treatment does. In other words, the population is not merely hyper-polarized; it is, to be scientifically crude, hyper-hyper-polarized.
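This effect is easy to see numerically with the standard polarization measure in this literature, the variance of pairwise opinion distances. In the toy demonstration below (my own sketch, not the papers' code), agents pinned to random corners of a K-dimensional hypercube - each maximally extreme on every issue - register high polarization when K = 1, but as K grows, pairwise distances concentrate around their mean and the measured polarization collapses.

```python
import numpy as np

def polarization(opinions):
    # Variance of pairwise mean absolute opinion distances: high when
    # some pairs fully agree while others maximally disagree, low when
    # all pairs are roughly equidistant.
    dist = np.abs(opinions[:, None, :] - opinions[None, :, :]).mean(axis=2)
    iu = np.triu_indices(len(opinions), k=1)
    return dist[iu].var()

rng = np.random.default_rng(42)
n = 60
for k in (1, 2, 5, 20):
    corners = rng.choice([-1.0, 1.0], size=(n, k))  # hypercube corners
    print(k, round(polarization(corners), 3))
```

Every agent here is as extreme as possible on every issue, yet the measured polarization falls as the number of issues grows - the "hyper-hyper-polarized" population looks, by this metric, barely polarized at all.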
This is a version of polarization we might find worth bearing, and one we might even endorse. A set of people arguing over whether the new Lord of the Rings show sucks in one corner of the internet seems healthier to me than forcing all of us to bear the argument that the reason it sucks is that it’s too woke, or that the reason you don’t like it is that you’re racist. This is the sort of thing that should be a niche hobby anyway, but I won’t get into that (again, in the Hungarian piece I make a whole argument about how the internet has killed online snobs and the Comic Book Guy type, and how that’s a bad thing).
In sum, if we take the arguments from some of these network models seriously, there are a few means for serious reform of our internet architectures. We can structurally fracture the place, make people interact less, and get rid of those weak ties, which would be great; or we can diversify the conversations and get rid of the common conversation which drives collective attention, which would also be great. The problem is that neither of these things is profitable. The first solution comes at the cost of losing users, the second at the cost of driving engagement. Companies aren’t going to reform in either direction on their own, and because of Section 230, we can’t make them.
But I wonder to what extent that’s true.
Part II: Reforming Through Section 230
As far as we can tell, nothing about the way our recommendation algorithms are built is illegal. In the same sense that we can’t sue McDonald’s and the corn industry for America’s ongoing obesity epidemic and force them to sell healthy food, we are forced to sort of dejectedly resign ourselves to a “the market will sort it out” perspective when it comes to the damage the internet is doing to our societies. I hate this. I hate this largely because it presumes that what is good for our collective intelligence and for healing polarization is going to be chosen by consumers - that the concerns of 1) consumers, 2) the cultural unit the consumers are in, and 3) our recommendation algorithms are aligned. But they simply aren’t.
Why can’t we sue websites for the damage they have done, or legislate them into doing the right thing? As it is written, Section 230 distinguishes social media websites as distributors of information rather than publishers of information. What this means is that if you are attacked by someone on Facebook, if a Nigerian prince scams you via email, or if someone uploads porn of you without your consent on PornHub, these platforms, as distributors, are not liable. The publishers are. This is akin to attempting to sue Barnes & Noble for selling libelous content instead of suing the author. It simply wouldn’t work. This distinction between distributors and publishers has been a point of contention for S230. In some cases, websites have been successfully sued for content posted, specifically when it has been found that the website itself was taking on an editorial role via its active curation; it is in these cases that the websites are treated as publishers.
So what are our social media websites: distributors or editors? They seem to be a mix of neither and both. If, when you went into a Barnes & Noble, the managerial staff immediately looked you up and down, said “28, male, glasses, dresses conservatively,” and shuffled you to the Science & Nature section without the opportunity to look at anything else, we would not be inclined to simply call Barnes & Noble a distributor; we would say they provided a curatorial experience. This is the sort of experience we are exposed to online, and while social media websites argue that you can seek out other information, the fact of the matter is that you are only shown about 4-5% of the content you opt into, and your subsequent behavior stumbling through this maze to find additional content only makes navigating it in the future even more complex.
It seems to me that the regulatory state of social media websites having their cake and eating it too has gone on for too long. S230 was written before we knew about the power of these algorithms, and when the online market did look more or less like a Barnes & Noble. These websites are clearly taking on an editorial role, whether or not it is trained on the user’s behavior. While it is argued that we opt in to this system, I don’t think that’s true. Given there are no alternatives (besides, as I’ve noted, building our own internet), there is very little that is consensual about this arrangement.
My suggestion, then, is this: make these platforms offer this arrangement as opt-in. Elon Musk recently spoke on open-sourcing Twitter’s algorithm, which is a step in the right direction. Allow users or groups of users to alter their recommendation algorithms. Similar to a connected caveman version of Discord servers, I could see a world where, when someone signs up for Facebook, they are able to choose their experience and enroll in any of a number of algorithmic options. If they don’t want one, they can use the super lame and polarizing one we are using now.
Obviously I’m not a lawyer or a legislator, but at some point we need to be able to get past the despair we face every time we look at S230. I’m also not pretending this is a simple fix, but it’s better than nothing. What I would ultimately like to see is a system where both the regulatory state and the social media websites can have their cakes and eat them, too. With the rise of competing algorithms, we receive a number of additional benefits: transparency on what you are being targeted for; moderation and information censorship which takes place at a transparent, broad group level rather than an opaque individual level; and even more effective ways of targeting advertisements toward broader swathes of people who have explicit buy-in on the things they are being sold.
Besides that, I don’t see too many options besides getting rid of social media entirely. Which… given we probably have enough weak ties anyway, might be better for getting back to the things that are important to us instead of worrying about what those people over there are doing.
Flache, A., & Macy, M. W. (2011). Small worlds and cultural polarization. The Journal of Mathematical Sociology, 35(1-3), 146-176.
Settle, J. E. (2018). Frenemies: How social media polarizes America. Cambridge University Press.
Turner, M. A., & Smaldino, P. E. (2018). Paths to polarization: How extreme views, miscommunication, and random chance drive opinion dynamics. Complexity, 2018.