Platforms, Polarization and Democracy

by Henry Farrell on February 21, 2024

So Cosma Shalizi and I have an article (messy pre-print) coming out Real Soon in Communications of the ACM on democracy, polarization and social media. And Nate Matias, who I’m friends with, has forceful objections. I’ve promised him a response – which is below – but am doing it as a blogpost, since I think that the disagreement could be turned into something more broadly useful.

Cosma and I wrote the article to push back against one version of the common claim that we can blame everything that is wrong and toxic with social media (and by extension, American democracy – this is a U.S.-centric piece) on engagement-maximizing algorithms and their cousins. Specifically, we don’t think that we can fully blame these algorithms for the kinds of belief polarization that we see online: people’s willingness, for example, to concoct elaborate justifications for their belief that Trump Really Won in 2020.

We do this by engaging in a kind of thought experiment. Would we see similar polarization of beliefs if we lived in a world where Facebook, Twitter et al. hadn’t started using these algorithms after 2012 or so? Our rough answer is that plausibly, yes: we would see lots of polarization. Following Mercier and Sperber, we assume that people are motivated reasoners – they more often look for evidence to support what they want to believe than to challenge their assumptions. And all they need to do this is a combination of simple search (Google like it used to be) and social media 2.0.

Search enables them to find evidence that will support their priors, while social media enables them to link to, comment, elaborate on and otherwise amplify this evidence. Because of how simple search works (it treats web links and activity as proxies for quality), the more that people link to and comment on stuff on social media, the easier it is to find, and the easier it is to find, the more that they will link to it and comment on it. We (for values of ‘we’ that actually mean ‘Cosma’) construct a simple model, which suggests that this feedback loop leads to a world where there are a few vast glob-like communities of mutually reinforcing beliefs surrounded by a myriad of smaller, and less consequential communities. In statistical terms, the sizes of different communities fall along a rough power law distribution.
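
To give a flavor of that feedback loop, here is a minimal sketch in Python. It is not the model from the paper, just a bare-bones “rich get richer” process: each new post either founds a new belief community (with a small probability, here called p_new) or joins an existing community with probability proportional to that community’s current size, which stands in for its visibility to search. The function name and parameters are illustrative assumptions; processes of this general kind are well known to produce heavy-tailed, roughly power-law size distributions.

```python
import random

def simulate_communities(n_posts=100_000, p_new=0.01, seed=0):
    """Toy 'rich get richer' sketch of the search/social-media feedback loop."""
    rng = random.Random(seed)
    sizes = [1]        # current size of each belief community
    membership = [0]   # community index of every post so far, used for sampling
    for _ in range(n_posts):
        if rng.random() < p_new:
            # A brand-new community, initially obscure.
            sizes.append(1)
            membership.append(len(sizes) - 1)
        else:
            # The feedback loop: pick an existing post uniformly at random and
            # join its community. Sampling posts uniformly means communities are
            # chosen with probability proportional to their current size.
            c = membership[rng.randrange(len(membership))]
            sizes[c] += 1
            membership.append(c)
    return sorted(sizes, reverse=True)

sizes = simulate_communities()
top = max(1, len(sizes) // 100)   # the top 1% of communities
print("number of communities:", len(sizes))
print("five largest:", sizes[:5])
print("share of all posts in the top 1% of communities:",
      round(sum(sizes[:top]) / sum(sizes), 3))
```

In runs like this, a handful of very large communities typically account for a big share of all posts while most communities stay tiny: a few vast globs surrounded by a myriad of smaller ones.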

Nate – who is no more a fan of the algorithmic polarization consensus than we are – takes exception to what we say. First – he doesn’t like that we use the word ‘toxicity,’ because it has no agreed-on definition, and is sometimes used as “a way to obfuscate and sidestep precision in order to avoid hard debates about democratic governance.” Second, he doesn’t like what he takes to be our “assumption that an Internet with many smaller groups would have greater toxicity and a world with fewer, larger groups would be less toxic,” and that “[s]ince [our] simulation can imagine an Internet with many small groups, [we] conclude that a toxic internet does not depend on social media algorithms.” Third, he thinks, following a recent article by Dan Kreiss and Shannon McGregor, that arguments about polarization and group size, too, are ways to “avoid talking about racism, sexism, and inequality.” Finally, he thinks that our model is misleading – while it is “compellingly simple,” it needs to look at “more empirical work beyond just famous papers published in Science and Nature,” and should ideally be grounded in community science.

It’s not going to surprise anyone that we, in turn, disagree with most of these criticisms (the exception is that we could certainly have been more specific in how we used the term “toxicity,” and will do what we can to mitigate this at this point in the production process). And perhaps there’s a more interesting and productive disagreement here than you might think from academics having at it online, even in a reasonably friendly way, about what each said and meant. But to get there, we likely need to clear up some misunderstandings.

The most straightforward one is that Nate isn’t quite right about what our model says and what we argue. We don’t actually think that a world with many smaller groups would be more toxic, and a world with fewer, bigger ones less so. Our argument is just the opposite of that. We think that even just with simple search and social media, the Internet creates a world in which deranged beliefs can scale more easily than they used to. Before the Internet, it was harder for people to find and glom onto mistaken beliefs that pushed against the common wisdom. This meant either that they were likely to go with that wisdom (which of course was itself usually dubious) or invent their own idiosyncratic dubious alternatives, pushing out in a myriad of different directions, which to some extent canceled each other out. In our counterfactual, even simple Internet technologies of search and Web 2.0 would allow them to construct their own alternative realities, collectively, and at scale.

That counterfactual isn’t necessarily worse than the recent past, where there was high public consensus around ideas and beliefs that were often pernicious. Also, it isn’t obviously better than the world that we actually live in, which is the comparison that we are actually looking to make. Our simple model suggests that both with post-2012 algorithms and without, we end up with much the same outcome – a world where there are big agglomerations of people with fundamentally discordant political beliefs.

So what does this initially counterintuitive comparison get us? I recently read a book by the philosopher Cailin O’Connor, who independently adopted a very similar mode of counterfactual argument (she gives Liam Kofi Bright partial credit). O’Connor is interested in figuring out the causes of racism and gender discrimination. As she points out, many people attribute racism and gender discrimination to psychological biases, such as stereotype threat. She doesn’t want to discount these explanations. But she wants to investigate whether we would still see large scale gender and racial discrimination in a world where human beings weren’t biased in these ways.

Obviously, O’Connor can’t directly observe a counterfactual world in which individuals were perfectly rational and not subject to psychological bias. So, like us, she constructs a simple model, in which people are rational. She shows that under plausible assumptions and conditions:

A modeling perspective can show us that the conditions necessary to generate pernicious inequity in human societies are extremely minimal. Under these minimal conditions, cultural evolutionary pathways will robustly march towards inequitable systems. These models do not prove that real world systems of inequity have, in fact, evolved via these simple cultural evolutionary pathways, but they tell us that they could. In particular, they show that even if many of the most pernicious psychological facts about humans are removed or mitigated, inequitable conventions of the second sort are still expected to emerge.

In other words: we might see racial and gender bias continue at the system level (e.g. Black people and women consistently being discriminated against), even if we somehow, magically, got rid of all the psychological biases we have at the individual level about Black people and women. This is a really valuable finding. Indeed, my only significant objection to O’Connor’s book is that she doesn’t make nearly as much as she might of it. Economists like Gary Becker and Milton Friedman were extremely fond of arguing that racism and sexism were irrational and would disappear if only markets were allowed to work their magic. O’Connor uses economic models to demonstrate the contrary: why racism and sexism may continue to thrive under conditions of rational exchange, and I would love it if she were just a little blunter in sticking it to the Becker/Friedman complex. Bright, Gabriel, O’Connor and Taiwo have used similar modeling techniques to build a model of enduring racial capitalism: perhaps this will provide a platform from which some lively and useful future polemic will be launched.
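
To make the style of argument concrete, here is a minimal sketch, in Python, of the kind of model this literature works with: a Nash demand game between two distinguishable groups, evolving under two-population replicator dynamics. The demands, payoffs and parameters below are my own simplifications rather than O’Connor’s actual setup, and the code is illustrative only. Nobody in it is biased; strategies spread purely according to how well they pay; and yet a substantial fraction of runs can lock into conventions where one group habitually demands more and the other concedes.

```python
import random

DEMANDS = [4, 5, 6]   # possible claims on a resource worth 10; (5, 5) is the fair split

def payoff(my_demand, their_demand):
    # You get what you demand if the two demands are compatible, nothing otherwise.
    return my_demand if my_demand + their_demand <= 10 else 0

def random_mixture(rng):
    # A random starting distribution over the three demands; no one is "biased".
    weights = [rng.random() + 0.01 for _ in DEMANDS]
    total = sum(weights)
    return [w / total for w in weights]

def replicator_step(x, y):
    # One discrete replicator update for the group with strategy mixture x, playing
    # against a group with mixture y: strategies grow in proportion to expected payoff.
    fitness = [sum(y[j] * payoff(DEMANDS[i], DEMANDS[j]) for j in range(3))
               for i in range(3)]
    avg = sum(x[i] * fitness[i] for i in range(3))
    return [x[i] * fitness[i] / avg for i in range(3)]

def run_trial(rng, steps=1000):
    x, y = random_mixture(rng), random_mixture(rng)
    for _ in range(steps):
        x, y = replicator_step(x, y), replicator_step(y, x)
    modal = lambda m: DEMANDS[max(range(3), key=lambda i: m[i])]
    return modal(x), modal(y)   # the demand each group conventionally makes at the end

rng = random.Random(1)
outcomes = {}
for _ in range(200):
    convention = run_trial(rng)
    outcomes[convention] = outcomes.get(convention, 0) + 1
print(outcomes)   # counts per final convention; inequitable ones like (6, 4) typically show up alongside (5, 5)
```

Which group ends up on the winning side of an unequal convention is arbitrary, but once established the convention is self-reinforcing, which is the sense in which, in the quote above, “cultural evolutionary pathways will robustly march towards inequitable systems.”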

I’m not saying that what Cosma and I have done is nearly as valuable, but its approach is very similar. Like O’Connor, we use our model not to represent reality as it is, but to build a counterfactual, suggesting that if we had not invented the panoply of modern social media algorithms in the first place, we would likely have ended up in much the same place. This does not say that these algorithms didn’t contribute, any more than O’Connor’s arguments absolve the psychological bases of racism and sexism. It does strongly imply (assuming that our model is not utterly mistaken) that the problems would still exist even if the algorithms did not.

But like all theoretical frameworks, our counterfactual has its implied politics – and here is where I think there is scope for a more useful and specific disagreement. As mentioned, Nate links to a very new piece by Kreiss and McGregor, which argues that much of the literature on polarization is not only misconstrued but actively misleading. In their words, “Our foundational claim is that polarization might not be bad for democracy—it might in fact be a necessary outgrowth of efforts to achieve democracy.”

Kreiss and McGregor go on to detail the various ways in which the literature on polarization and platforms harks back to an imagined pre-polarization America (which enjoyed an apparent consensus only because Black people and others who disagreed were suppressed). They argue that we should pay attention to inequality rather than polarization when we look to assess the health of democracy. More bluntly: we should understand that much of today’s apparent polarization is the result of people’s efforts to redress the inequality that has been part and parcel of America’s purportedly democratic system for decades. The struggle to actively achieve American democracy inevitably involves contention – and not all sides are equal. Those who are pressing for more equality and justice – especially but not exclusively racial equality and justice – have a very different status than those who are trying to defend unjust relations. And the focus on polarization tends to push those important questions to the sidelines.

So Cosma and I largely agree both with this diagnosis of the literature and with the understanding of democracy that propels it. We don’t talk about this in the article, except indirectly in side comments – we were bound by both sharp word-count constraints and a fifteen-citation limit (as an aside, we don’t just cite to “famous” Nature and Science pieces – we cite just one article from either Nature or Science and two more from the Nature/Science Extended Universe ™). We used one of our precious citations to point to a previous article where we set out our account of democracy, which (a) emphasizes that democracy involves rowdy struggle, and (b) stresses that “a commitment to democratic improvements is a commitment to making power relations more equal.” If we’d had more room (extensive self-citation is especially egregious when you’re cramped within the confines of a tiny bibliography) we’d likely have cited arguments made elsewhere, e.g. about the value of democratic instability in tearing up old racial and gender norms, and how “strong gatekeeping” media systems in the pre-Internet era subordinated Black voices and perpetuated myths about Black people.

In other words, our arguments start from a place that is broadly located within the equality-centric understanding of democracy that Kreiss and McGregor are looking for (of course: there may be aspects of our understanding that they and others might still very reasonably dispute). More broadly speaking, one of our major intellectual projects, with Danielle Allen, is to try to build a model of democracy, explaining how its central commitment to equality provides it with dynamical advantages (this relates in important ways to the O’Connor book described earlier – it also relates in different ways to Danielle’s fantastic recent book).

All this said, our article is very explicitly a piece about polarization and democratic stability. Its underlying intuition is that if beliefs become too polarized, democracy will become unstable. And that is not an inherently stupid or biased argument. As Kreiss and McGregor summarize a broader literature:

at some fundamental level the groups that exist within a pluralistic society must accept one another as legitimate, even though they may have opposing values, interests, and ends. Groups must tolerate one another, accepting each other’s right to exist and to advance their interests in private and public spheres. This tolerance is essential given that groups often define themselves through drawing boundaries with others (Smith, 2003). It is often socially and politically powerful to create and draw hard edges around a shared identity, conjure a clear opposition, and define competing interests, especially through media spheres that support building, maintaining, and contesting political power (Squires, 2002). As such, some level of polarization is an endemic feature of social and political life. Polarization becomes problematic, however, when it is so extreme as to erode the legitimacy of opposing groups, the tolerance that democratic co-existence is premised upon and faith among partisans that the other side will continue to engage in free and fair elections (Haggard and Kaufman, 2021),

I think – though I am not entirely certain – that Kreiss and McGregor endorse this understanding. Their article is explicitly a “provocation” and a polemic against the tendencies of the polarization literature that they rightly detest. Still, they acknowledge that their critique only applies to “some” of the literature, that polarization can be “dangerous,” and that some of the people who worry about polarization (including my Hopkins colleague Lily Mason) take an approach that doesn’t suffer from these flaws.

So what are the specific risks of the belief polarization that we talk about? Again, there’s writing elsewhere that we weren’t able to cite to, which emphasizes that even under a minimalist account of democracy, we need shared (and justified) beliefs in the electoral process, and in the willingness of governing parties and officials to give up office when they lose an election. That is a foundation of democratic stability, even if we embrace contention and equality as core elements of democracy.

As Kreiss and McGregor say, we should not embrace concerns about polarization “at all costs.” But we shouldn’t completely exclude these concerns either. Some opponents of polarization seem to think that to heal democracy, we all just need to start liking each other. That isn’t a particularly serious claim. But the claim that we need to figure out ways to live together in some minimal consensus, however grudging, is, I think, one that ought to be taken very seriously indeed. Hence our argument, which stems from the claim that this consensus is democratically possible even under a realistic and moderately pessimistic account of human psychology (here, we implicitly push back against some prominent recent anti-democratic arguments). If those psychological microfoundations are right, we even have some general clues as to the foundations of a better and more stable democracy.

So this is the disagreement that I think is worth taking up. If Kreiss, McGregor, and for that matter Nate, don’t think that polarization is a problem at all, then it would be good to know this. But I really don’t think that they believe this. If, alternatively, they think that polarization is a problem, but one that has been misused by people who idealize a largely imaginary peaceful American past, then there isn’t any disagreement in principle between them, Cosma, and myself. Of course, there may be, and almost certainly are, practical disagreements, and articulating these disagreements and thrashing them out would be potentially very useful.

More broadly – I think we are all committed to an understanding of democracy that is both (a) more just and egalitarian, and (b) stable against urgent threats, which do include polarization. But figuring out how to reconcile justice with democratic stability is extremely difficult, both in the particular and the general. And it requires the bringing together of different kinds of knowledge. When Nate suggests that the framework that Cosma and I use is empirically unfounded, he’s wrong. We’re building on a large body of research in human psychology. But if he were to make the (mildly modified) statement that our framework is severely empirically limited, he would be absolutely right. It sketches out the landscape of one important problem, but doesn’t say much at all about how to solve it, or reconcile possible solutions with other major problems that we face.  I think that the “community science” that Nate favors is one enormously important – even crucial – source of ideas about how to do this, as part of a broader “translational” approach to building democracy, which helps address the deficiencies of big theories, but this post is already very long, so I’ll leave it there.

{ 23 comments }

1

Mike Huben 02.21.24 at 12:05 pm

So let me get this straight. You are addressing the claim that polarization is some sort of spontaneous order affected by or a result of AI algorithms of social media.

But how can you ignore the thumb on the scale of billions of dollars devoted to promoting right-wing wedge issues? A methodology that was effective before social media?

I’ve been arguing with creationists and libertarians for at least 50 years. Their epistemic bubbles were the same then as now. I routinely used to ask libertarians if they had read any books on liberalism instead of libertarianism, and they NEVER could name one for me: their bubbles were that well sealed. The methodology was as simple as “all non-believers are deluded/liars/demons.”

I could accept that social media give greater penetration to target audiences. I could accept that social media give earlier penetration to audiences, recruiting children before they have defenses against bad ideas. But that just implies social media are tools of the very wealthy promoters of these ideas with their political purposes. Tools alongside traditional communications media.

2

Doug 02.21.24 at 2:10 pm

I think in the first few paragraphs (still reading the whole) I have correctly separated the uses of “we” that refer to you and Cosma, and the uses of “we” that are a more general, unmarked sort of public. But I would like to be sure, because that seems important to the arguments. I would also like to know more about who you think is in the second sort of “we,” though I realize that’s a lot to ask.

3

TM 02.21.24 at 3:20 pm

Henry: “Because of how simple search works (it treats web links and activity as proxies for quality), the more that people link to and comment on stuff on social media, the easier it is to find, and the easier it is to find, the more that they will link to it and comment on it.”

Wouldn’t you, if you use simple search unenhanced by bubble algorithms, be as likely to find information contradicting your view as supporting it? Or even more likely, if for example you happen to be a creationist looking online for information about evolution? You have to already restrict your search to creationist web sites to not be confronted with legitimate sources. Or your views have to be so hardened that no amount of legitimate information will affect you. But in that case, your model cannot explain how we got to this point.

4

steven t johnson 02.21.24 at 4:34 pm

If I remember correctly, Procopius complained about ordinary people arguing abstruse theology. In the Puritan Revolution/English Civil War, suddenly seemingly from nowhere there was a multitude of mushroom churches with wildly variant ideology. And I seem to remember John Reed describing vividly how people talked, talked, talked all manner of ideas. (Though I may really remember Warren Beatty’s dramatization of it?)

The point is that when people engage en masse in debate, it is because there is a crisis situation (yes, the implied claim that the Nika “riots” would have been deemed an attempted revolution in later centuries is not the conventional opinion) that impels people to clarify their thinking as a guide to action. In so-called normal times (the ideal of the OP, I suspect, though I hope I am being inadvertently ungenerous), they don’t.

I think polarization in times of struggle is impelled by the vicissitudes of the struggle, in the first place by the resistance of the old regime. Even more importantly, it is polarization over actual policy. And polarization in normal times is cultivated as part of interelite struggle. This polarization typically, I think, is largely symbolic and has only a very indirect relationship to actual policy struggles. What policies are at issue is generally the choice between highly reactionary policies and a very limited reformism.

Polarization is not necessarily a bad thing. It is a fairly strong symptom of a breakdown in the broad consensus among the rulers. But is them being united against the rest of us really a Good Thing?

5

MisterMr 02.21.24 at 4:58 pm

@TM 3

For example, if I’m a leftie and therefore read CT, I’m more likely to find links to leftish blogs/people, who will also link to leftish blogs/people, etc.

When I google stuff, I’m more likely to use words that are coded left etc..

When I went to uni like 30 years ago, I had an exam about sociology of mass media, and I vaguely remember some research done by, IIRC, Lazarsfeld, in the 50s or 60s.
He tried to see whether, in terms of political communication, more exaggerated and emotion-based communication works better, or more moderate and rational-sounding communication.
It turned out that it depends on the audience: if the audience is already partisan, emotional and exaggerated communication works better, but if the audience is undecided (or even from the opposite side), rational and moderate communication works better.

But there is a catch: a lot of political communication was made by “opinion leaders”, that is, by people who were respected in their communities, were politically motivated, and had a strong impact on their community. These people are usually already committed to their side, so highly emotional communication has a second way to have an impact, through the “opinion leaders” channel.

If this was true in the 50s or 60s, it is likely to be truer today with social media, with people going to blogs or forums of like-minded people.

That said, I believe the polarisation is due not so much to the “echo chamber” effect as to other social dynamics, though we already spoke of the role of economic inequality, status fear etc. elsewhere, so I won’t go on here.

6

William Berry 02.21.24 at 11:22 pm

@TM:

It begins with the search terms, then? The bias is there from the start, and not from the effect of algorithms?

That’s a temptingly simple view that appears to illuminate — no, eliminate — a lot of arguments and assumptions.

It’s wrong, of course. There’s nothing “original” about search terms. The terms used are conditioned by past results, past online experience, past all that niche enculturation. For some time now, this has been the case. It can be — surely is, much of the time — a positive feedback loop that starts with a seed of resentment and grows into (say) a massacre of schoolchildren. The algorithms are there, doing their thing. And the platforms are owned by the Elon Musks of the world, who are determined to remake humanity in their own image.

This is different from all previous human history.

Why, it might even be much worse!

I sometimes think that “Our” individual experience tends to parallel humanity’s historical experience; the toxic garbage is still there, burning with the fires of patriotism/ religious frenzy/ every kind of hatred and exclusion/ every kind of dead and twisted and murderous ideology, and nothing of love or affirmation.

As a race, we can’t look forward because we’re too busy eating ourselves tail-first. Yummy!

7

KT2 02.22.24 at 1:44 am

HF says; “Would we see similar polarization of beliefs if we lived in a world where Facebook, Twitter et al. hadn’t started using these algorithms after 2012 or so? Our rough answer is that plausibly, yes: we would see lots of polarization”.

I agree. Though the amount of imagery, negative speech and text is now pushed to another level due to social and media access and methods of gaining attention – it has become polarized by the medium forcing changes in media business models.

We are hard wired to rewire our brain.

“… at the functional level (see Figure 1), politically informed agents’ exposure to a political stimulus (e.g., Obama’s picture) may elicit unconscious information processing, which in turn manifests in IERs (e.g., anger).” (^1.)

HF said; “…we assume that people are motivated reasoners – they more often look for evidence to support what they want to believe than to challenge their assumptions.” …”so the highly emotional communication has a second way to have an impact, through the “opinion leaders” channel.”, and we don’t realise the conditioning until it is too late and polarized.

In “Toward a Theory of Political Emotion Causation” by Asaad H. Almohammad (^1) this quote stands out “As such, this unconscious process does not encompass the assignment of truth value (Gawronski & Bodenhausen, 2006). Hence, it takes place regardless of whether an agent considers it to be accurate (Bassili & Brown, 2005).”

Unconscious and belief regardless of fact. Oops! Political polarization.

^1.
“Toward a Theory of Political Emotion Causation”

“… implicated in automatic motor responses (Adolphs, 2002; Calder, Lawrence, & Young, 2001; for example, bodily responses that portray anger) and processing of rewards (McHaffie, Stanford, Stein, Coizet, & Redgrave, 2005). Evidently, charged nodes of another area (locus coeruleus) involved in automatic somatic responses modulate the activation of the amygdala, pulvinar, and superior colliculus’ associative networks, in response to emotionally salient stimuli (Liddell et al., 2005). In addition, the amygdala’s active nodes may stimulate associative networks in cortical regions involved in cognitive functions (orbitofrontal cortex; Öngür & Price, 2000) and affective functions (anterior cingulate cortex; Bush, Luu, & Posner, 2000).

“The spread of activation along the aforementioned structures, through the extensive associative pathways, is believed to trigger implicit emotional responses (IERs; for example, facial expressions of anger); it is noteworthy to mention that agents are thought to be unaware of such reactions. As such, this unconscious process does not encompass the assignment of truth value (Gawronski & Bodenhausen, 2006). Hence, it takes place regardless of whether an agent considers it to be accurate (Bassili & Brown, 2005). ”

https://journals.sagepub.com/doi/10.1177/2158244016662106

And the impact on the brain is so deep our amygdala develops a cleft due to this polarization. Do CTers, moderates have a smoother amygdala compared to those who are lower on an equality scale? More trauma and/or fear responses by political affiliation?

The reversal of a cleft developed in the amygdala and associated brain systems is a barrier to depolarization. In my limited opinion. I am not a brain expert in any way. See memory extinction studies for reversing such effects.
Any cogsci readers?

Polarization in the brain especially occurs in; “… individuals interested in politics showed greater activation in the amygdala and the ventral striatum (ventral putamen) relative to individuals uninterested in politics when reading political opinions in accordance with their own views.”
From “Interest in politics modulates neural activity in the amygdala and ventral striatum”
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6870837/

Which is basically confirmation bias. We are hard wired to polarize with or without social / media. Pamphlet anyone?

8

Richard Melvin 02.22.24 at 1:59 am

Because of how simple search works (it treats web links and activity as proxies for quality)

Surely the procedure by which links and activities are totted up and pages are ranked is precisely an algorithm?

Is your point merely that the difference between such relatively simple algorithms and more complicated statistical or machine learning models is likely not significant? Which is hardly surprising, both are different attempts at optimizing essentially the same metric. Using an abacus or a calculator makes little difference to the function of a shop.

Increased polarization is caused by the decline of mass broadcast media. Everyone getting the same message is different from everyone having a non-zero chance of getting different messages.

Anything else that could be written on that topic faces a choice of either being a footnote to that point, or being wrong.

9

TM 02.22.24 at 9:11 am

MisterMr: “For example, if I’m a leftie and therefore read CT, I’m more likely to find links to leftish blogs/people, who will also link to leftish blogs/people, etc.

When I google stuff, I’m more likely to use words that are coded left etc.”

Perhaps, but that is not an effect of the (prima facie neutral) search function. You are already part of a bubble and whatever means of information gathering you use is affected by that. My point is that the formation of the bubble cannot itself be explained merely by the fact that all kind of information on the web can be accessed via internet search.

I used the evolution example obviously to reject the framing of symmetric polarization. Many of the issues that are commonly discussed under the “polarization” heading can in fact easily be shown to really consist of two sides, one of which refuses to accept empirical evidence. And imho the fact that a lot of fake news is available on the web and can be accessed by everybody does not in itself explain why so many people believe the fake news, given that legitimate information is as easily accessible. (It’s not always that simple, I agree, but often enough it is).

10

Kinnikinick 02.24.24 at 2:06 am

In my experience, coming to understand something new usually involves accepting that one has to think about it in an unfamiliar way. Even a basic grasp of economics, evolution, or astronomy requires a scaffolding of concepts that are unintuitive at the very least.
A real danger of “bubble searching” is that the answers you’re likely to get are of the “one weird trick for X” variety, where pre-existing concepts go unchallenged and new information can only take the form of trivia fastened onto a familiar structure.
I’m struck by how conspiracy theories keep their shape even as they incorporate new details that ought to shake the worldview of their adherents to the core: we can now create submicroscopic injectable radio transmitters! Humans have secret bases on Mars! We have met extraterrestrial intelligences!
Bits of “information” like these are decorative elements, in a sense; they are certainly not load-bearing concepts. There is no risk of ripple effects as they interact with other beliefs; unfortunately, I think that is the real basis of their appeal.

11

John Q 02.24.24 at 7:21 am

No one so far seems to have mentioned Fox News and rightwing talk radio, which predate the Internet and which are the main information source for the old people who form the most reliably rabid component of the rightwing base.

These media fit the confirmation bias story neatly. But because they are mass media, they don’t do much in the way of selection. Everyone gets the same package.

12

engels 02.24.24 at 8:56 am

People seek out information that confirms their prejudices and web search enables them to do that more effectively. At least in Britain if I go to a library and ask for a book about “why evolution is wrong” I won’t get anything but typing that into Google is an instant expressway to crankery.

13

J, not that one 02.24.24 at 6:05 pm

There are processes that happen when people do things, alone or together, over time. Grisham’s Law would apply to the results of popular online searches without anything like a selection algorithm, and the differences in quality between results for common medical searches, between US and UK results, are striking and almost certainly entirely a product of human beings trying really hard to be the opposite of toxic. I feel however that claiming everything is worse now might be a good way to energize people to work for change.

14

J, not that one 02.24.24 at 8:36 pm

I do kind of take issue on the other hand with the idea of “common wisdom.”

My theory is that in the US, the starting point for discussions of polarization was a combination of (1) pragmatism, which, as Menand has shown, develops a kind of quietism out of the reaction to post-Civil War Reconstruction, valuing lack of conflict above all else, and (2) post-WWII social science, which from an unclear political perspective built an idealized image of society where there was general agreement on beliefs and values that were theoretically supported by principles that were also generally agreed upon at least by elites (and who elites were was again generally agreed upon). But there’s no possible mechanism or institution that could have policed anything like “common wisdom” on a macro scale. (Unless you adopt some kind of mysticism that mandates a lockstep agreement among all persons who belong in some way, maybe to a religion, some other institution, or a kind of spirit of a land, class, or generation, or that supposes only a small number of valid options are even possible.) The only possible mechanisms are so local that they’d be impossible to coordinate, except in the broadest, most trivializing and therefore error-prone way.

15

Neville Morley 02.25.24 at 11:40 am

Grisham’s Law: Every false rumor gains credibility while being repeated, until it is practically a fact.

16

J, not that one 02.25.24 at 3:18 pm

Oops. Gresham’s Law. I guess that’s an illustration of my point, in a way, like the definition of “flammable.” :)

17

engels 02.25.24 at 8:55 pm

I’d assumed Grisham’s Law was another English statute enacted after some mediagenic death.

18

TM 02.26.24 at 9:08 am

“At least in Britain if I go to a library and ask for a book about “why evolution is wrong” I won’t get anything but typing that into Google is an instant expressway to crankery.”

Again, if you start with loaded search terms like that, you are already in a bubble. This is not how bubbles are formed. And in defense of websearch, even with those loaded search terms, you’ll get non-cranky results along with the cranky ones. I doubt that web search is a major encrankification pathway.

As an aside, I have often used a quick search in the hope of finding some evidence proving or disproving a point and finding that things weren’t quite as I expected.

19

TM 02.26.24 at 9:25 am

Or to put it succinctly, the way web search works doesn’t reinforce confirmation bias. It takes at least a bit of effort to ignore all the available non-confirming information. It’s much easier to just switch on your favorite crank channel (whether online or cable or radio).

20

Jonathan 02.26.24 at 9:38 am

Why not check out the list of dreadfully (in)sane, even psychotic, true believers who specialize in polarizing language, and actions too via this reference
http://www.digital.cpac.org/speakers-dc2024

21

CarlD 02.26.24 at 10:27 am

O’Connor seems to be arguing for a kind of sensitivity-to-initial-conditions nonlinearity. Big effects from small causes. I no longer think “rationality” is a useful way to think about these dynamics, even as a heuristic, unless we’re prepared to think of evolutionary processes as a kind of intelligent design and all of their incumbents as intelligent designers.

I’m self-plagiarizing a sketchy old post on conspiracy theories and echolocation from a few cycles back in these questions as a demo. “Equality” is also a funny way to think about stuff that works like this. One reason you don’t want to travel too far on O’Connor’s train of thought is that what it takes to reset the initial conditions to equal in each action cycle is not a way any of us want to live:

What we call ‘self’ is pretty clearly an emergent, adaptive epiphenomenon of environmental, biological, and cultural feedback systems churning along at various scales. Because it’s dynamic, relational, and adaptive, there’s inherently no stable essence to such a structure. It only persists by active (massively active) engagement with its surroundings, whatever they may be from time to time. This is an energetic process obviously subject to resource constraint.

Adaptation and evolution create a distribution of strategies within this basic dynamic. Interaction is split off into subsystems that operate at different rates and intensities, both within and among ‘individuals’. Resources are differentially committed and optimized around particular interactive settings. For example, it seems that people have various relatively hard wired rates at which learning occurs, with characteristic advantages and disadvantages to slow or swift response to new information.

Again, the dynamic interactivity of self means that its maintenance requires constant orienting feedback with and from the environments, internal and external. This is the echolocation part. But resource constraint means that we can’t be operating active echolocation in every subsystem and every scale simultaneously, and adaptive differentiation means we’re optimizing and prioritizing those feedback loops across a range of strategies. Practically, this means people are going to be active and maybe even ‘needy’ around a range of interactive domains, giving off and taking in information asymmetrically across multiple axes, none of this chosen or conscious obviously.

“Who am I” is a much harder question to answer and keep answered in interactively chaotic environments than homogenously stable ones. Environments produce a range of echoes, and processing biases reward different collection routines. It may be that for some people sometimes, somewheres, the mismatch between their pings and the available echoes is profoundly alienating, if not literally crazymaking. You would expect these distributional experiments out on the long tails, and you would expect those tails to get fatter as environments become more variable and chaotic. You would expect people to become more aggressive in their attempts to create and manage congenial echo chambers.

Conspiracy theories then work as a special case of a very ordinary kind of echolocating ping, by broadcasting a strongly biased signal into a chaotic environment likely to generate a loud and clear response one way or another. Although this feedback loop is likely to be identity and community defining, it’s not in the first instance about ‘believing’ the conspiracy theory at all.

22

somebody who remembers being told the dixie chicks should be tortured to death at guantanamo bay 02.26.24 at 6:57 pm

i think it is worthwhile to see “the algorithms”, broadly, as an extension of previously existing social media in the country. Facebook groups were, in the 90s, mailing lists and then web forums like free republic. right-wing newsletters are a huge business in the united states and have been for decades. am radio has always been a reliable way to get the death squads coming to your local school with a rifle to look for gay recruiters or driving to the border and wandering around in the desert before finally deciding to just shoot your ex-wife.

to the extent that social media has had an effect on these matters that hadn’t previously existed, i would say that its because of the ability to organize nationally on key issues. the black lives matter protests and the counter-reaction to it were and are organized openly on social media. this isn’t polarization because the people who disliked police brutality already disliked it and the people who hated the black victims of police brutality already hated them. social media just gave (each of them) a means of organizing their efforts.

consequently, the premise of this post remains intact – those events were not the result of “algorithmic” promotion or demotion, but of user organization. indeed, historical experience teaches us that we should fully expect any voice speaking even slightly against white supremacy to be suppressed in any communications channel controlled in any respect by any american with money. only during times of national horror can these voices shout loudly enough to be heard.

23

KT2 02.28.24 at 9:41 pm

Henry (& Cosma) said “We think that even just with simple search and social media, the Internet creates a world in which deranged beliefs can scale more easily than they used to.”

“Lost in the Twittersphere – Trapped inside a Humanist Echo Chamber with the Global Village Idiot
November 24, 2019
jimdroberts

“Once, every village had an idiot. It took the internet to bring them all together.”
Colonel Robert Bateman

“The world is now like a continually sounding tribal drum, where everybody gets the message all the time…
Marshall McLuhan

“Instead of one tribal drum communicating one message, giving everyone access to a unified source of knowledge, it’s given everyone a voice, irrespective of their knowledge.  Rather than democratizing knowledge, it’s democratized the right to express ones opinions, no matter how unqualified the person might be expressing it. Essentially it’s armed every village idiot with a megaphone, while forcing everyone else to wear hearing aids, making their uninformed opinions hard to ignore. (The secret to the internet seems to be in substituting the metaphorical hearing aids for metaphorical ear plugs.) Where as historically these idiots were spread evenly geographically, say one in every village, the internet has enabled them to band together. If McLuhan were alive today, I wonder, might he be tempted to amend his title to, The Global Village Idiot. ”
https://theexaminedlife.press/2019/11/24/lost-in-the-twitterspehre-trapped-inside-a-humanist-echo-chamber-with-the-global-village-idiot/
