“The Discourse” is the Cybernetic Event Horizon of Human Freedom

by Kevin Munger on December 20, 2022

Hello! My name is Kevin Munger, and I’m delighted to have gotten the call up to the blogging big leagues. I’ve been blogging since the beginning of the pandemic at Never Met a Science, a combination of meta-science (get it) and media theory that I intend to continue here.

Crooked Timber has been around for longer than Twitter, and it looks like that which has burnt brightest will burn shortest. 

Twitter’s spectacular conflagration, the wildfire currently burning through some of the dead wood of the digital media ecosystem, both entrances and illuminates. The fantastic release of energy produces pyrrhic phantasms, full of soot and fury…and while the catharsis and camaraderie of the bonfire are not to be taken lightly, we shouldn’t assign any meaning to the random sparks. Breathless attention to what Trump did every day in 2017 was understandable (if ineffective); breathless attention to what Musk does every day in 2022 is embarrassing.

I have been extremely critical of Twitter’s impact on intellectual life, yet I am not pleased to see so many academic colleagues “leaving” Twitter because a Bad Man is now in charge. This isn’t just hipster churlishness; being critical of a bad thing for the wrong reasons can be pernicious. The implication of the current critique is that if the Bad Man were removed, Twitter would be ok.

This wishful thinking has been the opiate of the academic/media/liberal professional class for the past six years, ever since the Great Weirding of 2016. The high water mark of any trend is of course the beginning of its decline, as evidenced by the fumbling of the Obama-Clinton Presidential handoff. This class–my class–has been adrift ever since, disoriented by the reality of contemporary communication technology. Rather than confront the depth of the challenge to the foundations of liberal democracy, we are sold crisis after crisis with the promise that solving this one will bring us back to “normal.”

To be clear: the crises are real. It’s the normalcy that’s fake: “Boomer Ballast” (the central argument of my recent book) has unnaturally preserved the façade of postwar America even as the technosocial reality shifts under our feet. I fear that ours is not an age for “normal science” in the social sciences, where ceteris is sufficiently paribus to engineer marginal gains. 

We need to think bigger, to be open to the possibility that the future will be radically different from the present and recent past, and then to actively envision the future we desire. This is my gloss of Maria Farrell’s recent, magnificent Crooked Timber post that challenges us “to think of technology systems and their version of the internet as an ecosystem” and to accept “that ecosystem as profoundly damaged.” This is the scale at which we need scholars to be thinking today; I, at least, have been deeply influenced by the tradition of media ecology.

I agree with Farrell that we need a healthier, stronger communication ecosystem. Ecology is fractal; each system comprises lower-level systems and composes higher-level systems.  

She emphasizes the higher level of health, the balance between different species and between different ecological niches. I will discuss the lower-level system, the individual animal: the humans. For a healthier internet, we need healthier humans. Facebook is other people.

Strategically, too, I believe that developing human capacities will enable us to demand and indeed to build a healthier digital ecosystem.

The destruction of Twitter creates a space of freedom, the rapid deterritorialization of what Farrell calls a “tech plantation.” Thus far, this freedom has mostly meant wagging our collective fingers at the pyro tyrant while eagerly waiting to see what he torches next.

Instead, we should look deeply at the rest of our communication ecosystem, cast into fresh relief by the flames. The lonely and quixotic elite-media media ecologist Ezra Klein calls our attention to attention, to the idea of attention as a common-pool resource. Many people—even the technologists—have known this ever since the great political scientist Herbert Simon proclaimed that “what information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.” The digital readers of the NYT know this too, which is why many of them wisely avoided getting bogged down in an article that they were warned would take “10+ minutes to read.” 

“10+ min read”!! The medium is the message, Ezra.

 

It is a testament also to the reigning Scientism that Klein feels the need to appeal to a study that “measured skin conductivity — a sign of emotional response — when participants saw positive, negative and neutral news. Negative news was consistently the most engaging.” My metascientific point is against reductionism: the idea that a sentence that talks about the mechanism of human emotion in terms of electric impulses (perhaps, with $1 billion from the NSF and 20 years, we can get to molecules!) is more meaningful than a sentence that simply takes human emotion as a relevant ontological category, as real.

Instead of electrodes, my method over the past few years of media uncertainty has been to read the great postwar critics of technology, the people who saw what new media technology was doing to us. Ivan Illich provides what I think is a more profound version of Klein’s point: 

“Just as the commons of space are vulnerable, and can be destroyed by the motorization of traffic, so the commons of speech are vulnerable, and can easily be destroyed by the encroachment of modern means of communication.”

Previous entries in this series have centered Illich, Marshall McLuhan, Neil Postman, and Margaret Mead. Today I turn to my new obsession, the missing link between cybernetics, media theory, and continental philosophy, a Czech-Brazilian polyglot who has only recently been translated into English: Vilém Flusser. 

 

You may not like it, but this is what peak male media theorist performance looks like.

 

I plan on adopting a Flusserian lens in many future posts—indeed, one non-tenure-relevant priority in the coming year is Flusser boosterism. I’m genuinely obsessed. For now, though, I’ll read Twitter through his masterpiece, Into the Universe of Technical Images.

McLuhan pays attention to the human sensorium as media, emphasizing the two-dimensional structure of the visual field and arguing that the era of the printing press had replaced the all-at-onceness of earlier oral (ear-based) culture with the linearity of text, of the eye. 

Flusser extends this to theorize the hand as media, as the mechanism by which humans rearrange our physical environment: how we imprint ourselves on the world, how we thus create information. Flusser sees the creation of information as the quintessential human activity: human freedom is our ability to fight entropy by wrenching the physical world from more- to less-probable states. If we accept that information is physical, these states of the world include higher-order human activities like communication through media objects.

The crux, for today’s post: “Discourse is the method through which information is transmitted and dialogue the method through which it is produced.”  “The ideal society,” Flusser writes, is one “in which discourse and dialogue are in balance. Dialogue nourishes discourse, and discourse provokes dialogue.” 

But Flusser diagnoses (1985!) that the present is imbalanced in favor of discourse, that our society has similarities to the late medieval Catholic church: “the centrally radiating discourse of the Church controls the society, the sources of information threaten to dry up from an absence of dialogue…centrally radiating discourse dominates us, too, and society is threatened with entropy.”

This is counter-intuitive; modern communication technology has obviously enabled novel forms of expression, of genuine human dialogue, the creation of new information. Consider the flourishing of new music genres or the heady days of the mid-2000s blogosphere. Flusser warns that these “dialogues that are technically possible now appear as a variant of medieval disputation…should they nevertheless lead to new information, it will now be disregarded as noise, whereas at the time, it was heresy, rendered ineffective through anathema.”

Some will recognize hints of anathema in the current discourse, but the shadowban is categorically different from the Spanish Inquisition. Rather than the Orwellian dystopia that hysterical critics see lurking around every corner, we are in a Huxleyan dystopia where novel information need not be suppressed because it is discarded as noise. This is how Neil Postman puts it in Amusing Ourselves to Death, also published in 1985. 

Any Twitter addict will see my larger point: “The Discourse” is the universally acknowledged term for what we log on dozens of times a day to receive. People talk of “The Discourse” in fully ironic terms, accepting that the text of the tweets they encounter will be “machinically” produced — and that their role is to machinically reproduce what they encounter. Note that I’m saying “machinically” and not “mechanically” — the distinction comes from Lewis Mumford, the early 20th-century critic of technology. “Mechanical” (rote, repetitive) use of human bodies peaked under industrialism; in post-industrial capitalism, we behave “machinically” insofar as our actions are subsumed within the functioning of a societal “megamachine” that produces something other than human freedom.

Because the reproduction of The Discourse is machinic rather than mechanical, it requires some adjustments to make it more legible under local conditions–just as in the medieval-Catholic case. Like the Romans before them, the Catholics only projected, counting on their on-the-ground missionaries to adapt the edicts to local conditions. The Vatican had no organs for receiving doctrinal feedback, nor any desire to develop such organs.

This Discourse flourished under two distinct technological regimes. In Europe, things were more or less settled until the printing press and Martin Luther produced a dialogic shock to the discursive system. This is just about the strongest media-ecological case, so if you don’t buy this, then I can’t help you. 

The other technological regime under which Catholic discourse flourished was military tyranny. Perhaps the only lasting achievement of the Portuguese and Spanish imperial project was the imposition of Catholicism on Latin America, at the point of a sword caked in blood. Criticisms of the present regime are valuable (as I hope to demonstrate shortly), but never forget that things have been and could be so much worse. 

Returning to the present, we should consider two parallel anxieties: the fear of machines becoming human, and the fear of humans becoming machines. In other words, the parallel fears of The Discourse and GPT. I’ve been concerned about the latter for some time; see my article Hello Goodbye in Real Life magazine (RIP). My prediction is that automated text generation will in fact make communication harder, by obscuring the degree of intentionality encoded in an identical string of text. As the world has come to realize in the past month, we have already entered a world in which the seminar paper, the letter of recommendation, and even the “thoughtful” post-interview follow-up email have been annihilated, the victims of informational hyper-inflation.

In the short term, I advocate for genuine novelty, a form of linguistic détournement: playing with language by deconstructing words, adopting new slang that contorts standard grammar. In Patricia Lockwood’s recent novel about language and Twitter, the tweeters find joy in using the word binch, a word whose appearance is thus far unpredictable by machines. 

 

The machine is nervous.

 

But generative text models can be rapidly and perpetually updated with usage trends. All digital communication encoded in human language can be fed into the next iteration of the model. Linguistic novelty might create brief spaces of freedom, outside the machine’s training data and thus distinctively significant to humans. This will ultimately fail: some of the most powerful entities in the world are dedicated to closing these gaps as quickly as possible. “Run me over with a truck”; “inject it into my veins”: this ain’t a tweet, it’s a goddamn arms race — one we cannot win. 

Indeed, the most effective way to midwife the next, more powerful generation of GPT into existence is to help its creators identify the current exceptions, what are for now spaces of freedom from automation. By experimenting with ChatGPT (and then posting annotated examples of the model’s performance on Twitter) we are providing high-quality training data, empowering them through Reinforcement Learning from Human Feedback, for free.

Flusser predicts this process with uncanny accuracy: “unwanted information is reabsorbed and, in this way, reinforces the tendency of the sender to become more and more indistinct and inauthentic.”

So something I’ve been experimenting with is tweeting poorly. I’ve been inspired in this by my colleague Josh McCrain, who has been tweeting poorly for years. There’s a thrill of authenticity in sending “bad” (from the perspective of The Discourse) tweets and getting no likes. I feel like Stavrogin, or Meursault: in a fully inauthentic society, one in which our actions are “freely” chosen from a constrained space and thus without meaning, the only authentic action appears to be destructive, rebellious, binchy. 

This impulse is understandable, forgivable, yet immature. Genuine freedom can only come from connection with other humans, through the proper balance between The Discourse and dialogue, a high-information-density embrace of the other. 

The only durable spaces of communication must therefore remain uningestible by the machines. Twitter cannot be used for dialogue, not for long. All writing, audio/video recordings, and our physical presence in space are already machine-readable and machine-read. Facial recognition and always-on listening devices loom everywhere and must be vigorously resisted, both because of the civil libertarian fear of 1984 and because of the Huxleyan fear of a totalized Discourse, perfectly able to predict every future human action: the cybernetic event horizon of freedom. Some critics of AI focus on the next stage, when machines are able to act in the physical, three-dimensional world, but human freedom is already threatened.

It would be all too easy for Flusser to fall into the same doomerism that has hamstrung latter-day critics of this development. Many young anti-tech activists I’ve encountered have fallen into either Kaczynskian eco-terrorism or an even more fatalistic collapsitarianism. But despite being forced to flee Prague in 1939 and then to flee his adopted home of São Paulo in 1972 after the military dictatorship, Flusser retains the characteristically Czech absurdist humanism of Masaryk, Havel, Čapek: he simply believes that other humans are the most interesting and valuable thing in the world.

This humanism allows him to diagnose the entire trajectory of the internet: that in the short run, unfettered dialogue will produce an explosion of creativity: “if only a few people were geniuses in the pretelematic era, it was because most people were unable to participate in dialogue.” Dialogue allows us to construct each other, build each other up to higher levels of competence and thus greater opportunities for action: freedom. 

In the same breath, though, Flusser tells us the cost of telematics: “This strategy…applies to artificial intelligences as well as human beings….these artificial intelligences will also become more like geniuses. So the question of how human intelligence and artificial intelligence are related will become the center of the dialogue very soon. We will face the unpleasant choice between humanizing artificial intelligences and making human ones more like apparatuses.”

So here we are. The Discourse is making humans more like machines and GPT is making machines more like humans. Flusser warns that the time for action is short: that we must reprogram the “telematic apparatus” to enable truly dialogic communication, and soon, before the ultimate triumph of The Discourse. “The relationship between people and images is descending into entropy, a fatal boredom is setting in.”

The future thus depends on “anti-spectacular revolutionaries.” Cultural critics or political activists who do not prioritize the reprogramming of the communication network cannot succeed because their targets are not solid; they are not “tilting at windmills but storming Kafka’s castle.” Any spectacular action, action that is visible through images and fed back into The Discourse, cannot be revolutionary. “Truly revolutionary engagement would be to turn the technical question [of whether and how dialogic threads can be drawn] into a political one…to turn a technical question into a political one, it must be torn from the technician’s hands. Technology has become too serious a matter to be left to technicians.” 

This is why I am so frustrated by the current desperate search for a Twitter replacement, for Twitter minus the Bad Man. The scope of possibility is limitless: the internet could be used in uncountably many yet-unimagined ways. Many interesting variations already exist! Since the beginning of the pandemic, almost all of my “social media” activity has been on semi-private/Patreon-supported, pseudonymous Discord servers or on small- and medium-scale end-to-end encrypted WhatsApp groups. 

These forms have enabled (for me) the emergence of genuine community. There are no algorithms; content is arranged chronologically, within defined chat rooms. The online (and offline!) reputational risks of good-faith dialogue are minimal – and the community is empowered to define the norms of “good-faith dialogue.” In contrast, massive platforms like Instagram require a one-size-fits-all approach to content moderation that inevitably produces unreasonable outcomes.

On Discord, entry is restricted to members paying a $5-10 monthly fee that also includes access to specialized content (podcasts, IRL events, URL events like livestreams) put on by the hosts. This money also pays for a few part-time moderators who have been deputized to enforce the server rules, to ban or suspend problem users. This setup is exemplary of the “exitocracy” advocated by the recently deceased political philosopher Jeffrey Friedman in his magnificent Power Without Knowledge: A Critique of Technocracy. This work influenced me deeply and I expect to keep returning to it in coming years.

To rephrase the thesis of this post: liberal democracies have been scrambling to figure out how to “regulate the internet.” The EU has moved fast (eh…it’s moved) and gunked up the works a bit; the US, ridden with institutional and demographic scleroses, might get around to changing Section 230 of the Telecommunications Act of 1996 at some point in the next decade. 

But consider the car. How do we regulate the car? First, we had a lot more time: it took over a century for household car ownership to go from 0% to 80%, compared to just over a decade for the same growth in cell phone ownership. For our liberal democratic institutions to have a chance — for science, journalism, education, deliberation, elections and legislation to function — we desperately need to slow the pace of communication technology. The liberal project cannot succeed without its conservative pole; at present, the dialectic has collapsed in favor of unfettered Progress. 

But how did we use that time to regulate the car? We invested massive resources in developing and updating federal laws, strict standards for vehicle manufacture, licensing requirements for drivers, roads and road signs enabling healthier driving, and the most powerful police force in the world to enforce all this.

Was this bundle worth it? Great question – Ivan Illich says “hell no!”, and I’m inclined to agree. Digital communication holds unique promise, could be used to produce a flourishing Dialogue to balance the Discourse of modernist institutions and broadcast media. Digital communication could also lead us to the Huxleyan dystopia, premature heat death for the human mind. Hence Flusser’s urgency.

What would it look like if we “regulated” the internet the way we “regulate” cars? In other words: who should “regulate” the internet? Everyone. Who must be enabled and empowered to “regulate” the internet? Everyone.

 

Thanks to Amy Linch and Drew Dimmery for invaluable dialogue about this post.

{ 36 comments }

1

Alex SL 12.21.22 at 9:53 am

I am sorry to say that I find neither the problem nor any suggested solutions to be well defined enough to make sense of them.

How is oral narration not exactly as linear as reading, given that only one word is spoken at a time?

In what sense does dialogue create information, and discourse doesn’t? They are both types of conversation that merely transmit information, dialogue is merely the subset of discourse that has only two participants. New information, on the other hand, is produced by observation in the widest sense, ranging from one person trying something out to empirical experiment in a formal scientific context.

“In Europe, things were more or less settled until the printing press” – really? What about the Albigensian Crusade, just for starters? Catholicism constantly had to put down heresies.

Previous comment threads here discussed the pros and cons of Twitter’s existence and dynamics, and I repeat that I do not recognise my own experience in any of this. I have no fear that I am becoming like a machine if and because I tweet, for example, “opportunity to work with me and [colleague] at [institution] in a postdoc project [link]” or “look at this nice plant I found during my field trip”; nor because I read a colleague’s tweet that says “[title of her newest paper] [link to her newest paper]”. It just isn’t even a consideration.

There are two aspects of this discourse (no pun intended) that bug me in particular.

First, there is something rather condescending and patronising to telling millions of media participants who for the first time in history can directly interact in real time with powerful politicians, subject matter experts, celebrities, or influential journalists, that they are all doing communication wrong and should listen to me, the intellectual here, to learn how to do it right.

Second, I just don’t see how “cultural critics or political activists” can even in principle exert any significant influence on this. I can just about see that one could fight a campaign and, although this is where it becomes hilariously unrealistic, hypothetically win it to outlaw all social media in one’s country. (Let’s not even talk about what level of dystopia would be required for enforcement.) But if, as I assume, that isn’t the idea, if you allow social media to exist in principle, and millions of people happen to want exactly a Twitter minus the Bad Man, because that is exactly what serves their communication needs, and they are unwilling to pay a few dollars per month to participate in some specialised niche community where they can’t tell the environmental minister directly that she should change policy AND get article notifications from a scientific journal AND read the thoughts of a journalist on a third continent, how exactly will the cultural critic stop them? Him and what army?

2

TM 12.21.22 at 2:07 pm

“breathless attention to what Musk does every day in 2022 is embarrassing.”

Yeah. But then why don’t we stop paying attention?

3

Mike 12.21.22 at 2:56 pm

I’ve been reading CT essentially since its inception, and this is simply the most inaccessible posting I’ve ever read here. It’d just too complex and assumes understanding of too many obscure and dubious points of view. There have been other CT posts where I’ve had to do some basic background reading to comprehend them, but this makes demands far beyond my previous willingness.

While it is possible that reality/society requires such a multifaceted description from a brilliant author, it also bears some resemblance to parodies or AI-generated nonsense that is pulling our legs. I’m impressed that Alex SL @ 1 was able to extract some meaning to criticize.

Perhaps an introductory series of articles explaining your viewpoints without requiring so much background would make this comprehensible to me. Of course I’m not assuming that I’m typical of the readership or that you’re attempting to write for me, but I’d be curious to know what audience is receptive to this as it stands.

4

Aardvark Cheeselog 12.21.22 at 3:06 pm

Alex SL: How is oral narration not exactly as linear as reading, given that only one word is spoken at a time?

Consider taking a look at “Orality and Literacy,” which argues that the appearance of literacy enables (indeed, compels) the emergence of a different kind of consciousness about language.

As for OP, it is a grand entrance. I think I shall be looking to read some Flusser.

5

Morgan L. 12.21.22 at 4:19 pm

“But how did we use that time to regulate the car? We invested massive resources in developing and updating federal laws, strict standards for vehicle manufacture, licensing requirements for drivers, roads and road signs enabling healthier driving, and the most powerful police force in the world to enforce all this.”

This is an unfortunate analogy. We heavily subsidized but barely regulated the car. What we did instead was regulate the structure of towns and cities, and regulated other forms of transport out of existence. We regulated pedestrians by inventing infractions like jaywalking out of whole cloth and treating pedestrian deaths as presumptively accidental. We still barely enforce posted speed limits and only began regulating seatbelt wearing in 1984. Sure, there are driving licenses, but those existed before the car.

If you’re right and we regulate this like regulate cars, we’re screwed.

6

TM 12.21.22 at 4:31 pm

Not sure whether this is on topic… my sincere apology if it isn’t:
https://existentialcomics.com/comic/477

7

Stephen 12.21.22 at 6:00 pm

I read with great interest, and occasionally comment on, CT because it often gives me insights into matters I had not previously much thought about. Usually, I could after reading a post summarise its content.

This post is an exception. Damn me if I understand most of what the author is going on about at intolerably great length. If I asked the author for a concise summary, would that be unreasonable?

8

LFC 12.21.22 at 6:26 pm

Morgan L. @5

You’ve misread him. He’s saying we shouldn’t regulate “this” the way we regulated cars.

9

LFC 12.21.22 at 6:34 pm

Btw, perhaps because I don’t work in the academy, I have no idea what the sentence about the seminar paper and the letter of recommendation having been annihilated refers to.

10

Peter Dorman 12.21.22 at 6:35 pm

Aside from my substantive issues with the various forms of critical theory and “x studies” that have come to dominate much of the humanities, I have a stylistic objection: peacockism. Cleverness is on display. The more clever you present yourself, the more credibility capital you amass. Claims are supported by references to other sources whose credibility derives from their own cleverness. Of course, I’m all in favor of being clever when it’s in the service of some higher end, like humor, but even then you shouldn’t let the advertisement of cleverness get in the way of the actual purpose.

Incidentally, quantitative fields, like my own economics, have their own version of this, peacock math (“mathiness”). I don’t think it’s as dominant, but it’s definitely out there.

11

Cranky Observer 12.21.22 at 8:22 pm

Am I supposed to take seriously a post on Crooked Timber of all places that uses as part of its argument (that is, not in reference to a usage elsewhere) the term “Bad Man”? This is presumably taken from the breitbart style “Orange Man Bad” response to any criticism of a US citizen who while serving a term of office barred visitors to the US on basis of religion, stole migrant children from their parents in brutal detention centers, and attempted an autogolpe after being turned out of office. Not to mention using the Justice Dept to suppress rape allegations.

12

Kevin Munger 12.21.22 at 8:27 pm

Hello all —

Thanks for reading and engaging! I appreciate it.

AL@1 — to what I see as your main point: I am absolutely not suggesting that I know the best way to use the internet. Indeed, I think that no one knows the best way to use the internet, but that it is extremely unlikely that the best way to use the internet for human freedom is the same as the best way to use the internet for tech company value extraction.

The status quo seems optimized for the latter; my goal is to encourage EVERYONE to figure out how to optimize the internet instead for human freedom.

Mike @3: I have been working on these themes for a few years now on my previous blog — I linked most of those posts, but it is perhaps even more useful to go back chronologically if you’re genuinely interested.

I try to write well; like many, I usually fail, and I’m trying to get better. But I’m not about to apologize for writing at a serious intellectual level, and the premise that “reality/society requires such a multifaceted description from a brilliant author” is a bullet I’m simply delighted to bite.

Morgan @ 5 — I agree with you, and indeed I attempted to anticipate your objection…in the sentence immediately following the quoted passage: “Was this bundle worth it? Great question – Ivan Illich says “hell no!”, and I’m inclined to agree.”

13

MFA 12.21.22 at 9:07 pm

“…I am not pleased to see so many academic colleagues “leaving” Twitter because a Bad Man is now in charge. This isn’t just hipster churlishness; being critical of a bad thing for the wrong reasons can be pernicious. The implication of the current critique is that if the Bad Man were removed, Twitter would be ok.”

NO. The implication of the current critique is that if the Bad Man gets his way, Twitter, already problematic before Elon, will get even worse–and people don’t want to be associated with something worse than Twitter generally was.

Oh, and ‘hipster churlishness’? Disqualifying of serious consideration, and negating any claim of ‘serious intellectual level’ writing.

I hope your next submission avoids both bad assumptions and childish labelism.

14

LFC 12.21.22 at 9:35 pm

I wrote two very short comments, neither of which cleared moderation. Is there a new tacit rule here that comments shorter than a certain length will be deemed insufficiently weighty to go through?

15

Alex SL 12.21.22 at 10:34 pm

Kevin Munger,

Thanks for the direct reply.

Of course, increased social media literacy and more critical engagement with that space would be beneficial to everybody. And yes, the current networks are shaped by the interests of near-monopolistic, profit-maximising corporations. No disagreement there.

But most of your post is written so as to make cultural critics, activists, intellectuals appear as the protagonists, and with certainty that people are currently doing communication wrong, expecting some largely unexplained negative outcomes to be imminent.

And as much as it is our own responsibility to think through how we are acting individually, there are at least tendencies of how people will react in aggregate to given opportunities and incentives that are at the very least difficult, if not impossible, to overcome without regulatory state intervention.

Facebook, Youtube, TikTok, and Twitter did not become what they are (or at least not only) because an evil billionaire sat cackling in his lair devising a way of tricking everybody else, and then everybody else who would otherwise be wise and good naively fell for the trick because they didn’t think carefully enough. These networks are successful to a great degree because what they offer is useful to millions of people: “free”, i.e., no fees because ad financed, and I am saying that as somebody who despises ads; a single, central place where you can easily find nearly everybody, as opposed to a federated space like Mastodon or many fractured spaces like blogs and newsletters where that is difficult; and reply/threading/forwarding/like/search functions that ‘work better’ for what people want to do than equivalent functions on competing networks that have therefore been less successful.

Even if I, for example, am convinced that the way that, say, Twitter specifically works incentivises or enables, say, abusive pile-ons, I will still post a vacancy in my team on Twitter because I know it will reach a lot of relevant colleagues there. I am reacting to an obvious incentive here, and the only way to get people not to do that is realistically the government forcing social media companies to do something differently, see GDPR or German laws against holocaust denial.

Agree with Morgan L., too, that cars are not the ideal example. Design standards and traffic safety rules are the equivalent of comment moderation policies, but beyond that, cars have warped everything around them for the worse, from ability of children to play in the street all the way up to urban planning, just like social media may warp our politics around them for the worse. I am certainly not helping regulate vehicles, because if I did, I would largely ban personal car use, with the exception of remote areas, disability needs, and emergencies. And then I’d get 2% at the next election, because, again, millions think differently.

Aardvark Cheeselog,

Thanks for the suggestion, although of course I will be unable to read that rather comprehensive book before this thread stops being a live concern. I am just generally skeptical that there are such deep differences between human minds merely because of having or not having a single technology.

It is one thing to say that a hunter-gatherer perceives their environment very differently from a city-dweller who gets food from the corner store. It is another to claim that an ancient poet who recites “Thus did the old man rebuke them, and forthwith nine men started to their feet” has a significantly different theory of language than an ancient poet who recites these exact same words BUT who can now also read them from a piece of parchment if he wants to. What is more, many people even today are barely literate in the sense that they don’t read or write much beyond short text messages or emails and maybe a few headings; do they also have an all-at-onceness where their friends and neighbours who regularly read and write long-form-texts have linearity?

16

Timothy Scriven 12.22.22 at 1:58 am

I’m a little confused as to why everyone is trying to resist the rise of the machines rather than #accelerate, and try to steer the world to a machinic transcendence of our limitations on a leftwing basis. Why try to stop the tide? Isn’t it better to try and steer it towards human liberation?

17

TM 12.22.22 at 8:31 am

OP: “the era of the printing press had replaced the all-at-onceness of earlier oral (ear-based) culture with the linearity of text, of the eye.”

Alex 1: “How is oral narration not exactly as linear as reading, given that only one word is spoken at a time?”

I think we simply lack the imagination to grasp how earlier, non-text-based cultures worked and how people experienced them.

OP: “Discourse is the method through which information is transmitted and dialogue the method through which it is produced.” “Flusser extends this to theorize the hand as media, as the mechanism by which humans rearrange our physical environment: how we imprint ourselves on the world, how we thus create information.”

Information is produced by dialogue, and/or by human hands physically rearranging the environment. Which is it? How are these accounts connected?

OP: “the centrally radiating discourse of the [late medieval] Church controls the society, the sources of information threaten to dry up from an absence of dialogue”

Like Alex 1, I think this vastly overestimates the homogeneity of medieval intellectual discourse (dialogue?) and the power of the Pope to control that discourse. But also I’d point out that you are referring to a pre-printing press, 99% illiterate, mostly non-textual culture. What happened to the power of oral dialogue and/or of the hand to create information?

18

MFA 12.22.22 at 12:10 pm

Timothy @ #16: Ever read any science fiction? We’re not trying to ‘stop the tide’; we’re trying to stop the rich and powerful from forcing us into the water with robot sharks. It’s like that.

19

Dave 12.22.22 at 2:57 pm

Apologies, this is a very long and unfocused post. There are nuggets here for a series, surely.

Re substance, it would help to describe what experience of Twitter you’re talking about. It seems like post-2014, post-algorithmically-driven Twitter, which is different from what brought people to the platform in the first place, i.e. @dril and @horse_ebooks and generally funny, experimental writing.

The current Discourse Machine Twitter is pitched at normies and news junkies who need to know what journalists and nerds think, which, I’m sorry, but: boring. This is what Post is trying to replicate, unfortunately. It’s the least cool version of Twitter. I’m embarrassed for everyone.

Thesis: as long as social media are determined to appeal to mass audiences via algorithmic recommendations, they will be largely inane and insufferable. Platforms that take off are pitched to subcultures looking for community.

20

Ingrid Robeyns 12.22.22 at 3:39 pm

” The implication of the current critique is that if the Bad Man were removed, Twitter would be ok.”

I do not see how this follows. It might just be that there have been serious problems with Twitter all along, that frequent Twitter users know this too, but that on balance they judge what Twitter brings them to be worth the downsides. The issue now is rather that Elon Musk is doing things that they want to protest against. And the form that protest takes is to stop using Twitter (whether by leaving or by putting one’s activities on hold) and to start building a network in the Fediverse/on Mastodon, as that’s the best thing they can come up with.

21

1soru1 12.23.22 at 12:04 am

“to turn a technical question into a political one, it must be torn from the technician’s hands”

This seems deeply problematic, if only because it is clearly not the case that the question currently is in those hands.

As it happens, there is one small area of the internet where technicians currently are in charge. Valve Corporation, who run the Steam PC game distribution system, are effectively a workers’ cooperative.

From my experience with it, it works perfectly well. The products of large and small corporations are available to purchase conveniently from a single store; compare the 12+ services required to choose from a comparable selection of films or TV. ‘Mods’ for games are free to upload and download. The only adverts you see are for games you could buy, and you only see them when you look under the ‘what games could I buy’ menu. System running costs and developer salaries are paid for by simply taking a cut of purchases.

It does seem plausible that Valve’s nature as a corporation has spared it the requirement of unlimited revenue growth, and that this in turn has left it free to focus on supplying customer needs rather than turning customers into products.

Maybe that is a better model to iterate on than having the cleverest media theorist simply issue a diktat?

22

Alex SL 12.23.22 at 5:30 am

TM,

Or maybe the problem is that I have no clue what is meant with “linearity” in a context where both listening/speaking and reading/writing are equally linear activities and both having a story in one’s memory and having it written down in a book are equally “all-at-once”.

These two terms may have domain-specific definitions that I am ignorant of, similar to “good” and “all-benevolent” as used by a theologian to describe a deity that inflicts innocent children with horrific genetic disorders.

23

Kevin Munger 12.23.22 at 2:35 pm

MFA @13 — “hipster churlishness” was self-deprecation, not aimed at anyone else.

Alex @15 — I think the negative outcomes are already here! To this point: “Twitter specifically works incentivises or enables, say, abusive pile-ons, I will still post a vacancy in my team on Twitter because I know it will reach a lot of relevant colleagues there. I am reacting to an obvious incentive here, and the only way to get people not to do that is realistically the government forcing social media companies to do something differently, see GDPR or German laws against holocaust denial.”

— Yes, we are trapped in a bad equilibrium. One solution is regulation…and as I say, I’m just not optimistic about that in the US on a relevant time scale. So my goal, as a media theorist, is to raise the political salience of how we choose to use the internet.

Elite-driven norm change has rapidly expanded veganism and anti-racism; it does not seem hopeless to apply this theory of change to how we use the internet as well.

Scriven @16 — this is indeed the primary question….as I’ve heard it phrased, “transhumanism or Ted K?” My hope is for liberal humanism, at least a little while longer, because I think this is a precondition for even allowing us to choose which of these paths we want to take.

1soru1 @21 — I think we agree: more technical capacity among motivated left-wing activists could revolutionize the media technology industry. Imagine if all the tech companies were like Valve! There’s more than enough surplus to have public-good versions of basic internet services. The point is to politicize the infrastructural level of technology.

24

Stephen 12.23.22 at 7:58 pm

It’s cold, it’s raining, and I have nothing more urgent to do than to look at the OP’s magnum opus.

Of which the crux, as he said, seems to be “Discourse is the method through which information is transmitted and dialogue the method through which it is produced”. Turn that arse over tip, and you get “Dialogue is the method through which information is transmitted, and discourse the method through which it is produced”. Does that not make just as much or as little sense?

Also: Vilém Flusser, “the missing link between cybernetics, media theory and continental philosophy”, wants “to theorize the hand as media, as the mechanism by which humans rearrange our physical environment: how we imprint ourselves on the world, how we thus create information”.

Well, we use our hands to create information by writing or typing or, if we are Italians, gesticulating as we speak. We rearrange our physical environment in many ways, through the labour of our hands or, nowadays, through machinery. So?

“Flusser sees the creation of information as the quintessential human activity, that human freedom is our ability to fight entropy by wrenching the physical world from more- to less-probable states.” So when a bird builds a nest, when a rabbit digs a burrow, when a beaver builds a dam, when bees build a honeycomb, are they not fighting entropy? Are they therefore quintessentially human too? Are they creating information and expanding human freedom?

Genuinely baffled.

25

Barry 12.23.22 at 11:14 pm

“Twitter’s spectacular conflagration, the wildfire currently burning through some of the dead wood of the digital media ecosystem, both entrances and illuminates. ”

It’s not dead wood, in the case of Twitter; it’s somebody quite deliberately spending an awesome amount of money to trash something that he does not like.

26

DK2 12.26.22 at 2:57 pm

Kevin:

I feel your pain. For a man in your line of work, these developments must cut very close to the bone. How long until a machine can write an original column in your voice that’s indistinguishable from your writing? How long until a machine can write a column in your voice that’s better than anything you’re capable of? 5 years? 10? Next week?

Not just you, of course. The reaper is coming fast for all the creative industries, plus everyone who makes a living communicating. Indian call center employees are going to need to look for new employment. Among many, many other people who have no idea what’s coming.

Good luck rallying the masses, or getting regulation through Congress or whatever your actual strategy is. I’m pretty sure this is going to roll over you without pause, but you never know.

A couple of questions/comments:

1. Your “tweeting poorly” strategy is hopeless, and from your other writings I’m pretty sure you know that. You’re playing Mr. Turing’s game, and in that game dumbing down is always a bad move: easy for a machine to mimic, since there are a lot more nonsense words than correct words in context. So as a strategy this will never work, since a computer can easily copy it. “Binch” is cute, and all that, but as you well know it will cease to be a marker of human originality the instant humans recognize it as such. Do you really think machines can’t make up nonsense better than humans?

2. Trying to shoehorn this into topicality by talking about Twitter isn’t doing you any favors. Twitter is a transitory issue that goes away once Musk kills it, or stops trying to kill it, or something else comes along. It has no more longer-term significance than the demise of the CompuServe forums of lamented memory.

3. Why are you on the internet at all? You’re feeding content into the great maw. Not going to turn out well. Shouldn’t you be advocating pure analog communication? Very hard for a machine to mimic a face-to-face conversation.

Can’t build much of a resistance movement on a face-to-face basis, but columns like this one aren’t going to build a movement anyway. The Flusser references lack the common touch. Seems like you’d be better off going full Luddite and swearing off digital communications entirely. At least that would have the benefit of consistency.

There is a significant question here that I’d be interested in hearing you address in some manner that doesn’t rely on citations of obscure texts. To wit: what does it mean for human beings when machines can perform literally every task better than people? This is going to have some profound consequences. In 1997 humanity realized that computers can play chess better than even the best humans and chess proficiency immediately ceased being a marker of irreducible human creativity. No problem, since computers can’t write good poetry, right? Not so much anymore. The island of human superiority is getting smaller every day, and the tide’s not going to retreat any time soon.

What happens when there’s nothing left? Not just economically, where the consequences are going to be Industrial Revolution-level profound. But psychologically. If a machine can create original art that’s indistinguishable from the output of the best human artists, and can do it instantaneously and cheaply and without the agony of cutting off ears and so on, what’s left for people to do, and how do we distinguish humanity from the machine world? And if you think machines creating human-level art is impossible I’d be happy to cite a long list of human skills that computers were never going to be able to replicate. Moore’s law is a bitch. Good luck fighting that fight.

So thanks for the column, though I’d prefer less of the philosophical theorizing and more discussion of the nuts and bolts. Maybe I’ll feed your work into one of the GPTs and see if I can get out something I like better. I wouldn’t bet against it.

27

J-D 12.27.22 at 6:21 am

“Well, we use our hands to create information by writing or typing or, if we are Italians, gesticulating as we speak.”

Use of the hands to convey information while speaking is a general trait: some people do it more, some people do it less, but it’s not a trait particular to Italians. You don’t have to be Italian to make a wiggling hand motion when you say ‘more or less’ or ‘so-so’ or ‘approximately’; you don’t have to be Italian to make gestures as if dusting off your hands when you say ‘so we’ve taken care of that piece of business’; you don’t have to be Italian to make clarifying gestures when you say ‘I meant this one here, not that one over there’; you don’t have to be Italian to find yourself making hand gestures when you’re explaining to somebody (who doesn’t already know) what a slinky is and what it does.

28

Scott P. 12.27.22 at 12:13 pm

“I feel your pain. For a man in your line of work, these developments must cut very close to the bone. How long until a machine can write an original column in your voice that’s indistinguishable from your writing? How long until a machine can write a column in your voice that’s better than anything you’re capable of? 5 years? 10? Next week?”

Yeah, I doubt it. Humans are very bad at projecting trends. People come across something like ChatGPT, reason that it has the intelligence of maybe a 5-year-old child (it doesn’t), and reason that it will have the intelligence of a 15-year-old teenager in 10 years, or likely faster, because hey, machines learn better than people, right?

ChatGPT has no real way to get ‘better’. You could give it a larger training set, but it already has the entire internet as a training set; there is none larger. And doubling the size of its training set wouldn’t double its capacities; the scaling isn’t linear. Maybe the programming of neural networks will get much better, but maybe it won’t, and in any case there are limits to that approach.

As a commenter on Brad DeLong’s blog said: “Every sentence I have so far heard, read, spoken or written about ChatGPT makes more sense to me when I substitute ‘the collective hive mind of the internet circa 2021, filtered for presentability’. ChatGPT is a very effective tool for summoning the general consensus of its source material and rearranging it.”

The sort of things it would be good at are the sort of activities that already have been stripped of any genuine human interaction, like the production of spam emails.

29

DK2 12.28.22 at 6:42 pm

So the technology’s going to stop improving any day now. And it’s never going to be good enough for the kind of communication you do. Because your communication involves “genuine human interaction.” And machines can’t reproduce that.

Okey dokey.

You do understand that a lot of smart people are putting a lot of money and effort into improving these systems, right? GPT-4 is supposed to be coming out next year, with major improvements in the algorithms.

It’s certainly possible everyone in the industry is wrong and you’re right about the limitations of this approach. But you don’t cite any particular compelling evidence.

The only actual technical limitation you refer to is model size, and as I’m sure you know the current trend is towards smaller models, rather than larger. So that’s not really a limitation.

You also engage in some hand-waving about “limits to that approach” in improving neural networks. Without doubt true. Indeed, a truism. There are limits to all approaches.

The real question is not whether limits exist but whether the existing approaches to GPT can be improved substantially. The GPT-4 architects apparently think so. Do you disagree? Do you have some insight into exactly when and where the “limits to that approach” will kick in? Next year? A decade?

You’re free to believe that this particular field of technical development is going to stop progressing in the immediate future. Lots of people have believed that about lots of technologies. Particularly when the technology in question is about to come for their job. But you seem awfully confident that this particular tide is going to stop right at your feet, and not one inch more. Without, it seems, any particularly good reason.

So maybe what you’re really selling here is “genuine human interaction.” It is without doubt true that no machine will ever generate “genuine human interaction.” Machines not being human.

The real issue isn’t whether machines can engage in “genuine human interaction.” The real issue is whether a machine could plausibly mimic “genuine human interaction” such that another human couldn’t tell the difference. That’s the real issue here, and you treat it as if the answer is self-evident.

There are libraries full of commentary on the Turing Test. That humans can communicate in a way that machines can’t mimic is not a self-evident proposition. If you want to argue it, fine, but don’t pretend that this is one of those things everyone agrees on.

Chess computers can look dozens of moves into the future and generate sequences of moves and responsive moves that chess masters describe as “creative.” I’m pretty sure machines will be able to communicate in such a way that most of the time and for most purposes no one will be able to tell the difference from human-generated content. We’re not there yet, but there’s been astonishing progress in the last couple of years and it’s plausible we’ll be there in 5-10 years.

You’re free to disagree, but I’ve got Moore’s law on my side. You’ve got handwaving.

Unless you think there’s magic involved. Some kind of vibration in human communication that machines can’t mimic because machines can’t recognize and/or duplicate that particular pattern. Can’t ever. No matter how powerful the machine.

If you think this is something that’s actually possible in the real universe, then you should really come out and say it. And then explain it.

30

J-D 12.29.22 at 4:22 am

We don’t know enough about how human beings do all the things they do to be able to say with confidence that machines can’t do them; on the other hand, we don’t know enough about how human beings do all the things they do to be able to say with confidence that machines can do them.

It is extremely likely that the range of things that machines can do will continue to be extended; but there’s some areas in which there’s no practical likelihood that the effort to substitute machines for humans will even be attempted. I don’t know whether there’s any barrier which would make it impossible to get a machine to do (well) the job of a union organiser, but we’re not going to find out for sure because nobody’s going to try: why would they?

31

Kevin Munger 12.30.22 at 3:51 pm

@DK2 thanks for the engagement here…but not for the doomerism. I’m sorry to hear that you’ve given up, but you’re correct to note that I have not. How else to spend my life than fighting for freedom?

Strategy-wise:

“binch” is indeed a juvenile, hopeless strategy….the space of freedom it creates is too narrow. But it might give people a brief jolt of human experience…a reminder that this system was made up overnight and that other ways of acting are possible. See

https://newdesigncongress.org/en/pub/the-para-real-manifesto

To justify Flusser in particular: he anticipates, in 1985, the real binding constraint of GPTx: it’s not model complexity, it’s data overhang. Technocapitalism works to expand by unlocking and exploiting new sources of energy/power: this anti-homeostatic, cancerous growth looks exponential each time, but the aggregate exponential growth is in fact a series of booms and busts.

So the current AI textmodel boom comes after a few decades of work on NLP and OCR: we’ve found a new energy glut to exploit. But the point of my article is that people are going to stop producing new texts; indeed, they already have. Text is no longer central but a minor component of composite digital communication.

And this is related to my larger strategy: overt intellectual self-seriousness. Reading midcentury scientists, or even Playboy magazine from the 1960s, it’s amazing how quickly literary culture has collapsed. But dense, written theory is condensed human compute, the best technology we have for collaborating on understanding the world and then, to paraphrase Stafford Beer, designing a machine whose output is human freedom.

32

DK2 12.31.22 at 9:45 pm

Kevin:

“Doomerism?” Not remotely. This technology is coming whether you like it or not. The consequences may be good or bad for the human species. Probably a mixture of both. Either way we’ll adapt.

Arguing that this isn’t going to happen because of some technology limitations or because the Rebel Alliance is going to defeat the technology fascists is both nonsensical and ultimately uninteresting because it’s not remotely original and is almost always wrong.

Here’s the interesting question: why do you believe there’s some aspect of human communication that machines can’t successfully mimic? I’ve asked that one a couple of times but you’re not even addressing it.

A couple of other points:

“People are going to stop producing new texts; indeed, they already have.” I’m 100% confident that people are producing more text today than at any point in history. I’m 100% confident that you know that. I’m left to conclude that you’re arguing nonsense or you’re using an obscure academic definition of “text” that no one in the real world would understand.

“Data overhang.” I’ve been in and adjacent to this field for a long time and I’ve never heard this term. Nor has google. I have no idea what it means.

I’ve read some of your other stuff. I know you can write clearly. And having never heard of Flusser, I was hopeful that maybe there was something new here. I’m no longer optimistic.

33

J-D 01.01.23 at 12:03 am

“Here’s the interesting question: why do you believe there’s some aspect of human communication that machines can’t successfully mimic?”

There are aspects of human communication that machines won’t successfully mimic not because of any possible technical barriers but because there’s insufficient motivation for the attempt to be made.

34

Kevin Munger 01.02.23 at 2:19 pm

@DK2 —

@J-D’s response is correct on that point: it’s not that fewer characters are being produced, but that the originality of the text is decreasing. And I agree that for now we’re still not at the peak: more and more people are coming online, even becoming literate, and indeed there are more people than ever before.

My diagnosis is that the originality of texts is declining, where “originality” includes things like grammatical variations and “mis”spellings.

Think of it like accents, in speech: regional differences, none of which were right or wrong, but which represented a huge variety in the sounds humans made. Accents have been declining in the West, for a variety of reasons — but I’d wager that mass broadcast media is the most important one. Discourse stamps out local originality.

But the internet can be used well! I wrote elsewhere that “If you and your community have not invested serious energy taking advantage of the internet revolution — if you do not have a concrete set of norms, practices and institutions designed to allow you to use the internet without the internet using you — you are destined to lose. In fact, you’re not even trying to win.”

Different communities have developed different ways to produce dialogic pockets online. One of them is the online rationalists…and this is where I get the term “data overhang.” I confess I didn’t realize how niche a term this was:

https://www.lesswrong.com/posts/sjcQBQvassWqGEd5F/is-there-a-culture-overhang

“why do you believe there’s some aspect of human communication that machines can’t successfully mimic?”

I’m saying that as a matter of moral urgency, we should try to stop them from getting to that point! I’ll allow that it’s possible that we would all decide that that’s the future we genuinely want. But FOR NOW, when we are rushing forward without the possibility of deliberation, I think we should try to stop and get this question right…because it’s important.

And you should definitely read Flusser, even if this post was, as I said, overwrought.

35

Mike Huben 01.03.23 at 12:20 pm

How ironic it would be if Kevin Munger’s post was written by ChatGPT at the behest of one of the CT authors. What would we need to request of ChatGPT to get similar text? Things like “weak but extended metaphors” to explain “new sources of energy/power”? “Overwrought”? “Trendy terms from LessWrong”? “Moral condescension towards the masses”? “Hyperbolic overconfidence”? “Reference to the most obscure social theorist” to explain Czech-Brazilian polyglot Vilém Flusser? “Lengthy and dense”? “Not even wrong”?

Indeed, this could be a fun game, to use ChatGPT to create and post Kevin Munger paragraphs as responses. Unfortunately, we’re ALL vulnerable to that, and it would be a mean tactic, so I don’t recommend it.

“I’m saying that as a matter of moral urgency, we should try to stop them from getting to that point!”

In this impending era of deepfake video and writing, I agree that there will be a great deal of immoral fakery that would be good to stop. But I don’t see any way to stop it short of requiring cryptographic signatures, and disallowing pseudonyms. I don’t think corporations (already pseudonyms of sorts) will stand for that, since they have the most to gain. Likewise, individuals will want to use ChatGPT and its successors both to prepare personal briefings and statements ranging from love letters to propaganda.

Intellectual activity might largely change from creative thinking to creative exploration of ChatGPT textual space. Training in creative thinking might mean sequestering from ChatGPT and its products until some level of intellectual maturity is achieved.

36

TM 01.04.23 at 9:33 am

Alex: “Or maybe the problem is that I have no clue what is meant with “linearity” in a context where both listening/speaking and reading/writing are equally linear activities”

But speaking/listening is only comparable to linearly reading a book if you assume that one person is speaking and everybody else is listening. Which is not how oral interactions between humans usually happen.

But really I think it’s the author’s responsibility to explain this point, which seems rather important for his argument and yet is only mentioned in passing.

Comments on this entry are closed.