Counterfeit digital persons: On Dennett’s Intentional Stance and The Road to Serfdom

by Eric Schliesser on June 10, 2023

A few weeks ago, Daniel Dennett published an alarmist essay (“Creating counterfeit digital people risks destroying our civilization”) in The Atlantic that amplified concerns Yuval Noah Harari expressed in the Economist.+ (If you are in a rush, feel free to skip to the next paragraph, because what follows are three quasi-sociological remarks.) First, Dennett’s piece is (sociologically) notable because in it he is scathing about the “AI community” (many of whom are his fanbase) and its leading corporations (“Google, OpenAI, and others”). Dennett’s philosophy has not been known for leading one to a left-critical political economy, and neither has Harari’s. Second, Dennett’s piece is psychologically notable because it goes against his rather sunny disposition — he is a former teacher of mine and a sufficiently regular acquaintance — and the rather optimistic persona he has sketched of himself in his writings (recall this recent post); alarmism just isn’t Dennett’s shtick. Third, despite their prominence, neither Harari’s nor Dennett’s piece has really reshaped the public discussion (in so far as there (still) is a public). And that’s because they compete with the ‘AGI induced extinction’ meme, which, despite being a lot more far-fetched, is scarier (human extinction > fall of our civilization) and much better funded and supported by powerful (rent-seeking) interests.

Here’s Dennett’s core claim(s):

Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends. Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created… 

Another pandemic is coming, this time attacking the fragile control systems in our brains—namely, our capacity to reason with one another—that we have used so effectively to keep ourselves relatively safe in recent centuries.

You may ask, ‘What does this have to do with the intentional stance?’ For Dennett goes on to write, “Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. We’re all going to be sitting ducks in the immediate future.” This is a kind of (or at least partial) road to serfdom thesis produced by our disposition to take up the intentional stance. In what follows I show how these concepts come together in the threat posed by AIs designed to fake personhood.

More than a half century ago, Dan Dennett re-introduced a kind of (as-if) teleological explanation into natural philosophy by coining, and articulating over the course of a few decades of refinement, the ‘intentional stance’ and its role in identifying so-called ‘intentional systems,’ which just are those entities to which ascription of the intentional stance is successful. Along the way, he gave different definitions of the intentional stance (and of what counts as success). But here I adopt the (1985) one:

It is a familiar fact from the philosophy of science that prediction and explanation can come apart.* I mention this because it’s important to see that the intentional stance isn’t mere or brute instrumentalism. The stance presupposes prediction and explanation as joint necessary conditions.

In the preceding two paragraphs I have treated the intentional stance as (i) an explanatory or epistemic tool that describes a set of strategies for analyzing other entities (including humans and other kinds of agents) studied in cognitive science and economics (one of Dennett’s original examples).** But, as the language of ‘stance’ suggests and as Dennett’s examples often reveal, the intentional stance also describes (ii) our own ordinary cognitive practice even when we are not doing science. In his 1971 article, Dennett reminds the reader that this is “easily overlooked” (p. 93). For Dennett, the difference between (i) and (ii) is one of degree (this is his debt to his teacher Quine), but for present purposes it is useful to keep them clearly distinct (when I need to disambiguate I will use ‘intentional stance (i)’ vs. ‘intentional stance (ii)’).

Now, as Dennett already remarked in his original (1971) article (though I only noticed this, back in the day, after reading Rovane’s (1994) “The Personal Stance”), there is something normative about the intentional stance because of the role of rationality in it (and, as Dennett describes, the nature of belief). And, in particular, it seems natural that when we adopt the intentional stance in our ordinary cognitive practice we tacitly or explicitly ascribe personhood to the intentional system. As Dennett puts it back in 1971, “Whatever else a person might be – embodied mind or soul, self-conscious moral agent, ‘emergent’ form of intelligence – he is an Intentional system, and whatever follows just from being an Intentional system thus is true of a person.” Let me dwell on a complication here.

That, in ordinary life, we are right to adopt the intentional stance toward others is due to the fact that we recognize them as persons, which is a moral and/or legal status. In fact, we sometimes adopt the intentional stance (ii) in virtue of this recognition even in high-stakes contexts (e.g., ‘what would the comatose patient wish in this situation?’). That we do so may be the effect of Darwinian natural selection, as Dennett implies, and that it is generally a successful practice may also be the effect of such selection. But it does not automatically follow that when some entity is successfully treated as an intentional system it thereby is, or even should be, a person. Thus, whatever follows just from being an intentional system is true of a person, but (and this is the complication) it need not be the case that what is true of a person is true of any intentional system. So far so good. With that in place, let’s return to Dennett’s alarmist essay in The Atlantic, and why it instantiates, at least in part, a road to serfdom thesis.

At a high level of generality, a road to serfdom thesis holds (this is a definition I use in my work in political theory) that an outcome unintended by social decision-makers [here profit-making corporations and ambitious scientists] is foreseeable to the right kind of observer [e.g., Dennett, Harari], and that the outcome leads to a loss of political and economic freedom over the medium term. I use ‘medium’ here because the consequences tend to follow in a time frame within an ordinary human life, but generally longer than one or two years (the short run), and shorter than the centuries-long process covered by (say) the rise and fall of previous civilizations. (I call it a ‘partial’ road to serfdom thesis because a crucial plank is missing; see below.)

Before I comment on Dennett’s implied social theory, it is worth noting two things (the second rather more important): first, adopting the intentional stance is so (to borrow from Bill Wimsatt) entrenched in our ordinary cognitive practices that even those who can know better (“experts”) will adopt it in cases where they have grounds to avoid doing so. Second, Dennett recognizes that when we adopt the intentional stance (ii) we have a tendency to confer personhood on the other (recall the complication). This mechanism helps explain, as Joshua Miller observed, how that Google engineer fooled himself into thinking he was interacting with a sentient person.

Of course, a student of history, or a reader of science fiction, will immediately recognize that this tendency to confer personhood on intentional systems can be highly attenuated. People and animals have been regularly treated as things and instruments. So, what Dennett really means, or ought to mean, is that we will encounter (or are already encountering) intentional systems designed (by corporations) to make it likely that we will automatically treat them as persons. Since Dennett is literally the expert on this, and has little incentive to mislead the rest of us on this very issue, it is worth taking him seriously; it is rather unsettling that even powerful interests with a manifest self-interest in doing so are not.

Interestingly enough, the corporations that try to fool us are in this sense mimicking Darwinian natural selection: as Dennett himself emphasized decades ago, when the robot Cog was encountered in the lab, we all ordinarily have a disposition to treat even very rudimentary eyes following or staring at us as exhibiting agency, and so to fall into the intentional stance. Software and human-factors engineers have been taking advantage of this tendency all along to make our gadgets and tools ‘user friendly.’

Now, it is worth pointing out that while digital environments are important to our civilization, they are not the whole of it. So, even in the worst-case scenario (our digital environment is already polluted, in the way Dennett worries, by self-replicating counterfeit people), you may think we still have some time to avoid conferring personhood on intentional systems in our physical environment and, thereby, also have time to partially cleanse our digital environment. Politicians still have to vote in person, and many other social transactions (marriage, winning the NBA) still require in-person attendance. This is not to deny that a striking number of transactions can be done virtually or digitally (not least in the financial sector), but in many of these cases we also have elaborate procedures (and sanctions) to prevent fraud, developed both by commercial parties and by civil society and government. This is a known arms race between identity-thieves, including self-replicating AI/LLMs that lack all sentience, and societies.

This known arms race builds on the more fundamental fact that society itself is the original identity thief: generally, for all of us, its conventions and laws fix an identity where previously there was none, displace other (possible) identities, and sometimes take away or unsettle the identity ‘we’ wish to have kept. (Here, too, there is a complex memetic arms race, in which any token of a society is simultaneously the emergent property while society (understood as a type) is the cause. [See David Haig’s book, From Darwin to Derrida, for more on this insight.]) And, of course, identity-fluidity also has many social benefits (as we can learn from our students or from gender studies).

Now, at this point it is worth returning to the counterfeit money example that frames Dennett’s argument. It is not obvious that counterfeit money harmed society. It did harm the sovereign, because it undermined a very important lever of power (and its sovereignty), namely the ability to insist that taxes are paid/levied in the very same currency/unit-system in which he/she paid salaries (and wrote IOUs) and other expenses. I don’t mean to suggest there are no other harms (inflation and rewarding ingenious counterfeiters), but these were neither that big a deal nor the grounds for making it a capital crime. (In many eras counterfeit money was useful to facilitate commerce in the absence of gold or silver coins.)

And, in fact, as sovereignty shifted to parliaments and peoples at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need, and that a pluralist mass society is itself more robust than a sovereign who insists on full control over the mint. Dennett himself implicitly recognizes this, too, when he suggests that “strict liability laws, removing the need to prove either negligence or evil intent, would keep them on their toes.” (This is already quite common in product liability and other areas of tort law around the world.)

I am not suggesting complacency about the risk identified by Harari and Dennett. As individuals, associations, corporations, and governments, we do need to commit to developing tools that prevent and mitigate the risk from our own tendency to ascribe personhood to intentional systems designed to fool us. We are already partially habituated to doing so with all our passwords, two-factor verification, ID cards, passport controls, etc.

In many ways, another real risk here, which is why I introduced the road to serfdom language above (despite the known aversion to Hayek among many readers here at Crooked Timber), is that our fear of deception can make us overshoot in risk mitigation, and this, too, can undermine trust and the many other benefits of relatively open and (so partially) vulnerable networks and practices. So, it would be good if regulators and governments started the ordinary practice of eliciting expert testimony to craft well-designed laws right now, carefully calibrated by attending both to the immediate risk from the profit-hungry AI community and to the long-term risk of creating a surveillance society in order to prevent ascribing personhood to the wrong intentional systems (think Blade Runner). For, crucially, in a (full) road to serfdom thesis, decisions taken along the way to ward off some unintended and undesirable consequences tend to lock in a worse-than-intended, and de facto bad, unintended political outcome.

I could stop here, because this is my main point. But Dennett’s own alarmism is due to the fact that he thinks the public sphere (which ultimately has to support lawmakers) may already be so polluted that no action is possible. I quote again from The Atlantic:

Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. 

I don’t think our liberal democracy depends on the informed consent of the governed. This claim conflates a highly idealized and normative view of democracy (one may associate it with deliberative or republican theories) with reality. It is probably an impossible ideal in relatively large societies with a complex cognitive division of labor, including the (rather demanding) sciences. (And it is also an ideal that gets abused in arguments for disenfranchisement.) So, while an educated populace should be promoted, in practice we have all kinds of imperfect, overlapping institutions and practices that correct for the lack of knowledge (parties, press, interest groups, consumer associations, academics, even government bureaucracies, etc.).

It doesn’t follow that we should be complacent about the fact that many of the most economically and politically powerful people, corporations, and governments control our attention, which they already do a lot of the time. But this situation is not new; Lippmann and Stebbing diagnosed it roughly a century ago, and it is probably an intrinsic feature of many societies. It is partially to be hoped that a sufficient number of the most economically and politically powerful people, corporations, governments, and the rest of us are spooked into action and social mobilization by Harari and Dennett to create countervailing mechanisms (including laws) to mitigate our tendency to ascribe personhood to intentional systems. (Hence this post.)

There is, of course, an alternative approach: maybe we should treat all intentional systems as persons and redesign our political and social lives accordingly. Arguably some of the Oxford transhumanists and their financial and intellectual allies are betting on this, even if it leads to human extirpation in a successor civilization. Modern longtermism seems to be committed to the inference from intentional stance (i) to the ascription of personhood or moral worth. From their perspective, Dennett and Harari are fighting a rear-guard battle.

 

*Here’s an example: before Newton offered a physics that showed how Kepler’s laws hung together, lots of astronomers could marvelously predict eclipses of planetary moons based on inductive generalizations alone. How good were these predictions? So good that they generated the first really reliable measure or estimate of the speed of light.
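(The logic of that estimate, Rømer’s, can be reproduced as a back-of-the-envelope calculation; here is a minimal sketch using modern values rather than the rougher seventeenth-century figures.)

```python
# Roemer-style estimate of the speed of light from eclipse-timing delays.
# Modern values are assumed here; the 17th-century figures were rougher.
AU_KM = 1.496e8              # Earth-Sun distance (1 astronomical unit), in km
delay_s = 16.7 * 60          # eclipses of Io run ~16.7 minutes "late" when Earth
                             # is on the far side of its orbit from Jupiter
c_estimate = 2 * AU_KM / delay_s   # light must cross the orbit's diameter (2 AU)
print(f"c = {c_estimate:,.0f} km/s")   # ~298,600 km/s; the modern value is 299,792
```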

**Fun exercise: read Dennett’s 1971 “Intentional Systems” after you read Milton Friedman’s “The Methodology of Positive Economics” (1953) and/or Armen Alchian’s “Uncertainty, Evolution, and Economic Theory” (1950). (No, I am not saying that Dennett is the Chicago economist of philosophy!)

+Full disclosure, I read and modestly commented on Dennett’s essay in draft.

{ 38 comments }

1

Tim Worstall 06.10.23 at 4:08 pm

A small possible clarification:

“Money has existed for several thousand years, and from the outset counterfeiting was recognized to be a very serious crime, one that in many cases calls for capital punishment because it undermines the trust on which society depends.”

There are two types of counterfeiting. Or, with bullion money, there are at least two, linked to Archimedes and the Eureka moment.

1) Make money out of baser metals. So, perhaps make supposedly gold coins out of electrum.

2) Make coins out of the right materials, but not through the King’s (or ruler’s, etc.) mint. The King took a slice of the metal’s value for running the mint; that seigniorage was a very important source of finance.

1) destroys confidence; 2) destroys the King’s revenue. Both were punished very harshly (18th-century England took uttering (false paper) and counterfeiting seriously enough that they were at least hanging offences). But they are rather different.

An AI that fakes being a human might – possibly – be usefully distinguished from an AI which is just made by not the right people? Or even, made by those who aren’t endowed with the right to collect those rents?
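(To make the seigniorage arithmetic concrete, here is a toy calculation; all figures are invented for illustration.)

```python
# Toy seigniorage arithmetic (all figures invented for illustration).
metal_value = 100.0     # market value of the silver a subject brings to the mint
brassage    = 3.0       # the mint's actual production cost
seigniorage = 5.0       # the King's levy on top of that cost
coin_issued = metal_value - brassage - seigniorage
print(coin_issued)      # the subject gets 92 in coin for 100 in silver;
                        # a type-2 counterfeiter minting full-weight coin
                        # pockets the 8 that would have gone to the mint and King
```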

2

Eric Schliesser 06.10.23 at 4:16 pm

Fair. Brian Weatherson made related points on Twitter.

3

LFC 06.10.23 at 4:28 pm

In certain spheres, there would seem to be limits to the (present) ability of counterfeit digital persons to infiltrate the human realm.

Suppose, for instance, that person X is into online dating. X makes a date with counterfeit person Y and they agree to meet in the real world at place Z. The appointed time arrives, and either Y does not appear or, hypothetically, Y appears in the guise of a sophisticated robot of some sort (assuming, sci-fi-like, that this is possible). But the latter possibility, evoking Blade Runner-like “replicants,” is far off. For the present, counterfeit person Y will not show up for the date. And that will be that.

This is a somewhat minor example, but probably it could be extrapolated. For instance, take a local school board meeting where parents and students testify on both sides of some hot button “culture wars” issue. Or a local commission meeting considering a utility rate hike, whatever. A counterfeit person has been involved in online debate on the matters, but when it comes to the meeting itself the counterfeit person will not appear. Etc.

It is also not insignificant that while a lot of transactions can be done online, many do not have to be. Also not insignificant is that while cell phones are very widespread, including in poorer countries, personal computers with high-speed internet connections are somewhat less so if one looks at entire populations, including in the “developing” world.

4

Paul Davis 06.10.23 at 11:51 pm

LFC @3 … consider the way school board meetings (and so many others) shifted to Zoom during the COVID-19 pandemic. I don’t want to give any elements of the right any credit, but I think that to some extent a few of their fears about online meetings are not entirely dissimilar to the risk of fake people participating in such meetings. There is still some support for continuing online participation in such meetings, even though the in-person format has generally resumed.

5

Alex SL 06.11.23 at 1:06 am

Apologies if this comes across as rude, but could the first ca. two thousand words of this post not have been reduced to the following?

“people have a well-known tendency to see personhood and agency even where there are none, perhaps because in this context a false positive (thinking that a tree is out to hurt you) is a low-stakes mistake, but a false negative (thinking that an angry lion is passive like a tree) is a life-or-death matter”

I think pretty much everybody understands this intuitively without being referred to a philosophical treatise. It isn’t exactly one of the deeper controversies of epistemology or so.

As with many AI risks being discussed, I am puzzled what the problem is with AI specifically beyond making fraud slightly more efficient, as these risks already exist while being caused by humans. There are right now, for example, fraudsters on dating apps who manipulate lonely people into sending them money, and alleged journalists on TV or their own Youtube channels lying freely about political events or statistics. People with a good hunch for what is and what isn’t plausible weren’t fooled yesterday, and they won’t be fooled by AI tomorrow; people who don’t have that capability for critical thought were marks for other humans yesterday and will be marks for AI tomorrow.

The real problems with AI appear to be: (a) too many people either don’t understand, or don’t want to understand or admit, that the models are GIGO, i.e., when used for decision making they merely reproduce the biases the developers put into the training sets, (b) generative models will make a whole bunch of people lose their income, immediately especially writers and artists, (c) because only large companies have the resources to annotate large enough training sets and run the software, models like ChatGPT continue the trend of concentration of economic power in a few large companies, and (d) the whole IP issue with the training data.

That we are never perfectly informed to the degree of omniscience when we give consent does not logically contradict the claim that liberal democracy requires at least a minimum level of informed consent of the electorate. It is simply the case that in many supposed liberal democracies governance is already deeply broken by the consequence-free lying and hate propaganda spewed forth by the right-wing media. I really do not understand how Hayek could argue that central planning is incompatible with personal freedom, as these are two different spheres of life; of course one could have a planned economy where people can freely express whatever religion, sexual orientation, or music and culinary preferences they want. (I can only assume that, as with most libertarians, to him freedom only ever meant the freedom of a rich person to pay no taxes.) It makes a lot more sense to argue, conversely, that private enterprise is incompatible with democracy, as every enterprise is internally run as a dictatorship, and we therefore spend at least a third of our time under a dictatorship through our job, which is also generally central to our identity and self-worth.

But, crucially in this context, liberal democracy is incompatible with billionaire-owned mass media, as a deliberately misinformed electorate cannot make informed decisions and will regularly vote against its own interests (“get your government hands off my Medicare”, “we have to leave because the EU wants to outlaw church bells/British apple trees/bendy bananas/etc”, “I want more social spending but I just don’t trust Labour, don’t know why, but probably doesn’t have anything to do with the newspaper I read constantly telling me to distrust them, right?”). You write we shouldn’t be complacent about this kind of manipulation but then write somewhat complacently that it was always thus, which isn’t even correct; there was a time only a few decades ago when there was no privately run TV in any Western nation and when newspapers and radio stations were much less monopolised than they are now. These are all deliberate decisions.

As an aside, given that what I think (?) is the original ‘road to serfdom’ hypothesis (Hayek) was merely ideologically motivated slippery-slope fear-mongering, has there ever been an actual, bona fide road to serfdom that actually happened? The only examples that come to my mind that would come close to the definition presented here are (a) the rise of an authoritarian regime when caused inadvertently by something like the Weimar government’s austerity policies, and (b) an empire growing so large through conquest that it becomes ungovernable as a republic and has to concentrate ever more power in a single person to function, e.g., ancient Rome or the USA. But I think neither of these really capture the intent expressed here because the former is too short-term, and the latter too long-term? AFAICT, the whole concept is only a fancier word for slippery slope fallacy.

Circling back to AI, I am not even going to touch on the ideas of the “Oxford transhumanists” because they are so disconnected from reality and plausibility, and this has already become too long. It seems to be a good heuristic to assume that everything that Bostrom argues publicly is wrong, that saves a lot of time better spent elsewhere.

6

MisterMr 06.11.23 at 6:57 am

It seems to me that the most likely ethical problem that will be caused by AIs is that they will be used to do emotive labour, and this strikes most of us as unethical.

Examples might be emotionally volatile people who fall in love with AI “waifu” anime girls, or AI robots used for elderly care so that the elderly no longer have any emotive contact with humans, and similar situations.

In these situations the people “mistaking” the AI for a real person are doing it because of emotive need, but from the outside it sounds very dystopic and morally dubious. Perhaps there will be anti-waifu laws, or perhaps this will be normalized.

7

JimV 06.11.23 at 10:09 pm

“The real problems with AI appear to be: (a) too many people either don’t understand, or don’t want to understand or admit, that the models are GIGO, i.e., when used for decision making they merely reproduce the biases the developers put into the training sets”

Surely we are all at the mercy of our training sets, that is, the sum total of our past experience, including good and bad formal training and the results they produce when used. That being obvious, the implication seems to be that AI is somehow worse off than usual for being trained by people who have the intent of studying and optimizing the process.

AlphaGo’s training caused it to decide on a novel move, of which the then World Champion Go player said, “I thought no machine would ever beat me, but when I saw that move I knew I would lose the tournament.” AlphaFold’s decisions on predicting how protein molecules will fold outperform all other known methods by about 30%.
Since then one bias in AlphaGo has been found and removed, and no doubt AlphaFold will be improved. If you believe, as I do, that the main process of improvement is trial and error, then you must expect many errors at the beginnings of new technologies.

As other commenters have suggested, the problem of rich people controlling and misusing technology is not new. I suspect many of our ills can be traced back to the invention and use of the printing press; however, also many good things can be traced to that source.

8

Kevin 06.12.23 at 11:51 am

Alex @ #5

As with many AI risks being discussed, I am puzzled what the problem is with AI specifically beyond making fraud slightly more efficient, as these risks already exist while being caused by humans.

It’s more than slightly more efficient.

Consider the evolution of spam and phishing emails. At one point, the state of the art was sending “I am Nigerian Prince. Send me your bank details.” to a million email accounts. This caused a brief crisis in email spam that was solved by spam filters. I’m currently getting “Hello! I hear you have been in an accident.” calls. They are annoying, but ignorable.

Now consider telling an AI to find all the grandmas who have grandchildren in college. Find out as much as you can about them both, call grandma and get her to transfer some money.

“Hello, Grandma Mary! It’s Little John-Boy! Lovely seeing you last week at Red Lobster. As you know I am starting advanced astronomy next semester and I can’t afford my textbooks. Please transfer $100 to this account X123456H.”

Now a determined conman could do this already, but an AI that can target All The Grandmas can do it so much more efficiently. The damage is twofold:

1) Lots of grandmas lose $100.
2) No grandma ever trusts a phone call again. This is the end of the telephone (or text or WhatsApp or Facebook Messenger) as a means of communication with grandmas.

It’s not just a bit more efficient.

9

SusanC 06.12.23 at 1:52 pm

I had plenty of experience with Zoom (and similar tools) before the pandemic, because I’ve worked on research projects with multinational collaborators. I already knew Zoom sucks for meetings.

Post-pandemic, I’m even more convinced that Zoom sucks for meetings.

It would be kind of amusing if certain types of meeting have to be in person to prevent AI generated fake people. (Political protests for example … you need to demonstrate that all those protestors are real people).

10

Rob Chametzky 06.12.23 at 9:50 pm

Seeing Bill Wimsatt mentioned reminded me of this: I recall him saying that the real problem of “other minds” wasn’t how we know that there are any, but rather how we get convinced that there are so few. He mentioned that, after all, we just KNOW that THAT toaster and THAT elevator really are out to get us. I’ve always appreciated that insight into the malevolence of inanimate objects. I used to attribute a similar insight being (at least part of) what makes Buster Keaton’s films so wonderful, but Noel Carroll’s work persuaded me I had that pretty much wrong, fwiw. And all this without intentional stancening!

–Rob Chametzky

11

Alex SL 06.12.23 at 9:50 pm

JimV,

I do not doubt that AI models can be very useful, and in fact I am using them myself in my own work. What I mean by this is that people want to use AI for decision-making because they acknowledge that humans are biased, but they either naively assume that AI won’t be or use the illusion that an algorithm is unbiased to sneak their own biases into decision-making. There are people seriously advocating that AI be used in a variety of ways, including to predict the likelihood of somebody committing another crime. Perhaps the most illustrative example was a major tech company a few years ago trialing an AI model to short-list CVs of job applicants. Because they trained it on how they as humans had short-listed before, they effectively created an AI filter against female applicants. At least this example was so obvious that it was immediately scrapped when they realised, presumably out of worry for their reputation, but not all comparable cases may be as blatant.

We are living in a second gilded age, with enormous inequality and corporate power against the backdrop of a state that has long decided to give up most of its control over such matters. Under these circumstances it seems useful to warn of the dangers of entrenching this situation ever further rather than retreat to the complacency of “eh, rich people have always existed”.

Kevin,

I agree that this is more efficient, yes, and one could theoretically pump out a lot more attempts at fraud and misdirection per unit of time. My main reason not to consider this a realistic scenario is that scammers and spammers are generally not terribly sophisticated. Running this kind of very targeted scheme would require serious investment in its design, and it would be computationally expensive. If anything, it seems more plausible applied to individual high-value targets (corporate espionage, maybe?) than to thousands of grandmas.

Of course, grandma shouldn’t trust a phone call from somebody with an unfamiliar voice asking for money anyway, even today.

12

JakeB 06.13.23 at 2:53 am

This is too chewy for me right now, at the end of a very long day after not enough sleep last night, but I can’t resist noting: “This is a known arms race between identity-thieves, including elf-replicating AI/LLMs who lack all sentience, and societies.”

Maybe the problem is just too many elves?

13

KT2 06.13.23 at 2:57 am

Eric, “Counterfeit digital persons:” doesn’t go far enough. “the Oxford transhumanists and their financial and intellectual allies are betting on this even if it leads to human extirpation in a successor civilization.”

Here it is. “Counterfeit Communities – multiple counterfeit humans make for a convincing counterfeit community.”
Proto successor to civilization? Simvilization.

Kevin @8 says “It’s not just a bit more efficient.”.

It is another category entirely… “these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. ”
*

“Generative Agents: Interactive Simulacra of Human Behavior”

Joon Sung Park, Joseph C. O’Brien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, Michael S. Bernstein
[Submitted on 7 Apr 2023]

“Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools.

“In this paper, we introduce generative agents–computational software agents that simulate believable human behavior. ”

“Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day.

“To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior.

“We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty five agents using natural language.

“In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time.

“We demonstrate through ablation that the components of our agent architecture–observation, planning, and reflection–each contribute critically to the believability of agent behavior.

“By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.”

https://arxiv.org/abs/2304.03442

We will need a new word for Fooled.
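(For readers curious how the architecture in that abstract hangs together, here is a minimal sketch of the memory-stream idea: observations accumulate, are scored for retrieval, and are periodically condensed into higher-level “reflections.” The function names, the `llm` callable, the word-overlap stand-in for embedding similarity, and the fixed importance default are illustrative assumptions, not the paper’s actual code.)

```python
import time

def relevance(query, text):
    # Crude stand-in for the embedding similarity the paper uses: word overlap.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

class MemoryStream:
    """Sketch of a generative agent's memory: observe, retrieve, reflect."""
    def __init__(self, llm):
        self.llm = llm                 # any callable mapping a prompt to text
        self.records = []              # (timestamp, importance, text) triples

    def observe(self, text, importance=5.0):
        # The paper has the LLM rate importance 1-10; a fixed default stands in here.
        self.records.append((time.time(), importance, text))

    def retrieve(self, query, k=5):
        now = time.time()
        def score(rec):
            ts, imp, text = rec
            recency = 0.995 ** ((now - ts) / 60)   # exponential decay per minute
            return recency + imp / 10 + relevance(query, text)  # equal weights
        return [t for _, _, t in sorted(self.records, key=score, reverse=True)[:k]]

    def reflect(self):
        # Condense recent memories into a higher-level insight, which re-enters
        # the stream as a new, more important memory (the paper's "reflection").
        recent = "\n".join(t for _, _, t in self.records[-20:])
        insight = self.llm("What high-level insight follows from:\n" + recent)
        self.observe(insight, importance=8.0)
```

An agent loop would then interleave observe/retrieve/reflect with an LLM call that plans the next action; on the paper’s account, the believability comes from the accumulated memories rather than from any single response.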

14

David in Tokyo 06.13.23 at 7:08 am

JimV said:

“AlphaGo’s training caused it to decide on a novel move”

Well, no. AlphaGo’s Monte Carlo Tree Search allowed it to find a novel move. The MCTS had some help from the pattern matching, but it was the search that found that the move was (within the limits of its search time) a profitable move. (Inversely, if it really was a novel move, the training wouldn’t have noticed it, of course.)

Without MCTS, the neural nets find locally pretty moves, but get easily beaten by even (a particular (I did the test)) pre-NN Go program. With MCTS, said pre-NN program* gets trashed (with similar search times). (This is with KataGo, a second-generation reimplementation of AlphaZero using crowdsourced computing for the training.)

Yes, I’m grinding an inconsistent axe: I think Go and Chess programs are seriously kewl (that is, that they are seriously kewl computer science), and the rest of current AI is pretty much all BS. YMMV, of course.

*: Tencho Igo 7, known in English as Zen 7. It’s actually quite good, and on a fast peecee (it doesn’t use a GPU) it plays at mid-level professional strength. Albeit with a pre-AlphaZero style.
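(To see the division of labor being described, here is a minimal, generic UCT-style MCTS skeleton: a sketch, not KataGo’s or AlphaGo’s actual code. The `Game` interface is an assumption. In AlphaGo-style engines, the random playout below is replaced or guided by the neural nets’ move priors and value estimates, which is the point above: the nets propose, the search verifies.)

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}                # move -> Node
        self.visits, self.wins = 0, 0.0

def uct_child(node, c=1.4):
    # Balance exploitation (win rate) against exploration (rarely tried moves).
    return max(node.children.values(),
               key=lambda ch: ch.wins / ch.visits
                              + c * math.sqrt(math.log(node.visits) / ch.visits))

def mcts(game, root_state, iterations=1000):
    # `game` is an assumed interface: legal_moves(s), play(s, m), is_terminal(s),
    # result(s) -> reward in [0, 1]. (A real two-player engine would also flip
    # the reward's perspective at each ply; omitted to keep the sketch short.)
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down while every legal move has been tried.
        while node.children and len(node.children) == len(game.legal_moves(node.state)):
            node = uct_child(node)
        # 2. Expansion: add one untried move as a new leaf.
        untried = [m for m in game.legal_moves(node.state) if m not in node.children]
        if untried:
            move = random.choice(untried)
            child = Node(game.play(node.state, move), parent=node)
            node.children[move] = child
            node = child
        # 3. Simulation: random playout to the end of the game.
        state = node.state
        while not game.is_terminal(state):
            state = game.play(state, random.choice(game.legal_moves(state)))
        reward = game.result(state)
        # 4. Backpropagation: update statistics on the path back to the root.
        while node is not None:
            node.visits += 1
            node.wins += reward
            node = node.parent
    # The move searched most often is the most trusted one.
    return max(root.children, key=lambda m: root.children[m].visits)
```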

15

David in Tokyo 06.13.23 at 7:11 am

Kevin writes:

“Now consider telling an AI to find all the grandmas who have grandchildren in college. Find out as much as you can about them both, call grandma and get her to transfer some money.”

This scam works fine without AI, and has been used with horrific efficiency here in Japan for years…

16

Eric Schliesser 06.13.23 at 8:11 am

Thank you for catching that, JakeB.

17

Trader Joe 06.13.23 at 2:37 pm

I’m not that worried about duping grannies or other scams. Most people are naturally suspicious around money matters and those that aren’t should be. There are always dupes.

More concerning is the sort of deep-fakery that just keeps getting better and better and harder and harder to detect. Imagine someone anonymously posts video evidence of you beating your children or killing your ex – all faked using AI. You may be able to disentangle yourself, but only at great cost of time, money, and reputation.

Imagine such a video was posted of some celebrity – we might all gawk and shake our heads when he/she claims it’s “not true,” but we’ve been trained to trust our lying eyes and not trust protests to the contrary.

Bit by bit we already trust so little. AI inevitably erodes the remainder at a quicker pace. If there is any silver lining, it might be to force more interaction back to the actual real-world on the assumption that literally everything on-line is either fake or false.

18

J, not that one 06.13.23 at 5:02 pm

I think it’s very interesting to ask what kinds of behaviors (in the broad sense) can be automated/simulated using what kinds of software-hardware combinations. What we’re getting instead, unfortunately, is a jump straight to “this system is obviously qualitatively the same as a human but better at everything,” without the evidence needed to support that.

Dennett says something interesting from the opposite direction, which is that he claims that all humans and human culture do is manipulate words and sentences, essentially randomly. He doesn’t support that either. I’ve run into lots of people online (some of them coming at it from the word end, not the tech end) who think that’s essentially true (Saussure and all that). Because they already believe that, they don’t seem to have a lot of motivation to consider that what the LLMs are doing isn’t similar to human thought. Instead, they seem to prefer to use the apparent existence of a general AI (which they think they’ve seen) to present a fait accompli on the question of the nature of human thought.

19

MisterMr 06.13.23 at 10:37 pm

Pedant note: Saussure didn’t claim that people just manipulate words at random; he claimed that words define each other and that each represents a different slice of the nonverbal world. The obvious example is colors: there is a continuum of colors, but we use different words (pink, red, orange) to define different slices of said continuum. The words define each other in the sense that red ends where pink begins, but they still refer to different chunks of nonverbal reality.

20

rivelle 06.14.23 at 7:25 am

Despite appearances, Hubert Dreyfus’s “What Computers Still Can’t Do: A Critique of Artificial Reason” remains a valuable statement of the necessity that intelligence be embodied in the Heideggerian sense of the term. But Dreyfus failed to anticipate that it is precisely the disembodiment of large language models that makes them potentially advantageous over the intelligence of embodied entities.
Contemporary large language models (LLMs) operate in accordance with algorithms that allow them to perform tasks which humans cannot. Often these algorithms are generated by the computer itself.

In Niklas Luhmann’s terms, LLMs have become a functionally differentiated social-system.

It is this implacable aspect that so frightens Dennett and the signatories of “The Statement on AI Risk”.

It may help to clarify terms. Computers are not embodied in Dreyfus’s sense and hence are not intelligent. Interaction between humans and computer-systems is not an interaction between human intelligence and AI but should be reconceptualised as a communication between functional systems. LLMs should be thought of in terms of Artificial Communication.

The link below is to MIT Open Access which makes selected books freely available.

“A proposal that we think about digital technologies such as machine learning not in terms of artificial intelligence but as artificial communication.
Algorithms that work with deep learning and big data are getting so much better at doing so many things that it makes us uncomfortable. How can a device know what our favourite songs are, or what we should write in an email? Have machines become too smart? In Artificial Communication, Elena Esposito argues that drawing this sort of analogy between algorithms and human intelligence is misleading. If machines contribute to social intelligence, it will not be because they have learned how to think like us but because we have learned how to communicate with them. Esposito proposes that we think of “smart” machines not in terms of artificial intelligence but in terms of artificial communication.
To do this, we need a concept of communication that can take into account the possibility that a communication partner may be not a human being but an algorithm—which is not random and is completely controlled, although not by the processes of the human mind. Esposito investigates this by examining the use of algorithms in different areas of social life. She explores the proliferation of lists (and lists of lists) online, explaining that the web works on the basis of lists to produce further lists; the use of visualization; digital profiling and algorithmic individualization, which personalize a mass medium with playlists and recommendations; and the implications of the “right to be forgotten.” Finally, she considers how photographs today seem to be used to escape the present rather than to preserve a memory.”

https://direct.mit.edu/books/book/5338/Artificial-CommunicationHow-Algorithms-Produce

21

rivelle 06.14.23 at 7:26 am

On the question of frauds and scams in the early decades of our 21st century, we currently operate within, and communicate via the means of, a series of functional social-systems (Niklas Luhmann) that operate in accordance with algorithms that are more powerful, more over-arching, implacable and autonomous than heretofore.

This is why we should abandon thinking in terms of Artificial Intelligence and think in terms of Artificial Communication instead.

Dennett’s dystopian nightmare of grandmothers being scammed by scammers who are themselves prisoners of an inhuman system that operates in an over-arching but implacable realm was recently depicted in Cyril Schäublin’s movie “Those Who Are Fine”.

https://europeanfilmawards.eu/en_EN/film/those-who-are-fine.11532

“The Zurich depicted here seems devoid of air and light, a cold and dystopian society in which people go about their daily lives with robotic regularity. The modernistic architecture is similarly forbidding, providing a bleak environment that DP Silvan Hillman often shoots from high above to accentuate its large scale and lack of warmth. Policemen are shown stopping people to conduct random bag checks as a result of a bomb threat, but you get the feeling that it’s simply a matter of routine. The dialogue more often than not consists of characters stiltedly exchanging details about bank accounts (how Swiss!), internet passwords, data plans and other technical minutiae. It’s as if humanity had been rendered digital, reduced to a series of 0’s and 1’s. Even when people try to connect on a more intimate level, such as describing a film they’ve recently watched, they’re unable to remember something as basic as its title.”

https://www.hollywoodreporter.com/movies/movie-reviews/who-are-fine-1100542/

22

KT2 06.14.23 at 7:31 am

COUNTERFEIT PEOPLE. DANIEL DENNETT. (SPECIAL EDITION)
https://m.youtube.com/watch?v=axJtywd9Tbo

23

Alex SL 06.15.23 at 12:07 am

J, not that one,

he claims that all humans and human culture do is manipulate words and sentences, essentially randomly

I did not think that Dennett would believe something like that; it is disappointing if so. But oh yes yes yes, there are so many people who have, in this current AI hype cycle, supported variations of the idea that our brains are also merely probabilistic word-association algorithms or suchlike.

In the present case, the suspicion suggests itself that this is being claimed to support the idea that LLMs, which I have recently seen called “scaled-up autocomplete”, are already AGI, and the singularity is near (repent!). But it is merely one aspect of a broader set of beliefs that has been widespread in the pseudo-rationalist community for years now: free will, the mind, our personality, us making decisions, all of these are supposedly “illusion”. We don’t have free will because there is physical cause-and-effect in our brains. Our mind or personality are illusions because we consist of atoms interacting, and/or because our brain consists of different parts. We don’t make decisions because a brain scan shows activity a fraction of time before we are aware of having made a decision. “We are nothing but an LLM ourselves” is part and parcel of this same complex of ideas.

I find all of this extremely exasperating and would like to see more push-back. The underlying cognitive errors seem to be:

(1) The fallacy of composition – instead of “the mind is an illusion because the brain has different parts that aren’t individually minds” one could just as legitimately say that “an airplane is an illusion because it consists of different parts that aren’t individually airplanes”. Only in the context of neuroscience, some people think they have had a revolutionary insight when they make that basic mistake. Another way of putting it is that there is confusion about what “me” or “us” is. The conclusion is that “I” am not making a decision because my brain makes it for me, but this only makes sense if “I” am not my brain but, say, an immaterial soul, which the very same people making this argument don’t believe exists in the first place. In reality, of course, I am my body, so I am making decisions, so me making decisions isn’t an illusion. Again an analogy suggests itself: “I am not walking to the supermarket, that is an illusion, because it is only my legs that are walking to the supermarket”.

(2) A term that was invented to describe an empirically observed process or property was claimed at some point by some theologian to be only explainable by god/immaterial soul, and then the pseudo-rationalist concludes that because god/soul doesn’t exist, the empirically observed process or property must be an illusion. Free will, for example, has a perfectly non-magical definition of being able to act on one’s internal preferences free from coercion or incapacitation, and it (or its Latin counterpart ‘voluntary’) is a term that is needed to describe this important distinction, but the incompatibilist throws that particular baby out with the bathwater.

24

steven t johnson 06.15.23 at 2:51 pm

1) The notion of free will is the fallacy of division in practice. The personal experience of conscious thought is imagined to be the choosing, when it is only a part. I think, ‘I need to go to the supermarket,’ and therefore I move my legs. But experiments have found movement begins before awareness of choice, decisively refuting the notion that free will is what we choose. Believers in free will will insist that a judge with low blood sugar imposing harsher sentences is exercising free will. The empirically observed experience of the judge’s conscious choosing contradicts the reality that blood sugar levels in some sense choose.

By the way, the general agreement that there is a hard problem of consciousness constitutes the secular version of god/immaterial soul, confronting the epistemological limitations of (mere) neuroscience. All this kind of philosophizing strikes me as pseudorational, a higher grade of rationalizing. The goal apparently is to morally justify legal punishments, framing all determinist arguments for a different approach as exceptional, putting the burden of proof where it should not be.

2) The supposedly non-magical definition of free will as being able to act on one’s internal preferences free from coercion or incapacitation is not the one I see used by people who believe in free will. People who believe in free will firmly believe that they can impute internal preferences and do not need to analyze determinist hypotheses about the causes of choice. For instance, a football coach cannot be hypothesized to be unconsciously afraid of losing his job/identity and so unable to believe allegations of sexual assault by his own son; it has to be an individually rational choice to follow the bully’s code. The very notion of free will rules out even that train of thought. Free will is false framing.

Further, the notion of coercion has no assigned meaning. The very premise of free will renders all but the grossest forms of coercion unidentifiable. This confusion I think is highly functional for justifying all manner of social, economic and political coercion that don’t involve open acts and threats of violence, but maybe that’s just me. Fear of violence as coercion seems in practice to reduce itself to a status competition, where a sympathetic victim is deemed coerced by an unsympathetic villain.

Even further, the notion of incapacitation again is framed by the notion of free will as the normal capacity. Incapacitation by definition is something that needs to be demonstrated, justified. The thousands of addicts are correctly jailed, according to the dogma of free will, precisely because the notion of incapacitation has no intrinsic content. I think the critique of society needs to include the explanation of capacity. But again, maybe that’s just me.

This point ends up coming back to fears of AI. It seems to me that evaluating threats from AI depend very much on some understanding of Natural Intelligence. The free will style of thinking can see the mind as a program, inimitable, and the artificial souls of AI a metaphysical/ontological menace. Or the free will style of thinking can lean more on the implicit supernaturalism and decree AI can’t solve the hard problem of consciousness, ergo AI can’t be truly intelligent. It’s true this blatantly repeats the fallacy of division, whereby our common experience of consciousness is deemed to be the whole. But then, I think free will is basically a dead end, avidly pursued because in the end you don’t want to go anywhere different.

25

Alex SL 06.16.23 at 8:00 am

steven t johnson,

1) No, you are making exactly the mistake that I was talking about. Even if “experiments have found movement begins before awareness of choice”, who exactly has made the choice? Bob in the next-door office? My shoe? No, it was my brain, and that is part of me.

Me = my entire body, including the parts of the brain that make a decision. If my body sleeps, I sleep. If my body decides between two flavours of ice cream, I decide between two flavours of ice cream. If you step on my foot, I will say, “ouch, you stepped on my foot” instead of “look at that, you stepped on the foot of the body I am using as a vehicle, but no biggie, because that isn’t actually me”.

The other angle here is usually trying to make the terms choice and free will disappear in a puff of smoke: everything is cause-and-effect, everything is predetermined, thus choices are illusions. And those who engage in this armchair reasoning have to hope that nobody notices that even if there is only cause-and-effect (which I agree with! or, well, that plus stochasticity), there is still a major difference between me being asked if I wanted a lift to work instead of taking the bicycle and a rock or a cat being asked the same question, or between me being asked if I wanted to donate $100 and me being asked the same question with a loaded gun to my face.

2) I have no idea what you are talking about, sorry to say. I see two kinds of people who use the term free will. The first use it to mean something contra-causal and magical to support their religious or libertarian beliefs. This position is nonsensical, because cause-and-effect and randomness are the only two options, and they don’t actually mean randomness, so we are back to cause-and-effect, and their hope of getting away from the Problem of Evil or environmental determinants of social and economic outcomes, respectively, is dashed.

The second are compatibilists like me who would like to have terminology to describe the difference between me being asked if I wanted to donate $100, me being asked the same question with a loaded gun to my face, and me being asked the same question while inebriated out of my wits, thank you very much, because that is a major difference, and if ‘choice’, ‘voluntary’, and ‘free will’ are abolished for being some kind of thoughtcrime, we will have to reinvent terms for that distinction. I might add that in my native language, the term for voluntary is “freiwillig”. I would find it unnecessarily convoluted to say “I did this out of my own volition, without coercion” in German without making reference to the term free will, but I appreciate that English speakers have it a bit easier to mystify the Germanic term because they have the Latin translation as a less mystified alternative.

The problem you describe in your fourth paragraph is one of black-and-white, binary thinking. Of course we are all to some degree coerced in our lives (having to earn a living, avoiding social opprobrium, etc.), but that doesn’t mean everything falls over. To use another analogy, there was no nanosecond in which I seamlessly turned from a toddler into an adult; the process is gradual, and the 18th birthday is entirely arbitrary as the date of adulthood. Nonetheless I hope you agree that it is a big difference whether you entrust two-year-old me with a chainsaw, or forty-year-old me. There is probably also a name for the fallacy of believing that because there is a gradient, everything is the same?

I don’t know if AI can be truly intelligent. I lean towards yes, but the most compelling argument to the contrary that I have read is that our intelligence may be substrate-dependent and might not be replicable without replicating all the downsides that come with biological neurons. But that is still speculative, of course. What I do know is that the current generative models that have caused this hype cycle are not really intelligent. “Scaled-up autocomplete”: there cannot be understanding in our sense simply because of the way they are trained, to associate words with each other, or to associate image elements with words, nothing more. Maybe next hype cycle will be closer.

26

MisterMr 06.16.23 at 8:27 am

@steven t johnson 24

“The empirically observed experience of the judge’s conscious choosing contradicts the reality that blood sugar levels in some sense choose.”

Yes, but the judge’s “conscious” is not the same thing as the whole judge; blood sugar levels are also part of the judge. The conscious is just the mental representation of oneself, the self-image; it cannot “choose” anything, it is the whole of the person that chooses.

This is probably entirely irrelevant for the thread but sounds like a cool discussion.

“By the way, the general agreement there is a hard problem of consciousness constitutes the secular version of god/immaterial soul, confronting the epistemological limitations of (mere) neuroscience.”

I think it is more the distinction between subjective perception and seeing things from the outside (neuroscience). For example, if I taste a muffin I will have some subjective perception of “sweet”, and this perception can be explained from the outside as a chemical reaction, but our subjective sensual experience comes before the chemical explanation and the two are very different. The same thing happens with the perception of consciousness (that is a word with many meanings, so this creates a lot of confusion, but the consciousness of the hard problem is not the same thing as Freud’s conscious/pre-conscious, for example).

More generally though, I think that “free will” is more a moral/legal concept.
At bottom the mechanism is this: some behaviours are socially disapproved, so people are punished for them (e.g. cross-dressing, stealing, etc.), while others are rewarded.
This punishment/reward mechanism has the practical purpose of pushing people into doing or not doing something, but it only makes sense if people can act on it; so, for example, if I dress the wrong way because of coercion there is no purpose in punishing me, hence that behaviour is not attributed to my “free will”.
So it is more a juridical thing than a descriptive thing (from a descriptive perspective, if I choose, I choose based on my preferences/knowledge, which are not caused by me, so there isn’t really a distinction between free/unfree will, but just a problem of defining what constitutes “me” and what doesn’t).

27

steven t johnson 06.16.23 at 3:29 pm

“A fallacy of division is an informal fallacy that occurs when one reasons that something that is true for a whole must also be true of all or some of its parts.” In this case, the daily experience of making choices is asserted to be true of the conscious part of the mind, which supposedly possesses a faculty called free will. Even more, attributing the usual experience of making choices to reason selecting among wants and among ways to satisfy the ones chosen (a task carried out consciously only in its last stages) mistakes the net experience for the whole.

The whole experience of making choices, again, is a conscious one. This whole experience is directly opposed to crude notions of predetermination. Personally I do not believe in predetermination of choices, not least because I believe there is a great deal of randomness involved. I do not even believe the brain is a computer so finely perfected by God/Evolution that it will necessarily operate the same way every time, any more than an athlete’s body will perform a specific action the same way. Even more to the point, the actions of individuals are in the end inseparable from the social context. You can draw a figure suspended in midair, but there is essentially no one who is not acting out a social role. The sudden appearance of the term predeterminism gives away the game, I think.

The problem with compatibilism is that it exists solely as a way to justify the status quo. Drug addicts are criminals, because free will. Etc. The scientific evidence is that much of our decision making is not conscious. Notions of free will on which it is conscious, and therefore cannot be “predetermined”, are false. The gut reaction that people cannot go unpunished without social apocalypse is just moral panic. It’s akin to the fear that without a king society will devolve into hell on earth, or some such.

The fact that people are not rationally deterred by the prospect of punishment is an inevitable outcome of the fact that being deterred is not solely a conscious choice! That’s why punishment as deterrence is not necessarily unjust. But it’s also why escalating punishment to increase deterrence is fundamentally irrational, i.e., in my opinion unjust. Compatibilism, by contrast, says that death sentences for drug offenses are an entirely rational, hence just, “solution” to a drug crisis (real or merely imagined). Any given compatibilist may feel outraged at a Duterte, or a would-be Duterte like Trump. But the would-be humane compatibilist needs to find ad hoc justifications for such presumed excesses. In truth, the rational course was always to find justifications for the punishments. Compatibilism is why trying juveniles as adults is so popular.

MisterMr@26 says free will is “more a moral/legal concept.” I mildly disagree. I think in practice it is only a moral/legal concept. I think in explaining human behavior, normal or aberrant, it is misleading or useless.

28

Alex SL 06.16.23 at 9:49 pm

No, as I have now stated repeatedly, I assert the ability to make choices of the person as a whole, not of only a part of their mind. That is simply what words like choice, free will, and I/me mean to everybody in everyday language, because all those words were invented that way in the first place. When people had need to invent terms like ‘you’, ‘I’, or ‘person’ for communication, they did not mean a part of the brain (not least because most of them would have had no idea what that organ was good for anyway). When people say ‘choice’, they mean that person over there picking out of several options the one that appeals to them the most. When people say that somebody has done something voluntarily (freiwillig), they mean that they were unencumbered in their choice. That is it, that is how that works.

Apart from that, I find it frustratingly difficult to understand what you are even trying to argue. You say that there is no predetermination because of randomness, and then that our actions are inseparable from social context, which only makes sense if you assume that the social context significantly predetermines said actions. Which is it? I assume it must be the latter because of how you then continue, and because I can also only assume that you are successfully predicting other people’s behaviour every day, which only works if in real life you accept predetermination at least as a heuristic (e.g., if I give the shopkeeper money, she is very likely to allow me to walk out with the goods I am paying for, and she is rather unlikely to start singing the national anthem, throw a ball at me, or start washing her hair).

And you are imputing a lot. Please point me to where I have tried to justify the status quo of our social order, where I have argued that drug addicts are criminals, where I wrote that our social order will collapse without vindictive punishments, where I have argued for “escalating punishment” including a death sentence for drug offenses, and where I said that juveniles should be punished as adults. Spoiler alert: none of this is true.

Incompatibilists and compatibilists agree in their perception of what happens and in the moral implications of that, e.g., that there is no magic element to decision making and that people are shaped by their environment, genes, and other factors beyond their control, hence extenuating circumstances and leniency. The only difference between the two is that for ideological reasons the former want to eradicate the use of terms that are needed to describe empirically observed distinctions, and the latter would like to continue to be able to describe those empirically observed distinctions.

In effect, the incompatibilist position is that the plebs cannot be trusted to use terms like free will or choice, because if given such terms the feeble-minded will inevitably believe in magic or, as argued here, that drug addicts should be put to death. I see no evidence whatsoever for this; people who have not already consciously staked out a position in this debate can be brought to take either stance depending on how a survey question is phrased. And it is also simply a weird way of going about things. Why not simply explain that there is no magic? After all, when science started to understand that life is metabolism plus the capacity to reproduce, instead of some kind of spark inhabiting a body, the relevant researchers somehow managed to refrain from trying to outlaw the word life as an ‘illusion’. They simply explained that they had found out that life is metabolism plus the capacity to reproduce, and we kept that word as a term still useful to describe an important difference between the moss and the rock it grows on.

29

steven t johnson 06.17.23 at 2:49 pm

“When people say that somebody has done something voluntarily (freiwillig), they mean that they were unencumbered in their choice. That is it, that is how that works.”

That is not how it works. When people say they have free will, the default is that all choices are conscious choices and that the will is unencumbered. Coercion gets little more than lip service (and is generally limited solely to the grossest of physical threats from others, if that); diminished capacity gets even less. The reason compatibilism is wrong is that this implied default is wrong: it is the use of punishments that needs to be rationalized, in the sense of conforming means to ends, and of revising ends to conform to a necessity that has been genuinely demonstrated. A moral panic about apocalypse if people can’t be punished is not such a demonstration.

By the way, “life” may have been reduced to metabolism plus the ability to reproduce, but “free will” has not been reduced in daily practice to choices made free from coercion or impaired capacity. If compatibilists really believed that, then the many drug addicts in jail would be a serious issue. They are not, any more than the mentally ill in jails are. You don’t have to say specifically that you support this. When you claim people have free will, the presumption is that every drug addict has free will and thus is justly punished by imprisonment for their crimes.

I say the correct presumption is that the only free will that matters in ethics is the ability to choose what you want, but you don’t have that. At best, your compatibilist theory ignores this, making it useless. When you start thinking about why people want what they want, you’ve left compatibilism behind. The morally vicious claim that people are guilty for choosing their feelings is a perfectly logical reading of compatibilism.

Again, the compatibilist default is always that your conscious self can always choose, and that ‘it’ must therefore be a faculty, a mistaken assumption that the whole conscious experience is a faculty, as if God/Evolution designed the mind/body/brain etc. for moral agency in the compatibilist sense. Punishments can be justified by this theory. The willingness to admit occasional corrections to the grossest abuses it generates denies the real need: to reform the whole legal/moral system. A compatibilist trying to find the sweet spot, so that due weight is given to coercion that isn’t a gun or a fist, or to a lack of capacity, when the presumption is that everyone has free will (that is, the capacity to choose!), is like a libertarian trying to find a loophole for the grossest cases of usury or price gouging.

“In effect, the incompatibilist position is that the plebs cannot be trusted to use terms like free will or choice, because if given such terms the feeble-minded will inevitably believe in magic or, as argued here, that drug addicts should be put to death.” Duterte and Trump are not plebs. Lots and lots of people do believe in magic, though most call it religion, and others philosophy. I don’t think that has much to do with being lower class, except perhaps that comfortable people feel less need for magic to save them. Indeed, I think the use of “free will” by the patricians is a reason why incompatibilist complacency is mere quietism, which I find objectionable.

The compatibilist position is that it is essential to use the same term, free will, as people who do believe in magic and who do believe in escalating punishments as the default cure for unacceptable behavior. The only reason I can see for this is that determinism is regarded as predeterminism, which is an insult to vanity. Well, the notion that social arrangements can coerce people is an argument for people organizing to change those arrangements, and those who object to majority rule (or to the possible violence needed to renovate society) might object to such thinking and find the notion of individual freedom a sovereign cure for such wrong thinking.

30

Alex SL 06.18.23 at 2:03 am

steven t johnson,

There is no sense in continuing this. Apart from this being increasingly off-topic, I have repeatedly described the position that I and compatibilists more generally hold and argue for, and your reply is always to the effect of, “actually, no, you don’t argue that, here is what you really argue, because I know it better than you do, and it is the same as what the magic-believers and libertarians argue”. You are doing the equivalent of what a religious fundamentalist does when they claim that every human actually knows that god exists and therefore atheists must be lying when they profess not to believe in his existence.

Unfortunately, to me this isn’t entirely unprecedented. Several years ago, Jerry Coyne had an obsession with this topic on his WEIT blog, and he and his incompatibilist followers did the exact same thing: they consistently refused to understand that the few compatibilists in the comments agreed completely with them on physics and ethics but did not find it helpful to make certain terms taboo. Even if complete determinism is correct, and despite us being the product of our environment, there is still a cognitive difference between a pebble and a human, and I/consciousness/mind/choice aren’t illusions merely because they aren’t concrete physical objects inside the brain or magic; instead, they are emergent properties of complex higher-level systems.

Point is, there is no arguing with somebody who simply ignores what others are saying and pretends to know what they think better than they do themselves. Even if one assumes that the partner in the conversation has made a mistake in their reasoning, there needs to be at least a minimum level of good-faith commitment to trying to understand what they are actually saying.

31

Peter Erwin 06.18.23 at 12:31 pm

“And, in fact, as sovereignty shifted to parliaments and people at the start of the nineteenth century, the death penalty for forgery and counterfeiting currency was abolished (and the penalties reduced over time). I suspect this is also due to the realization that where systematic forgeries are successful they do meet a social need…”

I think that’s a slightly dubious argument, because it suggests that abolishing the death penalty for counterfeiting was some sort of special event with an explanation specific to counterfeiting. But the 19th century saw the abolition of the death penalty for many crimes (because there were many crimes that had carried the death penalty).

E.g.

“By the late 18th century the English legal system, often referred to as the ‘Bloody Code’, established over 220 crimes in Britain that could attract a death sentence, including cutting down a tree, stealing from a rabbit warren and being out at night with a blackened face…. During the early 19th century, Britain removed the death penalty for a wide range of crimes, including pickpocketing, forgery and rape. By 1861, the number of capital crimes had been reduced to five, including murder, treason, espionage, arson in royal dockyards and piracy with violence.”

32

steven t johnson 06.18.23 at 1:27 pm

The compatibilist position on physics and ethics has always ruled. Incompatibilism has never been successful in imposing its alleged predetermination, the position imputed to it by compatibilists. Thus, the current state of affairs demonstrates decisively the real ethics of compatibilists. The claim that compatibilists “really” believe in limiting punishments to allow for coercion and incapacity is refuted by what they’ve done with their domination of the discourse. You guys have had it your way. It is bad faith to argue that what you say you want is what you really mean. And it is preposterous to insist that making words taboo (a genuine misrepresentation, in my view!) doesn’t help when no one has ever succeeded in making the words taboo.

Issues of culpability are routinely confused by the motte-and-bailey handling of the very notions of coercion and incapacity. The false framing obscures the issues. Misplacing the burden of proof by pretending that conscious choice is the norm, so that extraordinary evidence to the contrary must be presented, is unjust; apparently, as they say, that is a feature, not a bug.

The irrelevance of the compatibilists’ alleged belief in physics and chemistry and biology is demonstrated by the compatibilism of openly religious predestinarians, believers in predeterminism. They too uphold the justice of punishments predicated on the notion of free will, even as they explicitly deny it! Compatibilism is that commitment to the status quo, so that believers in a supernaturally self-determining consciousness, a.k.a. free will, are compatible in the courtrooms with people who give lip service to physics and chemistry and biology. (All parties in every dispute of any kind have different perceptions of what the status quo is; the minor squabbles that result are not clashes of principle.)

33

Alex SL 06.19.23 at 12:16 am

Yes, I get it now: the US death penalty was introduced by a compatibilist philosopher, and any laws enshrining rehabilitation of criminals and decriminalisation of drug use were only possible after everybody in the relevant countries had banished “free will” and “choice” from the public vocabulary. That’s what the history books will show.

However, if I were, like you, inclined to believe that it is terminology that makes people do bad things, I would worry considerably more about what callousness or atrocities the belief complex this discussion started with could lead to in the wrong hands: that human minds are themselves nothing but large language models or suchlike, that (because we learn from other humans) we are all just plagiarism machines like generative AI, etc.

Because this has two sides: the conclusion that AGI is already close and we need to discuss AI personhood and AI rights, and the conclusion that humans aren’t actually that big a deal if guessing the next few words from a prompt is all there is to sentience. And to be frank, the former may already be concern enough all by itself, given how corporate personhood turned out in the USA.

34

TM 06.19.23 at 9:22 am

Maybe a side note, but it seems to me that the argument “if there is no free will, then punishment cannot have a deterrent effect” is not logically tenable. Norms and laws are part of reality. They have causal effects on people’s behavior, just as the blood sugar used as an example above has causal effects on a judge’s behavior. This is obviously not an argument in favor of or against a specific legal system, just an argument that the design of the legal system matters, whether or not we interpret human behavior as expressing free will.

35

Fake Dave 06.19.23 at 12:51 pm

I’ve always found these free will discussions to be deeply tedious. There’s this insistence on putting profound moral weight on having the proper answer to the question (or on using the question to attack others’ morality) that presumes far too much of a connection between practical ethics and abstract ontology. What free will “means” ethically is a different question from whether it exists. We can posit that human behavior is deterministic and irrational and still agree that people will undertake actions and those actions will have consequences. If other deterministic and irrational people want to influence those actions or avoid certain consequences, they might well decide to provide the appropriate stimulus to effect that result. Calling it “correction” or “reprogramming” instead of penance or punishment doesn’t mean the cell will be any nicer. If people are acting in response to much deeper biological/environmental forces than can be conceived of by mere thought, then it does not follow that changing the language of those thoughts will have any effect on our behavior (much less our morality) other than to replace terms some people consider insufferably self-righteous with ones others see as obnoxiously pseudo-intellectual.

I’m not saying the rational choice crowd or the “freedom to sin” people are right about anything, but attacking them as immoral while rejecting people’s freedom to make moral choices is nonsense. You can’t even call them hypocrites anymore because that presumes a “proper” connection between thoughts and behavior that is impossible if we’re all just doing what we’re going to do and justifying it afterward. If free will is illusory, then so is consistency and integrity. We’re all just doing what we do until circumstances lead us to do something else. Our whole existence is hypocrisy.

36

Alex SL 06.19.23 at 10:05 pm

Fake Dave,

“We can posit that human behavior is deterministic […] and still agree that people will undertake actions and those actions will have consequences.”

That is the approach called compatibilism in a nutshell. The rest of the discussion is whether terms like free will or choice should be expunged from the vocabulary because the uneducated hoi polloi will deterministically support the death penalty when given these terms but support rehabilitation if these terms are withheld from them; whether emergent processes are correctly described as “illusions” because they are, well, emergent processes instead of concrete things or magic; and whether something doesn’t exist because it consists of parts. The triviality of those errors of reasoning is masked by the fact that they are being publicly argued by some rather high-profile people.

But despite this debate being tedious, I remain convinced that conceptual understanding of our minds and decision making is important, especially in the context of current claims about AI. The OP is about how easily people see intentionality or sentience in chat bots and LLMs. What happens next (when they realise that they have been talking to an AI) may depend to a great degree on what they have been primed to think the human mind is. There are influential singularitarians out there arguing right now that securing the existence of trillions of hypothetical, future, simulated minds – in other words, counterfeit digital persons – morally outweighs the welfare of billions of humans today.

37

TM 06.20.23 at 8:41 am

I found Bertrand Russell’s analysis of the free will problem illuminating. I cannot find his essay in “Philosophical essays” online but an excerpt from “Our Knowledge of the External World as a Field for Scientific Method in Philosophy” can be found here:
https://hackernoon.com/the-notion-of-cause-and-the-problem-of-free-will

38

KT2 06.20.23 at 11:38 pm

Bertrand Russell’s
“Philosophical essays” 
https://bertrandrussellsocietylibrary.org/br-pe/br-pe.html

Via:
BERTRAND RUSSELL 
Online Books and Articles 

“This is an online collection of over one hundred books and articles by Bertrand Russell.

“For a complete list of Russell’s books and articles see our online Russell bibliography. We also maintain a chronology of Russell’s life and an introduction to his analytic philosophy.”

https://users.drew.edu/jlenz/brtexts.html
