What went wrong with the Silicon Valley right

by Henry Farrell on August 20, 2024

“To promote open inquiry and free, market-based technological progress, you need an open society, not one founded on the enemy principle. The understandable desire to escape criticism, misunderstanding, and the frustrations of ordinary politics does not entail the radical remaking of the global geoeconomic order to confound the New York Times and its allies. The cult of progress and the technocapital singularity are Hayek’s “religion of the engineers” with the valences reversed—so that markets and AI rather than the state become the objects of worship. Over the last few years, Silicon Valley thinking has gotten drunk on its own business model, in a feedback loop in which wild premises feed into wilder assertions and then back. It’s time to sober up.”

Some critics of Silicon Valley might find the piece not critical enough, but it is not written for them. The intuition behind it, correct or incorrect, is that a better Silicon Valley right is possible – and a piece explaining why in an uncompromising but not completely inimical way, written for a journal like American Affairs, is more likely to push a few people in this direction than a jeremiad. Two minor corrections. One error crept in through editing – the Dread Pirate Roberts’ efforts to hire hitmen were not what led to the Silk Road’s demise. The other was present from the beginning – “Balaji”’s surname is Srinivasan, not Srinavasan. And if you want more on the “technocapital singularity,” this piece for the Economist and this, right here on this Substack might be helpful. You’ll find little enough in the American Affairs piece, which mostly focuses on the politics of business models.

Enough – read the article!

{ 23 comments }

1

SusanC 08.20.24 at 1:31 pm

The Scott Alexander incident that your article refers to was something of a clash of norms between Internet and print media.

For Internet media, revealing personal information about a user is definitely considered a violation of rules/norms, and is widely viewed as tantamount to incitement to violence. (“Here is how to find Scott Alexander’s house so that you can go there and murder him.”)

Like, if the newspaper was openly calling for people to go and kill Scott, that would be clearly against the rules even for print media (not 1A-protected speech). It fell into a grey area where people familiar with Internet norms are like “Ok, that was bad even if not actually criminal.”

2

SusanC 08.20.24 at 1:38 pm

Also, one might say that the Internet convention is to argue why a position is wrong, rather than physically attacking the person. If a newspaper’s response to Scott is to threaten him (either by getting his employer to fire him, or by getting someone to murder him), that is an admission that they don’t have a good counterargument.

3

Alex SL 08.20.24 at 10:53 pm

The contradictions in how these people speak of freedom and then merely attempt to build new monopolies and break laws until they are so important that they are forgiven are worked out well in this article. Unfortunately, these parts make the positive take on Silicon Valley ideology look a bit stretched. The problem is, there isn’t really any idealism at the heart of what these people believe. Leaving aside the quasi-religious transhumanism and singularitarian subcultures, the core of their ideology can be comfortably summarised in three dot points, in order of decreasing importance:

First, I don’t wanna pay taxes.

Second, I don’t wanna follow any rules and regulations, because I think I am rich and powerful enough that I won’t be inconvenienced by their absence.

Third, I am the boss, so you don’t get to tell me that I can’t abuse, discriminate, and fire my employees at will.

The first is selfishness, the second is immaturity plus sociopathy, and the third is narcissism plus bullying. Remove these pathologies, and their alleged ideology would have to fall back onto classical liberalism with the realisation that while freedom is a good thing, other people also have legitimate interests, so my own freedom cannot be the absolute freedom of the psychopath.

Most fascinating about your piece is the oscillation throughout between a call for Silicon Valley “thinkers” to examine their contradictions and the open admission that they never will, because they are inside a bubble in which they cannot notice the contradictions. I agree with the latter: these people wouldn’t even wake up if you gave them an entire country to try their beliefs, and it spectacularly descended into warlordism and gangsterism. They would merely say their experiment has been sabotaged by statist forces, because in the end, they simply don’t want to pay taxes and follow rules, and if the world doesn’t give them that, it is the fault of the world, never their own for being unreasonably selfish and short-sighted.

A few asides:

Our capacity to reason evolved so that we could tell ourselves and others plausible stories that justify what we want to believe, papering over the holes in our logic as necessary.

This seems implausible to me. Yes, we frequently use reason that way, but surely the key selective advantage of the capacity to reason is to solve difficult problems in our environment so that we are safe and well nourished? Without that as a starting point, there would be no reason to evolve to be intelligent enough to require plausible stories in the first place. Intelligence for its own sake would surely be selected against as too expensive to maintain.

Regarding the “narrow corridor”, no doubt the specific way in which our society is organised today, with liberal democracy and scientific journals, is idiosyncratic and thus difficult to replicate, but plenty of other societies have achieved social association without tribal blood oaths, a government that keeps the peace for multiple religions and ethnicities, and impressive technological advances; Chinese history comes to mind in particular. Of course, it is unlikely that a theocratic city state would be a hotbed of innovation. But that we happen to live in the time when computers have been invented doesn’t mean that the path of getting to computers is extremely narrow.

4

somebody who remembers steve sailer got published by steven pinker 08.20.24 at 11:14 pm

a nice article, but i would say, if there is a way to improve it, its to identify how central anti-black eugenics is to their ideology (extremely). it isnt that they just hate the poor. they certainly do. but they specifically loathe black people with a passion and white hot fanaticism that would make your average kkk guy a little nervous about who they might turn on next. when biden said “two rapists is enough, i’m putting a black lady on the supreme court”, each and every person on this list melted down alarmingly. didnt biden know that black people have an average iq of 37, according to steve sailers latest?! how is elon going to stop people from posting “lmao” in response to his new ideas if the “national iq” stays low?!

5

wetzel-rhymes-with 08.21.24 at 5:10 am

Hey there, Professor Farrell. I appreciated reading the article. I am one of those who believe your article is “not critical enough”. I think you attribute the impetus for the Silicon Valley right too much to “anarchism” or “libertarianism” or a desire for “exit” or “escape”. For Peter Thiel, for example, Hayek is not the most important intellectual influence. For Thiel, I think that is the Catholic philosopher and anthropologist René Girard, who was Thiel’s professor at Stanford, and mine too.

I was in Girard’s class with Thiel at Stanford, “Violence and the Sacred”. Thiel has written and spoken quite a lot about Girard’s influence on him. René Girard’s philosophy and anthropology are humanist, but like Hegel’s philosophy, Girard’s philosophy can become a fascist toolkit for a temperament like Dostoevsky’s Grand Inquisitor. As a toolkit for power, Girard’s philosophy is a handbook for the social construction of mimetic crisis. Thiel has written about how Girard’s philosophy became a kind of How To for succeeding in business negotiations. I believe Thiel has applied the philosophy to develop a kind of neo-Orwellian propaganda which an open society will not be able to withstand.

Peter Thiel and Elon Musk both came out of the world of PayPal. I think Thiel influences Musk’s approach to Twitter, but who knows. Twitter has seemed to be all about sacrificial crisis and scapegoating since Musk assumed control. On white Twitter, there are black women street fighting, and on black Twitter, there are Korean grocers going haywire.

I believe Musk and Thiel are social constructivists of a kind of constant mimetic crisis that ultimately will destroy democracy in America. Thiel is a Christofascist, I think, and I believe he is the natural ally of Vladimir Putin, whose favorite philosopher, the White Russian fascist Ivan Ilyin, is like the dark doppelganger of René Girard. These people aren’t libertarian. They are fascist. René fought in the French Resistance. If he were alive today, I think he’d be disappointed in his former student. I loved the man. He would see Thiel’s take on Mimetic Desire for what it is, the philosophy of anti-Christ.

6

David in Tokyo 08.21.24 at 8:04 am

I’ll second Alex SL’s three principles of Silicon Gulch Capitalism.

But I’ll go one step further. I think Silicon Gulch is dead in that there’s no more money to be made. We’ve done Amazon and Twitter and Facebook and the iPhone. But there’s nothing left to do that will actually make money. So Silicon Gulch is flailing. AI isn’t going to save them*. WeWork failed. Virtual reality is stupid, and no one needs VR glasses if they’ve got a smartphone. The idea of the self-driving car is cute, but there’s no money to be made. The private car is an insanely stupid idea in itself, and the self-driving car doesn’t significantly fix any of the stupidities of the private car. (E.g. the best thing to do with a private car is to not drive it. If you drive it, a 30-minute commute will take you 1.5 or more hours. New roadways induce 130% more demand than their actual capacity, so every time you build a road, congestion gets worse.)

(Note that electric cars have the same problem, and the amount of energy it takes to make an EV is pretty large. And most of the microplastic pollution problem is due to cars (rubber tires on asphalt), and the higher torque of EVs makes the problem worse.)

But we’ve got a billionaire class that’s got a lot of blokes in it with more money than god and nothing to do with that money. Fusion energy is still 40 years off, yet there are at least a dozen startups, every one with a different confinement trick untested in any way, that are actually building plants (none of which have a snowball’s chance in hell of working) on startup money. (LLNL used a facility the size of two football fields and several stories tall to use a thousand zillion joules of energy to create a zillion joules of energy to create a few thousand joules of laser light, of which a few joules were applied to a pellet (that itself cost a fortune to make), producing three times that few joules in heat, none of which could be used to actually produce electricity. Yet we were told “fusion’s been solved: more energy out than was applied”.)

Basically, technology is over. It’s done. Everything left is hype and BS and stuff that won’t be actually usable for another 40 years.

Hey, I’m a luddite. Sue me. I don’t own a cell phone and I’ve never bought a car.

*: LLMs are super neato random text generators, but they don’t do reality, truth, logic, reasoning. They just regurgitate their training data with other words plugged in. Also, there’s an underlying falsehood to the whole “neural net” schtick: “neural nets” don’t look anything like actual mammalian neurons. In the slightest.

7

nicopap 08.21.24 at 10:57 am

@Alex SL.

Our capacity to reason evolved so that we could tell ourselves and others plausible stories that justify what we want to believe, papering over the holes in our logic as necessary.

This is a paraphrase of The Enigma of Reason. The American Affairs post even has a link to this very site: https://crookedtimber.org/2020/07/24/in-praise-of-negativity/. Sperber and Mercier make an excellent case that it is indeed a selective advantage. While it may not be one for an individual reasoning alone, in a group of people it is extremely effective at finding the best arguments, understanding the world better, and improving survival chances.

Well, the book is great and I would strongly recommend it; despite its age, it’s still relevant. I can only do a poor job of transcribing the book’s thesis here.

8

J-D 08.21.24 at 11:44 am

A few asides:

Our capacity to reason evolved so that we could tell ourselves and others plausible stories that justify what we want to believe, papering over the holes in our logic as necessary.

This seems implausible to me. Yes, we frequently use reason that way, but surely the key selective advantage of the capacity to reason is to solve difficult problems in our environment so that we are safe and well nourished? Without that as a starting point, there would be no reason to evolve to be intelligent enough to require plausible stories in the first place. Intelligence for its own sake would surely be selected against as too expensive to maintain.

For human beings, other members of our species are one of the most important parts of our environment and present many of its most difficult problems. In this way, our capacity to tell others plausible stories that justify what we want to believe is one part of our capacity to solve difficult problems in our environment. It seems reasonable to suppose that this has been true for a great deal of evolutionary time, as it’s also true of our closest extant relatives.

9

MisterMr 08.21.24 at 12:23 pm

I propose these two propositions:

First, what we call intelligence is a sum and interaction of many different mental faculties, including in large part language and ability in social interactions, but not limited to that;

Second, language and ability in social interactions evolved in order to have groups of people coordinate better.

From the second proposition, it is evident how creating narratives that are emotively compelling might be advantageous even if the narratives are wrong: for example, “all those guys of group B are devils and must be exterminated” permits people in group A to destroy group B, even if it is not literally true.
This might also have positive outcomes other than the gruesome one above: for example, “all people have human rights” is not a factual “truth” but a “fictive narrative” that encourages positive behaviour (defining positive behaviour as behaviour that increases overall happiness).

10

MisterMr 08.21.24 at 12:25 pm

Sorry, an add on to my previous comment:

But from the first proposition it should be evident that “reason” is not limited to that function; there are also other forms of reason, like solving a puzzle, that have nothing to do with creating a narrative to influence others.

11

Alex SL 08.21.24 at 1:07 pm

David in Tokyo,

Thanks, and it is interesting to see somebody I mostly agree with but still have a few quibbles with. Yes to everything you wrote on LLMs and EVs. But I would qualify the “no more money to be made” claim as: there is (likely) no big growth market left that will equal the introduction of computers, the internet, and the smartphone – but there is plenty of opportunity to make money. They have established numerous quasi-monopolies, often because it is extremely difficult to leave social media networks and operating systems, especially if the latter are tied to an entire cloud storage ecosystem that contains all our family photos etc. Then there are subscription services. They can milk us forever because if we don’t pay, our data in the cloud goes away, our graphics editing software goes away(1), our music goes away, our movies go away, heck, even our employers’ entire IT infrastructure goes away if they don’t indefinitely pay fees to Microsoft and Amazon.

The thing is, and this is highly relevant to the OP, that it will be interesting to see over the next twenty years what it does to the tech bros’ self-image and to their social license if the industry has visibly ceased to be the engine of innovation and growth but instead become a bunch of leeches attached to the legs of any entertainment and to the arms of any productive activity that uses data processing. Will they retreat further into delusions of neo-feudalism, interstellar travel, and mind uploading? Will there be a backlash against the business model of forever subscription monopolies?

As for fusion, I can out-Luddite you: I refuse to believe that it is feasible until the first commercial fusion plant is running, as it has always been twenty to forty years away since I was born, meaning that first plant should have gone live at least a decade ago. For all I know, it cannot be done outside of a star.

(1) Admittedly, I am using open source software, but I know a few people who only get along with the commercial ones, so they fork over a hefty subscription fee every year.

nicopap, J-D,

I do believe that being able to convince or lie convincingly to others, and to have higher-order theory of mind to the effect of “Frank knows that I know that he knows that Linda likes him” is highly adaptive. My point is that there is no need to have this ability unless and before your fellow group members have already evolved to be fairly intelligent. Being able to tell a convincing story or lie to frogs would be a waste of effort, because they don’t understand any concepts more complex than food, mating partner, and predator, and waste of effort is selected against. So, what starts the trend? I’d say the advantage of reasoning capability initially (!) has to be in interactions with the environment. Only once your species is already reasonably smart (and, of course, social – note that orangutans aren’t) can social selection for smartness become plausible.

12

J-D 08.22.24 at 3:07 am

I do believe that being able to convince or lie convincingly to others, and to have higher-order theory of mind to the effect of “Frank knows that I know that he knows that Linda likes him” is highly adaptive. My point is that there is no need to have this ability unless and before your fellow group members have already evolved to be fairly intelligent. Being able to tell a convincing story or lie to frogs would be a waste of effort, because they don’t understand any concepts more complex than food, mating partner, and predator, and waste of effort is selected against. So, what starts the trend? I’d say the advantage of reasoning capability initially (!) has to be in interactions with the environment. Only once your species is already reasonably smart (and, of course, social – note that orangutans aren’t) can social selection for smartness become plausible.

In any species where the behaviour of individuals can be affected by signals received from other members of the species–and that covers a very wide range, including species with intellectual equipment far more limited than that of modern humans–there is the possibility of selective pressure in favour of greater capacity to deceive.

13

David in Tokyo 08.22.24 at 5:35 am

Alex SL wrote:
“…but there is plenty of opportunity to make money. They have established numerous quasi-monopolies, often because it is extremely difficult to leave social media networks…”

Yes. My main intended claim was that there will be no NEW big money things, that there are no more Amazons, Facebooks, PayPals to be had. Basically, the “invest in a zillion random companies and pray one of them turns into Amazon” strategy isn’t going to work any more. Because the folks selling companies are more interested in the techniques of selling companies than in actually doing something. Theranos, WeWork, and now LLM-based AI are all scams.

So, yes. The tech bros all think that they’re Steve Jobs, when all they are is leeches extracting rent. So the question is whether the current round of inanity from them is mere prelude or is the actual death throes of that worthless corner of humanity.

FWIW, the minimum Adobe LR subscription gets you LR, LR Classic, and Photoshop and is about US$10 per month, roughly what I was paying when I bought a version of the software every year or two (for LR’s predecessor). Although I use photo.net (funny name; it’s just a ’Doze nagware app), which is easier for simple scans and the like.

Not owning a cell phone means I avoid most of the rent seeking.

14

engels 08.22.24 at 11:51 am

Not owning a cell phone means I avoid most of the rent seeking

I would go a step further
https://theonion.com/area-man-constantly-mentioning-he-doesnt-own-a-televisi-1819565469/

15

SusanC 08.22.24 at 1:06 pm

From H. Beam Piper’s Space Viking:

“A king owes his position to the support of his great nobles; they owe theirs to their barons and landholding knights; they owe theirs to their people. There are limits beyond which none of them can go; after that, their vassals turn on them.”

“And the people won’t help some other baron oppress his people; it might be their turn next.”

Which is to say, be careful what you wish for, if you’re aiming to be a baron in a techno-feudalist society…

16

anon/portly 08.22.24 at 6:07 pm

I thought the article was outstanding.

My one quibble would be mentioning Yarvin, which almost seems like a low blow.

For me learning some relatively basic econ (not just the more abstract classical liberal ideas discussed in the article) would convince a smart person there’s no “there” with Yarvin (and others like him), the glitzy patter is hiding some really dull-witted (from any ideological perspective) econ stuff.

I think it’s way under-rated how bad right wing ideas and bad left wing ideas feed off each other – someone over-exposed to the one can be more easily taken in by the other. (Always reflected in how ideologues hate the moderate critics on their own side more than they hate the other side).

17

anon/portly 08.22.24 at 6:32 pm

These tensions erupted in 2020, when a New York Times writer started researching a story that threatened to reveal the identity of “Scott Alexander,” a pseudonymous blogger popular with Silicon Valley rationalists. Prominent technology industry figures responded indignantly. As the New Yorker’s Gideon Lewis-Kraus explained it, they saw the somewhat hapless Times journalist as a living exemplar of everything they hated about the media “gatekeepers” who, they believed, were deliberately trying to tear Silicon Valley down.

I thought this paragraph was worded oddly, because wouldn’t it have been the final decision to reveal his identity that mattered the most? That obviously wasn’t up to the “hapless journalist.”

Also I would have guessed that another incident, when Taylor Lorenz ratted out Marc Andreessen for using the r-word, which it then turned out he hadn’t, might have offended people even more. That the NYT acted the way it did in the Alexander case didn’t surprise me – that’s what large organizations do – but the Lorenz thing (the willingness to employ someone of that quality) did.

18

somebody who knows there's always more and it's always worse 08.22.24 at 10:31 pm

while i do appreciate that youre trying to reach a conservative audience, and of course there’s always space limitations, i do think it’s key to understand the absolutely fanatical anti-black eugenics/race science beliefs of these folks. steve sailer is their guy. scott siskind loves charles murray and fully agrees no black person has ever accomplished anything notable in the history of humanity and never will. steve pinker (not quite in your bullseye here, but at least in the first ring) published sailer more than once! they’re fully on board with ethnically cleansing both silicon valley and america broadly in order to raise our “national iq” high enough that nobody will make fun of elon musk on twitter anymore. the shuddering hatred they have for black people is not very well disguised when they talk about homelessness and crime and poverty. download the right podcasts and it’s not disguised at all.

19

bekabot 08.23.24 at 4:47 pm

“In any species where the behaviour of individuals can be affected by signals received from other members of the species–and that covers a very wide range, including species with intellectual equipment far more limited than that of modern humans–there is the possibility of selective pressure in favour of greater capacity to deceive.”

Speaking of frogs — frogs love to deceive predators, and they don’t mind deceiving potential prey either. That’s why they tend to blend in with lily pads. You don’t have to be affected by signals generated by your own species to find it worth your while to be deceptive — all you have to be is affected by signals — and that’s a very low bar.

20

Alex SL 08.24.24 at 11:59 am

I see we are not going to reach agreement on the question of what the ability to reason may first have been selected for. Perhaps it is partly the usual problem of what we call reasoning. I find it absurd to assume that the primary reason cleverness evolved in nature is to deceive oneself and members of one’s own species when (a) most species aren’t even social, and (b) all species that are mobile and have a nervous system, even the ones that reproduce by throwing gametes into the ocean while never directly interacting with their mate, are under obvious pressure to understand and navigate their environment on pain of being eaten or starving to death. I should also note that frogs blend in with lily pads because of their skin colour and not because, and I quote directly from the OP to clarify what this is, after all, about, they are telling “plausible stories that justify what we want to believe”.

Those who believe that the neural wiring of everything from a proto-worm 600 million years ago to monkeys would be, and note that this is the claim here (!), significantly less shaped by the need to avoid starvation and find hiding spots than by the need to maintain a positive self-image may well arrive at a different conclusion. But from my perspective as a biologist, you are building a house and deciding that step one is to lay down the roof tiles. That’s not how that works.

21

Zamfir 08.25.24 at 7:29 am

@Alex SL, I am going from old memory here. IIRC, Sperber and Mercier’s point is indeed that most of our (and frogs’) brains are not doing “reasoning” in the sense that philosophers use the word. When we observe the world, make some internal model of it, draw conclusions, and choose a path of action, it mostly goes through unconscious, nonverbal processes. In particular, there is no way for other people to observe the “in between” steps leading to the outcome; it’s black-boxy.

And they claim that our reasoning is a mostly separate process, one that generates a series of verbal, conscious steps that follow from each other and lead to the outcome. But their claim is that this process also cannot access the in-between steps of the real inference processes. Its job, at least partially and initially, was that of a PR department. It generates something that would be convincing to others (to gain support or permission). It does so mostly from scratch, towards a fixed conclusion already made by other processes (which are typically faster, more capable, and take into account aspects of your situation that might not convince others).

Their claim is that yet another (more unconscious) process tries to assess other people’s reasoning, by judging the quality of the argument but also by taking social cues into account, etc. And this process can also look at your own verbal arguments and warn that they might not be convincing to others and require improvement. But it goes easy on your own attempts, and is strict on others’.

Now they say that these loops (back-and-forth with others and the internal practice runs for that) end up as a genuinely good reasoning engine in the philosophical sense, one that generates good step-by-step arguments that can lead to better inferences than the unconscious processes alone. But we tend to identify that system with the verbal, conscious step, which is not a very good reasoner on its own. In particular, they claim that many lab psychology experiments are misleading about the strengths and weaknesses of human reasoning in real situations.

Looking back from 2024, that sounds surprisingly like an LLM.

22

SusanC 08.25.24 at 6:47 pm

@Zamfir;

Hmmm… we know that LLMs are terrible confabulators. For example, their stated reason for refusing a request is confabulated. (As can be seen in interpretability experiments where you just flip a bit in memory to make the model refuse an innocent request, and it confabulates a rationale for the refusal.)

Maybe we humans are a bit like that too. There are some hints that we might be.

23

SusanC 08.25.24 at 6:52 pm

One of the many theories of psychosis postulates that it is fundamentally different from conspiracy theories by being pre-linguistic.

You believe some stupid thing you read on the Internet -> fundamentally a linguistic phenomenon, creatures without language cannot experience this

You hallucinate -> pre-linguistic, a creature without language can have its visual cortex disrupted by hallucinogens

(This is not a universally accepted theory)

Comments on this entry are closed.