Facebook’s algorithms are not your friend

by Henry Farrell on February 7, 2016

Alex Tabarrok makes an argument that I don’t think is at all a good one.

A BuzzFeed article predicts that Twitter will soon move from a time-ordered feed to an algorithmic feed, one that shows you tweets that it predicts you will like before it shows you lesser-ranked tweets. Naturally, twitter exploded with outrage that this is the end of twitter.
My own tweet expresses my view, à la Marc Andreessen:

It is peculiar that people are more willing to trust their physical lives to an algorithm than their twitter feed. … How many people complaining about algorithmic twitter don't use junk-email filters? I want ALL my emails! … Think of the algorithm as an administrative assistant that sorts your letters, sending bills to your accountant, throwing out junk mail, and keeping personal letters for your perusal. The assistant also reads half a dozen newspapers before you wake to find the articles he thinks that you will most want to read that morning. Who wouldn't want such an assistant? Moreover, Facebook has billions of dollars riding on the quality of its assistant algorithms and it invests commensurate resources in making its algorithm more and more attuned to our wants and needs. … By trusting the machine intelligence to filter, you can open yourself up to a much wider space of information.

Cory Doctorow prebutted that exact argument-from-self-driving-cars eleven years ago – many others have made similar arguments about non-transparent algorithms since. But the point can be developed further.

Alex’s more fundamental claim – like very many of Alex’s claims – rests on the magic of markets and consumer sovereignty. Hence all of the stuff about billions of dollars “making its algorithm more and more attuned to our wants and needs” and so on. But we know that the algorithm isn’t supposed to be attuned to our wants and needs. It’s supposed to be attuned to Facebook’s wants and needs, which are in fact rather different.

Facebook’s profit model doesn’t involve selling commercial services to its consumers, but rather selling its consumers to commercial services. This surely gives it some incentive to make its website attractive (so that people come to it) and sticky (so that they keep on using it). But it also provides it with incentives to keep its actual customers happy – the businesses who use it to advertise, gather information on consumers, and market their products using tactics of varying sneakiness. If Alex’s imaginary administrative assistant is going to do our filing for free, he’s also going to keep asking us, increasingly insistently, why we haven’t yet switched our house insurance to Geico (while surreptitiously chucking mail from rival insurance firms into the trash).

When Twitter – a company that is notoriously a service in search of a business model – tells us that "Twitter can help make connections in real-time based on dynamic interests and topics, rather than a static social/friend graph," it probably wants to increase user growth and stickiness to keep investors happy. But it also probably wants to make it easier to market products, push sponsored tweets, etc., without it being quite so clear that they are bought and paid for. After all, that's where its profit model lies. The extent to which social media allows you to 'open yourself up to a wider space of information' in some uncomplicated way depends on whether it's in the interest of the for-profit providers of this media to open you up to the kind of information that you might have wanted ex post had you had enough time and search capacity ex ante. That, contra Alex, is at best going to be a vexed question for Twitter and its ilk.

{ 46 comments }

1

Rakesh Bhandari 02.07.16 at 2:27 am

Persuasive, important argument; but does writing and submitting this now mean that you are depriving yourself of the pleasure of watching the Republican debate? Are there other possible problems with algorithmic feeds due to the way users will be profiled?

2

Cry Shop 02.07.16 at 2:27 am

https://www.youtube.com/watch?v=p6vM4dhI9I8

A TED Talk by Eli Pariser on how search engines and social networks tailor search results using relevance algorithms based on one's web history, reinforcing a limited world view.

3

Henry 02.07.16 at 3:11 am

If there was an available algorithm that could subtract all discussion of Republican (and Democratic) debates from my Twitter feed, I might have to change my mind …

4

CJColucci 02.07.16 at 3:20 am

I don’t want a self-driving car and don’t know anyone who does. Do people really want this? If so, why?

5

Rich Puchalsky 02.07.16 at 3:26 am

Just a few days after releasing a new version of the Greenhouse 100 (a list of the top greenhouse gas emitters in the U.S.), the site that I host the detailed information on was taken down. Why? Because the hosting provider had emailed me two warnings that I was using too much disk space, and Gmail had decided that these warnings were spam, so I never saw them. (The hosting provider, Hostgator, does have a habit of sending spammy offers of various kinds that apparently use the same email address as serious notices.) After they'd decided that I was ignoring their requests to delete files, it took me 4 days to convince Hostgator to put the server back up.

Does this mean that “I want ALL my emails!” and that I won’t be using a spam filter? No. The magic of the marketplace has ensured that I’ll get so much spam if I don’t use one that I’ll probably miss even more messages. Note that as with so much under capitalism, my wonderful range of consumer choices is actually quite restricted and the available solutions come down to the lesser evil.

6

marcel proust 02.07.16 at 4:32 am

CJColucci @ 4: I don't want a self-driving car and don't know anyone who does. Do people really want this? If so, why?

Me, teacher, call on me!
I am in my late 50s and live in northern New England, in a rural area 5 miles from the nearest town, 6 miles from work and from the nearest (half) decent shopping area. Public transit comes no closer than 2.5 miles from my house. I’ve been here 15 years and hope to stay the rest of my life. But as I age, it will eventually become too difficult. Some of that is what happens in your mid-late 80s, but the need to drive myself around could reduce that by up to 10 years. So, self-driving car? Where, when? Sign me up.

7

Eszter 02.07.16 at 4:56 am

Yes, good points, Henry.

As you can probably guess, I have about a million thoughts on the topic. (Few people know this, but my interest in Internet skills initially developed as a response to the potential power that "portal sites" (the search engines of the late 90s) would have in channeling user attention; see Hargittai 2000 in Poetics.) The question I posed was whether a higher Internet skill level could help people route around those systems better.

Just a couple of points. First, can we please have more empirical studies about algorithms and less hand-waving at this point? To clarify, I'm not critiquing the post or the piece to which it points, I'm critiquing the related academic literature on the topic. Twenty years into work on related matters, it's getting really old for people to say "algorithms will xyz" without getting their hands dirty and empirically testing the claims. (Again, this isn't directly about this post, I'm just using it as an opportunity to rant about this point.)

Second, as the OP notes, FB's business model very much depends on the quality of its algorithms. It is shocking, then, how poor they seem (based on anecdotal evidence). Has anyone else gotten the Happy Friends Day video compilation on their feed? Mine had a few okay matches, but it was a bizarre compilation for the most part, and I've heard similar comments from others. And a colleague had a picture of her with her deceased (for a few years now) husband on the cover. They already got in trouble for making that kind of mistake in the past; hopefully they keep on tweaking away.

I have so many more thoughts on FB and its algorithm, I really need to write a post one of these days…

8

Cheryl Rofer 02.07.16 at 5:14 am

Yes, Eszter, please write about FB and its algorithm. I have a lot of thoughts about the utility of Twitter and how badly the management has handled its business model. Might write a post about that…

9

kidneystones 02.07.16 at 7:13 am

Every moment spent looking at a screen is a moment less to spend with friends and family in meat space. I was a relatively early adopter of many things internet. I’m now mostly out of the stream. We send paper letters and cards to those we care about, and receive the same now and then. Two thank you notes from my cousin’s kids to our family were recent high points. I (try to) separate the time I spend online or working on my computer from my meat space existence. When out of the house my smartphone remains on airplane mode, or is simply left on my desk. It feels great to step out into the world unencumbered by any connection to the world of zeros and ones. In our first class of the term all students are required to bring their smart phones out, turn the power off, look at their smart phones, and say good-bye. The sense of separation for some is palpable. Phones are secured in bags and bags are placed at the back of the class. Then class starts.
I use the camera in my laptop to record presentations, interactions, and exercises. And for monitoring. In short, I control my connection to the internet. It doesn't (entirely) control me. I suffer the suggestions from sites I visit and understand that clicking on any link changes the ads that appear on my screen. All I need to do is remember not to click the ads, and that's easy. I agree that a driver-less car will be a very welcome development for many. I commute entirely on foot and by train. I try to walk an extra mile or so each way to work. On the train, I can work if I feel the need. I find the time is better spent just thinking free, cellphone turned off, or on airplane mode. But that's me.

10

Bruce B. 02.07.16 at 7:19 am

Eszter, another reader here who’d love to read your thoughts on this kind of subject. I’m aware of how little I know, and how much I’d like to.

11

Thomas Beale 02.07.16 at 9:38 am

One would assume that the Ad industry would be the most advanced in terms of social media algorithms. But if I buy timber shelves online (after having surfed half a dozen bookshelf-selling vendor sites), the next thing I used to see (pre Ad-block days) on YouTube (where I watch music instructional videos) was… 3 months of ads for timber bookshelves. That just shows how primitive such algorithms are. They are a LONG way from really ‘knowing you’.

My impression of the younger generation is that you can’t fool them that there is no manipulation going on – sites like factcheck.org and memepoliceman.com are evidence of this. So no matter how smart your ads (a certain South Park episode comes to mind) or genuine-looking your content, they know they are being lied to in subtle ways. The bottom line is that if they don’t get exactly what they want, they’ll dump your site/service like a lump of plutonium.

Older internet inhabitants are not that far behind. The days of dumbly absorbing any internet content as 'straight' are gone the way of ad-infected broadcast TV (hence the adblock war). The internet consumer of today cherrypicks what he/she wants and tosses the rest, overnight if need be. Good luck taming that!

12

Tim Worstall 02.07.16 at 10:42 am

“Facebook’s profit model doesn’t involve selling commercial services to its consumers, but rather selling its consumers to commercial services.”

Quite so, the economics of which are explained here:

http://www.nobelprize.org/nobel_prizes/economic-sciences/laureates/2014/tirole-lecture.pdf

Part 3 on two sided markets.

13

Nick Caldwell 02.07.16 at 11:44 am

Every moment spent looking at a screen is a moment less to spend with friends and family in meat space.

Yes, probably literally lifesaving for many people with desperate family lives.

14

kidneystones 02.07.16 at 11:53 am

@ 13 Good point. Cheers.

15

bjk 02.07.16 at 12:17 pm

Twitter is an enormously profitable company. If it simply fired all of its engineers and collected profits on the existing revenue, it would be a great business. It only seems like a business “in search of a business model” because it’s trying to be Facebook and it isn’t.

16

Unlearning 02.07.16 at 1:15 pm

I might be missing something, but isn’t it patently obvious that algorithms are more suited to technical tasks like driving a car than to making personal judgements about what you want to see on your twitter feed?

17

Jeff Darcy 02.07.16 at 1:36 pm

The problem is that "algorithm" is too vague. It covers everything from simple steps with a clear relationship to clear goals, all the way to impenetrable thickets of what a computer scientist would call logic but a logician would call garbage. "Heuristic" would be a better word here, not because it's perfect but because it more clearly gets at the fuzziness and variability that's the problem here.

Some people are more likely to be comfortable with self-driving cars because, as complex as that activity is, it involves rules that apply equally to everybody and the way that a computer can find solutions seems relatively clear (even though there are surely some heuristics involved). A(n) heuristic Twitter feed is very different. Everybody knows that there is no universal set of heuristics. They know that different people would place a different weight on how recent a tweet is, likes given to the tweet, likes given by the reader to the author before, likes given by *anyone* to the author before, the author's tweet frequency, etc. They know that a single Twitter-developed algorithm cannot satisfy everyone. On Facebook right now, one of my friends is complaining about how their heuristics make posts from rarely-heard-from friends almost impossible to find, and many are agreeing with her. There's no "solution" here and thus no optimal algorithm.
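Below is a minimal sketch of the kind of weighted heuristic described above (the signal names, field layout, and weights are invented for illustration; no claim is made about how Twitter or Facebook actually rank anything). Two readers with different but equally defensible weightings end up with differently ordered feeds:

```python
import math
import time

def score(tweet, weights, now=None):
    """Combine a few signals into one number; higher scores get shown earlier."""
    now = now or time.time()
    age_hours = (now - tweet["posted_at"]) / 3600.0
    signals = {
        "recency": math.exp(-age_hours),                       # decays as the tweet ages
        "tweet_likes": math.log1p(tweet["likes"]),
        "reader_likes_author": tweet["reader_likes_of_author"],
        "anyone_likes_author": math.log1p(tweet["total_likes_of_author"]),
        "author_frequency": -math.log1p(tweet["author_tweets_per_day"]),  # penalise firehoses
    }
    return sum(weights[name] * value for name, value in signals.items())

def rank(tweets, weights):
    return sorted(tweets, key=lambda t: score(t, weights), reverse=True)

now = time.time()
tweets = [
    {"id": 1, "posted_at": now - 600, "likes": 2, "reader_likes_of_author": 30,
     "total_likes_of_author": 50, "author_tweets_per_day": 1},
    {"id": 2, "posted_at": now - 86400, "likes": 9000, "reader_likes_of_author": 0,
     "total_likes_of_author": 200000, "author_tweets_per_day": 40},
]

# Two readers, two defensible weightings, two different orderings.
recency_lover = {"recency": 5, "tweet_likes": 1, "reader_likes_author": 1,
                 "anyone_likes_author": 1, "author_frequency": 1}
popularity_fan = {"recency": 1, "tweet_likes": 5, "reader_likes_author": 1,
                  "anyone_likes_author": 3, "author_frequency": 0}

print([t["id"] for t in rank(tweets, recency_lover)])   # [1, 2]: the quiet, close account wins
print([t["id"] for t in rank(tweets, popularity_fan)])  # [2, 1]: the popular firehose wins
```

The point, as above, is that the "best" weighting is a matter of taste: any single platform-chosen weighting is a choice made for users, not a fact discovered about them.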

I don’t think it’s really inconsistent or peculiar for people to have these different attitudes toward self-driving cars and Twitter/Facebook feeds, if you look more closely at the specific nature and application of algorithms in each case.

18

JimV 02.07.16 at 1:49 pm

Re #4 (“I don’t want a self-driving car and don’t know anyone who does. Do people really want this? If so, why?”):

Answered seriously in #6. I was going to answer “Taxi companies.” And bus companies.

For my part, I don't want cars at all. Big, expensive, complex machines that use a lot of resources, pollute, cause a lot of deaths and injuries, and require expensive maintenance and insurance, is how I see them. In Schenectady I could walk to work and get a bus every half-hour to any shopping center within 15 miles. In the suburbs, in retirement, buses are close to nonexistent (although the school system has a large fleet of buses that aren't doing anything for much of the day), but I can get a lot of things delivered, e.g., raw, shelled filbert nuts from ohnuts.com (or something like that).

19

Abigail 02.07.16 at 2:18 pm

I don’t understand the reasoning behind Tabarrok’s analogy at all. A self-driving car might use an algorithm to decide what route it should take to get me to my desired destination. This may or may not be a good thing – there are presently several applications, of which the most famous is probably Waze, that crowdsource route selection based on input from other users, and there have been both positive and negative consequences of that approach (ask me, sometime, to tell you about the day Waze decided that a road that had been flooded and closed was the fastest segment for its commuters to use, because there were no cars on it).

But I don’t think anyone is interested in using a self-driving car that decides for you where you should go. And that is the correct analogue for what twitter is trying to do with its algorithmic timeline.

20

Cheryl Rofer 02.07.16 at 2:26 pm

Last night, after I left my comment here, North Korea launched that rocket it’s been looking like it would launch. The fact of the launch showed up quickly on Twitter, and a group of colleagues across the globe pooled and tested the available information. An astronomer at Harvard and colleagues in Monterey told me that the satellite would pass over my location in New Mexico, so I went out to look for it with my amateur astronomer skills.

That is one of the values of Twitter – bringing together experts who can converse about an ongoing event. Another value is in presenting that kind of discussion for others to see and learn from. It's useful to the media, who contact many of the participants. They get a multiplicity of views, rather than the one in the dusty Rolodex. There was, just a few days ago in the NYRB blog, an article by one of those dusty types figuring out about the Korean nuclear test in January what our Twitter assemblage figured out at the time.

Does that matter? I think it does, for multiple reasons.
1) It is creating a much more open environment for knowledge and understanding the world. We have so much information available openly now that it is possible to know a great many things that went unnoticed or were kept secret in the past. The classified communities are having a difficult time with this; it appears that the classified part of Hillary Rodham Clinton’s emails was largely from news media and was classified after the fact. And that’s only one example.
2) It allows other people to see how experts work, which educates them in careful ways of thinking and shows them how much is trial and error, back and forth, testing whether it makes sense: the fairing from the North Korean rocket went down in the wrong place, does that mean that it’s off course, but the satellite is in orbit, no the fairing isn’t in the wrong place, close enough, is it a stable orbit, and so on.
3) And, of course, it allows us to get together across the continents and time zones in a way that would be much more difficult or impossible without Twitter and a chronological timeline.

Does all this make a difference in human contentment or is it comparable to a self-driving car? Perhaps not in the homey terms of friends and family, although I have friends (in meatspace) that I wouldn't have had without social media, and closer contacts than I would have by hand-written letters alone. In the larger world, the increased transparency is changing things immensely. The self-driving car issue is a bit different, so I won't address that now.

21

Ronan(rf) 02.07.16 at 2:30 pm

It's a peculiar strawman of an argument by AT though, because people weren't complaining about the use of 'algorithms' (whatever that might mean in this context) but about the actual specific changes Twitter was planning to make to the way Twitter functions.
I think AT’s last line is the most relevant point:

“As Tyler argued in Average is Over, the future belongs to people who can defer to the algorithm.”

It isn't the technology that interests Tabarrok but (as with the likes of Cowen and Andreessen) the societal consequences. They actually do believe (and here I think they're probably correct) that there is a new class of genuinely brilliant people doing astonishing things, whom (and here they lose the run of themselves) we should all instinctively trust and outsource all thinking to.
Tabarrok, Cowen and Andreessen are the courtiers who want to reinstate a new improved version of feudalism. This might seem overwrought, and I don't mean to denigrate them by saying it, but it's absolutely the only reasonable reading of their body of work. So the objections in the OP don't really apply to Tabarrok et al because they are features of the society they want to create, not bugs.

22

bianca steele 02.07.16 at 2:40 pm

Agree that the problem isn’t that it’s an algorithm (though the word has come to mean something nontechnical now–on the MG thread a commenter is said to be autistic because he suggested the word was being used incorrectly), but that the algorithm seems designed to throw out a lot of stuff you actually want. Maybe the problem should be called Kondoing (after the trendy Japanese decluttering book that tells you to throw out everything “that doesn’t give you joy”).

23

jake the antisoshul soshulist 02.07.16 at 2:45 pm

Facebook has adopted the same business model used by the television industry since its beginnings. We are the product they sell to their customers, the advertisers.

I am of the opinion that economic systems tend to (d)evolve into some form of feudalism.
Whether it is Soviet/Maoist state feudalism or Capitalist private feudalism.

24

Kevin 02.07.16 at 2:45 pm

Personally I am just offended that he embeds his own tweet in his article. Who does that?

25

Rakesh Bhandari 02.07.16 at 5:26 pm

Haven’t read Pedro Domingos yet. But from an interview
Question: What is the difference between the algorithms that Netflix and Amazon use to recommend products you might like? Why is it important for consumers to be aware of these differences?

PD: Like every company, Netflix and Amazon each use the algorithms that best serve their purposes. Netflix loses money on blockbusters, so its recommendation system directs you to obscure British TV shows from the 70s, which cost it virtually nothing. The whole machine learning smarts is in picking shows for you that you'll actually like even though you've never heard of them. Amazon, on the other hand, has no particular interest in recommending rare products that only sell in small quantities. Selling larger quantities of fewer products actually simplifies its logistics. So its recommendation system is based more on just how popular each product is in connection with the products you've bought before. The problem for you if you don't know any of this is that you wind up doing what the companies want you to do, instead of what you want to do.

If you know — even just roughly — how the learning algorithms work, you can make them work for you by deliberately teaching them, by choosing the companies whose machine learning agrees best with you and by demanding that the learning algorithms let you explicitly say things like “This is what I want, not that,” and “Here’s where you went wrong.”
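Domingos's contrast can be restated as a difference in objective functions: the same predicted user preferences, scored under two different business objectives, yield different recommendations. The sketch below is purely illustrative, with invented ratings, per-title costs, and popularity figures; it does not describe either company's actual system:

```python
# Toy version of Domingos's contrast: identical predicted preferences,
# two different (invented) business objectives, two different picks.
catalogue = [
    # (title, predicted_rating, per_stream_cost, popularity) -- all numbers invented
    ("Blockbuster film", 4.5, 0.50, 0.95),
    ("Obscure 70s British series", 4.2, 0.01, 0.05),
    ("Mid-list documentary", 3.9, 0.05, 0.40),
]

def cost_averse_score(item):
    # Reward predicted enjoyment, penalise expensive-to-license titles.
    _, rating, cost, _ = item
    return rating - 5.0 * cost

def popularity_weighted_score(item):
    # Reward predicted enjoyment plus raw popularity (simpler logistics).
    _, rating, _, popularity = item
    return rating + 2.0 * popularity

print("Cost-averse recommender picks:", max(catalogue, key=cost_averse_score)[0])
print("Popularity-weighted recommender picks:", max(catalogue, key=popularity_weighted_score)[0])
```

Nothing about the user's tastes changes between the two print lines; only the company's objective does, and so does the recommendation.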

26

Henry 02.07.16 at 7:17 pm

Eszter – on the contrast between the fallibility of machine learning and hype, I liked this by danah, which you’ve presumably already seen.

27

Henry 02.07.16 at 7:42 pm

And speaking of which, Google Music radio (seeded with Sister Rosetta Tharpe) has just decided to play Leadbelly’s version of Where Did You Sleep Last Night for the third time in twenty minutes. Not that it’s not a great song, but the algorithms, sometimes they’re not so hot. More generally – how often the algorithms for Google Music and Spotify dip towards the deep grooves of popular songs that everyone knows rather than good stuff that you might want to know about but don’t. I presume that this likely suits a lot of people – but it doesn’t suit me – and I suspect that Twitter algorithms will have roughly similar biases. Which again may suit many people – Tyler is good in one of his books about how most people really don’t want to discover new stuff so much as read or listen to whatever everyone else is reading or listening to. But it suggests that the talk about these being tools of discovery is likely to be so much fluff and babble.

28

Alex 02.07.16 at 11:26 pm

Unlearning:

I might be missing something, but isn’t it patently obvious that algorithms are more suited to technical tasks like driving a car than to making personal judgements about what you want to see on your twitter feed?

Even here there are still personal judgements. If a self-driving car is faced with the choice of killing a pedestrian or swerving into a wall and killing the driver, what coding should we choose to tell it who lives and who dies?

29

hellslittlestangel 02.08.16 at 3:01 am

” The assistant also reads half a dozen newspapers before you wake to find the articles he thinks that you will most want to read that morning.”

Epistemic closure for everyone!

30

Lord 02.08.16 at 4:25 am

I fully agree, but aren't the alternatives worse? If you wanted to see it, it wouldn't have to be advertised, for the most part. It will always be a price we pay, except when we pay not to see it.

31

Guy Harris 02.08.16 at 5:23 am

Henry@27 (https://crookedtimber.org/2016/02/07/facebooks-algorithms-are-not-your-friend/#comment-658052 – for some reason (deleted comments?) the “@{N}” for comment N doesn’t always refer to the comment to which the person appears to be replying):

Pandora’s “Brazilian Radio” channel appears to think that Brazilian music consists of 1) “Girl from Ipanema”, 2) “One-Note Samba”, 3) “The Waters of March”, and 4) everything else, and plays each of those 4 equally frequently.

32

Guy Harris 02.08.16 at 5:27 am

Ronan@21:

Tabarrok, Cowen and Andreessen are the courtiers who want to reinstate a new improved version of feudalism.

Andreessen is a bit more than just a courtier.

33

Guy Harris 02.08.16 at 5:45 am

Me@32:

OK, so maybe there are “courtiers” who aren’t that rich but say what the rich want to be heard and “courtiers” who really are that rich and say what they want to be heard, and perhaps Andreessen is in the second category whilst Cowen and Tabarrok are in the first category (they don’t pop up the way Marc does when you ask Mr. Google about their net worth, for what that’s worth).

34

Ronan(rf) 02.08.16 at 7:37 am

True, but afaict from reading Cowen anyway, the new lord class will be decided by brilliance (IQ etc.) rather than explicitly by wealth or bloodline, so Andreessen would still be an outsider (the equivalent of a non-noble made good: respected but always an outsider, the same as someone who got into the charmed circle but wasn't from the right family back in feudal days).
I might be overdoing this, but that’s my impression (more or less)

35

Stefan 02.08.16 at 11:09 am

Paul Bernal has written a great post on why algorithmically driven twitter feeds are a bad idea, including some specific use cases where having a time-ordered stream is precisely what makes it useful.

He also makes the point very clearly that algorithmic objectivity is an illusion:

The idea of using algorithms is very attractive, but it’s underpinned by an illusion that algorithms are somehow ‘neutral’ or ‘fair’. This is what brings about the idea that Google is a neutral indexer of the internet and a guardian of free speech, but it really is an illusion. Algorithms are human creations and embed ideas and biases that those who create them may well not even be aware of. They can make existing power imbalances worse, as the assumptions that underpin those imbalances are built into the very thought processes that create the algorithms. Yes, people can compensate, but even that act of compensation can bring about further biases. Where the essence of the idea behind an algorithm is to make Twitter more money, then that bias itself will interfere with the process, consciously, subconsciously or otherwise.

So, in short, for many twitter users, the timeline is both practically and conceptually the whole point. The only two problems with that are, first, that it works least well for new users – it takes time to get to a rich and well-tuned timeline – and, secondly, that Twitter as an organisation doesn't seem to share the power users' passion. Those two problems are closely connected – so perhaps the real challenge is to find a way of introducing and supporting new users which amounts to more than 'here are some celebrities you can follow (but who will ignore you completely)'.

36

JoB 02.08.16 at 12:12 pm

I agree that any algorithm introduces bias. I also agree that Twitter’s chronological type of approach is essential to its service.

That said, we have not exactly evolved to cope with the mass of information available now, so the human capacity to find the right information unassisted by algorithms shouldn't be the reference point. If it were, there would only be a couple of globally known people and institutes that got viewers for their information. I would certainly not have found the excellent paper of M. Shuster on the ethical basis of language connecting the thought of Davidson, Cavell and Levinas without Google's algorithms.

If there's anything wrong with search algorithms, it is that they still do too much to reinforce the celebrity effect. It's a very new technology, so some experimenting should be looked at charitably, I think.

37

lemmy caution 02.08.16 at 5:04 pm

songza has moved to google music. kinda hard to find but this is the link.

i like the curated songlists.

https://play.google.com/music/listen#/now

spotify does do a good job finding songs for discover weekly

http://qz.com/571007/the-magic-that-makes-spotifys-discover-weekly-playlists-so-damn-good/

probably the best algorithmic music discovery. shame that it works best the first time you use it

38

Sebastian H 02.08.16 at 6:24 pm

“More generally – how often the algorithms for Google Music and Spotify dip towards the deep grooves of popular songs that everyone knows rather than good stuff that you might want to know about but don’t.”

I've long wondered if Pandora chooses so many covers of songs performed by no-name artists because it has to pay less for those. Does anyone know if that is the case?

39

Jessica 02.08.16 at 8:59 pm

Whether or not I want a self-driving car depends entirely on whether it goes where I want or where some Google algorithm decides or even worse where some NSA-funded Google algorithm decides.

40

Troy 02.08.16 at 10:12 pm

Alex@28

My daughter, currently in a philosophy grad program, told me over Christmas break that this question with regards to self-driving cars is all the rage among her peers. But while I understand the appeal as a theoretical, I see no universe in which human beings are willing to pay several thousand dollars for an item that prioritizes the safety of strangers in the street over that of the owner/operator and his personal circle who might be inside the car.

41

Cry Shop 02.09.16 at 3:04 am

Troy: What city do you live in? Just want to know for my own safety.

42

Guy Harris 02.09.16 at 3:51 am

Ronan@34:

True, but afaict from reading Cowen anyway, the new lord class will be decided by brilliance (IQ etc.) rather than explicitly by wealth or bloodline, so Andreessen would still be an outsider (the equivalent of a non-noble made good: respected but always an outsider, the same as someone who got into the charmed circle but wasn't from the right family back in feudal days).

Andreessen is credited as a co-developer of the Mosaic Web browser, so he appears to at least have technical chops, and thus might be brilliant enough to be an insider in Cowen’s world.

(BTW, whilst looking up Average Is Over online, I found the Amazon page for it, which quotes a Wall Street Journal writer praising The Great Stagnation:

“Tyler Cowen may very well turn out to be this decade’s Thomas Friedman.” —Kelly Evans, The Wall Street Journal

I have no idea whether that statement was made with a straight face.)

43

Alex 02.09.16 at 11:33 am

Troy, but self-driving cars are, as I understand it, safer than regular cars. I can easily imagine a future in which governments force their adoption, meaning that to drive at all you would have to use a self-driving car, and you might not have individual choice over the algorithm. The algorithm that determines things in my scenario would be decided upon at a societal and corporate level.

44

Trader Joe 02.09.16 at 12:37 pm

@40 and @43
The algorithms related to stopping autonomous vehicles are not making value judgements between hitting pedestrians and killing the vehicle's occupants. The sensors make no value distinctions between humans and any other object: everything is an obstacle to be avoided. The car is programmed to adjust speed, commence braking, or, as a last resort, steer around the obstacle (but only if the path is clear) as soon as it is detected. If the pure physics of the matter are such that there is no possible way the car can be brought to a halt without hitting the human, then the human is hit. There is no choice between hitting a human and hitting a wall – the car seeks not to hit anything and, if a collision is unavoidable, to hit the obstacle at the slowest possible speed.

It's not really part of the algorithms to make 'heroic' driving moves, such as lurching the vehicle in some violent way that puts the passengers and the vehicle itself at risk. In the millions of miles such vehicles have been tested, using all kinds of test-track scenarios (things jumping out, poor or slippery conditions, etc.), this 'solution' has never been reported. The secret sauce that makes the cars safer is that they can detect obstacles and react to them between 10 and 20 times faster than humans; the increased safety comes from that speed differential and nothing more. If a human reacts in 1 second and the machine in 0.05 seconds, that's a substantial difference in stopping distance.
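To put rough numbers on that reaction-time gap: total stopping distance is reaction distance (speed times reaction time) plus braking distance (speed squared over twice the deceleration), and only the reaction term differs between human and machine. The speed and deceleration in the sketch below are assumed figures for illustration, not data from any test programme:

```python
# Rough stopping-distance comparison for the 1 s vs 0.05 s reaction times
# mentioned above. The speed (50 km/h) and braking deceleration (7 m/s^2,
# roughly a hard stop on dry road) are assumed for illustration only.

def stopping_distance_m(speed_ms, reaction_s, decel_ms2=7.0):
    reaction_distance = speed_ms * reaction_s          # distance covered before braking starts
    braking_distance = speed_ms ** 2 / (2 * decel_ms2)
    return reaction_distance + braking_distance

speed = 50 / 3.6  # 50 km/h in metres per second (~13.9 m/s)

print(f"human (1.00 s reaction):   {stopping_distance_m(speed, 1.00):.1f} m")  # ~27.7 m
print(f"machine (0.05 s reaction): {stopping_distance_m(speed, 0.05):.1f} m")  # ~14.5 m
```

On those assumed numbers the machine stops in roughly half the distance, which is the commenter's "nothing more" point: the gain comes from reaction speed, not from value judgements.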

So far the weakness of these vehicles is that they tend to drive overly safely – they move too slowly and/or are too tentative in situations with lots of potential hazards (crowded intersections, limited visibility). It's likely these weaknesses will be worked out and the sensors will be able to make more "human-like decisions", but there's still nothing in the software that makes value judgements – objects are objects, and they are all to be avoided or, if unavoidable, hit at the lowest possible speed.

45

Joseph Brenner 02.10.16 at 4:09 pm

CJColucci@4: “I don’t want a self-driving car and don’t know anyone who does. Do people really want this? If so, why?”

People still living in suburban car-commute hell look forward to self-driving cars because they seem like a drop-in technical fix: they dream of living *exactly the way they do now*, but rather than staring nervously at the bumper in front of them, they’ll be able to spend all their time on the net (where they’ll tell everyone about how wonderful this new gadget is that they have and you don’t).

With self-driving cars, we’ll no doubt see some safety and energy-efficiency improvements, but we’ll also see an amazing enabler of suburban sprawl: people will be willing to put up with commutes twice as long as they do now.

46

Joseph Brenner 02.10.16 at 4:21 pm

Guy Harris @ 42: “Andreessen is credited as a co-developer of the Mosaic Web browser, so he appears to at least have technical chops, and thus might be brilliant enough to be an insider in Cowen’s world.”

One take on Andreessen is that he got to play poster-boy at Netscape because the real programmers didn’t want to deal with it. His current claim to fame is a successful VC company, but one might wonder if it’s because of his wonderful decision-making skills, or if he’s in another poster-boy role.

When Marc Andreessen tries to argue with Paul Krugman about economics he seems like the stupidest man who ever learned emacs.

Comments on this entry are closed.