A few years ago, I got enthusiastic about the prospects for self-driving vehicles, and wrote a couple of posts on the topic. It’s now clear that this was massively premature, as many of the commenters on my posts argued. So, I thought it would be good to consider where and why I went wrong on this relatively unimportant issue, in the hopes of improving my thinking more generally.
The first thing I got wrong was overcorrecting on an argument I’d made for a long time, about the difference between radical progress in information and communications technology and stagnation in transport technology. The initial successful trials of self-driving vehicles in desert locations led me to think that ICT had finally come to transport, when in fact only the easiest part of the problem was solved.
There was also an element of wishful thinking. As commenter Hidari observed, the most obvious use of self-driving vehicles is to provide mobility for 75+ Baby Boomers. As someone approaching that category, and having never liked driving much, this is an appealing prospect for me. And I liked the idea of taking other bad drivers’ hands off the steering wheel.
That framing of the issue is very different from the way a lot of commenters saw it. Should self-driving cars be seen as automated taxis, and if so is automation desirable or not? Is any improvement in car technology a distraction from the need to shift away from cars altogether? I don’t have good answers to these questions, but they indicate that resistance to self-driving cars won’t be purely a matter of technological judgement.
Finally, having put forward a position, I am usually tenacious in defending it. Within limits, that’s a good thing, particularly in the context of a blog where the discussion doesn’t have any direct implications for what happens in the world. It’s good to put up the strongest case, and test it against all counter-arguments. But that approach carries the risk of being obstinately wrong.
I’m hoping discussion here will help me deal with more consequential errors of judgement I’ve made. So feel free either to discuss the original question of self-driving vehicles or the broader issue of how to think about mistakes, and particularly mistakes I’ve made.
{ 52 comments }
Brett 12.09.21 at 8:51 pm
Peak Self-Driving Car Hype was probably around 2017-2018. The robo-taxi stuff stuck around longer, but it’s running into challenges as well. It’s just really hard to get it to full automation.
Maybe the trucks will do better, but it’s still a big challenge to get a system that can self-drive reliably. Here’s a very interesting piece from 2020, from the founder of the now-defunct Starsky Robotics (a self-driving truck startup) talking about it.
TL;DR: It’s really hard to get to a safe level of reliability, and many of the competitors didn’t even try – they drew in investors by piling on features that maybe worked once in a while, over-promising and so forth.
afeman 12.09.21 at 9:19 pm
There’s an xkcd for everything: https://xkcd.com/1425/
J-D 12.09.21 at 11:51 pm
I paid for driving lessons for my daughter, but she still hasn’t got her licence. I expect her to do a better job of driving than I do once she’s over that hurdle. One day, perhaps …
Jim Harrison 12.10.21 at 12:10 am
I spent around 30 years doing writing and editing jobs for engineers and scientists, so I’ve seen the rise and collapse of many brilliant ideas. I got paid to promote ’em so I’m kinda happy my name isn’t on the documents. Thing is, though, I also wrote up various plans that I personally thought were going nowhere but which just plain worked. The classic instance was back in the early ’80s when I wrote marketing copy for a book on packet switching, the technology that made the Internet possible, though at the time I struggled to find ways to make it appear sexy. A very rare case: a marketing guy underhyping something. The point is, it is damned hard to predict whether a technology is going to work or make a difference. Guessing wrong isn’t a moral failing.
nastywoman 12.10.21 at 12:24 am
and I always thought that getting vaccinated against a deadly virus was/is a GOOD idea –
and now I have found out that a lot of people think it isn’t a good idea –
at all –
and they rather… die?
marcel proust 12.10.21 at 12:42 am
I’m hoping discussion here will help me deal with more consequential errors of judgement I’ve made. So feel free either to discuss the original question of self-driving vehicles or the broader issue of how to think about mistakes,
The second sentence is amazingly antiseptic in light of the first sentence. Can’t we make this discussion more ad hominem… pretty please?
John Quiggin 12.10.21 at 12:57 am
MP @6 Not sure if I’m getting your point, but TBC, I’m mainly interested in fixing my own mistakes here. I’ll edit to make this clear.
Moz In Oz 12.10.21 at 2:23 am
What I look for in these stories is the missing reason: “everyone does this. We don’t know why, so we’re not doing it”.
Silicon Valley geeks are really awesome at solving hard problems in novel ways, given infinite resources. They’re absolutely shit at accommodating humans. So for any radical innovation you wouldn’t want to be on the downside of, you wanna think long and hard about that downside.
Also, do you make mistakes? Better be really careful around their products, and do your own risk analysis, especially failure mode analysis. If the TV playing an Alexa recording causes your Alexa to buy something for you, is that a problem? If instead “James, fetch the car” causes your car to go into valet mode and start reversing, is that a problem? What if you’re on the motorway at the time? Did Tesla think of that and write code so the car won’t do it? Are you sure? Really, really sure?
As XKCD suggests, the “stuff computers can do easily” is often non-intuitive, but it’s also worth remembering that AI is now at much the same point as humans are… except less efficient. Given a few GW of electricity, a few hundred million dollars of computers can do amazing stuff. Sometimes even more amazing than $1/hour worth of human can do. Sometimes literally so – people have faked AI using cheap humans.
It’s worth noting that cars as we have them today are a subtle problem all by themselves, as Tesla is busy finding out. Tesla is full of really smart people working really hard, and I’m sure that within a decade or two they will have worked out how to make a basic, reliable, driveable car. Possibly one valid for more than five years (think of smartphones… now think of a car designed by the sort of people who make smartphones). But right now what I’m reading says you shouldn’t buy a Tesla, and you probably shouldn’t ride in one either. They’re designed and built by people who just don’t know enough about cars to understand why they’re not capable of making a safe one.
There are so many trivial examples, like Tesla only realising after the first few incidents that a car you can’t even unlock, let alone start driving, without cellular data… that’s going to be a problem. And by “a problem”, I mean the next-of-kin-will-sue-Tesla sort of problem. Existing car manufacturers have long lists of stuff like that, and exhaustive lists of past problems and consequences, plus the experience and gut feelings of thousands of engineers, so that when they design cars they think about stuff like “when I drop my car key and it falls down the side of the seat, how do I get it out?” Other car makers make those mistakes too, but fewer of them, which is why Tesla has generated the majority of those stories recently despite hardly manufacturing any cars.
Cranky Observer 12.10.21 at 2:26 am
There is a general disdain for the profession of engineering in the political and economic commentary worlds, but even within the technology world there is very little understanding of what the people who integrate extremely complex systems do, whether at the high end of continental electric networks or the opposite end of the spectrum of systems integration on a modern CPU chip, how critical what they do is, or how many people have to toil away unseen at very high levels of proficiency to make it all work. And of course the management people who integrate the work of the integrators are doubly despised. Yet for every person who makes a billion with the next great app (“Like TikTok, except for cars!”) there are 50 or 100 engineers and support staff toiling away at the Apple, Intel, and nVidia chip design centers, TSMC, etc to make it all possible.
And indeed while automobiles appear to be a known and solved problem there are such people toiling away at the large makers of them as well (one of my classmates spent the last 20 years of his career as GM’s expert on car window mechanism lubricants). Can those people have blind spots? Certainly, particularly when “corporate goals” are involved (GM blowing a 120 year lead on electric vehicles, then a 20 year lead, then a 10 year lead to allow Tesla and new entrants to arise while ignoring the lesson of how it came to exist itself is a good example), but that doesn’t mean that when their experts say things or do things they are automatically wrong or ‘archaic’. GM, Daimler, BMW have been proceeding very slowly on autonomy research, releasing very small increments, for 15 years. They are not promising big bang advances, and they are working within the framework of the SAE and state/local roadway associations and agencies. Perhaps they know something? [1]
And oh yeah: I’ve been reading academic and industry papers on artificial intelligence since the 1980s, and have read some of the key papers back to 1960. We aren’t even close to having a glimmer of a candle inside a titanium box about how to create a true artificially intelligent system – but even the worst human bad driver referred to in the previous two threads as the lead standard of human driving applies sophisticated intelligence to the problem of driving every time he turns the key. No autonomous vehicle system released to date has shown anything close. And they are all tested on the wide clear streets of Silicon Valley and Phoenix, never on slush-covered sidestreets of Pittsburgh or Chicago…
I don’t think JQ made a serious mistake here other than not listening to informed critics, which is easy to do when the big names of the last Big Thing are saying this is a simple evolution from their previous success. But verifying and double-checking is always important, and listening to the very experienced domain experts (even when they are old and grey) is part of that.
[1] in 2018 one of my sources in a large legacy auto company told me they had something very close to Level 3 autonomy in the final stages of test, and that at a minimum it would be included and active for at least highway driving in high end 2019 models. In the big auto world that means it would have been greenlighted in 2017 or 2016. In the event that manufacturer did release a much improved driver assist package in 2019 models, but nothing like what was described to me, and my source stopped talking. I have to think that that manufacturer learned something in testing in 2018 that was serious enough to pull a feature planned for 3 years at the last minute. That’s the kind of testing that a gadfly such as Musk could not abide.
Moz In Oz 12.10.21 at 4:30 am
https://www.techdirt.com/articles/20211207/07023448070/report-showcases-how-elon-musk-undermined-his-own-engineers-endangered-public-safety.shtml
Not to harp on at Tesla, they’re just the obvious choice.
Fake Dave 12.10.21 at 4:43 am
I think predictions like this are troublesome because most new technologies have at least a decade of lag between “just a few kinks to work out” and “meets all industry safety standards.” Depending on the regulatory regime and industry culture, products may be rushed to market or delayed to the point of obsolescence or anywhere in between. At one extreme you get leaded gasoline and radioactive toothpaste. At the other you get Luddism, anti-intellectualism, and stagnation. It probably says something about capitalism or modernity or “Western Civilization” or whatever that most of us (except anti-vaxxers and about half the Green Party) seem more than willing to risk getting trampled by the march of progress so long as we feel like we’re going somewhere.
Self-driving cars are ready to go now, we’d just have to accept them killing a certain number of people. Eventually we probably will accept that risk (like we do with conventional vehicles), but first the public has to be reassured that every reasonable precaution has been taken to minimize harm. For that to happen, something has to be done to bridge the gap between a notoriously libertine tech sector and our generally high standards for automotive safety (not to mention a century of killer robot stories).
At the height of the Obama-era “innovation economy” when you could make billions running unlicensed taxis and hotels, marketing predatory algorithms, or selling other people’s data, it was just about possible to imagine self-driving cars similarly slipping through the cracks. Now that the public is beginning to realize how reckless, self-serving, and unaccountable these tech “geniuses” actually are and factions of both parties are willing to make hay over it (one of the few things we may genuinely be able to thank Trump for), the road looks a lot longer. The hype train is off the tracks and the unconscionable hubris it took to field test these deathtraps unmonitored in major cities has been exposed.
Never fear, the cheerleaders will be back sooner or later to accuse us skeptics of standing in the way of progress or costing more lives than we saved. First though, they’re going to need a better slogan because it turns out “move fast and break things” isn’t what anyone wants to hear from a car company.
MFB 12.10.21 at 7:24 am
It seems as if the tremendous hype around self-driving cars was rather foolish, but possibly it wasn’t. After all, someone must have sold a great deal of stock on the basis of the hype, and presumably they aren’t giving that money back again.
I suppose the thing to learn is that apart from the obvious problem that most new technology is wholly or in part secret, which allows the developers to conceal anything they don’t wish the public to know about it, there’s also the problem that the funding and the PR is wholly or in part secret, which allows the promoters to lie big-time, suppress facts big-time, and makes it very difficult to determine where the lies or the suppression are taking place, because one doesn’t really know who to trust.
It’s natural to be innocent and to believe what one is told. It’s also natural to be defrauded, and sometimes ruined.
Edward Gregson 12.10.21 at 8:50 am
I don’t think we should decry improvements in car technology as a distraction from shifting away from cars altogether because there is essentially zero prospect of shifting away from cars altogether. This distraction argument is frequently made about electric cars (often in response to someone being overly effusive about Elon Musk), but it doesn’t make a lot of sense. The most public transit-heavy places in the world (like Tokyo) are still extremely dependent on cars for personal transport, and that’s with favorable geography and density and decades of public transit-friendly policy. For a place like the US, public transit could and certainly should be improved in cities, but there is no prospect of cars not being a massive presence for decades to come.
Whether full level 5 autonomous cars will happen in the next decade or two probably depends on whether we need full reasoning artificial general intelligence (AGI as it’s called), or whether we can create safe enough software just through exhaustively documenting and coding/training for all the most likely edge cases. We have no idea how to do AGI, or when or if we might figure it out.
Right now, some of the most impressive-seeming results in machine learning have come from transformer architectures (Andrej Karpathy had a twitter thread about this). The particularly spooky-seeming results remarked upon in the press were for GPT-3, which is a large language model that leveraged semi-supervised learning in its training. They fed it an enormous corpus of writing scraped off the web and trained it to predict held-out words from their context (GPT-3 proper predicts each next word; the related BERT-style recipe masks out words at random and predicts those). So it could do supervised learning, but without needing a human to label all the data, i.e. semi-supervised learning, allowing a massive amount of training data and thus a massive model. If there were a way to do semi-supervised learning for the general driving task, it would substantially improve the odds of success, but I’m not sure driving lends itself to semi-supervised learning the way text prediction does.
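A toy sketch of that self-labelling trick in Python (the masking variant, since it is the easiest to show; everything here is illustrative, nothing from OpenAI’s actual pipeline):

    import random

    def make_masked_example(tokens, mask_rate=0.3, mask_token="[MASK]"):
        # The targets come from the text itself, so no human
        # annotation is needed -- that is the semi-supervised trick.
        masked, targets = [], {}
        for i, tok in enumerate(tokens):
            if random.random() < mask_rate:
                masked.append(mask_token)
                targets[i] = tok  # the model must recover this word
            else:
                masked.append(tok)
        return masked, targets

    random.seed(0)
    sentence = "the car slowed down because the light turned red".split()
    x, y = make_masked_example(sentence)
    print(x)  # with this seed: ['the', 'car', 'slowed', '[MASK]', 'because', ...]
    print(y)  # {3: 'down'} -- a free training label scraped from the text

Every scraped sentence yields pairs like this for free, which is what makes web-scale training sets possible; it is much harder to see where an equally free label for a steering decision would come from.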
David J. Littleboy 12.10.21 at 8:51 am
The zeroth problem with self-driving cars is that they don’t fix the problem that the private automobile is an incredibly bad and stupid idea. (If you are reading this and think you need a car, you are wrong. Cars are expensive, stinky, and dangerous. Integrate the amount you’ve spent on cars over the years, and think what retirement would look like if you had that much money in the bank. That’s me (I owned a car for one of my 69 years), not you, you sucker.)
The main problem with SDCs, the problem that’s going to continue to make them not happen, is that the current round of AI is a crock. We couldn’t figure out how to do anything like the sorts of reasoning people handle very nicely (back in the 70s/80s we were actually trying), so the field just gave up and went back to the essentially religious belief that if you do a zillion really stupid calculations per second, “intelligence” will “emerge”. It’s seriously insane. (Specifically, it’s insane to believe that we can create intelligence without understanding what intelligence is.)
But the real problem with SDCs is that it’s the wrong idea. Providing improved safety systems/features would be useful. You tell drivers that they have to drive, and drive well, but some of the time, the car will help you out, maybe, but you have to be careful: driving’s hard and dangerous. If you tell drivers driving’s easy and the car can do it for you, then you’ll get people playing video games on their Tesla dashboards while driving. (This is already happening.)
afeman 12.10.21 at 12:55 pm
But right now what I’m reading says you shouldn’t buy a Tesla, and you probably shouldn’t ride in one either.
Or share the road with one, for that matter. It’s discouraging how Musk has been able to ignore whatever safety or truth-in-advertising standards the US has left, and it’s been observed that he’s been at it long enough and has enough of an idiot fanbase that any enforcement at this point will be harder than it once might have been. I wonder at what point the insurance companies will start taking notice.
aubergine 12.10.21 at 3:27 pm
Seems to me there are three basic causes of the last few years’ explosion in the effectiveness of machine learning:
1. People got much better at programming computers to learn things. The algorithms are better, the training techniques are better, the way information is fed into machine learning models is better.
2. The hardware kept getting more powerful, which made the more effective techniques devised through 1 easier to implement.
3. A huge amount of time and effort has gone into identifying the kinds of problems that machine learning is good at solving, and into breaking down more complex challenges into sets of those kinds of problems.
It’s easy to look at all of the things machine learning can do today (speech and image recognition, image processing, text generation etc.) and drastically overestimate how generalisable these capabilities are, and how far we’ve travelled towards genuine AI with general intelligence. That third factor is the key, though, and some kinds of problems are much harder to solve using machine learning than others.
Take GPT-3 – when it’s working well, it can produce some really convincing stuff, and if you don’t understand how it works it looks like magic. But if you do have a basic grasp of the technology (alluded to by Edward Gregson @13), the magic goes away. It’s still extremely impressive, but it’s an extremely impressive statistical model based on the (relative) computability of language. It has nothing like general intelligence; it may not even be a step in the direction of general intelligence.
Meanwhile, making a car drive itself in a reliable way involves a completely different set of problems, and is really, really hard.
Looking back at those older threads, JQ, I think this is the mistake you were making. You observed how good AI was getting at doing some things, but you didn’t realise how important that third factor has been to those improvements.
So:
You need to make sure that you’ve found the right counter-arguments. Then you need to understand them, take them seriously and treat your own beliefs with enough humility to accept the possibility that they might be right and you might be wrong.
CasparC 12.10.21 at 3:52 pm
New tech seems to go in three phases: the promise, disappointment, then realisation that it is here, but not quite in the form you thought it would take.
For the record, one friend told me how the lane guidance on his car allowed him to pour out a cup of coffee safely whilst driving, and another told how the cruise control also set a minimum distance from the car in front, so tech is coming in at a reasonable pace.
JimV 12.10.21 at 4:49 pm
The human brain contains about 100 billion (10^11) neurons. A flatworm has 200, and can use them to memorize a maze. Smart breeds of dogs have about 500 million neurons.
The biggest neural network I know of (there are probably bigger ones) has the equivalent of about a million nodes. It is the current world champion of Go. However, it turns out that a node in a neural network (today) is not the equivalent of a neuron. A recent paper reported that it took about 1000 neural network nodes to simulate the capabilities of a single neuron.
What this implies to me is that while simulating human intelligence is theoretically possible, it is probably practically impossible within the lifetime of human civilization. Of course human intelligence allows us to walk and chew gum and regulate many internal processes while thinking about math problems, and AI will still be able to solve many well-defined problems, such as playing Go. So far, however, driving a car does not seem to be such a well-defined problem.
Intelligence, by the way, is trial and error plus memory. In a well-defined problem, the trial and error can be simulated inside a computer, and millions of trials can be done quickly. Otherwise, the trial and error has to take place physically, one error at a time.
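A toy demonstration of that, in the spirit of Dawkins’s “weasel” program (a simple hill-climbing sketch, nothing more): random trials plus a memory of the best attempt so far crack a well-defined search problem in thousands of trials, where a memoryless monkey would need on the order of 27^28 keystrokes.

    import random

    TARGET = "METHINKS IT IS LIKE A WEASEL"
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

    def score(s):
        return sum(a == b for a, b in zip(s, TARGET))

    rng = random.Random(0)
    best = [rng.choice(ALPHABET) for _ in TARGET]  # a random first trial
    trials = 0
    while "".join(best) != TARGET:
        trials += 1
        # trial: mutate a copy of the best attempt so far
        candidate = [c if rng.random() > 0.05 else rng.choice(ALPHABET)
                     for c in best]
        # memory: keep the candidate only if it scores no worse
        if score(candidate) >= score(best):
            best = candidate
    print(trials)  # thousands, versus ~27**28 for memoryless random typing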
Aside from all this, the planet has a lot of human beings and a finite amount of resources. At some point we will have to decide how best to use and manage those resources. Two cars in every garage for the people in the richest countries seems a tremendous mis-allocation to me. At 75, I never have owned and never will own a car. (I have rented one on occasion so as not to have to travel by airplane.)
David Morris 12.10.21 at 5:21 pm
Kevin Drum still believes:
“Driverless cars are coming, folks. The first ones will probably be used in restricted environments (shuttle buses on a fixed route, maybe). Then they’ll get better. And better. And along the way we’ll all get used to them, the same way we all got used to smartphones. Insurance companies will figure out how to insure them. Legislatures will figure out how to regulate them. And they won’t require any changes to infrastructure.”
https://www.motherjones.com/kevin-drum/2017/08/clapping-harder-wont-keep-driverless-cars-from-taking-over/
marcel proust 12.10.21 at 5:22 pm
MP @6 Not sure if I’m getting your point
Every so often, perhaps more frequently, I find myself feeling a bit puckish. This was one of those moments. I was role playing, pretending that I wanted to steer a serious comment thread down into the mud — viz. ad hominem — but felt constrained not to do so without permission from the blogger themselves.
I don’t have anything more to add to this thread, esp. nothing serious (and already I fear that what I “added” was actually a minor subtraction!).
David J. Littleboy 12.10.21 at 8:13 pm
JimV writes:
“A recent paper reported that it took about 1000 neural network nodes to simulate the capabilities of a single neuron.”
Yes. https://www.wired.com/story/how-computationally-complex-is-a-single-neuron/
This should have surprised no one. We have a good idea about what an average mammalian neuron looks like.
1. It is extensive in space. (The axon is hundreds to thousands of times longer than the size of its cell body.)
2. It makes a LOT of connections: over 7000 connections on average.
3. It computes complex logical functions of its inputs. (That is, particular patterns of inputs can turn it on or off regardless of all other inputs. (Each neuron is a pattern-matching engine.))
4. It’s one of a large number of different types of neurons.
The “neural net” model has none of these characteristics.
The current round of AI is a complete crock.
(The funny thing is, I completely buy the idea that the human brain is a computational device that can (and should!) be understood in computational terms. But we’re not at the “simulate intelligence” or “create intelligence” stage yet. We’re at the “we don’t have a friggin’ clue as to what intelligence is” stage. We don’t know how to even characterize the amazing things (language, logical thought, amazingly kewl fine motor control (Kenny Barron’s piano playing!!!), etc.) human intelligence does. Yet the current AI field is committed to reproducing something it can’t even describe. Friggin’ inane.)
Cian 12.10.21 at 11:31 pm
I think it’s always wise to remember when dealing with stuff you don’t know that most journalists are reporting on topics that they are very ignorant on, and to check if industry pundits are talking their book. When you filtered out those two groups, you generally found researchers with a lot of experience in the field who were very skeptical about the chances of self-driving cars, and had very good arguments for why. It turns out they were right.
On AI generally, it’s always a good idea to remember it’s a (deliberately) misnamed field. As Aubergine says – machine learning is really a type of applied statistics that can be useful for dealing with certain types of problem that lend themselves to that kind of probabilistic analysis. Unfortunately these methods are extremely opaque, it’s hard to identify when your models have failed, and they are unpredictable when applied to new datasets. They are also extremely difficult to train (maybe impossible) when you’re dealing with wicked problem areas (such as driving in a range of conditions), as you never really know if your dataset is representative.
I saw someone the other day tweet that if the cost of failure is relatively low, and the value of success is high, then machine learning is a useful tool. Otherwise you probably want to look elsewhere. That seems accurate.
eg 12.10.21 at 11:50 pm
I had a conversation with a couple of engineers about 5 years ago about the prospects of self-driving cars. The consensus was that the technical challenge, as incredibly difficult as it is, paled in comparison to the regulatory and insurance hurdles.
Na ga happen …
Alex SL 12.11.21 at 12:02 am
David J. Littleboy: The zeroth problem with self-driving cars is that they don’t fix the problem that the private automobile is an incredibly bad and stupid idea.
With the caveat that I can see the utility of cars in remote and thinly populated areas where public transport isn’t realistically doable, this right here is a big issue not just for self-driving cars but also for electric cars. Yes, they would be better than human-driven, petrol cars, but ideally we would do away with most cars full stop. They are frighteningly wasteful in a world of strictly limited resources and a really, really poor option in cities. (Cue the usual photos comparing how much space 200 people take up in 200 cars versus one tram.)
Sadly, many people absolutely insist on driving everywhere in a car, even parents who could easily walk their kids the two blocks between their home and the local primary school. And they will vote for politicians who structure everything around cars and neglect public transport.
Regarding AI, I am not an AI specialist, but I am using it, especially image classification. In my experience, most people fall into one of two camps: those who had or read about a bad experience and declare it all rubbish (“a self-driven car caused an accident once that a human would not have caused, so it can never replace human drivers who cause 100x as many accidents because they are drunk, distracted, or exhausted”), and those who think it is magic pixie dust and therefore have hilariously unrealistic hopes and expectations.
In my experience (again), the counter-intuitive reality is that AI can do amazing things that humans couldn’t do and at the same time fail spectacularly to do very simple things that even the most ignorant humans can do trivially. And that is the problem causing the aforementioned human perceptions: we have a very poor intuition for what AI can do for us.
I am sure AI is already starting to revolutionise those tasks its brute force approach lends itself to – for example, high-throughput scanning of enormous quantities of images and videos for faults or threats where a human would need orders of magnitude more time and miss half the threats because their attention can only focus for so long, and anything that requires a single and highly specialised capability. But for anything complex with lots of potential parameters I don’t see humans being beaten anytime soon; for starters, my brain can drive a car AND tie shoelaces AND read a report AND draw a diagram AND cook dinner AND so on.
Edward Gregson 12.11.21 at 1:32 am
For comparison purposes, I believe the synapses in a neuron are roughly equivalent to the weights in a neural network (to a first approximation). The human brain has roughly 150 trillion synapses. GPT-3, the large language model I was mentioning, has 175 billion weights. Wu Dao, a Chinese version of GPT-3, has 1.75 trillion weights. That’s about a hundred-fold difference, and I’ve seen it said that if we put the same level of resources into scaling up a large language model as we did the Apollo program, we could make something with roughly the computing power of the human brain today (although god knows what dataset we’d train it on, if we were just scaling up GPT-3). And Moore’s Law hasn’t ended yet, so give it another 15 years and the hardware may have improved enough to let us run one or more human brain equivalents inside an everyday datacenter.
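Taking those quoted figures at face value (and granting the one-weight-per-synapse equivalence, which is itself only a first approximation), the gaps are easy to check:

    synapses = 150e12   # rough human-brain synapse count quoted above
    gpt3     = 175e9    # GPT-3 weights
    wu_dao   = 1.75e12  # Wu Dao weights

    print(f"brain vs GPT-3:  {synapses / gpt3:.0f}x")    # ~857x
    print(f"brain vs Wu Dao: {synapses / wu_dao:.0f}x")  # ~86x, the 'hundred-fold'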
So I think it’s the architecture and the theoretical understanding that is the limit on AGI, more than the hardware.
KT2 12.11.21 at 3:21 am
Vehicles. Humans. Externalities. Systems integration etc etc. À la The Fifth Discipline by P. Senge.
And what about a smart road as well?
1. ‘dumb’ vehicle as one system
2. lidar / training set / AI
3. ?
4. ?
5. Smart road & conditions acting as neocortex and external feedback
And AI, as several have pointed out, is not comparable to humans as yet. As the kids in the rear seat say, “Are we there yet?”
Alan White 12.11.21 at 4:07 am
About doubt. And about-facing.
Most of my career I had no doubt whatsoever that pragmatism was not just wrong, but perilously so. What was more apparent than that truth must correspond to what the facts are? Workability? Balderdash.
Then came age and experience. I now am a committed pragmatist, if not about truth itself, then about methodology that yields truthiness (to invoke Stephen Colbert, who in jest reflected how regarding something as true can have pragmatic results politically, but got closer to how everyday truth works than even he understood). For a more academic model, consider Nancy Cartwright’s philosophy of science. Look at how science works–not so much at its purely conceptual modeling. That’s truthiness in a positive pragmatic way rather than mere (Colbert) wishfulness–and so now how I see truth in relation to scientific practice. The data about Covid hospitalizations and deaths outweigh any alternative narrative about hoaxes or such. Anti-vaxxers die in numbers disproportionate to the right-wing narratives. That works for me.
So John count me in as one who saw that I was not only wrong, but completely wrong about how truth in this world works, not just in particular cases, but generally.
nastywoman 12.11.21 at 5:06 am
and about ‘self-driving’ cars –
I never understood the…
the attraction?
As one of the major fun –
somebody can have
with her or his clothes on –
is driving a very old Lancia Delta down the Amalfi Coast –
(even by being completely against any use of combustion engine – BE-cause they destroy our climate and environment)
BUT on the other hand – the streets on the Amalfi Coast are made for the smallest engine of all – a Fiat Topolino –
Capisce?
(and no wonder that Australians know ‘nuthing’ about it)
nastywoman 12.11.21 at 5:19 am
AND –
just philosophically spoken:
What is it with people who don’t want to ‘master the machine’
anymore –
and instead –
want the machine to master them?!
RichardM 12.11.21 at 11:19 am
Meanwhile:
https://www.ttnews.com/articles/mercedes-beats-tesla-hands-free-driving-autobahn
If someone genuinely expected autonomous driving to arrive faster than ‘sellable product in 2021’, they were pretty foolish. I wouldn’t have read Quiggin’s articles as arguing that. But, presumably, as he considers them to be mistakes, he does.
These things take time, and every salesman is convinced they have the new iPhone that takes things from ‘bunch of cool technologies’ to ‘everyone will have one’. That is in no way incompatible with a driving license becoming a thing you would have to go to university to spend three years studying for by, say, 2061.
afeman 12.11.21 at 12:48 pm
To address the OP’s question of how to get it right, I’d observe the following:
There were a number of skeptics in the comments of the previous posts who apparently had relevant background and laid out both the existing technical obstacles and the historical knowledge of AI’s long habit of speculatively generalizing from successes in very specific problems, such as chess. I have some of the same background and could confirm all of it, and I think it holds in general that people who know AI and computer science, particularly with regard to interacting with a wet and dirty world (like xkcd’s author, a former robotics researcher), mostly shrug and shake their heads at the claims out there. It can be hard to sort out who does have the relevant background or not, and when somebody with the relevant background is bullshitting, but “listen to relevant experts” is a place to start.
The combination of vaporware and hype (particularly with calls in advance to alter public policy), with a dash of breakthroughs promised on a schedule, is a blinking red light and klaxon. Consider the technical and market successes of Google and Oracle, both of which established themselves without much fanfare to the general public. Even Microsoft and Apple, notorious for their hype, at least present salable products, and without, say, calling for the elimination of payphones as superfluous. Do you remember when we were supposed to alter urban planning to accommodate Segways, which were at least a finished product? Do you remember Segways?
A more diffuse heuristic of a technological damp squib is a certain carnival of fandom, particularly among those outside the field in question. JQ will be familiar with it from the discourse around commercial nuclear power. The technology in question has a Heinleinian high-frontier inevitability for a certain type, if only those pesky regulators would keep out of the way. It accordingly tends to attract a certain libertarian mindset, regardless of how much it contradictorily depends on leviathan. (Somebody recently pointed out all the ways Musk is actually the statist villain from The Fountainhead; I am not going to read it to confirm.) Skeptics will face a gauntlet of either moony faith in the hero of the story, or the sort of contemptuous confidence Kevin Drum exhibits above. It will have adherents who will never ask what they might be getting wrong, or acknowledge gaps or uncertainties.
I’d be curious about what others see.
Hey Skipper 12.11.21 at 3:55 pm
So, I thought it would be good to consider where and why I went wrong on this relatively unimportant issue, in the hopes of improving my thinking more generally.
This is from the first of your posts:
Suppose that in any crash between autonomous cars and humans, each is equally likely to be at fault. What is the probability of seeing 22 crashes caused by humans and none by autonomous cars. Obviously, it’s the same as that of a fair coin showing 22 heads in a row, which is 2^-22 or about 1 in 10 million.
The weakness in your thinking is that you neglected to ask the preceding question. Given the prevailing accident rate among all cars, over the period, how many autonomous cars should have been involved in accidents?
I don’t know how many autonomous cars there were in 2018, but as of Feb 2020, there were 769 registered in CA. Twenty-two accidents among 769 cars in one year sounds extraordinarily high compared to regular cars. (Unfortunately, finding the raw number of accidents in CA in a given year isn’t easy. However, just thinking about myself, kids, and friends, I get well past 769 vehicle-years with no accidents.)
Because you failed to ask the first question, your assumption of equal fault-likelihood is unfounded.
Assume 769 vehicles chosen at random. If only 2 of them were involved in accidents in the previous year, then the fact that autonomous vehicles are more than ten times as likely to be involved in an accident makes it very likely that they are, in fact, more dangerous.
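For anyone who wants to check the arithmetic on both sides, here is a quick sketch using only the figures quoted in this thread, all of them rough (note in passing that 2^-22 is nearer one in four million than one in ten million, though the order-of-magnitude point survives):

    p_equal_fault = 0.5 ** 22              # 22 human-fault crashes in a row, if 50/50
    print(f"{p_equal_fault:.1e}")          # ~2.4e-07, about 1 in 4.2 million

    av_fleet, av_crashes = 769, 22         # CA registrations vs crashes, as above
    print(f"{av_crashes / av_fleet:.3f}")  # ~0.029 crashes per vehicle-year
    # The coin-flip argument assumes AVs and human-driven cars crash at the
    # same per-vehicle rate; checking 0.029 against the overall fleet rate
    # is exactly the preceding question being pointed at here.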
In addition, you could have considered an analogous system: airliners. Despite operating in a very simple environment compared to streets, by very highly trained, experienced, and monitored people, we are nowhere close to self-flying airliners. The reasons for that are just as applicable to autonomous cars.
Hey Skipper 12.11.21 at 5:22 pm
However, just thinking about myself, kids, and friends, I get well past 769 vehicle-years with no accidents.
Sorry, further recollection gets to three.
hix 12.11.21 at 7:19 pm
My impression was that for a while, the big car companies also expected/feared fast progress in that direction and spent non-trivial sums on catching up with Silicon Valley. At some point, things changed and they moved towards the “neither Silicon Valley nor we will figure it out anytime soon” viewpoint.
KT2 12.11.21 at 11:33 pm
Humans 1. SDC <1.
Lots of relevant links.
“Mercedes Beats Tesla to Hands-Free Driving”
https://www.gizmodo.com.au/2021/12/mercedes-beats-tesla-to-hands-free-driving/
*
nastywoman +1 “driving a very old Lancia Delta down the Amalfi Coast”
“Marianne Faithfull – The ballad of Lucy Jordan 1980”
youtube com watch?v=9jNrKMV3eCs
JimV 12.12.21 at 12:30 am
“We don’t know how to even characterize the amazing things (language, logical thought, amazingly kewl fine motor control (Kenny Barron’s piano playing!!!), etc.) human intelligence does.”–DJL
Trial and error over millions of trials plus memory does all those things. I guarantee you a neural network AI could learn to play piano (a well-defined problem) better than any human.
(Personal tip on learning to finger-pick on guitar: close your eyes. It will devote more neurons to the problem. Probably works for piano also.)
Trial and error plus memory created all the biology on Earth, including human brains. A lot of that trial and error hard-coded a lot of brain regions, I grant you, and replicating that trial and error would be a huge task, and as I already said, I don’t expect we’ll ever accomplish that.
John Quiggin 12.12.21 at 1:13 am
HS @33 A good question. A little digging reveals around 5 million police-reported crashes per year
https://www.1800thelaw2.com/resources/vehicle-accident/how-many-accidents-us/
The car fleet is about 250 million, so one crash per 50 vehicle years – your estimate suggests one per 40 for AVs.
As you say, this doesn’t match your personal experience. The point of the OP was that a small subgroup of bad drivers (of which you are not a member) is massively over-represented in crash statistics. If they could be taken from behind the wheel, this would be an improvement.
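Made explicit, with the caveat that treating the 769 cars as a full year of exposure is itself an assumption:

    us_crashes, us_fleet = 5e6, 250e6
    av_crashes, av_fleet = 22, 769
    print(f"all cars: {us_fleet / us_crashes:.0f} vehicle-years per crash")  # 50
    print(f"AVs: {av_fleet / av_crashes:.0f} vehicle-years per crash")       # ~35
    # One likely confounder: AV operators in CA must report every scrape,
    # while plenty of minor human crashes never reach the police statistics.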
John Quiggin 12.12.21 at 1:16 am
Saw this straight after posting: “Repeat offenders six times more likely to be in serious car accident”.
William Berry 12.12.21 at 4:56 am
@Jim V:
I agree with basically all you say; just want to note that you’ve switched memory types here. Where biological evolution is concerned, it is genetic, or racial, memory doing the work (It’s a type of “savings memory”. It is not purely random, but retains what has gone before. Dawkins ((in TSG?)) uses the analogy of a monkey producing Hamlet on a typewriter; typing completely randomly, there is no imaginable time scale in which it would produce the entire text of the play. But if every correct keystroke were to be saved, it would manage the trick in a surprisingly short time. Of course this assumes that there is a master copy that the monkey’s progress can continuously be compared to. In the biological case that template is the genetic package of germ cells or of embryonic individuals ((which is where transmissible mutations, useful or otherwise, will occur)) ).
Actual individual human memories (of the mental kind) collectively produce the memetic* (and largely mimetic!) structures that constitute culture, which follows its own patterns of evolution. As far as we know, human culture hasn’t been around long enough to have much of an effect (through epigenetic effects on phenotypes, which might then become more firmly established genetically?) on human biological evolution.
Anyway, not disagreeing with anything you said; just wanting to (hopefully) clarify a point, and to work it out for myself.
*Yeah, I know, memetics has fallen out of favor with most of the smart folks (I sometimes wonder if a lot of this anti-Dawkins, and anti-TSG stuff ((see, e.g., “group selection”)) isn’t partly driven by dislike of Dawkins for being such an asshole, but that’s another subject). But yeah, I still think it’s a useful concept, even as just a unifying metaphor (and isn’t that what a theory is, in a sense?).
nastywoman 12.12.21 at 6:04 am
@KT32
“Marianne Faithfull – The ballad of Lucy Jordan 1980”
How… how…? philosophical?
and the Faithful singing:
‘At the age of thirty-seven
She realised she’d never
Ride through Paris in a sports car
With the warm wind in her hair’-
reminded me – that at the age of about 12 or 13 I had to write about ‘the world of the future’ and just like Prof. Q I constructed all kinds of self-operating (flying) cars and cities and landscapes full of fully streamlined automated houses and the utmost modernistic landscape and now (most of the time) – I reside mainly in cities where the people in the utmost painstaking way restore their historical landscape and just last week I helped to re-install a 350-year-old – totally analog – door – which – supposedly –
led to heaven –
(accompanied by German Blasmusik)
So my main mistake was just a complete lack of… ‘imagination’?
(and I really regret it)
hix 12.12.21 at 7:53 am
Think my theory during the last debate was that self-driving cars would reduce public transport use (unless you count self-driving car sharing as public transport, naturally). That was a bit too eccentric: it would only happen if self-driving cars were very cheap and/or integration with (by then also self-driving) public transport rather bad. More realistically, the first thing we would see from self-driving cars would be the equivalent of cheaper taxis, cheaper beyond what Uber did. The tricky thing with car ownership right now is the fixed costs: you don’t get much cheaper insurance by driving less, the capital is already spent, and many maintenance and insurance costs are fixed as well.
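That fixed-cost point is easy to make concrete with invented numbers (purely illustrative, not market data):

    fixed_per_year  = 6000   # depreciation, insurance, registration (toy figure)
    variable_per_km = 0.15   # fuel and wear (toy figure)
    robotaxi_per_km = 0.60   # hypothetical self-driving taxi fare, no fixed part

    for km in (2000, 10000, 20000):
        own  = fixed_per_year + variable_per_km * km
        taxi = robotaxi_per_km * km
        print(f"{km:>6} km/yr: own ${own:,.0f} vs robo-taxi ${taxi:,.0f}")

On toy numbers like these the low-mileage owners defect first, which is the mechanism behind the cheaper-than-taxis prediction.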
oldster 12.12.21 at 9:12 am
Re: Segways
They never did take off, did they.
But now, many cities are being overrun by e-scooters, which fill what is essentially the same ecological niche, in a much simpler way.
(Niche: battery powered mobility for people in a standing position, which does not require exceptional athleticism or balance from the operator, and moves them at roughly twice to three times walking speed.)
E-scooters are not perfectly integrated into urban life yet — they live uncomfortably in the bike lane and in the foot-traffic lane. As a pedestrian, I would like to see them consigned to the bike lane; cyclists may feel otherwise. But on the whole, I think they are a good thing for urban transit, offering another “last mile” solution for those wearing business clothes and heels.
In any case: I can imagine someone looking at Segways 10 years ago and thinking: do we need all of the clever balance stuff? Couldn’t we fill that niche with something simpler? Segways may have helped e-scooters happen.
reason 12.12.21 at 1:35 pm
I’m wondering if we are putting a lot of the effort in the wrong place. Maybe we need more intelligent streets.
Joe B. 12.12.21 at 6:53 pm
” … particularly in the context of a blog where the discussion doesn’t have any direct implications for what happens in the world.”
Shame on you for bursting my bubble!
More on topic, I had a mole doing AI in one of the major autonomous vehicle companies who was very pessimistic that they could ever solve the problems with the data their sensors were giving them. That, and the horrific tech bro culture, led him to leave. Add to that the qualified immunity from prosecution for the deaths of innocents that these autonomous vehicle companies seem to have (or at least try to defend), and the level of trust in that tech is pretty low. But yeah, this is a minor problem compared to the fact that I can buy cryptocurrency from Venmo.
KT2 12.12.21 at 8:59 pm
reason, see me @27 above.
https://crookedtimber.org/2021/12/09/49362/#comment-815395
Can anyone fill in the blanks? AI heads, systems engineers: any other systems external to the vehicle as an aid to feedback, enabling 7+ sigma reliability?
Hey Skipper 12.12.21 at 11:33 pm
As you say, this doesn’t match your personal experience. The point of the OP was that a small subgroup of bad drivers (of which you are not a member) is massively over-represented in crash statistics. If they could be taken from behind the wheel, this would be an improvement
I don’t doubt that a small portion of drivers cause most of the crashes.
However, the point of the OP seems to be this: … I got enthusiastic about the prospects for self-driving vehicles, and wrote a couple of posts on the topic. It’s now clear that this was massively premature …
In regard to autonomous vehicle (AV) safety vs. human-operated vehicles (HOVs), it is important to see where the AV mishap rate sits on the distribution of drivers’ mishap rates. (Whether AVs or HOVs caused the mishaps is irrelevant.)
I don’t know where one could find such a distribution, but it is essential to knowing relative safety: is the AV mishap rate at the 20th percentile for human drivers, or the 80th percentile? (Where the 99th percentile is the lowest mishap rate.)
Another issue you should have considered is the driver-in-the-loop problem. The better AVs become, the more likely it is that their operators are no longer in the OODA (observe-orient-decide-act) loop, which means that when something unexpected happens, the startle factor will almost always lead to bad results.
This is a serious issue with auto flight systems, and is, IMHO, an impermeable barrier to AV capability beyond driver aids (automatic braking, adaptive cruise control, blind spot warning, lane keeping, etc.).
Assessing relative safety, and considering the issues in the way of fully autonomous auto flight systems, might have made you less optimistic.
Colin Danby 12.13.21 at 2:43 pm
Would I be right in thinking that sensors, at this point, are not the main issue? That is, it’s not too hard to gather lots of data about the car’s environment; the problem is making sense of it all in real time?
Tm 12.13.21 at 6:23 pm
I thought this article interesting (concerning AI and gullibility):
A Gallery Has Sold More Than $1 Million in Art Made by an Android, But Collectors Are Buying Into a Sexist Fantasy
https://news.artnet.com/opinion/artificial-intelligence-robot-artist-ai-da-1566580
Trader Joe 12.13.21 at 9:08 pm
I don’t think it was wrong to be enthused about the potential for AVs. What may have been an error in judgement is imagining that Level 1 autonomy (lane monitoring, sensors etc), which a lot of vehicles possess at fairly affordable prices, was a reasonably linear path to Level 5 autonomy (i.e. get into a vehicle with no steering wheel or brake pedal that simply takes you where you want to go).
As with most technology, the path from Level 1 or 2, which some cars/trucks now have, to Level 5 is really a J curve.
The other thing I think that is often overlooked in imagining the ‘happy future’ of greatly reduced traffic fatalities and crashes is the almost interminably long period of time where the “old tech” and the “new tech” co-habitate on the roads which can in fact produce its own trade-offs in loss frequency and severity.
To take only the US – there are roughly 300 million vehicles and around 15 million new vehicles sold per year. If we assume that there is 1:1 vehicle replacement in a fleet – it would take a minimum of 10 years for 1/2 the vehicles to have the ‘new tech’ and that’s if 100% of vehicles sold had the new tech. When the US mandated airbags back in the 1980s it was more than 20 years before 50% of passenger cars involved in crashes had an airbag. It was about 2015 when 80% was reached and that was for a 100% mandated technology. Something not mandatory would take far longer.
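The turnover arithmetic is worth making explicit (a deliberately optimistic linear model in which every new car has the tech and only old-tech cars get scrapped):

    fleet, sales = 300e6, 15e6   # rough US figures from above
    for target in (0.5, 0.8):
        years = target * fleet / sales
        print(f"{target:.0%} of fleet: {years:.0f} years")  # 10 and 16 years

Even the best case is a decade to reach half the fleet; with realistic adoption rates you land on the multi-decade airbag timeline described above.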
Bottom line JQ – even if we had AVs tomorrow and even if 100% of all new cars had it, the odds are you’d still be involved in one or more crashes between now and the end of your life (which hopefully would not be concurrent).
KT2 12.13.21 at 10:47 pm
Tm, tangential to OP, and re “A Gallery Has Sold More Than $1 Million in Art Made by an Android, But Collectors Are Buying Into a Sexist Fantasy”.
New directions. For AI not humans.
Who will consume? “GPT-3 makes up for consistency with a prolificacy that borders on profligacy. It’s currently generating the equivalent of 80,000 books per day for the various apps that are hooked into it.”
Q1. How long before economics writers effectively cease to exist?
Q2. How long before economists are replaced?
Q3. How long before vehicle and AI designers are disrupted?
“Big Tech is replacing human artists with AI
Corporations are automating everything, even our pets
Erik Hoel
“In my office hangs a painting called ‘Rule of Consciousness.’ An ironic title, as it actually signifies the end of the rule of consciousness. For it was designed by an AI. I keep it not because I like it. In fact, I hate it. Or rather, I’m afraid of it. It’s a sobering reminder of what’s coming, which is that human art is close to total control by corporations and no one seems to care.”
[Example AI generated artwork. – ok imo ]
Caption. Art for The Intrinsic Perspective is by Alexander Naughton
…
“In fact, I’d go so far as to say GPT-3 itself is already a better writer than Nathan Englander. Under some reasonable metrics, it’s already a better writer than any living person for short pieces of prose or poetry. That is, writing in short sprints has effectively been ‘solved’ in the way that Chess and Go have been. Oh, I’m not saying GPT-3 is a consistently better writer, even for short pieces. But the measure of a writer is not just qualitative, but also quantitative. And GPT-3 makes up for consistency with a prolificacy that borders on profligacy. It’s currently generating the equivalent of 80,000 books per day for the various apps that are hooked into it. Notably, GPT-3 is licensed by Microsoft, and is therefore closely guarded. You interact with it only via apps which act as oracles. It’s basically a genie stuffed in Microsoft’s basement you can Zoom with.”
….
https://erikhoel.substack.com/p/big-tech-is-replacing-human-artists
Moz In Oz 12.13.21 at 11:07 pm
Sensors are part of the issue. It comes down to the working life of the sensor vs cost. Right now a lot of the sensors are electronics with that 5-10 year service life, being put into cars with a 10-30 year working life. At some point “remove all the electronics and put new ones in” becomes uneconomic if it’s even possible (due to old electronics not being made any more).
The company I work for makes electronics that are expected to last forever (by our customers) and be repairable forever (by our retailers). We’ve only been in business ~25 years but we keep parts on hand for the very first product we made, and everything since. But some parts are not manufactured any more, so once our spares on hand run out those products will not be repairable. And we make very small, very simple electronics. The idea of doing that for a whole car terrifies me… but a whole car is much more expensive to replace with “sorry, we can’t fix it, here’s the new model as a free replacement”… urk!
It’s important to keep in mind that no-one is currently even trying to make “autonomous all the time” vehicles, despite the obvious usefulness of that. One trivial example is Australia’s corrugated roads. Those stress test normal cars; the idea of making even a camera work in that environment is laughable (everything is vibrating, the camera will output a blurry mess). Building a sensor that has moving parts in it to survive years of that every day doesn’t bear thinking about. But somehow people drive corrugated roads every day.
You can scale that up to just about any situation where people take cars, from flooded rivers to fire trails to the vehicle tracks that appear on your local sports oval from time to time. Think about someone who rarely to never actually drives the car they’re a passenger in being told “sorry, the current flood/fire/insect swarm/sunspots mean you need to drive manually”. Like those bushfire convoys we get here, but imagine someone driving for the first time in a decade being in the middle of that convoy. Suddenly “mostly autonomous” seems like a recipe for half the cars in the convoy not making it.
Comments on this entry are closed.