In May last year, I posted a tally of successful and unsuccessful predictions, and was challenged in comments on my optimistic views about self-driving cars. I’ve said all I can about the fire apocalypse for now, so I thought I might check how things were going. That’s motivated largely by the belief that autonomous vehicles will almost certainly be electric, a view shared by GM CEO Mary Barra, according to this report.
The same report points to missed deadlines and delays, mostly caused by a small number of high-profile fatalities, all but one associated with Teslas on autopilot (Level 2 technology in the jargon).
That slowed things down enough to falsify my predictions. Nevertheless, GM is now petitioning the National Highway Traffic Safety Administration to allow up to 2,500 completely driverless vehicles (no steering wheel or pedals) on the roads. At the same time, Google subsidiary Waymo claims to have started commercial operations of fully driverless (they prefer the term “rider only”) taxi services in its test area in the suburbs of Phoenix.
These are both small scale programs. The significance lies in the fact that they cross the uncanny valley of almost driverless (Level 3) technology with a human safety driver. The high-profile fatal crash that slowed things down in 2018 was caused by a distracted safety driver, and there’s no obvious way to overcome the problem of distraction in a vehicle that almost always drives itself.
The Waymo and GM vehicles are Level 4, capable of operating without any human intervention, but only in limited conditions (flat, sunny, Arizona suburbs). But, if the jump to fully driverless operation succeeds, it’s a matter of incremental steps to larger operating areas, a wider range of weather conditions and so forth. Given that they are being held to much higher standards than human drivers (who killed over 36,000 people in 2018 without attracting much attention), the developers of AVs will have to be ultracautious in relaxing their operating constraints. But crossing the uncanny valley is the crucial step. If they succeed in that, the rest is a matter of time.
{ 98 comments }
Nicholas Gruen 02.05.20 at 7:12 am
In all these circumstances, I think it’s a pity that some effort isn’t expended to point out the unlevel playing field you mention in the article John.
It’s perhaps appropriate in this case that the autonomous vehicles are held to a higher standard (he said trying to think of arguments and settling for the political one of voter ‘confidence’.) But we should always be aiming for a playing field which is leveller than this!
And the way to do it would be with policies and institutions that made it transparently clear to the public that no moving vehicles are 100% safe but if and when driverless cars expand as a share of the car population they’ll strongly contribute to better safety.
We should do the same for letting nurses and other paramedics do tasks within the medical system, paralegals within the legal system, and so on.
No need to do it in economic forecasting. No-one has any idea ;)
Alan Coovert 02.05.20 at 7:49 am
What a bunch of magical thinking. Not one intelligent thought that maybe relying on a four wheeled, 2 or 3 ton, complex, high speed box for day to day locomotion, no matter the fuel source, is ecocide writ large.
Tim Worstall 02.05.20 at 11:34 am
There’s an interesting implication to this:
“But, if the jump to fully driverless operation succeeds, it’s a matter of incremental steps to larger operating areas, a wider range of weather conditions and so forth.”
If so high speed rail is stone dead.
Freight doesn’t need high speed. High speed is also, by definition, not a stopping train. Therefore it requires travel to some not entirely local station, train, then onward travel from station to destination.
If your sofa (couch perhaps) will take you door to door, with your own wifi, bookcase and cup of coffee, then what use that railroad? And if the cars are electric (or algal oil fuelled, or hydrogen and fuel cell or….) then there’s not even an emissions point to the train either.
Not that there is all that much currently: a small car (say, a Polo) with four people already emits less than a long-distance train trip.
john 02.05.20 at 12:30 pm
Almost all non-commercial vehicles have AV features incorporated. We’re being eased into the future. Smaht Pahk.
Rapier 02.05.20 at 2:25 pm
Robot vehicles will be limited mostly to the glittering capitals, in areas poor people don’t live, and limited access roads between them. Costs will soar as anti sabotage designs like airless tires and camera systems will be included. Eventually the first law of robots will have to be abandoned as the robots will have to engage in self defense.
Unoccupied vehicles on the road will be despised by many people, some of whom will act on that hatred. This has nothing to do with the technical feasibility of robot vehicles, beyond their defensive measures. It’s about people.
Otherwise be happy.
JimV 02.05.20 at 3:54 pm
It seems to me that the best way to reduce the number of drivers is a combination of trains, buses, and walking. As it is, roughly 100 cars go by me (in both directions) every five minutes as I walk to the nearest supermarket, mostly with a single driver. It seems like a colossal waste of resources to me at a time when we are running out of renewable resources (due to over-population and over-consumption), to which driverless cars (so ultimately more license-less people could have a car) would only add.
(I’ve never owned or wanted to own a car, so yes, I am biased.)
Brian 02.05.20 at 3:59 pm
I agree, with a caveat. Full disclosure: I worked on a self-driving car project 17 years ago during the DARPA Grand Challenge era. My company developed a 3D mapping system based entirely on cell-phone-type cameras, quite a few of them. FWIW, I think this will need to be added to self-driving cars. One of the principles I developed working in automation was to use multiple modes of data collection. Getting that last 1% takes the other 99% of the work. 3D mapping cameras are very powerful, but it takes a lot of computing power to do it fast. The project ended while we were wrestling with putting our algorithms into FPGAs. That’s why LIDAR and such are more common to use. It’s a simpler concept.
I worry about runners, cyclists, etc.. But once the car sensor systems get good, they will improve safety for everyone else.
Anyway – so far, self-driving cars have all been 100% electric in the current period. (Wasn’t the case during the DARPA Grand Challenge.) But electric cars don’t work so well where it gets cold. https://batteryuniversity.com/learn/article/discharging_at_high_and_low_temperatures
In large parts of the world winter temperatures are cold enough to wreck electric vehicle range and performance. So that’s not going to work for people that drive cars in those regions.
marcel proust 02.05.20 at 4:20 pm
Something about bots jamming comment threads seems called for here, or at least bots on the internet. Given their effect on politics in the US (and other places: I am given to understand such exist), it is appalling that they are not held to the same high standard as robotic vehicles.
eg 02.05.20 at 4:42 pm
In discussions with engineers, there tends to be recognition that the rate-limiting step where deployment of autonomous vehicles is concerned is less likely to be technology, and more likely to be regulatory.
fledermaus 02.05.20 at 4:49 pm
I really can’t give much credit to credulous business website clickbait (at least this one acknowledges that 10 years is an optimistic projection). City driving is the most difficult problem in self-driving, and we are more likely to have self-flying cars than a driverless (oops, sorry, passenger-only) car you call up on an app that takes you to the theater downtown.
The hype persists because some companies have invested a lot of money in self driving cars and a lot of techno-utopians really love the idea of them, so therefore they are going to happen. But then again there is huge interest in a chemical that turns lead into gold, but we are not going to get that anytime soon either.
Cervante 02.05.20 at 5:43 pm
I still think you’re too optimistic, or at least your spin is. There is actually a fully autonomous vehicle operating already on the streets here in Providence, RI but . . .
It follows a single, closed loop in low-speed conditions. If there is going to be construction or some other form of disruption, the operators are informed in advance. They do pay an attendant, not because he needs to intervene in the operation but so people won’t puke, piss or shoot up in the vehicle. So the whole thing is fairly pointless, since for the same pay he could drive the thing. And it won’t operate in snow or with snow banks on the shoulder.
Sure, if you limit the vehicles to well-mapped, unchallenging territory and don’t operate when there are anomalies, they can work well enough. I would say for the most part that closed campuses are your most likely use. But as far as a vehicle that will take you anywhere, we’ll wait for the review from George Jetson.
Cian 02.05.20 at 5:46 pm
That slowed things down enough to falsify my predictions. Nevertheless, GM is now petitioning the National Highway Traffic Safety Administration to allow up to 2,500 completely driverless vehicles (no steering wheel or pedals) on the roads.
But what does that mean in practice? Is this for a trial? Will they be monitored? You’re assuming a lot here, based upon a single uninformative article.
At the same time, Google subsidiary Waymo claims to have started commercial operations of fully driverless (they prefer the term “rider only”) taxi services in its test area in the suburbs of Phoenix.
They can claim what they like, but if you read the article this is not a commercial operation. It’s limited to ‘beta testers’ who’ve signed an NDA, while the cars are still being monitored by a team of engineers who still have to intervene and help when the cars don’t recognize something. This is a laboratory test that’s being marketed as something else for PR reasons.
Here are some problems that need to be solved for this technology to be remotely viable (and these are problems that may not be solved):
+ Mapping. Currently these cars rely upon maps that are extremely expensive and time consuming to produce, and which require constant maintenance. If Google don’t solve this problem (and so far they haven’t, or even come up with a convincing approach for how this could be solved), then these cars are always going to need to remain geofenced. Maybe acceptable for buses (assuming the maps can be produced cost-effectively, which is a big assumption), but useless for mass consumers.
+ Expense. The LIDAR sensors are very expensive, and nobody has managed to work out a way to reduce them to an even vaguely sane price. History is littered with technologies that failed because nobody could get them down to a reasonable price. Also, the data bandwidth these cars currently require hugely exceeds anything our current data networks can support, and there’s no reason to assume that’s going to change anytime soon (maybe ever).
+ The remaining problems are all very hard. Currently Google have solved the easy problems. What remains are the wicked problems. Identifying and dealing appropriately with something you’ve not seen before (a regular occurrence for drivers). Dealing with tricky social situations on the roads (who has right of way? What does that hand gesture mean? Can I merge?). These are all very hard problems that no AI technology today knows how to solve. It’s possible they are not solvable by any existing AI technology.
In the meantime, I hear there’s this technology called the electric bus that’s extremely cost effective and deployed at scale all around the world.
Cian 02.05.20 at 5:52 pm
In the late 1980s there was a lot of hype around Virtual Reality. Jaron Lanier wrote a book about it (and built a career on it). VR wasn’t new, it had been around since the late 70s, but everyone agreed that this was the future.
Fast forward to 2020 and Virtual Reality is starting to become a successful mass consumer item.
More pessimistically. Expert systems in the late 80s/early 90s were the AI that could. They found oil, helped doctors diagnose diseases. Fast forward to 2020 and Expert Systems have still never really amounted to anything. The history of technology is mostly a history of failure.
Doug K 02.05.20 at 6:07 pm
Fully autonomous driverless cars require an artificial general intelligence. That’s been just five to ten years away for nearly sixty years now.
The problem is much harder than anyone thinks.
As a computer scientist and AI researcher (in a former life) I believe we will never see level 5. Fully autonomous vehicles in restricted environments are possible today, but in uncontrolled environments the problem gets really difficult.
In 1995 (see link from my name) two Carnegie Mellon robotics professors wrote software and installed a desktop in a Pontiac minivan, which then drove itself across the US achieving 98.2% autonomous function.
I’m not sure anyone has done better in uncontrolled environments, since then.
This is Moravec’s paradox, what is easy for humans is hard for machines.
Moravec writes:
Encoded in the large, highly evolved sensory and motor portions of the human brain is a billion years of experience about the nature of the world and how to survive in it. The deliberate process we call reasoning is, I believe, the thinnest veneer of human thought, effective only because it is supported by this much older and much more powerful, though usually unconscious, sensorimotor knowledge. We are all prodigious olympians in perceptual and motor areas, so good that we make the difficult look easy. Abstract thought, though, is a new trick, perhaps less than 100 thousand years old. We have not yet mastered it. It is not all that intrinsically difficult; it just seems so when we do it.
Hidari 02.05.20 at 6:30 pm
I had a much longer reply in my head, but then apathy intervened.
So suffice to say: the question involved in considering whether or not a new technology will get adopted is: what problem does this technology solve? With cars, planes, phones, etc. the answer is obvious.
What problem is it that a driverless car solves that cannot be solved by, say, a taxi or a chauffeured limo (given that driverless cars are always going to be more expensive than ‘drivered’ cars, for obvious reasons)? Or Tuk Tuks, or ‘personalised buses’? All of which are always going to be much cheaper.
Also, the most egregious error of technological determinism is assuming that because a technology can be generally adopted, it will be. In which case: why no moonbases? It’s eminently doable. And why haven’t all the world’s undergrounds/metros/tubes been completely automated, something that is far easier than building a truly driverless car?
Whatever happened to Google Glass?
And I find the argument: ‘we have (sort of) solved most of the easy problems, therefore we are inevitably going to solve all the insanely difficult ones’….interesting.
There have been long discussions on CT about the ‘we went to the Moon therefore we will inevitably go to Mars’ argument.
steven t johnson 02.05.20 at 8:16 pm
Still inclined to think riderless cars will be feasible when the roads are reserved exclusively for riderless cars, “solving” the difficulties of human error in oncoming traffic, cyclists, pedestrians, etc.
Still inclined to think the fundamental idea is an automated taxi service without the necessity of dealing with taxi drivers, much less get into a bus or subway car with other people. In the event of a privately-owned riderless car, the idea is to send it to pick up grandma and take her to the doctor or take the kids to soccer practice, etc. But I suspect the sixteen year old will still think you’re a failure if you don’t buy them a car.
Re-engineering the automobile, much less designing a comprehensive transit system for cities, appears to me to be far too large a fixed cost to ever be feasible in the current economic system. (At the moment, replacing current infrastructure is a non-starter.) No doubt the final solution will be lowered consumption, especially meat for those so antienvironmentalist as to want to eat meat without paying the real price.
The Malthusian solutions of war, pestilence and famine will no doubt present themselves, much to the earth’s relief.
nobody 02.05.20 at 9:27 pm
Going from coping with perfect conditions to true autonomy in all environments is such a step change that I doubt incremental improvements will be enough to get us there. Automated driving in perfect conditions is basically a real-time image recognition problem. It’s difficult but the state of the art in computer science is good enough to know how to approach the problem. Resolving the bugs in good-conditions self-driving vehicles is in very large part just a matter of throwing enough computational resources at the problem. Full autonomy, however, means being able to deal with situations that require something approaching human-level judgement to navigate safely. That’s a much harder problem.
Full autonomy means interacting with police, interacting with construction flaggers, coping with temporary detours over unmarked, ambiguously marked, or unpaved surfaces, identifying de facto lanes created by drivers on snow-covered roads, dealing with single-lane rural roads with uncontrolled passing pull-outs, being able to tell the difference between hazardous road damage (e.g. sinkholes, washouts, or oil on the road) and cosmetic debris (e.g. paint or shallow water on the road), and being able to make safe decisions when confronted with conflicting information (e.g. a green traffic light with a guy in a reflective vest standing in the middle of the intersection motioning for all traffic to stop). Unlike image recognition, no one has any idea how to cope with these classes of problems computationally. It’s quite possible they’re not solvable without an AI sufficiently sentient that treating such a system as property would arguably constitute slavery.
I think we’ll be able to get to the point of self-driving vehicles that can cope with best-case situations without too much more trouble. We’ll probably be able to get to the point of self-driving vehicles that work near perfectly on roads that have been augmented with guidance systems and some form of centralized traffic control. I’ll be very surprised, however, if completely hands-off systems that can equal human drivers under all possible conditions will be feasible this century, if ever.
oldster 02.05.20 at 11:08 pm
I bought my first car almost 50 years ago, and I have been a walker and a biker for longer than that.
I don’t mind the introduction of driverless cars, provided that we can introduce a new piece of legislation to accompany them:
every death or injury to a walker or biker is to be investigated as a homicide or unlawful assault, in a court of law. The defendant in the case of a traditional car is the driver; in the case of a driverless car, it is the owner (or CEO of the corporation that owns it).
In the US, drivers are almost never investigated, much less convicted. Of anything — not even negligence.
If you want to murder someone with impunity in the US, your best method is to use a gun — provided that you are white and male. Your second best method is to simply run them down with a car — and if you are not white and male, this is probably your first best method.
You will walk free. The biker or walker will die without a hope of justice being done. Cars will continue to have open license to kill.
And that is the same pattern that we have seen with the first driverless cars — oops! the Tesla killed a biker. Oops! The Tesla killed a walker. Oh well — they had no right to be in its way, anyhow.
Changing this attitude towards cars, whether driven or driverless, is far more important to me than the question of what’s behind the wheel.
Matt 02.06.20 at 12:31 am
Almost all non-commercial vehicles have AV features incorporated.
As-put, this is _obviously_ too strong – It’s true of “newer” cars in western countries (and probably Japan and Korea) but most cars on the road are not “newer” and most cars in many countries don’t have this even if they are newer. I’m sympathetic to the skepticism in many of the other comments, but wanted to point this out because, if we don’t understand the current situation, we’re unlikely to properly evaluate the likelihood of developments in the future.
John Quiggin 02.06.20 at 1:02 am
Lots of interesting comments, mostly sceptical as I anticipated. One thing that strikes me (coming from subtropical Brisbane) is the frequency with which the inability to handle snow/cold/ice is treated as a trump card. Lots of places (accounting, I think, for most of the world’s human and car population) never see snow. Lots more (DC and Atlanta, to name the only US places I’ve lived in) deal with it by shutting down. Is there a better case of something AI can’t handle, and human drivers can?
John Quiggin 02.06.20 at 1:36 am
@17 Following up on road conditions, a constant theme of road safety advertising here is “if it’s flooded, forget it”. Translating, human drivers are incredibly bad at making one of the judgements to which you refer, namely “is this water on the road safe to drive through”. As a result, they have to be rescued (or, sometimes can’t be) every time there is heavy rain, which happens a lot here. AVs will doubtless err on the side of caution, but that’s a plus in my view.
John Quiggin 02.06.20 at 1:44 am
@18 Actually, all five Tesla autopilot fatalities have been driver-only. The single pedestrian death, attributable to a distracted “safety” driver, was front-page news worldwide.
https://en.wikipedia.org/wiki/List_of_self-driving_car_fatalities
AVs are being held to a much higher standard than humans, and this will almost certainly continue. Partly, that’s because they are likely to be owned by corporations, which make a much easier target than (as you say) white male drivers.
spiro 02.06.20 at 2:26 am
One thing worth mentioning I haven’t seen yet in the comments: JQ takes Google & GM and presumably other developers’ reports at face value. Companies working on over-hyped and fiercely resisted technologies do not have any incentive to be forthright, nor do they have any history of same. As long as the information sources remain untrustworthy with strong incentives to lie and hype actual progress, the prediction game will be slanted in favor of boosting stock prices.
Chetan Murthy 02.06.20 at 2:53 am
John,
I think you might have said something in a previous post about how one of the problems we all have with relinquishing things to corps, is the deep-seated suspicion that once they have control, they’ll worsen safety standards to the point where we’ll wish we’d kept firmly in control. And I wonder whether this is some of what’s going on here: sure/sure/sure in a better world, AVs will be much safer than human drivers. But in *this* world, we all fear that the way corps will make them work, will be “dedicated AV lanes where a human setting foot in the lane is tantamount to a death sentence with your estate going to the corp to pay for cleanup.” And (of course) worse. We can’t even imagine the kind of shit these corporate masters will invent, to shave a tenth of a percentage point off their costs.
It’s a real and basic problem with the entire problem of corporate governance, this.
Luckily (for those corps) most of us humans will knuckle-under until we end up living in Larry Niven’s future-world dystopia where running a red light is a capital offense with one’s organs distributed to worthy recipients, wot.
Sigh.
OK, that said, I -do- agree with you, that once AVs get a foothold in some (decent-sized) geographic area where they can and do operate with no human intervention whatsoever (big caveat there!) it’ll only be a matter of time and “grad student elbow grease” before they arrive pretty much everywhere that matters. Of course, how much time ….. well, that’s the rub.
One thing I’ve noticed with all this “well, humans can do this, and computers can’t” babble is that once a computer cracks a problem, a flurry of arguments arises for why that problem wasn’t really a good example of human decision-making. Nobody ever makes those arguments before the computers crack the problem — only after.
I remember when Deep Blue II beat Kasparov, his first reaction was to argue that somebody on the Deep Blue team had intervened (and cheated, *rofl* *snort*). And so, I kind of agree with you, that it’s all about establishing a beachhead on the other side of that uncanny valley.
Collin Street 02.06.20 at 3:14 am
There are actually sound public policy reasons for holding automated operations to higher standards than the average manual operator, mostly to do with scale, ease of making alternate arrangements, etc.
(What are the legal and practical complications of disqualifying an autopilot program from driving?)
Eric in Kansas 02.06.20 at 3:51 am
Rapier & EG above are headed in the right direction, I think.
Everybody here is forgetting that autonomous cars are not a technical problem.
Answer this question and I will agree that self driving cars might happen:
When (not if) your autonomous vehicle crashes, who accepts liability?
The programmers?
HAHAHA…
Self driving cars are an insurance nightmare. Until they solve the legal liability problems, fully autonomous cars will never be more than an experiment and/or stock scam.
Edward Gregson 02.06.20 at 3:51 am
For a long time, autonomous cars were almost hilariously bad when operated outside perfectly controlled labs and factory floors, and computer vision consisted of blob finders and edge detectors. Then in the last decade or two, improvements in computer hardware, massive new data sets for training and years of tinkering with architectures and training methodologies for neural nets conceived decades earlier allowed computers to start detecting objects from sensor data with similar accuracy and detail to humans (in the right conditions). This drastically improved autonomous vehicles’ data association capabilities, meaning that previously conceived schemes for control, planning, localization and SLAM that made sense but only worked well in lab conditions now could be made to work most of the time in the real world.
People got excited because vehicles that used to drive 50 feet at 10 km/hr along a straight road and then end up in a ditch could suddenly drive like people most of the time. Many people got excited enough to bet lots of money that full autonomy would be here soon. But these systems are not smart or reliable enough to be trusted to drive on their own like a person. To get to the point where they can match the flexibility and reliability of a human will require advances in AI that can’t credibly be given a solid near-term timeline. So while geo-fenced shuttles and prototype taxis with safety drivers may become a thing in the next decade, don’t expect a level 5 robot chauffeur any time soon.
Chetan Murthy 02.06.20 at 4:01 am
JQ @ 20:
Some examples I see raised:
(1) “road construction”, “vehicle blocking a lane”, and stuff of that ilk. Basically, things that require humans to interact with each other (via lossy channels like hand-waving, abrupt starting-and-stopping, and other ways of signaling intent).
(2) substandard road conditions (lack of signaling, signs, painted dividing lines)
(3) pedestrians either disobeying laws, or exercising rights that conflict with efficient driving
–> e.g. there was that video a while back of an (Uber, IIRC) AV yielding to a pedestrian “jaywalking” — and the person posting it wrote it up as “gee, we actually detected this pedestrian breaking the law, and dealt with it safely!” when in *fact* it was a pedestrian merely crossing at an unmarked (T-shaped) intersection in downtown SF. In CA, pedestrians have the right-of-way at all intersections, whether marked or unmarked (IIRC).
Everything that has to deal with humans, and everything that has to deal with the non-up-to-code-ness of the roadway, seems problematic, but those are the sort of things I’ve seen cited repeatedly.
Zora 02.06.20 at 4:27 am
A driverless shuttle bus service was introduced in Stockholm in 2018. A fixed route, low speed, automatic stopping at bus stops. I can find articles about the service starting, but don’t know if it’s still in operation. Seems as if this would be a good use of the tech. Better than trying to duplicate the personal automobile.
John Quiggin 02.06.20 at 4:35 am
Chetan @28 Of these, only the pedestrian (and also cyclist) case is a real problem. We already know where roads are substandard and (since the arrival of Google maps etc) where construction, obstruction and so forth are a problem. So, until the necessary grad student elbow grease is applied, AVs can just route around these situations. They are rare enough that this wouldn’t have much of an adverse effect on performance in the tasks that are likely to be tackled early.
As regards pedestrians and cyclists, my experience in both roles is that a significant number of human drivers exhibit a combination of aggressive disregard of the law, incompetence and outright malice that would be hard to replicate in a robot. That’s where I got started on this.
Having said that, I agree that what matters isn’t the technology but social control. If AVs came with rules providing absolute priority for pedestrians over all motorized vehicles, that would be hugely beneficial. If they came with the kinds of rules you mentioned (more likely in the current environment), not so much.
Chetan Murthy 02.06.20 at 5:35 am
JQ @ 30:
Oh, but it’s not just construction — by “vehicle blocking a lane” I mean stuff like delivery trucks. The UPS truck stops, leaving only one lane open, and drivers trying to get thru have to negotiate with each other to take turns. Certainly, not something you can route around, b/c completely unpredictable.
nobody 02.06.20 at 5:40 am
@21:
I’m thinking more of a few mm of water runoff rather than flooding. Deep flooding is obvious enough that it should be fairly easy for automated driving systems to identify. Motion analysis (e.g. is the surface of the road ahead moving in a direction other than the direction of travel) and LIDAR systems accurate enough to detect sudden changes in surface slope ought to provide decent cues here.
Accurately identifying the difference between water runoff and visually similar hazards (e.g. black ice, or thick oil) is a much harder problem.
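Purely as a toy illustration of that motion cue (all numbers invented; this is nothing like a real AV perception stack), one could flag road patches whose apparent motion runs across, rather than along, the direction of travel:

```python
import math

def flags_cross_flow(flow_vectors, travel_dir=(0.0, 1.0),
                     angle_thresh_deg=30.0, min_speed=0.5):
    """Flag patches whose apparent surface motion deviates from the
    direction of travel -- a crude cue for water running across the road.
    flow_vectors: list of per-patch (dx, dy) motion estimates, px/frame."""
    tx, ty = travel_dir
    tnorm = math.hypot(tx, ty)
    flagged = []
    for i, (dx, dy) in enumerate(flow_vectors):
        speed = math.hypot(dx, dy)
        if speed < min_speed:  # near-static patch: no useful signal
            continue
        # angle between this patch's motion and the travel direction
        cos_a = (dx * tx + dy * ty) / (speed * tnorm)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle > angle_thresh_deg:
            flagged.append(i)
    return flagged

# Patches 0-2 move with the direction of travel; patch 3 drifts
# sideways, as runoff flowing across the lane might appear.
flows = [(0.1, 2.0), (0.0, 1.8), (-0.1, 2.1), (1.5, 0.2)]
print(flags_cross_flow(flows))  # -> [3]
```

In practice the flow vectors would come from optical-flow estimation over camera frames, and distinguishing water from ice or oil would need other sensing entirely, as noted above.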
Priest 02.06.20 at 5:45 am
One class of scenarios I haven’t seen addressed (which doesn’t mean that they haven’t been somewhere), is – once I’m in the passenger-only taxi, am I a prisoner in the vehicle until it reaches the original destination I entered on the app when I ordered my ride? What are the “Ride Interrupt Protocols”?
Example: halfway to my destination I realize I left my wallet at home. When this happened while in a Lyft, I apologetically asked the driver to loop back to my house. No problem, and I tipped extra.
Example: with my destination a few blocks straight ahead, the passenger-only taxi turns right on to a side street, heading in the wrong direction. When I was riding in a Lyft with a driver, I heard their WAZE announce the mistaken turn in advance and could interject with the driver to tell her to ignore it and continue straight.
I can think of some non-dangerous solutions to the first example. But for the second, short of a passenger voice override to control the vehicle, you’re trapped going who knows where. There are very good reasons why such an override would be a bad idea; even limiting it to “Abort!”, the AI would need to be context aware to recognize that pulling to the nearest legal curb on a residential street is safe, while pulling over to stop in the emergency lane on a many-lane urban interstate highway would be dangerous.
oldster 02.06.20 at 6:50 am
I think that we are in complete agreement here:
“If AVs came with rules providing absolute priority for pedestrians over all motorized vehicles, that would be hugely beneficial. If they came with the kinds of rules you mentioned (more likely in the current environment), not so much.”
But the future that we are speeding towards is not a future in which the law gives “absolute priority for pedestrians over all motorized vehicles”, but rather a future in which pedestrians have no rights whatsoever when they are killed by autonomous vehicles.
(This, by the way, is the reason that we cannot count on an “insurance nightmare” to rein in the spread of killer cars: insurance regulations will leave pedestrians uninsured, and tort law will leave us without remedies. Problem solved!)
Listening to the long conditional that I quoted, what I hear is a sincere advocate of popular democracy in the middle East, on the eve of the US invasion of Iraq.
“If the Bush administration respects the popular sovereignty of the Iraqi people and works tirelessly to foster a functioning democracy in place of Saddam Hussein’s dictatorship, that would be hugely beneficial!”
In both cases, it is irresponsible to advocate for policy A on the grounds that it would be beneficial under condition B, given that we know condition B will never be met.
Driverless cars and the legal regime they usher in will not be programmed by fellas with compassion and vision. They are going to be rammed down our throats by the worst people, by Peter Thiel and the Uber thugs, by libertarian tech bros who will move fast and break a lot of pedestrians and cyclists.
Given that we know this, we have a moral duty to resist.
Resisting the Iraq War did not mean you hated democracy; it meant that you could see that the conditions for its beneficial implementation were not being met, and a lot of lies were being told to cover this up.
So too, resisting the push for autonomous vehicles does not mean that you hate enlightened applications of AI technology. It means that you can see that the entire system is being rigged so that big corporations can kill people with impunity.
You can’t evade responsibility by saying, “but if Dick Cheney really means what he says, then I am sure he’ll do a conscientious job after the invasion.” No, this plan is going to be implemented by the worst people, and must be assessed with that fact firmly in mind.
Hidari 02.06.20 at 7:19 am
A number of other points:
1: It’s slightly strange in a left-wing blog to see encomia to self-driving cars. As the article linked to in the OP pointed out, so far the only commercial application of this technology is not in ‘personal’ cars, as everyone seems to think, but in taxis. Why? Obviously because taxi companies are drooling at the idea of firing all their staff and making more profits (cf. recently when Boris Johnson threatened to automate the London Underground because he didn’t like the human drivers’ love of unions). In Singapore and Japan they are pursuing ‘human’-style robots because the alternative is allowing immigrants to do the job.
2: There is all the difference in the world between a ‘closed’ system (where everything is known about it) and an ‘open’ system (i.e. reality). This is a qualitative, not a quantitative, leap. It does not follow that because self-driving cars have ‘gone to’ stage 4 that they will make it to stage 5.
3: There seems to be a supposition that self-driving cars are inherently and logically ‘safer’ than humans. What’s the evidence for this? Clearly at the moment self-driving vehicles only work in very easy-to-drive-in locations, which are all very safe. What’s the evidence, from the peer-reviewed literature, that if they ever get to stage five, they will be safer than human drivers?
4: There is currently one company that has a self-driving taxi business. So what? What matters is not whether it works, but whether it stays solvent long term.
There was that Japanese robot hotel, which opened a few years back. It worked for a while, but eventually they ‘sacked’ most of their robots and replaced them with humans.
Don’t get me wrong, I think there is a small niche for ‘level 4’ type automation, and this will create a small business niche, which will probably last.
The idea that ‘level 5’ cars will ever, en masse, replace personal cars (i.e. the cars that most people own and drive) will be a phenomenon of the 22nd century not the 21st, if it ever happens.
Matt 02.06.20 at 7:19 am
We already know where roads are substandard and (since the arrival of Google maps etc) where construction, obstruction and so forth are a problem.
I have read this a few times, and think I must be misunderstanding you, because it sure sounds like you’re saying that there are no surprising “substandard roads, construction, and obstructions” unknown even to Google. That’s just _obviously_ wrong. If AVs depend on that being the case, there will be huge problems.
Collin Street 02.06.20 at 7:39 am
I honestly don’t believe that it’s possible to get useful reliability out of a system that uses mapping for anything except route-finding, for the simple reason that mapping is only useful if you can’t respond to the actual situation in real time.
(What happens when the maps are wrong? Well, if you still get useful results when the maps aren’t accurate, then the maps are a dumbo feather. But the companies aren’t acting like that, they’re acting as if they need maps to be safe… but the maps won’t always be available/accurate, and that means the companies are building cars that they know will be unsafe from time to time. Which suggests pretty strongly they’ve abandoned making their cars actually-safe and that the whole thing’s a con).
Collin Street 02.06.20 at 7:50 am
For the money being spent on mapping, we could probably lay guidewires in the street, and the guidewires are more reliable and would last longer.
Cian 02.06.20 at 1:42 pm
Chetan Murphy:
One thing I’ve noticed with all this “well, humans can do this, and computers can’t” babble is that once a computer cracks a problem, a flurry of arguments arises for why that problem wasn’t really a good example of human decision-making. Nobody ever makes those arguments before the computers crack the problem — only after.
Sometimes these arguments are in bad faith, but sometimes they’re simply responding to the solution. Most AI technologies aren’t intelligent, and have little to do with how humans (or animals) solve these problems. It’s simply that researchers find an algorithm that can also solve them. Which I think is very impressive, but has little to do with how we commonly think of intelligence. And nothing to do with what we think of as autonomous intelligence.
Chess is probably the best example of this. Chess computers don’t work remotely in the same way that humans do when playing chess. It’s simply a brute force solution that relies upon computers being a lot faster than human brains.
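The brute-force idea is simple enough to show in a few lines. A minimal sketch, using a hand-made toy game tree rather than actual chess positions:

```python
# Minimal sketch of exhaustive minimax, the core of classical brute-force
# chess engines (toy tree here; real engines add pruning and evaluation).
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: a static evaluation score
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Two moves for us, two replies each; opponent minimizes, we maximize.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # → 3: the 9 is unreachable against best play
```

Nothing in that loop resembles human pattern recognition; the engine simply enumerates outcomes faster than a brain can, which is the point being made.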
If you look at the types of things that computer AIs are good at, it is solving problems that are highly routine and predictable (and, to be honest, very tedious). Statistical pattern recognition has got to the point now that computers can be trained to look for patterns that occur with reasonable regularity (and where you have good data – which is rarer than people in the field like to admit).
As soon as any nuance enters the problem space, computers fare very poorly. And by poorly, I mean they fare less well in most cases than an insect. And certainly nothing close to what birds, or rats, are capable of doing. Throw an unexpected situation at a computer, and the odds are very poor that it will handle it well.
Computers are very poor at dealing with new, unexpected situations. And absolutely hopeless at dealing with anything that requires understanding even basic human cues, or social expectations.
Cian 02.06.20 at 1:54 pm
Chetan @28 Of these, only the pedestrian (and also cyclist) case is a real problem.
John, you simply don’t know what you’re talking about. All of the problems that he identified are problems that researchers (well the honest ones) admit they don’t know how to solve.
And they’re all very common – I’ve encountered every single one of these multiple times in the past month. I’ve also encountered ad-hoc road works, and a traffic light that had stopped working (so everyone had to negotiate their way through the intersection). These are routine problems in any city of a reasonable size. There’s a reason all the tests have been carried out in suburban Phoenix and California. In November I was on a highway where at one point we were rerouted briefly onto one of the lanes on the other side of the road.
A delivery driver. Is that car pulling out, or not? Are those real road markings, or old ones? How do I follow that cop’s instructions? The traffic light has broken, and now everybody has to negotiate their way through the crossing. These are all problems I’ve faced in the past month that self-driving cars are not even close to dealing with. And there will be plenty more that I take for granted that would baffle an AI.
We already know where roads are substandard and (since the arrival of Google maps etc) where construction, obstruction and so forth are a problem.
This simply isn’t true.
But because this touches on a deeper point that everybody seems to ignore, let’s talk about mapping. Autonomous cars, as currently envisioned, require very detailed maps. How detailed? They have to be accurate to 5cm, and include very detailed information about what’s on the road. They need to be kept up to date, and if they are inaccurate, autonomous cars fare poorly. They require specialised technology to create (so your autonomous car can’t update them), and require vast amounts of data to be transmitted and stored. That alone makes autonomous cars impractical on a mass scale currently, as we simply don’t have the technologies required to store and transmit the data to and from cars in a practical fashion (and at that point engineers start thinking about all the other issues, such as reliability, error correction, etc. – none of these are trivial problems if you want reliability). Even if all the AI problems were solved (and we’re a long way from that), there are a number of very hard infrastructure and engineering problems to be solved for the Google approach to be viable.
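To get a feel for the scale problem, a back-of-envelope sketch. Both input figures are illustrative assumptions (a rough figure for US public road mileage, and an assumed ballpark for raw HD-map data per mile), not published specifications:

```python
# Back-of-envelope: storage for an HD map of the whole US road network.
# Both figures below are illustrative assumptions, not published numbers.
us_road_miles = 4.1e6       # assumed: rough US public road mileage
bytes_per_mile = 1.5e9      # assumed: ~1.5 GB of raw map data per mile
total_pb = us_road_miles * bytes_per_mile / 1e15
print(f"~{total_pb:.0f} PB before compression")  # → ~6 PB
```

Even if the per-mile figure is off by an order of magnitude, the shape of the problem stands: keeping such a map current, and shipping updates to every vehicle, is an infrastructure problem separate from the AI one.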
So, until the necessary grad student elbow grease is applied, AVs can just route around these situations.
John all the issues that Chetan Murthy raised are issues that current AI technologies can’t solve. They require a new approach to AI that currently doesn’t exist, and may never exist (at least in our lifetimes). This is like saying that Faster than Light drives just require a bit of grad student elbow grease to be a reality.
Dogen 02.06.20 at 3:09 pm
I call BS on your repeated statement to the effect that “AVs are being held to a much higher standard than humans.”
Auto fatality rates are measured in deaths per 100 million miles driven.
At the time of the fatality in Arizona I looked into this. I found the fatality rate for all cars was around 1 per 100 million miles driven.
I also looked up data for self-driving cars, which was a little harder to get. The best estimate I could make was that self-driving cars had gone a total of around 10 million miles, obviously mostly in highly favorable conditions. (I.e. it’s not an apples-to-apples comparison, but all in ways that favor the self-driving cars.)
So to be conservative, say 20 million miles of driving and one fatality, or a fatality rate 5 times higher than ordinary vehicles!
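The arithmetic above can be checked in a couple of lines, using the comment’s own rough estimates as inputs:

```python
# Fatality rates per 100 million vehicle-miles, using the comment's figures.
human_rate = 1.0            # ≈1 death per 100M miles for ordinary cars
av_miles = 20e6             # the comment's generous estimate of AV miles
av_deaths = 1               # the Arizona fatality
av_rate = av_deaths / (av_miles / 100e6)
print(av_rate / human_rate)  # → 5.0: five times the human rate
```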
Ok, the data are far too sparse to generalize and say “eek, self-driving cars are 5 times more deadly than ordinary cars!” OTOH, it’s obviously a pretty strong counterpoint to the evidence-free claim that self-driven cars are, or will be, much safer than what we’ve been living with.
I’m not opposed to self-driving cars. And I think it’s possible that if they ever work at level 5 they *may* be safer. But I don’t see any evidence that they *are* safer, nor any reason to believe they *will be* safer.
And when you make a blanket statement of fact about them being held to a “much higher standard” I think you are blowing smoke. How about producing some evidence for that claim?
Kiwanda 02.06.20 at 4:29 pm
For sure, much more should be invested in mass transit, and in reforming zoning laws to allow housing close to transit hubs; mass transit is more efficient, lower carbon, safer.
But if there are going to be cars, it’s better if they’re safe; obviously self-driving cars should be used only under conditions where they are safer than human drivers. But then, they should be used. Those 36000 deaths should be avoided. And sometimes, like that friend of my son’s, killed over Christmas by a drunk driver, we are personally connected to the dead.
Driving a taxi/uber/truck is a pretty shitty job; there should be learning and training and jobs available, so that these jobs don’t need to be taken. Or is the argument that a shitty job that a machine could do is better than no job at all? Maybe.
On the other hand, cheap housing is not necessarily close to where the jobs are, so expensive and/or slow transportation keeps people, especially poor people, from jobs they might otherwise have.
I expect to be old enough, eventually, that I shouldn’t be driving; under current conditions, that means increased isolation, difficulty getting supplies, a more boring existence. Better, cheaper transit, including maybe self-driving taxis, would be very helpful.
So self-driving cars, to the extent that they provide cheaper, safer, faster transit, would help poor people and people who are too infirm to drive, and remove one kind of grinding job.
NomadUK 02.06.20 at 4:43 pm
You’ve obviously never lived in either Albuquerque or Montreal.
Sifu Tweety 02.06.20 at 6:01 pm
They don’t. Waymo’s rider-only pilot and GM’s plans are predicated on using teleoperation for safety drivers; they certainly are not planning to deploy vehicles with no mechanism for human intervention. The safety drivers just don’t sit in the vehicle. The rate of disengagements, even at the most sophisticated players, is still far too high for public road deployment without some mechanism for reasonably immediate human takeover.
Hidari 02.06.20 at 7:30 pm
@42 With the exception of extremes, there is no such thing as a ‘shitty job’. Jobs that seem great to me seem appalling to other people and vice versa. I remember a story a friend of mine told me. She worked in an office with a woman who was always late, always taking sick leave, always complaining about her (relatively safe, relatively well paid job). Eventually my friend broke and asked this woman: ‘Look. I know you hate this job. What is it that you actually want to do?’
Without skipping a beat this woman’s eyes lit up and she said: ‘Taxidermist!’. It seems that all her life this woman had fantasised about working with dead animals. And stuffing them.
Being a rock star, allegedly a dream job, didn’t work out too well for Kurt Cobain.
With the exception of a small number of jobs that are objectively awful, most jobs’ ‘goodness’ is in the eye of the beholder, or whether they are ‘good’ or not depends on other factors (whether or not you are well paid, how good your pension is, what your workmates are like, the hours, chances of promotion, etc). Also (not a trivial point) many taxi drivers work part-time.
I always suspect that most people (who tend to be middle class, I’ve noticed) who tend to describe whole areas of labour as being ‘awful’ are subconsciously positing, as the ‘ideal’ job, some kind of white collar office work. Helloooooooo…..never seen Office Space? Or watched the (British version of) The Office? Not knocking it, but there are whole swathes of ‘respectable’ middle class employment that strike me as a living death, and no one is talking about automating those jobs.
But…different strokes.
I might add that like the OP you accept without argument that self-driving cars are safer than non self-driving cars. To which I repeat: where is the evidence for this from peer-reviewed journals?
You also imply they will be cheaper than ‘normal’ cars. I can tell you now, they won’t be. Ever.
Dr Steve Cruel 02.06.20 at 7:54 pm
Have we forgotten our Ada Palmer seminar so soon? I seem to recall the Terra Ignota books having something to say about surrendering our safety to ‘autonomous’ vehicles.
B Roseman 02.06.20 at 10:22 pm
John, I think there are two elements of optimism in your view that I have a hard time supporting. 1) That the transition from cars with human drivers to autonomous vehicles (Level 5) will be somehow smooth and easy. I actually doubt that most roads can sustain a mix of both kinds of vehicle, or that the mix will be “safer” than what we currently have. People make mistakes, and algorithms make mistakes (unknown circumstances causing unknown behaviors); until all vehicles on the road are fully autonomous and the road infrastructure supports that, I don’t think we’ll see any significant reduction in accidents, fatal or not.
2) I find your faith in the mapping of roads and the updating of those maps to be far in excess of the reality of current map tools. Just yesterday I drove into Seattle’s University district and was guided by Google Maps to turn onto a road that was closed for an ongoing construction project. The full road was closed, had been closed for several weeks, and was known to be closed by WA-DOT. How that information had not gotten into Google Maps by this time on a heavily used transportation route I have no idea, but that’s been my ongoing experience with all such driving guidance tools, whether owned by Google or not.
Can those issues be solved, probably. Will they be solved, in addition to the issues other respondents have raised? I have serious doubts that we will get there without governmental intervention to declare a flag day to switch to new infrastructure/vehicles, or something equally interventionist. And all of these issues are small compared to the idea that individual vehicles are a pox on the land and should not be the primary mode of transportation for all people at all times.
John Quiggin 02.07.20 at 12:00 am
I think I have to defer to the weight of opinion here. Still, a couple of final points.
1. Unlike everyone else here, I find the available information on road closures to be entirely adequate to avoid problems, at least when I bother to check it. As well as mapping apps, for example, I can go to https://qldtraffic.qld.gov.au/listview.html The top item is “Centenary Motorway Jindalee – QUU is repairing a burst sewer main. Right lane has reopened”. The item also has info about pedestrian and cycle lanes and was updated about 15 minutes ago. There are another five items updated within the last half hour. If I can find this kind of thing using DuckDuckGo, I assume Waymo should be able to find it using Google. But comments here suggest not. Is this a US thing, or is it just that I don’t drive in difficult places?
2. I’m as hostile to private cars as just about anyone here (I have one, but use it as little as possible). I don’t see them being replaced by public transport any time soon, so I’d rather see them made safer, forced by design to follow all road rules, electric, boring to travel in, etc.
Kiwanda 02.07.20 at 12:42 am
@45: “With the exception of extremes, there is no such thing as a ‘shitty job’. Jobs that seem great to me seem appalling to other people and vice versa.”
Jobs at which people work to mental or physical exhaustion, or at personal or emotional risk, for low pay, are shitty jobs. It may be that there are people who find personal fulfillment and deep satisfaction in picking fruit, cleaning dishes in a restaurant, roofing, logging, prostitution, assembly lines, mining by hand, driving taxis, filling Amazon orders, or cleaning in hotels, but I tend to doubt it. All laborers have dignity, and deserve respect and a decent wage, but these are shitty jobs. I’ve done the first two of these; how about you? Most of the labor that went into your smartphone, and quite possibly your clothing, was done by someone with a shitty job. The observation that some jobs are shitty does not imply that other jobs are not.
“I might add that like the OP you accept without argument that self-driving cars are safer than non self-driving cars”
Read more carefully.
Cian 02.07.20 at 4:14 am
It may be that there are people who find personal fulfillment and deep satisfaction in picking fruit, cleaning dishes in a restaurant, roofing, logging, prostitution, assembly lines, mining by hand, driving taxis, filling Amazon orders, or cleaning in hotels, but I tend to doubt it.
Assembly lines can be, and have been, made humane. That’s a managerial choice. There is a whole swathe of sex-worker activists who don’t seem to have a problem with their job. There are loads of taxi drivers in London who love their job. I had a friend for many years who really enjoyed being a cleaner (he disliked the pay, and shitty managers – but he got deep satisfaction from cleaning). Most of these jobs are terrible due to shitty management. There’s no reason they have to be terrible.
Cian 02.07.20 at 4:34 am
John – that information is not presented in a format that’s very useful to computers. Could it be done? Yes. Is it likely, particularly in the US, that you can persuade all the districts to standardize on something that is useful to computers? I mean, maybe, but having done quite a bit of work with local government IT, I’m skeptical TBH.
nobody 02.07.20 at 4:47 am
@48:
Maps won’t help the first vehicle to encounter an unscheduled obstruction. If a tree falls across a road and the first entity to come upon the scene–before anyone has had the chance to dial 911 (etc)–is an autonomous vehicle, the vehicle must be able to reasonably respond to the situation.
Reasonable response in this context includes easy things, such as notifying the police, but also very hard things like making access for emergency vehicles and/or finding an ad-hoc detour within the customary standards of vehicle use (e.g. it’s OK to detour through a commercial parking lot or on an empty sidewalk, but it’s not OK to detour through a residential driveway). Idling indefinitely while waiting for the tree to go away is not a reasonable response.
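The routing half of the problem is the easy half. A toy sketch (road graph and blockage invented for illustration): rerouting around a blocked edge is plain graph search, while deciding which ad-hoc detours are legally and socially permissible is the hard part, and nothing below captures it:

```python
# Toy sketch: breadth-first search on a small road graph, rerouting after
# an edge is blocked (say, a fallen tree). Graph and blockage are invented.
from collections import deque

def route(graph, start, goal, blocked=frozenset()):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph[path[-1]]:
            if nxt in seen or (path[-1], nxt) in blocked:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no legal route at all

roads = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(route(roads, "A", "D"))                        # → ['A', 'B', 'D']
print(route(roads, "A", "D", blocked={("A", "B")}))  # → ['A', 'C', 'D']
```

Which edges belong in `roads` in the first place (is the empty sidewalk an edge? the residential driveway?) is exactly the judgment the comment says automation lacks.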
Within the next few decades, I think we’ll see levels of vehicle automation similar to what’s currently available in aircraft and trains: automation handles routine operations but brings humans into the loop whenever the unexpected happens. If the autopilot on an airliner flies into strong turbulence, the autopilot throws up its hands and tells the pilots to deal with it.
Despite multi-decade head starts, much easier operating conditions than for road vehicles, and considerable commercial pressure, total autonomy hasn’t happened in aircraft or trains. It’s also just beginning to be considered for ships. I would be amazed if total road vehicle autonomy happens before the autonomy problem is completely solved for other transport modes that operate in less complex environments.
Chetan Murthy 02.07.20 at 7:14 am
John Quiggin @ 48:
I thought I should add a little “human explanation” to what Cian wrote. Cian wrote that the accuracy of the maps these AVs use is 5cm. From what I understand:
(a) the AVs themselves cannot acquire this data as they’re moving around
(b) the data has to be post-processed to be suitable for use by AVs
(c) without this data, the AV is helpless and cannot safely move
I’m not an AI guy (systems & languages), but from what I understand, to a great extent the AV isn’t actually seeing the road and driving based on what it sees: it’s driving based on this incredibly detailed map, and using its LIDAR/cameras to match itself to the map and see moving obstacles. So in a very real and deep sense, the AV isn’t driving the way humans do: you might compare it to a blind person with an incredibly detailed map of the city, using their stick (and maybe rudimentary sonar) to situate themselves and detect moving obstacles. [We stipulate that they can consult this map of the city at lightning speed – suppose this blind person has perfect recall.]
And, of course, they do this while running at a breakneck pace, right? *grin*
It’s a hard problem. A really, really hard problem. I still believe that once it gets solved *for real* in some nontrivial area, it’ll be possible to roll that out everywhere. But right now, it hasn’t been solved for real -anywhere-. [the discussion of safety drivers is important also, b/c it’s an obvious scaling issue.] [but then, the fact that the AV needs perennial and high-bandwidth connectivity is also problematic.] So many things are problematic.
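The blind-person-with-a-perfect-map analogy above can be caricatured in one dimension. A toy sketch, purely illustrative and not any vendor’s algorithm: slide the live scan along the stored map profile and take the offset with the smallest mismatch:

```python
# Toy 1-D map-matching localization: find where the live scan best fits
# the stored map profile (sum of squared differences as the mismatch).
def localize(scan, map_profile):
    best_offset, best_err = 0, float("inf")
    for off in range(len(map_profile) - len(scan) + 1):
        err = sum((s - m) ** 2
                  for s, m in zip(scan, map_profile[off:off + len(scan)]))
        if err < best_err:
            best_offset, best_err = off, err
    return best_offset

stored_map = [0, 0, 1, 3, 1, 0, 0, 2, 0]   # built offline, in advance
scan = [1, 3, 1]                           # what the sensors see right now
print(localize(scan, stored_map))  # → 2: the vehicle is at map position 2
```

Everything hinges on `stored_map` being accurate: against a stale map, the best-fit offset is confidently wrong, which is the failure mode described above.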
Chetan Murthy 02.07.20 at 7:17 am
Hidari @ 45:
Since we often (often) disagree, I had to chime in and agree with you on this [at the risk of derailing .. I promise to not continue this little sub-thread]. Yes, there are no “shitty jobs” — there are only “underpaid jobs”. The original factory jobs (cf. Henry Ford’s River Rouge plant) were incredibly “shitty”, and factory owners experienced very, very high worker turnover. How did they deal with it? They raised wages until the turnover dropped to acceptable levels. That’s how the “good factory jobs” idea came into being, after all. Humans were the soft, organic cogs in these immense machines, and boy howdy did they hate it. But it paid enough to be worth doing.
That’s something we’ve forgotten about, in our New Gilded Age.
reason 02.07.20 at 8:59 am
With respect to “shitty jobs”, my view can be summarized as: jobs don’t fall on a linear scale. There are two dimensions – interest and difficulty.
Jobs that are easy and interesting aren’t jobs; they are hobbies.
Jobs that are difficult and interesting are well paid and mostly popular with the people that do them.
Jobs that are easy and uninteresting are also often popular with the people that do them, because they can occupy their minds with other things while doing them. Pay is an important criterion for such jobs.
Jobs that are difficult and boring are shitty jobs.
reason 02.07.20 at 9:05 am
As an aside – the only reason that badly paid shitty jobs exist is because people have no alternatives. That is one reason we need a UBI – so such jobs will be redesigned to make them either more interesting or easier.
Zamfir 02.07.20 at 10:34 am
Kiwanda, pretty much any job can be made horrible by driving people to mental and physical exhaustion. In my experience, the big difference between good and bad jobs, is that people in bad jobs can’t say no when the bosses turn up the pressure.
It is rarely an intrinsic aspect of the work activities, but a reflection of the lack of power of the workers. They don’t have better alternatives lined up, so they have to accept exhaustion (or low pay, or high risk, unhealthy practices, usually several of that list).
My grandparents were small-time market gardeners, so I grew up with fruit picking (and vegetables, and flowers). They considered it hard work, but great work. The agricultural market tends toward ever larger scale, so there was no future in it. Otherwise, my parents would have loved to take over the business (and I might have in turn, who knows), and other siblings would have wanted into it as well, even at a significant pay cut. Agriculture is full of such children, who regret that they cannot take over their parents’ business. I have distant cousins who migrated to a deep backwater of France, because soil was cheap and they could continue the ways of old.
That’s telling, I think. Lots of hired farm work is exactly the shitty work you describe, but the work itself is mostly the same as my family’s. Add more money (though no riches, the business ended for a reason), control over the work, the increased respect they got as small independents, and the same work is completely different, to the point that people lament its disappearance.
Matt 02.07.20 at 11:14 am
I’ve been thinking a bit about the claim that it’s unreasonable to require higher (or significantly higher) safety standards for AVs than “normal” cars, and think there’s a good argument that this is wrong. There is a non-trivial argument that the current safety standards for “normal” cars – especially, but not only, in relation to pedestrians, cyclists, etc. – is too low. But – it’s hard to make changes to this, as it would require a large group of people to change their behavior, take on expenses, etc. That doesn’t apply to AVs. There, because there is no established interest group, and certainly no broad one, we can impose higher standards – perhaps the ones that really should be applied more generally – without much pain. And, as a plus, if these standards can be met, they can form a new sort of “general” standard, that we might hope to ratchet “normal” cars to over time. If this is plausible, then the idea that we shouldn’t hold AVs to “higher” standards is simply mistaken.
Kiwanda 02.07.20 at 5:44 pm
What I said: “Jobs at which people work to mental or physical exhaustion, or at personal or emotional risk, for low pay, are shitty jobs”
Chetan says: “Yes, there are no ‘shitty jobs’ — there are only ‘underpaid jobs’”
I’ll assume that Chetan is not responding to me.
What I said: “Jobs at which people work to mental or physical exhaustion, or at personal or emotional risk, for low pay, are shitty jobs”
reason: “Jobs that are difficult and boring are shitty jobs.”
Amen. Many of those I mentioned are in that category.
What I said: “Jobs at which people work to mental or physical exhaustion, or at personal or emotional risk, for low pay, are shitty jobs”
Zamfir responds: “Kiwanda, pretty much any job can be made horrible by driving people to mental and physical exhaustion….Add more money…. control over the work, the increased respect…”
Yes. I don’t see this as disagreement, although it sounds like it is intended to be.
I have to agree that many jobs, including fruit picking, sex work, cleaning, and indeed, taxi driving, can be done under conditions with enough pay and autonomy that they are tolerable. (I suspect that the no-shitty-jobs comments are from people who are perfectly happy to see *other people* take those jobs, but never mind, that’s not the question.) But these jobs, as they exist now and very likely in the future, involve work to mental or physical exhaustion, or at personal or emotional risk, for low pay: shitty.
What was my main point, again? Oh yeah. Self-driving cars, *if safe and cheap enough*, as they may well become, would reduce traffic fatalities, reduce the number of shitty jobs, increase mobility to jobs, and increase mobility for people who can’t drive. This is so close to tautologous that it ought to be uncontroversial, but maybe not.
J-D 02.08.20 at 12:37 am
There’s no legal problem. If there’s law that says you’re liable, then you’re legally liable; if there’s no law that says you’re liable, then you’re not. It’s got nothing to do with who accepts liability; ‘who is going to accept legal liability?’ is the kind of question that would be asked by somebody who has no idea how legal liability works.
There might be a problem deciding whom (if anybody) the law should make liable. If you ask me whom the law should make liable, I’ve got no problem answering: the owner of the vehicle should be made liable by the law (except where it can be proved that the loss was caused by somebody’s else action or negligence). If you disagree and think that somebody else should be made liable, then our disagreement may be a problem for us, but it’s not a legal problem. If we both agree, but we’re concerned about opposition to such law from the owners, we may have a problem there but it’s not a legal problem. If you can’t make up your mind what you think the answer to the question should be, you may have a problem there, but it’s not a legal problem.
Zamfir 02.08.20 at 10:32 am
@Kiwanda, I do think we have a disagreement on this topic. It’s where you say “self driving cars would […] reduce the number of shitty jobs”
If the problem is in the bargaining power, then this will not automatically happen. One very plausible outcome is that taxi drivers have to find another job with even fewer alternatives, where they are more vulnerable to shittification.
I don’t see how any shittiness is inherent to taxi driving. Many people like driving. The people I know with driving jobs (mostly truckers, delivery van drivers, only one part-time taxi driver) generally like the actual work they do. When they are unhappy with the job, they want good pay, reasonable hours and less pressure, a nice car to call “their own”, respect from bosses and customers. They do not want to stop doing the work, and they are not looking forward to automation.
eg 02.08.20 at 3:03 pm
Doug K @14
A crucial insight I felt was given short shrift in Kahneman’s “Thinking Fast and Slow”
The book is fascinating, but because it’s mostly about very narrow cases where system 2 thinking is superior to system 1 thinking, the foolish takeaway for many is that this is true for all, or even a majority of cases. That’s a recipe for disaster …
J-D 02.09.20 at 7:39 am
Citation, please.
Michael Cain 02.09.20 at 6:45 pm
@39:
The recent MuZero game playing stuff is interesting. A neural network guides the tree search and learns by playing against itself rather than looking at human games. The most recent versions don’t include specific rules, the NN has to figure those out as well. The NN-guided search evaluates orders of magnitude fewer states than the older style software. For both Go and chess, human experts have said it’s like watching an alien play the game because the software does strange (but winning) stuff. The same software also learns to do well playing old Atari video games.
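The NN-guided search itself is far beyond a comment box, but the underlying idea of rating moves by playing positions out to the end can be sketched in a few lines. This is a deliberately crude stand-in (flat random playouts, no neural network, no tree) for a toy game invented here purely for illustration: a pile of 21 stones, players alternately take 1–3, and whoever takes the last stone wins.

```python
import random

# Toy stand-in for playout-based move evaluation. Game: a pile of
# stones; players alternately take 1-3; whoever takes the last stone
# wins. (Game and numbers are illustrative only -- this is nothing
# like MuZero's actual architecture.)

def playout_win_for_mover(stones, rng):
    """Play random moves to the end; True if the side to move now wins."""
    mover = True
    while True:
        stones -= rng.randint(1, min(3, stones))
        if stones == 0:
            return mover
        mover = not mover

def best_move(stones, rng, n_playouts=2000):
    """Rate each legal move by random playouts; return the best-rated one."""
    best, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        rest = stones - take
        if rest == 0:
            rate = 1.0  # taking the last stone wins outright
        else:
            # we win a playout when the opponent (to move in `rest`) loses
            wins = sum(not playout_win_for_mover(rest, rng)
                       for _ in range(n_playouts))
            rate = wins / n_playouts
        if rate > best_rate:
            best, best_rate = take, rate
    return best
```

From a pile of 5, the playouts reliably pick the mathematically correct move (take 1, leaving the opponent the losing pile of 4). What the NN-guided systems add, very roughly, is a learned evaluation that makes such judgments vastly cheaper than brute playouts.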
Michael Cain 02.09.20 at 6:51 pm
I am similarly optimistic about self-driving cars, with the caveat that they will be used in limited situations for a long time. At least in the US, I believe the primary situation will be driving aging Baby Boomers around their suburbs. The 75-year-old whose eyesight is failing, who terrifies their children if they happen to ride with Dad, can stay in that house for years if they can get in the small electric car and say, “Take me to Dr. Jones’s office, please.” Or to the grocery or cleaners.
Chetan Murthy 02.09.20 at 6:52 pm
J-D @ 63:
I don’t know of any citations, but there’s good research that shows that in almost all car accidents, one of the drivers broke the rules. And there’s lots of anecdata that nobody ever goes to jail for running down a pedestrian in a crosswalk: “it’s an accident man, what more do you want?” I know Atrios often reports on these incidents in Philly. And certainly the few cases we read about where somebody actually got prosecuted don’t leave me with much hope: the case of that fucker Bucchere who ran down a pedestrian with his bike (to make his speed record on Strava) is instructive: he got 1000 hours community service.
https://sfist.com/2012/04/05/cyclist_who_struck_pedestrian_at_ca/
Eric in Kansas 02.09.20 at 7:21 pm
“the kind of question that would be asked by somebody who has no idea how legal liability works.” @J-D 60
My understanding of US demographics is that the last population holding the beliefs implied by the content of your comment is white, and above average in personal wealth, formal education, and age. Your delivery suggests you are male.
Just guessing there.
My impression of your reply would have been much more favorable had you written something like: “the kind of question that would be asked by somebody who has no idea how the law surrounding legal liability is written.”
As for how legal liability actually WORKS, all I know is what I read in the papers, and what I have seen serving jury duty. Those experiences do not support any assertion that the written content of the law has any positive effect on the outcome of any legal conflict.
derrida derider 02.10.20 at 3:06 am
Looking at this thread, it’s clear we’re dealing with two separate questions:
1) Can we eventually build AVs good enough that they will be allowed to be driverless?
2) If we can, will this be a good thing?
I agree that the most likely answer to (1) is “no” – @7 and @14 both explain why very well. But on (2) I think those who claim that it would be forbidden or useless or evil are silly.
Any innovation that allows humans to get what they want with less human effort is rightly seen by the bulk of the population as a Good Thing; that’s history and psychology as much as economics. It is exactly what economists mean by “productivity growth”, and in this case is especially beneficial because the growth in productivity is extended to those who currently can’t participate in it (ie don’t have either a licence or alternative means of travel).
Will it result in Teh Evil Corporates owning us? No more so than at present IMO. AV production and operation is no more a natural candidate for monopoly than current cars.
As for the claim that it will be too expensive, the innovative part is basically just software. Inherently an awful lot cheaper than paying a human driver to move stuff; it will displace a lot of jobs but democratise transport even further. Whether the resultant new jobs are better or worse than driving is unknown, as is always true ex ante for tech change, but they will in the long run be better paid; that’s what widespread productivity growth means.
mclaren 02.10.20 at 5:52 am
2 obvious reasons why self-driving cars won’t appear in our lifetime (unless they show up in special lanes concrete-walled away from all other driving lanes with special entrances for humans):
1) Human safety drivers are 100% worthless because that protocol requires a human to pay continual attention with instant-reflex keenness and instant readiness to step in during hours or days or possibly years of completely uneventful driving. That’s not humanly possible. No one can do that. For a few minutes, or perhaps even a half hour at a stretch…but not for hours and days and weeks on end. Any system that relies on people being able to do that is equivalent to an airplane that depends on people being able to fly by flapping their arms.
Without human safety drivers, a self-driving car has to rely on perfect operation of a computer vision recognition system. Talk to any computer vision expert. That’s not in the cards. Not in my lifetime, not in your lifetime, not in the lifetime of anyone reading this.
2) “The Egress Problem” has not yet been solved by humans, and there is no sign that humans will be able to solve it in the foreseeable future. Therefore it seems preposterous to expect a computer system to solve it.
The Egress Problem boils down to a complex interconnected failure of maps, traffic engineering, and street & building naming, and the only known solution to the Egress Problem involves…human drivers just learning over time that “the entrance to Parkgate Mall says 500 First Street on the map, but the real entrance where everyone goes is actually at 700 Archer Avenue a third of a mile away across the mall.”
https://www.theverge.com/2016/8/24/12628488/uber-maps-self-driving-cars-egress-problem
John Quiggin 02.10.20 at 7:12 am
@69 OK, I’ll bite. Assuming that the rider knows the score, why don’t they just tell the driver (human or robot) “Take me to 700 Archer Avenue”?
Matt 02.10.20 at 9:22 am
John – I assume that this would be something that an AV would learn quickly, but right now, on google maps it’s not at all unusual for the address of a large building to be listed as somewhere other than where normal people can or would get in. It’s not _that_ unusual for there to be no entrance at such places! I have found this a fair number of times w/ google maps in the Melbourne area, for example. For people now, when this happens, you either stop and ask or drive around a bit until you figure out what to do. With an AV, I assume there would be a way for it to learn to ask questions – “Do you want to go to the X mall main entrance?” and things like that. But – it would assumedly have to learn this, and it might be uncomfortable for people the first time, or even dangerous.
John Quiggin 02.10.20 at 10:48 am
OK, I still don’t see how the AV makes the rider worse off. If either they, or the AV, know the correct entrance, that’s great (and there are now two chances to get it right). Otherwise, they stop and ask, or tell the vehicle to drive around a bit, same as if they were at the wheel.
Peter T 02.10.20 at 11:14 am
@69 and 71 point to the level of design needed. You tell the car to go to 700 Archer Avenue, but you actually want to turn down the ramp into the underground car park and park there. Manual over-ride? Instruct car to “turn right down ramp”? If the average computer is any guide, the driver will say “take me to Parkgate Mall”, it goes to the wrong entrance, you tell it “700 Archer Avenue”, it takes you there but does not know there is an entrance, you tell it, “turn into Parkgate Mall” and it goes back to the wrong entrance, rinse and repeat…
Matt 02.10.20 at 12:49 pm
OK, I still don’t see how the AV makes the rider worse off.
Maybe it won’t be worse off – but it is at least not better. And here’s how it -might- be worse: in my own car, I can more easily drive around a bit, randomly, and look at things. Will I be able to do this in an AV? Maybe, but then they will need to be even more complex than suggested. And if the idea is that they will depend on maps, this points to a problem – the maps often don’t have the right information. What will that do to them? None of this is impossible to overcome, of course, but it adds to the issues that will make it hard for them to be of general use.
Hidari 02.10.20 at 2:31 pm
‘At least in the US, I believe the primary situation will be driving aging Baby Boomers around their suburbs. The 75-year-old whose eyesight is failing, who terrifies their children if they happen to ride with Dad, can stay in that house for years if they can get in the small electric car and say, “Take me to Dr. Jones’s office, please.” Or to the grocery or cleaners.’
For various reasons based on personal experience, I think this is one of the few ‘real’ uses of personal electric driverless cars. I think it’s a small market, a niche one, but a real one, which will probably keep a few companies going for some decades. Likewise, small taxi companies in specific neighbourhoods which meet very specific criteria may want to invest in a few driverless cars. There are doubtless a few other options as well (e.g. the ‘land trains’ in Australia).
Trader Joe 02.10.20 at 2:59 pm
One thing I haven’t seen mentioned is the inevitable problem of bug fixing. With zillions of lines of code, it’s as certain as death and taxes that there will be bugs.
We’ve seen this with the Boeing 737-MAX flight control system (not even an AV)…. At some point, some AV manufacturer is going to write code that is deemed to have a problem. Who decides, and how, whether the ‘fix’ is good enough is a complex problem, because presumably that code was already deemed good enough under all of the simulated scenarios.
In the case of the MAX – the plane has been shut down for about 2 years and counting. I’d think AV owners would be pretty upset if they found out their AV was essentially a driveway ornament pending a bug fix.
Someone noted upstream the problem of insurance – NOT A PROBLEM. In the US anyway, most insurers have already indicated that ‘owner liable’ is the standard they intend to insure to. If it’s a fleet vehicle (a la Waymo) it would be a commercial coverage; if it’s personally owned, you yourself will ultimately be accepting initial responsibility for deaths/injuries caused by your property – the same as most everything else that people own which can cause harm.
Cian 02.10.20 at 4:01 pm
or tell the vehicle to drive around a bit, same as if they were at the wheel.
That would require a conversational AI, which is a thing that does not exist and we’re not even close to having currently.
Cian 02.10.20 at 4:09 pm
Michael Cain:
The thing about games is that they have strict and bounded rules. There’s zero ambiguity about what you can do, and what success means. Neither of those things exists for driving, which is why creating a self-learning AI for unbounded situations with fuzzy rules is currently impossible. Nobody even knows how to do it. Self-driving cars are taught by humans, who tell the car when it has made a mistake, and what is permissible. However, when a self-driving car encounters a new situation, it does not know how to teach itself to deal with it properly. The assumption people like Google are making is that eventually they will have taught the cars how to deal with all the possible situations that might crop up. I personally find this rather unlikely.
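Cian’s point about strict, bounded rules is worth making concrete: for a game, “what you can do” and “what success means” each fit in a tiny function, and once they do, the whole game can be solved mechanically. A minimal sketch for tic-tac-toe (invented here for illustration; it comes from no actual game-AI or AV codebase):

```python
from functools import lru_cache

# Tic-tac-toe solved by brute-force minimax. The rules fit in two
# tiny functions -- exactly the boundedness that driving lacks.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best_outcome(board, player):
    """Value under perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full, nobody won: draw
    nxt = 'O' if player == 'X' else 'X'
    values = [best_outcome(board[:i] + (player,) + board[i + 1:], nxt)
              for i in moves]
    return max(values) if player == 'X' else min(values)
```

`best_outcome(tuple(' ' * 9), 'X')` grinds through every legal game and returns 0: perfect play is a draw. There is no analogous `winner()` function for a left turn across traffic in the rain, which is the point being made above.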
Cian 02.10.20 at 4:14 pm
I have to agree that many jobs, including fruit picking, sex work, cleaning, and indeed, taxi driving, can be done under conditions with enough pay and autonomy that they are tolerable.
All of these are jobs that, if carried out in good conditions, sufficient people like doing. London taxi drivers enjoy their work. There are people who choose to be sex workers because they enjoy it. I wouldn’t choose either of those jobs, but I also wouldn’t want to be a lawyer or a doctor.
Also, there is no job that can’t be made intolerable given the wrong conditions. Read up on the video game programmers at Electronic Arts, for example.
The problem with workplace conditions is to do with power, autonomy and pay. Automation in situations where people have little power or autonomy tends to make working conditions worse, whether it’s factories or an Amazon warehouse. If workers are not empowered, managers will make decisions based purely on profit. And those decisions will rarely make workers’ lives any better.
Gorgonzola Petrovna 02.10.20 at 7:31 pm
“is the inevitable problem of bug fixing”
I don’t think it’s just the problem of bug fixing; it’s much worse. These days, computer code for massive projects is written and tested inside IT body shops like HCL Technologies. Thousands of kids, at $7-9/hr, who have little or no idea what they’re doing, and no incentive to do it well.
Cheap and plenty. Lessons of The Mythical Man-Month are long forgotten.
J-D 02.10.20 at 9:46 pm
Eric in Kansas
… is of doubtful relevance as I am a Foreignanian; and in any case the merits of my analysis and evaluation are independent of the demographic categories to which I belong. Speculate as much as you like if it amuses you, but be aware that I am not going to be drawn into that discussion. If you do want to discuss demographic characteristics, do you consider yours relevant? Your choice of screen-name suggests that you are male, but is inconclusive; the reference to Kansas suggests, but even less conclusively (if possible), that you’re white; in any case, I don’t care enough to go into it any further than that. The merits of your analysis and evaluation are independent of your demographic characteristics.
Yes, that would more accurately express my intention, and in retrospect I would have preferred to write that rather than what I actually did. The fact that you were able to read my comment and arrive at that better verbal formulation suggests that although my unclarity irritated you (an effect I regret and would prefer to have avoided), it did no worse than that.
I’m not sure what you would count, in this specific context, as a positive effect. I remember once being told by a lawyer that when a civil case ends up in a court (she specifically excluded criminal cases) it is a sign that somebody is being a f***wit: but no matter how the law is written, it’s never going to prevent people from behaving like f***wits (although that is something that would certainly count as a positive effect). There are reports in some instances of people being satisfied with the outcomes of court cases (civil as well as criminal); it’s hard to know how much or how little of the credit for this (if indeed there’s any credit to be assigned) should be assigned to the way the laws are written. (For that matter, it’s difficult to know how much of the blame, if any, for people being dissatisfied with court outcomes should be assigned to the way laws are written.)
Returning to the specific topic of discussion, if you were injured by a self-driving car and went to a lawyer about it, do you think the lawyer would say, ‘There’s nothing we can do because we don’t know whom to sue’? I don’t. I’m fairly sure a lawyer would look at existing law and arrive at a conclusion about whom to sue. I’m not sure whom that would be: my best guess is that it would be the owner of the vehicle, but if not the owner it would be somebody else. You laughed at the idea that it would be the programmers; I too think that an unlikely outcome, but I’m not as confident as you are that it’s impossible; in any case, if the programmers are not legally liable, it won’t be because they don’t accept legal liability (and if they are legally liable, it won’t be because they do accept legal liability).
Whatever the lawyer’s advice, if you decided to sue somebody, no matter who it was you decided to sue, you might lose (the outcomes of lawsuits are seldom or never certain); but you wouldn’t lose the case on the basis that the defendant declined to accept legal liability. Also, based on my experience of reading court judgements, there’s a good chance that a court judgement, even if you lost, would give strong indications, based on existing law, of who could, and who could not, be legally liable in such cases. (Under a US system, as far as I understand it, you might have to get to an appeal court to get this kind of judgement, because at first instance you’d probably have a jury trial, and juries don’t elaborate on their reasons in the same way as judges; here where I am, most civil trials are by judge alone without a jury, so you’d be less likely to need to appeal to get reasons for decision.)
See also the comment by Trader Joe.
Chetan Murthy
The Wikipedia article on ‘vehicular homicide’ cites an academic study which supports the conclusion that, in the US, those convicted of vehicular homicide receive lighter sentences than those convicted of other forms of homicide. It does not support the conclusion that drivers who cause deaths don’t receive criminal penalties; obviously not, since if that actually were the case, there’d be no convictions for vehicular homicide to be analysed.
I imagine (this is just a guess) that the reason there is a specific category of ‘vehicular homicide’ (or something similar) defined by the law of most US states is that, in cases where deaths are caused by drivers, either prosecutors are reluctant to bring charges of murder or manslaughter or juries are reluctant to convict on such charges, whereas the creation of a separate category (perhaps even with a different schedule of penalties) helps to overcome this reluctance. If so, the study cited by Wikipedia tends to suggest that vehicular homicide laws are effective as intended: they do get people charged and convicted who otherwise might not be, but with the resulting penalties less severe than they might otherwise be.
John Quiggin 02.11.20 at 11:23 am
“That would require a conversational AI, which is a thing that does not exist and we’re not even close to having currently.”
This, and lots of other comments, convince me that I must experience the world in very different ways from others. I think about telling Siri “tell me the way to get to X”, and think that this gives me all the conversation I need for the kinds of purposes we’ve been discussing. And, my limited experience of search algorithms suggests that something like “drive randomly around all the available roads in this area until I tell you I’ve found the entrance” would not be too hard to code. But everyone else in the thread disagrees.
Similarly, I remember, before GPS, calling a taxi dispatcher and saying something like “I’m opposite the hospital, entrance with a big blue sign, not sure what road I’m on, can you send a cab please” and getting nowhere. But it seems as if no one else had this kind of problem.
I have this kind of experience often enough that I think I must have some slightly abnormal wiring.
Trader Joe 02.11.20 at 12:11 pm
@75 Hidari
“I think this is one of the few ‘real’ uses of personal electric driverless cars. I think it’s a small market, a niche one, but a real one, which will probably keep a few companies going for some decades.”
Actually the best and most important use for AVs is “zero passenger” uses, which in theory can cut traffic and emissions by getting vehicles off the road. For example, I can take my AV into work at 6:30 a.m., send the vehicle home so my wife can take the kids to school, go to work and use it however she chooses, then have it come back to take me home at 6:30…. By cutting the number of vehicles our family has in half, we can more readily afford the fact that they might cost 2x as much.
All of this is to say nothing of sending the car out to get serviced, pick up groceries, or run other casual errands where the service provider simply handles the transaction and I then recall the car to my home or office…. That’s the payoff. With fewer humans in cars and fewer cars on the road, that’s where the injury/death reduction pays off.
Zamfir 02.11.20 at 12:29 pm
Do you even need a “conversation”? When addresses don’t work well in navigation, I just point to a spot on the map. Typically a parking lot, if I don’t know the area. And that is already the 1% of weird cases; most trips are simpler than that.
I can imagine that self-driving car development gets stuck on safety-technical issues: ever more unexpected edge cases, and never the demanded level of safety. I don’t know, but I can imagine.
But I can’t imagine a system that works well in that respect, but gets stuck on issues like the addresses. Unlike the safety issues, you can work around such practical issues if there is no technical fix.
Cian 02.11.20 at 1:06 pm
“This, and lots of other comments, convince me that I must experience the world in very different ways from others.”
Have you considered the possibility that maybe you simply don’t have a very good grasp of the technical issues here, and that others (some of whom, myself included, have an engineering background) understand them a lot better?
“I think about telling Siri “tell me the way to get to X”, and think that this gives me all the conversation I need for the kinds of purposes we’ve been discussing.”
This is certainly something a non-technical person might well think.
“And, my limited experience of search algorithms suggests that something like “drive randomly around all the available roads in this area until I tell you I’ve found the entrance” would not be too hard to code. But everyone else in the thread disagrees.”
You’ve written search algorithms? But no, randomly searching roads in the area would not be hard to write, though the results wouldn’t be what you’re expecting…
“Similarly, I remember, before GPS, calling a taxi dispatcher and saying something like “I’m opposite the hospital, entrance with a big blue sign, not sure what road I’m on, can you send a cab please” and getting nowhere. But it seems as if no one else had this kind of problem.”
John – just the fact that you’re comparing these two examples as if they’re similar illustrates that you don’t really understand any of the technical issues here. You’re an economist – I’m not sure why you think that gives you any technical expertise.
Cian 02.11.20 at 1:08 pm
Fun thing about testing AIs is that they’re unpredictable black boxes. You have no idea what the model inside is, or where it might break, or how it could deal with unexpected inputs and permutations.
Matt 02.11.20 at 1:23 pm
I think about telling Siri “tell me the way to get to X”, and think that this gives me all the conversation I need for the kinds of purposes we’ve been discussing.
Except that the situation we’re discussing now is one where the information in google maps is wrong – it won’t get you where you want to go – and I doubt that Apple’s maps are better. I, at least, don’t mean to suggest that this is a problem that can’t be overcome, but I think that you need to at least accept that, right now, it’s a problem.
MJ 02.11.20 at 8:15 pm
I’m not an expert in any of the relevant areas here, so this may be a worthless comment, but:
John Quiggin @82, it is a little surprising to me that you see your two examples as nearly equivalent, or a relatively simple matter of code to address. “Siri, tell me the way to get to X” allows Siri to just recite to you directions as found in an existing mapping program. And this assumes that either “X” is a specific address, or “X” is a place that is listed with the correct name, correct address, and does not have multiple locations. But, I concede, with a couple of tries, Siri can (in my experience) get you reasonably close, and it gets closer all the time.
Whereas “drive randomly around all the available roads in this area until I tell you I’ve found the entrance” has so much nested ambiguity that it calls for serious coding and natural language processing. As soon as we get to the second word, we have problems: “randomly” — we could guess (and program) the car to “look” at the map and use a random number generator to determine “go left, right, or straight” — or choose amongst the other options if it is a multi-way intersection. Ok, that could be done, though this obviously works best in relentlessly gridded locations, and less well when there are many multi-way intersections, long stretches of one-way road, and/or no turn-arounds or intersections. And of course, a “random” setting could lead to “straight, straight, straight” fairly frequently – i.e., you’ve likely made a beeline away from your location, possibly ending up far away, again if intersections are spaced sparsely. This is probably soluble for a lot of easy cases, a bit troublesome for many edge cases.
So I don’t think that is insurmountable, but it is already much more complicated than “recite the directions to place X from place Y”, involving some of the “choices”/obstacles map-routing offers, but with a suite of new ones on top. But still, might be ok.
Ok, “all available roads” we can simply take to mean normally marked roads, with no shortcuts through parking lots or anything.
“In this area” – this is a judgment call; what is “this area”? My dad and I might describe “this area” very differently in how far we want to wander looking for an entrance, to say nothing of debating what various context clues mean – but perhaps people are ok with the inconvenience of having to constantly update their orders if they think they see the appropriate sign right after they’ve gone past it (only to find they misread it), as this happens with human drivers, too. Still, there is the added annoyance of being on the lookout and having to keep barking orders, instead of just easing off the gas pedal to get a chance to read the sign more closely (and perhaps thereby inconveniencing those behind you). Protocols around all this (“wait, slow down… turn around at the next opportunity… no, that was the wrong sign, resume random search order… no! I saw it back that way — stop! What do you mean you cannot stop as you would block traffic?”) would also have to be coded.
After all this, there is the relatively simple task of identifying “until I’ve told you I’ve found it” as meaning “listen for the phrase ‘I found it’ and stop as soon as feasible”. Of course, there are natural language variations of how people might phrase this request, but this is probably doable too.
All told, however, it is a SIGNIFICANTLY greater task than “tell me how to get to X”, involving many more decision trees and judgment calls. Maybe each of these is easily solved, but certainly not nearly AS easily solved as your first example.
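For what it’s worth, the bare “random turn at each intersection, stop when the rider says so” core really is short to write down, so long as the map is an idealised grid and the rider’s “I found it” is reduced to a callback. Everything here (the grid, the entrance test) is invented for illustration; all the nested ambiguity discussed above lives outside this sketch:

```python
import random

# "Drive randomly until the rider says stop", on an idealised
# square-grid street map. Intersections are (x, y) pairs; the rider's
# "I found it!" is the is_entrance callback. Illustrative only.

def neighbours(node, size):
    """Intersections reachable by one legal turn from `node`."""
    x, y = node
    options = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in options if 0 <= a < size and 0 <= b < size]

def random_search(start, is_entrance, size, rng, max_steps=10000):
    """Take a random legal turn at each intersection until the rider
    signals the entrance; give up after max_steps blocks."""
    node, path = start, [start]
    for _ in range(max_steps):
        if is_entrance(node):
            return path
        node = rng.choice(neighbours(node, size))
        path.append(node)
    return None  # random walks promise eventual arrival, not a quick one
```

On a 5×5 grid this wanders from one corner to the opposite one, usually in well under a thousand blocks, which already hints at the “beeline away from your location” worry: a random walk terminates, but makes no promise about how annoying the detour is.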
Chetan Murthy 02.12.20 at 12:09 am
J-D @ 81:
First, your data-point supports the conclusion that a great way to kill someone is with a car. But second, “vehicular homicide” != “running down a pedestrian <>”. And of course that latter thing is what’s really at issue here. No, I don’t have stats. But from all the anecdata I’ve seen, when somebody kills/injures a pedestrian with a car, except in extreme circumstances that driver walks. But yeah, I don’t have data.
John Quiggin 02.12.20 at 2:05 am
“You’re an economist – I’m not sure why you think that gives you any technical expertise.”
I made a statement from personal experience, which apparently no one else shares.
I haven’t asserted any expertise, and don’t claim any, beyond some undergraduate-level experience in coding and algorithm design. MJ got the general idea, which, as he says, covers the easy cases (the smallish subset where the standard map takes you to the wrong place).
J-D 02.12.20 at 8:42 am
Chetan Murthy
I would have expected that there would be cases where pedestrians have been killed by automobile impacts and no prosecution of the driver has followed. It would be surprising if there were no such cases. On the other hand, it would also be surprising if there were no cases with prosecutions.
That leaves open the question of the relative frequency of cases with prosecutions and cases with no prosecutions. I have no idea what the answer is. ‘Somebody says that …’ is an insufficient basis for a conclusion in this instance.
Kiwanda 02.12.20 at 4:25 pm
Trader Joe: “Actually the best and most important use for AV is for “zero passenger” uses which in theory can cut traffic and emissions by getting vehicles off the road. For example,…”
The practice in this example doesn’t decrease the number of cars on the road; it increases it. Only the number of cars owned decreases. Although the latter makes a difference for parking, and also the cars could park themselves far from locations of interest to people.
Michael Cain: “At least in the US, I believe the primary situation will be driving aging Baby Boomers around their suburbs…”
I agree. Also: not just old people (a group that will be getting larger), but anybody who can’t drive, or for whom occasional cab use is cheaper than owning a car. (Which also reduces the number of cars that need to be parked.)
J-D: ” the merits of my analysis and evaluation are independent of the demographic categories to which I belong.”
I would advise you not to reveal such a crazy notion on twitter, or indeed, in some other contexts here. One particular analysis of yours has no merit if you’ve given some other analyses with no merit, or if your background on the topic has no merit, or if you have been judged individually to be a person of no merit, or if your demographic categories have been judged to have no merit, or if you have ever agreed with a person of no merit, or if you have ever followed on twitter a person of no merit, or if your analyses can be claimed to be *adjacent* to analyses by a group of no merit.
Hidari 02.12.20 at 9:32 pm
@83
It’s precisely an AV’s ability to cope with downtown traffic that is in question. Not its ability to cope with small almost vehicle free ‘low speed’ streets in the suburbs.
Incidentally the majority of people in the world don’t live in ‘the Global North’.
They live here: https://www.bing.com/videos/search?q=new+delhi+at+rush+hour+you+tube&&view=detail&mid=D52CA9771366135B1E7FD52CA9771366135B1E7F&rvsmid=3014FEE1ED6D743B99A43014FEE1ED6D743B99A4&FORM=VDQVAP
They live here: https://www.youtube.com/watch?v=Y8bfNplEmfo
They live here: https://www.bing.com/videos/search?q=crazy+driving+in+afrida&view=detail&mid=B5C9B4836B4B865E0B59B5C9B4836B4B865E0B59&FORM=VIRE
Good luck with your little AV in those conditions.
Incidentally, there is a far more ‘low tech’ solution to the problem of private cars, one states are increasingly taking.
First: ban all cars, of all sorts, from city centres (indeed, perhaps all vehicles).
Second: nationalise public transport, view it as a public good, and make it free.
There is no market for mass production of AVs, and there never will be.
fledermaus 02.12.20 at 10:50 pm
Trader Joe @ 83:
“Actually the best and most important use for AV is for “zero passenger” uses which in theory can cut traffic and emissions by getting vehicles off the road.”
I see this argument a lot and it always confuses me. A car not in use is not generating traffic or emissions.
Without AV you drive 5 miles to work, park and drive 5 miles home. Your wife in a separate car drives the kids to school and goes to work (5 miles) and drives home picking up the kids (5 miles). For a total of 20 vehicle miles traveled.
But in your example the dead-head runs generate both additional emissions and additional traffic. You drive 5 miles to work, then send the car home (5 miles). Your wife takes the kids to school and goes to work (5 miles), then sends the car home (5 miles). It picks up the kids at school (3 miles there and back), then drives 5 miles to pick your wife up from work and 5 miles back. Then it goes to pick you up from work (5 miles) and drives you home (5 miles). I don’t see how this results in fewer emissions or less traffic. In fact, it seems the opposite.
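The dead-head arithmetic above can be made explicit with a back-of-envelope tally (the 5-mile and 3-mile legs are the ones assumed in the comment):

```python
# Vehicle-miles-traveled (VMT) comparison from the scenario above.

# Without an AV: two cars, each making a simple round-trip commute.
no_av_vmt = (5 + 5) + (5 + 5)  # your round trip + your wife's round trip

# With one shared AV, counting the empty "dead-head" runs between errands.
av_legs = [
    5,  # drive you to work
    5,  # empty run home
    5,  # wife and kids to school, then her work
    5,  # empty run home
    3,  # pick up the kids at school, there and back
    5,  # empty run to wife's workplace
    5,  # bring wife home
    5,  # empty run to your workplace
    5,  # bring you home
]
av_vmt = sum(av_legs)

print(no_av_vmt)  # 20
print(av_vmt)     # 43
```

On these assumptions the shared AV more than doubles miles driven, which is the commenter’s point: fewer cars owned, but more traffic and emissions per household.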
Collin Street 02.13.20 at 3:26 am
A car not in use is not generating traffic or emissions.
Depreciation: in real terms, the opportunity cost of construction, amortised over the vehicle’s operational life.
Is it cheaper or more expensive to have one car shuttling backwards and forwards or two cars sitting empty? No idea, too fact-centric. It’s a classic capital-vs-recurrent problem, innit.
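The capital-vs-recurrent trade-off can be sketched with purely illustrative numbers (none of these figures come from the thread; the purchase price, lifespan, mileage, and per-mile cost are all assumptions):

```python
# Toy comparison: two mostly-idle cars versus one car that shuttles empty.

def annual_cost(purchase_price, life_years, miles_per_year, cost_per_mile):
    """Straight-line depreciation plus per-mile running costs."""
    depreciation = purchase_price / life_years
    running = miles_per_year * cost_per_mile
    return depreciation + running

# Two-car household: each car does about 5,000 useful miles a year.
two_cars = 2 * annual_cost(30_000, 15, 5_000, 0.15)

# One shared AV: the same useful miles, plus dead-head runs (~double mileage).
one_shuttling_car = annual_cost(30_000, 15, 20_000, 0.15)

print(round(two_cars))           # annual cost of two mostly-idle cars
print(round(one_shuttling_car))  # annual cost of one busy car
```

With these particular assumptions the single shuttling car comes out slightly cheaper; nudge the depreciation or per-mile figures and the answer flips, which is exactly the “too fact-centric” point.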
reason 02.13.20 at 8:35 am
fledermaus,
1. Yes, but it certainly does reduce the requirement for park places.
2. But also consider the case where the wife takes one kid to school, drives home, picks up another kid, drives that kid to school, drives home again, and then goes shopping. Now she can go shopping with both kids, send the car on to the first school with the earlier kid, finish shopping, and go on to the other school with the other kid. Not having to run the taxi-mama service opens up other possibilities.
Chetan Murthy 02.13.20 at 10:04 am
J-D @ 91:
“I would have expected that there would be cases where pedestrians have been killed by automobile impacts and no prosecution of the driver has followed.”
These wouldn’t be recorded as “crimes”, eh? They’d be recorded as “unfortunate accidents”. And sure, we don’t have that data, though I’m sure it exists somewhere. Atrios was on this beat a long time ago, banging on about all the cases of even bus drivers running down pedestrians and getting no punishment whatsoever.
Matt 02.13.20 at 11:08 am
The discussion of deaths of pedestrians here hasn’t seemed very clear to me. I expect that the number of intentional killings of pedestrians by people in cars is very low. (If anyone thinks otherwise, I’d be interested to hear why.) But, given that, the right comparison for prosecution and punishment isn’t with murder (which is an intentional killing) but with other unintentional killings – mostly with negligent killings, or in some cases with reckless killings. To know if drivers are treated much better than others when they kill, we need to know how their prosecution and conviction rates, and the punishment they get, compare to negligent and reckless killings in other cases. My guess – without looking up statistics – is that they are lower, but not all that much lower. (If alcohol is involved, I expect they are not significantly lower.) No doubt there are more negligent or reckless killings by drivers than in many other contexts. That’s because driving is one of the most dangerous things most people do. But we tend not to punish other types of negligent or even reckless killings all that harshly, either, and the fact that juries can easily imagine themselves in the place of a driver in such a scenario no doubt also plays a role.