In 1956, General Motors hosted a car expo called “Motorama.” As with all car expos, Motorama was a chance to show off concept cars and other kinds of long-shot projects that they hoped would revolutionize the auto industry. One of the most forward-looking things on display wasn’t a car, but a movie.
The film was called “Key to the Future.” In it, we see a family of four cruising along a desert highway in a beautiful, futuristic GM Firebird. But this car was actually just the set dressing for what GM really wanted to show us: their vision for a self-driving vehicle.
For nearly as long as there’s been an auto industry, there have been dreams of a car that drives on its own. In 1956, the year that “Key to the Future” was shown at Motorama, there were nearly 38,000 vehicle-related deaths in the United States. Since then, there hasn’t been a year with fewer than 30,000 people killed in car accidents. And because more than 90% of all automobile accidents are attributable to human error, for some industry people, a fully-automated car is a kind of holy grail.
However, as automation makes our lives easier and safer, it also creates more complex systems and fewer humans who understand those systems. Which means when problems do arise—people can be left unable to deal with them. Human factors engineers call this “the automation paradox.”
Last week, in our story about automation in aviation, we heard about various ways the industry is trying to deal with this paradox. For one, pilots are being encouraged to practice manually flying their planes to keep their skills polished. Engineers are also trying to make smarter, more collaborative automation that doesn’t strip skills from pilots, but works with them to complete tasks.
But Google has a very different approach. Their plan for solving the paradox is to take human drivers out of the equation entirely.
“If you have a steering wheel there’s an implicit expectation that you’re going to do something with it,” says Chris Urmson of Google’s self-driving car project. Chris’s goal is to make a driverless car that is safe because it lacks a human driver. “You get to sidestep all of these control confusion potential challenges by taking that out of the way.”
In 2009, Google started retrofitting Toyota and Lexus cars with new technologies that allow the cars to drive on their own.
The cars can accelerate, stop at traffic signals, make turns, merge onto freeways, and avoid pedestrians with no intervention from anyone in the car.
Then, in 2014, Google started manufacturing cars of their own design: cute little two-seaters with no gas pedal, brake, or steering wheel. They are designed so the user can input a destination, sit back, and let the car do everything else.
Chris Urmson of Google says that if one of their self-driving cars has a problem on the road—if the computer malfunctions or the sensors break—the car can pull over, and a different car will come fetch the passengers, leaving the broken one for technicians to fix. Meaning: Google doesn’t necessarily envision us owning these cars. It’s possible that self-driving cars will enable a world in which all of us get around by robot taxi.
A world full of robot-taxis could mean fewer parking lots, denser urban cores, and less traffic. Since autonomous cars can make decisions and react faster than we can, cars in motion could get much closer together—and not just bumper-to-bumper, but also side-to-side. The Department of Transportation requires that highway lanes be at least twelve feet wide, which is about twice as wide as the average car. So, suddenly, a three-lane highway can become a six-lane highway without any new construction.
But these cars could also reorganize cities for the worse. If people could read or sleep or write emails in an autonomous car, they might feel fine about having longer commutes. And of course, your route data could be exploited by advertisers. Maybe Google knows you like Starbucks–will the car drive just a tad slower as you pass it? Or worse, route data could become a matter of national security. “Bad actors could take control of your car or a fleet of cars and you know, stop every car that’s on the Bay Bridge at one time for malicious intent,” says Costa Samaras, a professor at Carnegie Mellon University and co-author of Autonomous Vehicle Technology: A Guide for Policy Makers.
Clearly, there are still a lot of details to work out. But once the science is done and the policy is carefully considered, are people even going to want an autonomous vehicle? Will people still go on road-trips or tailgate in the stadium parking lot? Will you still be able to get in your car to go on an aimless, contemplative drive?
“I guess we haven’t really thought about that,” says Chris Urmson from Google. “My assumption is you can give it a destination of where you want to go, but you can always change it.”
But, sometimes you just really need to get in a car and go. Case in point: this scene from Total Recall.
Frustrated with the slow automated system, Arnold Schwarzenegger’s character rips the robot driver out of the car and pilots it himself. It’s funny because as fantastical as this sci-fi world is, we can recognize the same kinds of user annoyances we have in the present. And we can see Arnold as heroic because he can do the things the machines can’t.
And think about Star Wars: Luke turns off his automation and uses his own skills (and “the force”) to blow up the Death Star.
As much as we love building things that make our lives easier, it seems we never get tired of seeing someone cast the robots aside. We love seeing people do things by hand. Maybe because we all have anxiety about losing the ability to do something ourselves.
So how soon will we need to answer these existential questions about our cars?
Chris Urmson has said that his personal goal is to get the Google self-driving car done by 2020, so that his two sons, the oldest of whom is 11, won’t ever need to get a driver’s license.
But Raj Rajkumar, who is co-director of Carnegie Mellon’s Autonomous Driving Collaborative Research Lab, thinks it will still be 10-20 years before we have fully autonomous vehicles commercially available. Even though Rajkumar’s company Ottomatika built an autonomous car that has already driven itself from San Francisco to New York, Rajkumar doesn’t see taking the steering wheel out any time soon.
Carnegie Mellon has been working on autonomous vehicles since the 1980s, and Rajkumar imagines that the transition to full automation will be gradual. “The number of scenarios that are automatable will increase over time, and one fine day, the vehicle is able to control itself completely, but that last step will be a minor, incremental step and one will barely notice this actually happened,” says Rajkumar.
And when that day comes, Rajkumar says there will still be accidents. “There will always be some edge cases where things do go beyond anybody’s control.”
If and when people do get hurt in autonomous cars, there may be circumstances in which the passengers wouldn’t have been injured had there been a human at the wheel. These cases will be hard to reckon with, but we’ll have to keep in mind that there were 30,000 Americans dying every year in human-driven cars. If autonomous vehicles lead to fewer car accidents, then as with planes, we may need to accept edge cases and periodic failures as the cost of living in this safer world.
I loved the podcast, but it left me having a really hard time knowing where to come down on the automation related to automobiles. 27,000 deaths resulting from human error is completely unacceptable. I couldn’t help but think that if we were to just put down the phones, and take responsibility for our safety, and the safety of others, we would not be having this conversation. We as a culture have become so distracted and distractible that we’re hurting each other, and I think that’s really sad!
We could all benefit from people putting down their phones while driving, but the cell phone is hardly the lone culprit. There have been outrageous numbers of automobile-related deaths ever since cars hit the market. It was especially bad in the late 1960s/early 1970s (upwards of 50,000 deaths annually from 1966-73)—so at least we’re not there anymore. Also, the number of traffic fatalities per 100 million vehicle miles traveled has steadily declined over the years, so the problem now is actually not as bad as it had been prior to the cell phone era (though this might not be true outside the United States; worldwide, about 1.2 million people suffer car-related deaths every year).
Even with distracted driving, your odds of getting into a fatal car crash in the US are the lowest they have ever been since the government began keeping records in 1921. (This is probably due to safer cars, and education campaigns around wearing your seatbelt and not driving under the influence, but I haven’t researched this thoroughly.)
At any rate, there does seem to be a floor to the number of vehicle-related deaths that we just can’t get below as long as humans are drivers. We are the weakest link when it comes to staying safe on the road. Which is one reason that people such as Chris Urmson and Raj Rajkumar and their research teams are working on automating our automobiles.
Sources: https://en.wikipedia.org/wiki/List_of_motor_vehicle_deaths_in_U.S._by_year (It’s a wikipedia page, but the raw data comes from government figures); http://www.who.int/gho/road_safety/mortality/en/
You can take my steering wheel, when you pry it from my cold, dead fingers. There will ALWAYS be driveable cars on the road, unless the Nanny State steps in and outlaws them on public roads or highways. Good luck with that, Google.
I think you’re missing the point. Google is providing a better way, not the only way. Keep driving your car. But don’t be surprised in the future with self driving cars that your insurance goes way up. If your risk of injuring someone is 10x greater than sitting in a self driving car, would you be willing to pay 10x insurance for that privilege?
Transforming the car doesn’t *just* mean transforming our cities. It means transforming the culture, too. I live in Modesto, CA and I have to say the people in this city LOVE their cars. They have a classic car parade every summer (air quality be damned) and murals of cars downtown. There are car clubs for every make and model you can imagine and car shows seemingly every weekend in the summer. It’s so deeply ingrained in the culture here that I can’t possibly imagine it changing. I’m sure 50 years from now I’ll be eating my words but I know car culture will always be out there.
Interesting episode but it left one big question hanging in the air. In order to get all the efficiency of autonomous driving on a mass scale, you need to outlaw human driven vehicles as they become the “variable” incapable of prediction. I think more people would cling to their cars than to guns in this country.
This is such a crazy Pandora’s box – really cool to think about the future of cities and cars. I personally have already survived two car accidents, which is more than I care to have for the rest of my life. If we can’t have better trains or hyperloops, then let’s self-driving cars-it! Also, would a child be able to travel solo in a self-driving car? Could you send packages in them?
Wonderful episode, but so frustrating to hear you continue the silly memes. We must stop perpetuating the Hollywood Plots as possible, let alone probable. The Johnny Cab scene, or t-rex in the mirror are devices that writers use to forward their story, not even close to any reality. (Why would the Johnny Cab have anthro-centric controls?)
You also missed a whole audience of users that have vast experience using automation at the edge-cases. Military pilots are trained to collaborate with various auto systems, as well as override them when it makes sense. I’d suggest that older pilots of the V-22 and F-22 (coincidence, I assure you), and any astronaut would have a lot to say about working WITH as well as around automation. I’m one to scream that we shouldn’t fetishize the military, but you wanna talk to smart people who’ve been through this? They’re the ones.
Automation won’t de-anything us; it will free us up from dying because of our limitations.
Thanks for a thought provoking show, even if my teeth have ground down a bit from this one.
Thank you for this very thoughtful extended advertisement for Google. Please continue informing us about upcoming Google products. As for those too lazy to drive or who want less traffic on the road, might I point them to a local bus stop. For those who are looking to shake up city planning with different transportation or who are looking for a hands-off way of travelling long distances, how about bringing back trains? The concept of self driving cars worried me before this episode, and it continues to worry me. I have no enthusiasm for this new Google product, only dread.
Ditto! Why not use innovation to improve upon existing societal systems and structures? Bottom up, instead of top down?
Sorry for the necro, just discovering 100% invisible. I would _love_ for the US to rediscover mass transit and start plunking down rail and bus infrastructure, but I doubt it’s ever going to happen. Meanwhile, as a broke person with limited visual acuity, I still live here, and I’m constrained to the four or five cities in the US with decent transit (all increasingly expensive) – and the already inadequate transit seems to be always first in line for a budget cut when state coffers run low.
So please, roll on the self-driving cars and Cars as a Service. It sucks but it’s better than having a two hour, two transfer bus commute from where I can afford housing to where I can find a job, or being at the mercy of a flaky carpool. Would I prefer sensible transit policy? Yeah, sure. I’d also like to ride a unicorn.
I would like to point out that Luke does not do it “on his own” as stated in this episode (at roughly 15:00). He uses the force, which is very, very different from doing it on his own—you even have the clip of Obi-wan telling him to do so! Other than that, I’ve loved these episodes on automation.
Yet another wonderful episode from my favorite podcasters. Thanks guys!
When this one was over, a scenario pieced itself together in my head. Went a little something like this:
Saturday morning, the local smart-car depot, and I’m on morning shift. Cars are coming in thick and fast for cleaning, and yup, I’m one of the janitorial staff. Hey, it’s a job, I’m providing for my family, and the benefits are good.
Hoo, here’s a doozy: vomit on the left seat, McDonald’s wrappers strewn all over the floor and … whoa, is that what I think it is ground into the floor mats?
“Hey Tom, get over here and get a load of this!” My Supervisor comes over, takes one look inside, shakes his head quietly and says: “Better get your kit on mate.”
So: gloves, respirator, and a well-honed gag-reflex control, and to work I go!
Like I say, it’s a job and most of the time, it’s all right. Saturday morning shift is often tough. Good to sign up for though, as I’ve heard it greases the wheels of moving on up. So is Thursday morning as that follows Welfare Wednesday.
And it’s a lot better than 30,000 deaths per year, like in the crazy old days. Now it’s down to, what, 10,000 or so? Stuff still goes a bit wrong once in a while. A guy once said “out of my cold, dead hand”. I think he meant “out of my cold, dead ribcage.” But whatever. People are still people.
The US driver ranks as the fifth worst in the world. People are allowed on the road who can’t use a stick shift. Don’t know why you’d want one? It’s called control. Car control. It’s taught in driving school. Driving school? Yeah… that is where the rest of the world sends people who want to learn to drive. >> And “speed kills”… that’s BS. If it were true, Germany would have the highest highway mortality of any country. They don’t. >> Most people drive scared… and based on their skill level… they should.
Thanks for listening
Stephn J Lewis
After listening to the previous show and to this one, I have come to the conclusion that driverless cars are not a good idea. As we become more and more modernized in a technological world, we lose the skills that brought us to this point. Pilots are being told to practise the skills they learned in flight school by occasionally taking their planes off autopilot and flying them manually. If we leave all the driving up to the cars themselves, we further cripple ourselves. My daughter, as her college graduating class’s valedictorian, gave an address celebrating the deadly sin of Sloth. Why? Because it is mankind’s inherent laziness which has caused his greatest technological discoveries, all so that a machine can do his work and he can relax. We are forgetting how to live, how to forage, how to build a dwelling and a fire over which to cook the food we hunted or gathered. We are dependent on some conveyance driven by someone or something else to get us from point A to point B. This can’t be a good thing.
As great as the series often is, this episode raises more issues than it’s prepared to handle – even as a two-part series – and veers distressingly into the techno-triumphalist ideology of Google et al. The most glaring omission is the other side of the history of automation in the twentieth century, i.e., that the technology’s first and most immediate function is to eliminate somebody’s job. Manually driven cars are unlikely to become obsolete in our lifetime, but taxi drivers and truck drivers are very much in danger of that (it’s already a sad loss of skill and knowledge that so many cabbies rely on GPS, but that’s another story). So yet another sector of the economy gets absorbed by the “do no evil” corporation, reaping obvious benefits for them and ambiguous results (at best, to the extent that anybody has bothered to really ponder them) for the rest of us. The other problem is that all these uncertainties, and our responsibility in deciding them, are kind of swept under the rug of inevitability. “Whether we like it or not, this is the future…” That’s a fallacious approach to history*, and a dismal perspective on life.
and why not: http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/
What about cars being hacked for nefarious reasons? The latest Wikileaks revelations about the CIA’s hacking programs are beginning to raise doubts about the official story on Michael Hastings’s accident.
Automation also brings us new ways to die. People who are told not to rely on Tesla’s automation die doing so. Pilots use automation and technology to put themselves into situations they cannot handle when the automation breaks or is inadequate.
I’m interested in how a society with shared robot taxis would accommodate parents hauling young children. The thought of installing two or three car seats for every trip is awful. And where would you put the car seats while you did your errands, or groceries, or went for a bush walk? Or would you have to wait till a car with the specific arrangement of car seats for your aged children pre-installed was available? Or, would the cars be so safe you wouldn’t need car seats?
“Maybe instead of owning these cars you would hail them from off the street. And when you get out it would drive off and pick up someone else”. Doh. We already have these. They’re called buses, trains and trams – which *already* use roads much more efficiently than cars. Google, thinking in the box again… eye roll.