License to (Not) Drive

An exclusive look behind the scenes at Google’s autonomous car testing center.

I have backed out of a lot of parking spaces in my life, but today is different. As I adjust the mirrors, fasten the safety belt, turn on the ignition, and put the Lexus RX450h SUV in reverse, I feel the scrutiny of my passengers, professionals who may be skeptical of my ability to control the vehicle. I make a couple of turns, and then, on a dashboard display that I’m not allowed to describe to you (trade secret!), something turns green. That’s my signal. I press a button on the steering column, and a female voice accompanied by an icy synthesizer note — the kind of thing you hear when monorail doors are about to close — intones the word, “Autodrive.” Something catches in my throat; it may be the closest thing I’ll know to flying the Millennium Falcon when it thrusts into hyperspace. In truth, not much really changes. The Lexus rolls forward and rambles down a street in a neighborhood that is all streets and no buildings or people, a Potemkin village of roadways. There is an intersection ahead with a stop sign. The car stops. My foot has not touched the brake.

I am behind the wheel of a Google self-driving car.

The former Castle Air Force Base sits on approximately 2,000 acres in Atwater, California, in Merced County. There was never a castle there; the name honors a downed World War II pilot. For many years, it was a home for long-range bombers with the Strategic Air Command. After the Cold War, the government decommissioned the base, opening it up to commercial purposes. In 2014, Google rented 60 acres — now expanded to around 100 — to test its self-driving cars (SDC) and train its drivers. Googlers refer to the facility, 2.5 hours from the company’s HQ, as simply “Castle.” Mission control at Castle is a double-wide trailer that seems more like the op center at a construction site than a dispatch center for the future. There are desks, a ratty sofa, and instead of the high-end espresso maker commonly found at the company’s facilities, a coffeemaker that Joe DiMaggio would recognize. The most Googley objects are what look like military-grade water ordnance; they are actually Bug-a-Salt rifles that fire bursts of table salt at the swarms of insects that are ubiquitous during the Central Valley summer.

Lunchtime at the Castle facility.

Noah Rabinowitz

I’m there for my own training—to find out what it’s like to not drive a car while sitting in the driver’s seat. When I heard that Google has a formal program to qualify people to operate cars that, at least aspirationally, don’t need anyone to operate them, I volunteered. Going through the whole program was out of the question — it takes four weeks, full-time, and god knows what liabilities might be involved. But the company decided that with a compressed lesson on the basics, it would be okay for me to putter around at its private testing grounds. As an afterthought, a rep asked me, “Uh, you are a good driver, right?”

Sure!

I was in, and not just for the opportunity to sit in the front seat of Google’s car. In the process, I was rewarded with a very rare look at Google’s SDC industrial complex, a co-evolved infrastructure of hardware, software, drivers, and engineers that the company hopes will contribute to its argument that autonomous systems are the future.

The test-driving program evolved out of the company’s need to log a lot of time driving autonomously. “We had two objectives,” says Chris Urmson, director of the autonomous car program, which is in the process of leaving the Google X research division and becoming a separate Alphabet company. “One was to drive 1,000 miles of interesting roads, and the other was to drive 100,000 miles of roads. At the time, this was 10 times more than anyone had ever driven before with one of these things. We realized we couldn’t just have our developers in the cars all day — we had to get some people to come out and drive them.”

So, in 2009, Google began hiring nonengineers for the project. It figured out the profile of these drivers pretty much on the fly. The first hires were tone setters like Brian Torcellini, then a recent urban studies major at San Diego State, taking some time off after graduation — surfing trips and the like — before facing up to a career. A friend who worked at Google recommended him. His interviewers were vague about what the job entailed. “Basically they were like, ‘Do you like cars? Do you like technology? Do you want to drive pretty much all day, every day?’” he says. “I said sure, I’m happy to sign up. They led me to believe that I might be working with the Street View team, but I walk in and see self-driving cars being built. And I’m like, okay, this is awesome.”

There were only a few like Torcellini, and the engineers still did the driving. For the first couple of weeks, Torcellini recalls, he mostly sorted screws. When he finally got behind the wheel, he drove in circles in the parking lots. Around that time, Urmson and Dmitri Dolgov, who leads the software team, began to informally train the drivers. “We had some quizzes on what the different parts meant and how they worked and what can go wrong with them,” says Urmson. “And then we showed them the techniques we were using for how to hold the steering wheel, how to be ready to react and what to react to. And then we demonstrated it. We then took them out there for a while and had them sit in the left seat and observed them and gave them feedback.”

Eventually, the SDC engineers certified the drivers — no mortar boards or “Pomp and Circumstance,” just an informal acknowledgement — and let the drivers go out on their own. “That was actually a huge day for us, when we transitioned from the engineers to the operations team being the folks taking the cars out,” says Urmson. “It was kind of cool to get to the point where some folks that don’t have PhDs are getting the thing to work every day.”

Torcellini says that not-driving on public roads felt intuitive — after all the hours he spent in the parking lot, he was used to sitting behind the wheel of a car that drove itself. Initially, testing was limited to highway driving. But in early 2014, Google felt it had essentially mastered the elements of highway driving and focused on the knottier problems of street driving.

Test-driving at the Castle facility.

Noah Rabinowitz

Urmson and Dolgov hand-certified the first two dozen or so drivers, but as more came on board, Google needed to set up a more formal system of training. Torcellini, now head of operations for the testing program, helped to create it. Along with gaining new responsibilities, he became one of the lucky drivers who moved from contractor status to full-time Googler. “It’s much more structured, and we teach people exactly what they need to know,” he says of the formal program.

Stephanie Villegas, who runs the Castle testing program, took the same path as Torcellini from contract driver to Google employee. She previously worked as a marketing and communications manager at a place called Azalea Boutique. She has a bachelor’s degree in art from Berkeley. This is typical of Google’s safety drivers. One had been doing spreadsheets at Oracle. Another was an assistant manager at a bakery. These backgrounds are not accidental; Google wants its hires to view the experience as drivers, not as engineers trying to deconstruct what’s going on in the software code.

The company does not say how many test drivers it uses, but you can do the math yourself: There are 53 self-driving cars currently involved in the program, a mix of Lexuses and the little bug-like prototypes that go no faster than 25 miles per hour. Every day, most of the cars go out on the streets of Mountain View, California, and Austin, Texas, in day and night shifts, typically with a driver and a co-driver. Google doesn’t share how much they are paid, but it’s safe to say that it’s a lot less than an engineer gets.

Right now, I suspect a lot of millennial liberal arts grads are wondering, “How do I get to be a Google test driver?” From my somewhat limited sampling, I would say the best start is to be a friend of someone who’s already doing it — a lot of the people I spoke to came in via the friend-of-a-friend route, like Torcellini. That said, Google is looking for people who are smart, socially engaging, and observant.

One other thing: “You just really have to be a kick-ass driver,” says Torcellini. “That doesn’t mean you can take a hairpin turn at 50 miles per hour and drift the car around cones and stuff like that. It’s really paying attention to everything and predicting how the social aspect of driving works.”

Indeed, Google takes candidates on a test-drive, observing how well they handle the car and whether they can display their automotive skills while conversing.

Mark Verdugo, who helped set up the Castle course, made sure it included a variety of driving conditions. There are streets laid out like suburban and semi-urban neighborhoods; there are slip roads and T-stops. A working traffic light. A stretch of highway. A rain simulator. Even a traffic circle. The only similar facility is the University of Michigan’s Mcity in Ann Arbor, where Ford tests its autonomous vehicles. The setups are similar, with Mcity having the advantage of a simulated tunnel. “But ours is 100 acres,” says Verdugo. “Theirs is 40.”

Map of the Castle facility.

In the first week of the training program, drivers come to Castle for experience in the closed setting. They start off with classroom instruction, then navigate the Castle course. At first, they do simple cruising, but as the week goes on, the driving conditions get “spicier,” the group’s term for increasingly challenging scenarios. The safety drivers must share the road with human-driven conveyances, including rental cars commandeered by experienced safety drivers and bicycles borrowed from the company’s headquarters.

Bikes add spice to the training process.

Noah Rabinowitz

Those bikes are only one kind of prop used in testing. Most of the props are stored in a big shed beside the trailer and in a wide garage that also holds a small fleet of Google’s prototype custom-built self-driving vehicles. Villegas calls the shed “my toy chest.” There are traffic cones, road signs depicting various hazards, mailboxes, faux plants, roller skates, umbrellas, walkers, and — this is unsettling — fully dressed, child-size dummies.

Hazards abound in the storage shed.

Noah Rabinowitz

Sometimes Google uses human props, known as “professional pedestrians.” I had the chance to chat with one, a recent graduate of UC Merced (a good source for Castle workers) named Cassandra Hernandez. She took the job as a student and still punches in a few times a week. Since I encountered her on a Monday, her tasks that day were “not very spicy.” But on Thursdays, she assures me, the action is jalapeño-level. Does she ever get nervous participating in scenarios where she might be plowed under by a berserk robot Lexus SUV — or, in a more humiliating nightmare, a Herbie-esque prototype? Not really. “We just have to learn to trust,” she says.

For my own experience, I also wanted to test one of the tiny prototypes, built from the ground up as autonomous vehicles but retrofitted with pedals and steering so they could be tested with the driver teams. But Google was firm in denying me that. I was okay with that decision when I learned Google allows only its most experienced safety drivers—those who’ve worked at the job for six months to a year—to handle the cute little bugs, and then only when they return to Castle for more training. I got to watch a few during their first attempts to master manual control on these nippers. It’s kind of tricky: The steering wheel, more like a joystick installed on its side to the right of the driver, works like a crank, with the driver gripping a handle (they call it the “corn dog”) to spin the device, like turning a gear. “It only takes a few hours to get used to,” says Jared Mendiola, the operations manager in charge of the process. Clearly, the drivers acclimating to these bugs have not notched those hours yet. No traffic cone is safe as they try to negotiate a slalom course in reverse.

So I was fine with my chance to (not) drive an autonomous Lexus. That would be plenty.

“My job today is to prepare you to be a test driver and explain to you that safety is always going to be our number one priority.”

Greg Hanabusa, an operations manager for the project, is my instructor. The training session takes place in Maverick (conference rooms at Castle are named after characters in Top Gun).

“Safety is the primary concern,” he reiterates. “If anything makes you feel uncomfortable, change to manual.”

The term for switching back from autodrive to manual driving is “disengage.” Hanabusa explains there are two kinds of disengagements: desired and undesired. The first kind includes discretionary reasons like going back to base, taking a meal break, or making a phone call. “In our safety policy, we don’t allow phone calls,” says Hanabusa. Google also instructs its drivers to disengage when they encounter reckless drivers or tailgaters. These can be common, because human drivers sometimes get overly curious when sharing a road with a car that is driving itself.

The undesired disengagements are when the car or the software isn’t behaving correctly, or some real threat emerges. In those cases, the car itself bails — too spicy for robots!

The most valuable tool the test team has for making sure things are running smoothly is the laptop on the co-driver’s lap. Using an interface called x_view, the laptop shows the world as the car sees it, a wireframe representation of the area that depicts all the objects around the car: pedestrians, trees, road signs, other cars, motorcycles — basically everything picked up by the car’s radar and laser sensors. x_view also shows how the car is planning to deal with conditions, mainly through a series of grid-like “fences” that depict when the car intends to stop, cautiously yield, or proceed past a hazard. It also displays the car’s path.

x_view shows the world as the Google self-driving car sees it.

If the co-driver sees a discrepancy between x_view and the real world, that’s reason to disengage. “The level of communication needs to be constant,” says Hanabusa. “Of course, the test driver sees something in vehicle behavior. Like if the steering jerks to the point where it goes beyond a certain threshold and makes him uncomfortable.”
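
Google won’t let outsiders see x_view’s internals, but conceptually the co-driver’s job is a continuous diff between two world models: what the car perceives and what a human actually sees. Here is a minimal sketch of that cross-check in Python; the class, function, and tolerance below are my own invented illustrations, not Google’s code.

```python
# Hypothetical sketch of an x_view-style cross-check.
# Google has not published the tool's internals; all names
# and thresholds here are invented for illustration.
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str   # "pedestrian", "car", "cyclist", ...
    x: float    # meters, relative to the car
    y: float

def find_discrepancies(perceived, observed, tolerance_m=1.5):
    """Return objects the human sees that have no match, of the same
    kind and roughly the same place, in the car's world model.
    Any hit is a reason to disengage."""
    missed = []
    for obs in observed:
        if not any(p.kind == obs.kind
                   and abs(p.x - obs.x) <= tolerance_m
                   and abs(p.y - obs.y) <= tolerance_m
                   for p in perceived):
            missed.append(obs)
    return missed

# The car "sees" another car and a tree; the co-driver also sees a cyclist.
perceived = [TrackedObject("car", 12.0, -3.0), TrackedObject("tree", 8.0, 4.0)]
observed = perceived + [TrackedObject("cyclist", 6.0, 1.0)]
for miss in find_discrepancies(perceived, observed):
    print(f"Discrepancy: {miss.kind} at ({miss.x}, {miss.y}) -> disengage")
```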

There’s a code for every kind of disengagement. For instance, #FOD means “foreign object or debris.” “In those cases, the driver has to disengage and go around,” says Hanabusa. “We don’t want to run over any branch or piece of wood.”

Every time a driver disengages, the Googler riding shotgun must make note of it. A pin is automatically inserted in the map of the trip, and the co-driver jots down a brief explanation. In a typical trip, there will probably be a number of those, sometimes dozens. At the end of the shift, the entire log is sent off to an independent triage team, which runs simulations to see what would have happened had the car continued autonomously. In fact, even though Google’s cars have autonomously driven more than 1.3 million miles—routinely logging 10,000 to 15,000 more every week — they have been tested many times more in software, where it’s possible to model 3 million miles of driving in a single day. The team analyzes each disengagement, assessing which ones to send to the engineering team, who will address the flaw that necessitated the disengagement. At least once a week, there’s a new build of software that incorporates the fixes and includes other new features. Drivers then comment on whether the changes improve or degrade performance. Other times, engineers will develop scenarios to see how cars do in certain situations and construct them, like little automotive theater pieces, at Castle.
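
As a rough illustration of that logging step (Google’s actual schema and tooling are proprietary, so every name below is an assumption), a disengagement record needs little more than a code, an automatically dropped map pin, and the co-driver’s note:

```python
# Hypothetical disengagement log entry; Google's real pipeline is not public.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Disengagement:
    code: str        # e.g. "#FOD" for foreign object or debris
    lat: float       # map pin, inserted automatically
    lon: float
    note: str        # co-driver's brief explanation
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

shift_log: list[Disengagement] = []

def record(code: str, lat: float, lon: float, note: str) -> Disengagement:
    entry = Disengagement(code, lat, lon, note)
    shift_log.append(entry)
    return entry

record("#FOD", 37.386, -122.084, "branch in lane, drove around manually")

# At the end of the shift, the whole log would go to the triage team,
# which resimulates each event to see what the car would have done alone.
def send_to_triage(log: list[Disengagement]) -> None:
    for e in log:
        print(f"{e.timestamp:%H:%M:%S} {e.code} @ ({e.lat}, {e.lon}): {e.note}")

send_to_triage(shift_log)
```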

As my lesson proceeds, I begin to understand that commandeering a car that drives itself is not a Seinfeldian exercise in nothingness, but a fairly demanding job. There’s no texting, eating, smoking, or binge-watching Netflix. As a driver, you are constantly poised to wrest control from the car. Your hands hover over the wheel, at nine and three o’clock. Your right foot is cocked over the accelerator or brake pedal. (That’s not comfortable, but Urmson says drivers get used to it.)

If you’re riding shotgun, the demands on your attention are just as rigorous. Comparing x_view to the actual world requires attention and the ability to type notes for the engineers in a moving vehicle.

One other thing: On the console between the driver and co-driver is the BRB, the Big Red Button. And that’s what it is: something like you’d find on the control panel of a Bond villain, reached only after escaping a shark tank and breaking out of the death grip of an iron-jawed henchman. The BRB is the ultimate disengagement, instantly handing back control to the driver. I am not to press it unless other disengagements fail. Oh, hell, I am not to press it at all. (Later I found out that it cuts off the self-driving system entirely and turns it into a regular Lexus. Kind of like Kryptonite.)

“That’s it,” says Hanabusa. Four weeks of training in less than an hour. Time to (not) drive.

The Big Red Button.

Noah Rabinowitz

Google’s ultimate goal, of course, is to make a transition from testing to systems where no safety drivers are needed — just passengers. For some time, Google has been convinced that the semiautonomous systems that others champion (which include features like collision prevention, self-parking, and lane control on highways) are actually more dangerous than the so-called Level Four degree of control, where the car needs no human intervention. (Each of the other levels reflects a degree of driver involvement.) The company believes that with cars that almost but don’t quite drive themselves, humans will be lulled into devoting attention elsewhere and unable to take quick control in an emergency. (Google came to that conclusion when it allowed some employees to commute with the cars, using autodrive only on premapped freeways. One Googler, perhaps forgetting that the company was capturing the whole ride on video, pretty much crawled into the backseat for a phone charger while the car sped along at 65 miles per hour.)

Google also believes that cars should be able to move around even with no humans in them, and it has been hoping for an official go-ahead to begin a shuttle service between the dozens of buildings it occupies in Mountain View, where slow-moving, no-steering-wheel prototypes would putter along by themselves to pick up Googlers. It was bitterly disappointed when the California DMV ruled it was not yet time for driverless cars to travel the streets, even in those limited conditions. The DMV didn’t even propose a set of requirements that Google could satisfy to make this happen.

Meanwhile, Elon Musk, CEO of Tesla, is barreling ahead, introducing a driverless feature in his Tesla cars called Summon. He predicts that by 2018, Tesla owners will be able to summon their cars from the opposite coast, though it’s a mystery how the cars would recharge themselves every 200 or so miles.

But maybe Musk is not the first. When I discussed this with Urmson, he postulated that in most states — California not among them — it was not illegal to operate driverless cars on public streets. I asked him whether Google had sent out cars with no one in them to pick up people in Austin. He would not answer.

Google’s most recent interaction with the California DMV came this week, when it submitted a voluminous report on disengagements, specifically those that might have prevented collisions with other objects. Apparently there were 69 instances where drivers disengaged to head off these potential calamities. After simulating what would have happened had the car continued in autodrive, the company claims, only 13 of those incidents would have resulted in the Google SDC actually touching another object. Two of those were traffic cones, 10 were other vehicles, and one was a human being crossing the street at an intersection. Google also noted that the majority of those instances occurred in the early part of 2014, and that more recently, as its engineers corrected the flaws that led to the incidents, the cars have gone many more miles between incidents. (Though incidents still occur more often, mile for mile, than the average human driver actually hits something or someone.)

The report demonstrates two things. By Google’s own admission, the cars are not yet as safe as the company wants and expects them to be. And the training program is quite effective: the number of incidents where Google SDCs caused actual collisions is zero, because drivers headed off all 13 situations where the car might have made unwelcome contact. When those disengagements were vitally needed, the drivers took an average of 0.84 seconds to take control.

Only the most experienced drivers get trained on prototypes.

Noah Rabinowitz

I gained a better idea of how Google works on actual streets by riding as a passenger in a sortie through the streets of Mountain View. (No way they would let me sit behind the wheel.) My drivers were the aforementioned Dmitri Dolgov and Nathaniel Fairfield, both principal engineers who have been with the project since the beginning.

Google principal engineers Nathaniel Fairfield and Dmitri Dolgov.

Noah Rabinowitz

One reason they stick to those not-so-mean streets is that Google has carefully mapped them. Mapping streets for autonomous driving is a far more comprehensive task than collecting Street View data. Roads have to be traveled several times so that lasers and radar can pick up all the roadside features and peculiarities. The way Google has rigged its cars, they simply will not operate autonomously if they are not in a mapped location. (When we drove to Castle in a self-driving Lexus, it was manual all the way.) They also won’t run during moderate to heavy rainfall.

We headed out of the newish Google X headquarters (which began life in 1966 as a shopping mall). Right away, we ran into a spicy situation: After braking fully for a stop sign, the car was confronted with a steady stream of traffic on the busy road where it intended to make a right turn. The car nudged, nudged, nudged forward but would not do what any experienced driver would have done — aggressively edged into an opening, even if it meant that one of the northbound vehicles might have had to slow down or touch the brakes a bit. Instead, the car seemed paralyzed, and you could almost smell its fright and confusion. Dolgov disengaged — just to hurry things along; the car would eventually have found an opening — and Fairfield duly noted it.

From there, it was a fairly routine ride, remarkable only for the car’s ability to drive pretty much like a human. The one difference, as at that first stop sign, was that the car displayed timidity when faced with a situation where aggressiveness was called for. Urmson later explained that in the current software build, the team was specifically experimenting with the caution/aggression tradeoff, which could have accounted for hesitant behavior in situations the system had mastered earlier.

It turns out I was unaware of the biggest event of our jaunt. At one point, the car drove over a small pile of leaves that had blown into its path, without breaking its pace. After the drive, Dolgov and Fairfield told me that moment represented a ton of work. Apparently, for a very long time, SDCs regarded such harmless biomass as obstacles to be avoided, and the newfound ability to roll right over a bit of dead foliage was one of the technology’s more recent triumphs.

Steven and Greg Hanabusa.

Noah Rabinowitz

Finally, it’s time for my test-drive. Greg Hanabusa is my co-driver. In the backseat is Stephanie Villegas; it’s common for two operations people to monitor a rookie driver. Crowding beside her are a Google PR person and my photographer. I’m more nervous backing out of the parking space than I am at the prospect of driving a car that drives itself. After all, who can blame me if something goes wrong? It’s the ultimate no-fault. The car goes into autodrive and begins to drive very much like I would — if I believed violating speed limits was a cardinal sin. I hold my hands at the recommended “nine and three,” tilt my foot to hover stiffly over the brake pedal, and marvel as the steering wheel moves by itself and the car stops politely at stop signs. I grant it bonus points for giving a tiny burst of speed to enter an intersection at a yellow light. Though the car was programmed to follow a predetermined course through Castle’s layout, I really felt like it had a mind of its own. (Of course, I also felt that way about my first car, a vintage VW Bug prone to impulsive swerves.)

The most impressive moments came when the car exited a turn and accelerated, especially as it hit a little strip of open road that represented highway driving. It moved confidently but not recklessly. “I know what I’m doing” was the message I got from the robot car. I felt I was in good hands, even as I made sure my hands were within an inch of grabbing control. But at other times, you can tell the SDC is programmed to emulate something with a Florida license plate. When the light turns green, it waits a couple of seconds (1.7 seconds, to be exact) before proceeding, just in case someone’s running the light. Drivers these days!

I test various ways to disengage: the “off” button on the steering column (the easiest), the accelerator, the brake, and the steering wheel itself. (This last is the trickiest, because turning the wheel often means veering from what is already the optimal direction.) All seem to work fine. On the advice of my co-driver, I do not test the Big Red Button.

I do get a glimpse of how being a test driver might be tedious. Though the motion of the car is utterly familiar, managing a Google self-driving car is anything but carefree. Because I’m a hair trigger away from disengagement, I have to pay far more attention than ordinary driving requires. And after the novelty wears off, it’s also a bit boring. I feel like I’m on an amusement ride before the scary part starts. Can we order some spice?

The diciest part of my ride comes after a stop on a stretch of roadway dotted with traffic cones representing a construction area. The car slows to a crawl. A very slow crawl. And then it’s hardly moving. It’s clearly unhappy. I’m told to disengage. My co-driver duly notes the incident. In a little while, the green signal comes on and I’m ready to go again. Synth tone… “Autodrive.” Can’t get enough of that.

Finally, it’s time to disengage for the last time. I do it via the “off” button on the right side of the steering column and drive on my own for a bit before pulling into a parking space near the trailers. Done. And I have no idea when I will next sit in the driver’s seat of an autonomous car.

Inside the bug-like prototype.

Noah Rabinowitz

Two big takeaways. First, I have adjusted my view of how long it will take before we are all driving around in robot cars: later rather than sooner. From Google’s steady stream of upbeat reports — all those miles safely driven, with the only accidents being fender benders that were totally the fault of those goddamned human beings — I had come to assume we were almost there. (Who am I to contradict Sergey Brin and Elon Musk?)

But sitting in the Lexus under orders to keep my hands at nine and three, ultraconscious of the possibility that I’d have to disengage at any moment, trusting the passenger with the laptop to identify situations where the car’s assumptions might not square with reality, and knowing that Google does not dare let its cars drive on streets that are not exhaustively mapped, I have difficulty viewing No Drive Day as imminent. We’re maybe 95 percent there, but that last 5 percent will be a lengthy slog. One other thing to consider: Google, at least as far as it admits, hasn’t even scratched the surface of big-city driving, which is Brick Lane vindaloo in terms of spiciness. If SDC pilots have to disengage when they encounter reckless drivers, autodrive will be available about zero percent of the time in Boston.

When I share my pessimism with Chris Urmson, he says I’m mistaken. What I have observed at Castle and in Mountain View, he tells me, is a function of Google’s obsessive need for safety during a period of testing. The caution comes from constantly probing for weaknesses and exposing them while avoiding risk; that process will bring the cars closer to the moment when the general public can leave the driving to Google. When I ask him when he believes that will happen, he has his answer handy. “Before my son turns 16,” he says.

Chris Urmson’s son is now 12 years old.

The second takeaway: Not driving the Google car changed the way I drive real cars. Hopping into my rental car the next day, I was hyperaware of every potential complication and threat. I found myself mentally constructing an x_view of the conditions around me, gauging where dangers lay. By trying to mimic the vision and caution of a Google car — which tries to think like a human, without the human failings — I believe I’ve become a better driver.

But no matter how good I am, I’ll always be prone to the lapses of attention and limited perception of a flesh-and-blood driver. Google’s car has the edge there, and that advantage will grow. I will, however, merge into traffic more smoothly. And I never did brake for leaves.