Google Driverless Car: The Basics

Much is being said about the Google driverless car in the news, a lot of it by consumer-orientated journalists who exhibit very little understanding of cars and a tendency to behave like a teenage girl at a Taylor Swift concert at the mere mention of Google. In the next couple of articles, we’ll look at the basic principles of the vehicle and the issues surrounding the technology and the company.


Why have a car that drives itself?

Google says:

At the most basic level, driving is about processing information around you—e.g., stop signs, cars changing lanes, people crossing the road—and making decisions based on all of those signals. We’ve been teaching our self-driving cars to do the same thing, by using sensors that are always alert and attentive to hundreds of objects simultaneously and can see 360 degrees around the vehicle—like being able to look through the windshield, out the side windows, and at the rear view mirror at the same time (even in complete darkness). In contrast, the human field of vision is 120 degrees and we can only zoom our attention in on one thing at a time.

Our car has three types of sensors which are complementary to each other. Lasers show us the shapes in the world around the vehicle, even at night. Cameras provide image and color, which is important for detecting signs and traffic signals, and recognizing objects like cones at a construction zone. Radar is good at detecting vehicles and objects far ahead and determining their speed.

In essence, the principle is this: technology can exercise greater control over a vehicle than a human can, and do so consistently. We assume the sensing technology is only part of the story, and that Google’s strength (as opposed to an established car company’s) will lie in using big data to interpret what has been detected in the immediate environment and to anticipate what those objects are likely to do.
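
To make “complementary sensors” a little more concrete, here is a minimal sketch in Python of how readings from the laser, the camera and the radar might be merged into a single picture of one object: the laser gives shape and distance, the camera gives the label, and the radar gives speed. The class names and values are invented for illustration and are not Google’s actual software.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LidarReading:          # laser: shape and position, even in darkness
        distance_m: float
        width_m: float

    @dataclass
    class CameraReading:         # camera: image and colour, hence the label
        label: str

    @dataclass
    class RadarReading:          # radar: long-range distance and speed
        range_m: float
        speed_mps: float

    @dataclass
    class FusedObject:
        distance_m: float
        width_m: float
        label: str
        speed_mps: float

    def fuse(lidar: LidarReading,
             camera: Optional[CameraReading],
             radar: Optional[RadarReading]) -> FusedObject:
        """Merge whatever each sensor can contribute, with safe defaults."""
        return FusedObject(
            distance_m=lidar.distance_m,
            width_m=lidar.width_m,
            label=camera.label if camera else "unknown",
            speed_mps=radar.speed_mps if radar else 0.0,
        )

    print(fuse(LidarReading(32.0, 0.5), CameraReading("cyclist"), RadarReading(33.1, 4.2)))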


The legend from Google’s visualisation of what the car sees:

  • Green path = the path our vehicle intends to follow
  • Red boxes = cyclists
  • Yellow boxes = pedestrians
  • Pink boxes = vehicles
  • Green fences = an object that potentially affects our speed
  • Red fences = the location where the car will stop until it’s safe to proceed

How Google’s self-driving cars work

First, we map the road. Before any route is driven in self-driving mode, we first create a detailed, digital map of all the features of the road—including things like lane markers and traffic signs—so software in the car is familiar with the environment and its characteristics.
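
As a rough illustration of the “map the road first” step, here is a hypothetical sketch of a prior map: a per-segment record of lane markers and signs that the on-board software could compare against what it senses on the day. The field names and values are assumptions for illustration only.

    # Hypothetical prior map: one record per stretch of road, built before the
    # car ever drives it in self-driving mode.
    PRIOR_MAP = {
        "segment_42": {
            "lane_markers": ["solid_white_left", "dashed_white_centre"],
            "signs": [{"type": "stop", "position_m": 118.0}],
            "speed_limit_kph": 50,
        },
    }

    def expected_features(segment_id):
        """What the software should already 'know' about this stretch of road."""
        return PRIOR_MAP.get(segment_id, {})

    print(expected_features("segment_42")["signs"])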

Then, the car sees what’s around it. When in self-driving mode, our software interprets hundreds of objects with distinct shapes (e.g. cyclists and pedestrians) and “reads” traffic signals and signs.
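
A deliberately crude sketch of the “seeing” step might sort a detected shape into a coarse road-user category from nothing more than its size and speed. The thresholds below are invented; a real perception system is far richer.

    def classify(width_m, height_m, speed_mps):
        """Sort a detected shape into a coarse road-user category (illustrative only)."""
        if width_m < 1.0 and height_m > 1.2:
            return "pedestrian" if speed_mps < 3.0 else "cyclist"
        if width_m >= 1.5:
            return "vehicle"
        return "unknown"

    print(classify(0.6, 1.7, 1.2))    # narrow, tall, walking pace -> "pedestrian"
    print(classify(1.9, 1.5, 12.0))   # wide and fast -> "vehicle"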

It then predicts the behavior of what it sees. In driving over a million miles, we’ve developed probabilistic models for how thousands of road situations unfold and how other drivers, cyclists, and pedestrians are likely to behave—including subtle signals (like a car slowing down, or a cyclist wobbling and flailing his arm) that indicate they might be about to do something that would put them in our path.
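
A toy version of those probabilistic models might look like the sketch below: a lookup from an observed cue to a probability distribution over what the road user does next. The actors, cues and numbers are all invented for illustration; the real models are presumably learned from those million-plus miles of driving.

    BEHAVIOUR_MODEL = {
        ("cyclist", "arm_out"):  {"turn_across_lane": 0.7, "continue": 0.3},
        ("cyclist", "wobbling"): {"swerve": 0.5, "continue": 0.5},
        ("vehicle", "slowing"):  {"stop_or_turn": 0.6, "continue": 0.4},
    }

    def predict(actor, cue):
        """P(next manoeuvre | actor, observed cue); 'unknown' if never seen before."""
        return BEHAVIOUR_MODEL.get((actor, cue), {"unknown": 1.0})

    print(predict("cyclist", "arm_out"))
    print(predict("pedestrian", "running"))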

Then it compares what’s happening in real time with what’s in our models, and responds in the safest way. We’ve taught the car to make decisions by combining existing models of how objects in the world should behave with real-time info about how they’re actually behaving.
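
Putting the model and the real-time observation together, the decision step can be imagined as something like the sketch below, which also mirrors the green-fence/red-fence idea in the legend above. The probabilities, distances and thresholds are made up; the point is only that the response becomes more conservative as the predicted risk rises and the distance shrinks.

    def choose_response(p_enters_path, distance_m):
        """Pick the most conservative action that fits the predicted risk."""
        if p_enters_path > 0.5 and distance_m < 30.0:
            return "stop"         # the "red fence": hold until it's safe to proceed
        if p_enters_path > 0.2:
            return "slow_down"    # the "green fence": object may affect our speed
        return "continue"

    print(choose_response(0.7, 20.0))   # -> "stop"
    print(choose_response(0.3, 80.0))   # -> "slow_down"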

Rare situations: People love asking us how we’d handle situations that are rare but possible — an object falling off a truck, a cyclist suddenly darting into our path from between parked cars, a woman in an electric wheelchair chasing a duck around the middle of the road. (That last one really happened!). Rather than teaching the car to handle very specific things, we give the car fundamental capabilities for detecting unfamiliar objects or other road users, and then we give it lots of practice in a wide range of situations. Most often the best approach — for our software or for a human driver — is to slow down or come to a stop until more information about the situation is available.
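
The fallback described there, slowing or stopping when the situation doesn’t fit anything the car knows, might be as simple in outline as this (again, purely illustrative):

    def default_behaviour(label, prediction):
        """If the object or its likely behaviour is unknown, gather more information."""
        if label == "unknown" or "unknown" in prediction:
            return "slow_down_and_reassess"
        return "follow_planned_path"

    print(default_behaviour("unknown", {"unknown": 1.0}))   # -> "slow_down_and_reassess"
    print(default_behaviour("cyclist", {"continue": 1.0}))  # -> "follow_planned_path"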

In very basic terms, this tells us that the car is less about the physical technology and almost entirely about collating data, building evolving models of behaviour (which the car’s computing systems can process very quickly) and having a defined response for each possible scenario. What isn’t clear yet is the default or failsafe modus operandi; presumably, when nothing makes sense to the car, it will stop and go no further. It’s also worth noting that a million miles is not much in the wider scheme of things: London’s taxi fleet collectively covers more than that in two days.
