Renault-Nissan Alliance Team
What cars will have to learn from us
In his latest LinkedIn post, Doi Kazuhiro, VP, Alliance Global Director at Nissan Motor Corporation, offers his view on autonomous-drive technology.
Today was a special day for me: Carlos Ghosn, Chairman and CEO of the Renault-Nissan Alliance, tested a prototype car we have been developing in Silicon Valley.
But this was not just another test drive: For the first time, our CEO was behind the wheel of an autonomous-drive car on the busy streets near the Renault and Nissan Research Center in Sunnyvale, California.
To make this possible, and to bring the technology to consumers, we are working with computer science specialists and even anthropologists to learn how to deal with real-world driving situations.
Entering a new automotive world
Thirty years ago, automakers began developing technologies to help us avoid making driving mistakes. Gradually, our cars started to detect our behaviors and help us make the right decisions. This ability to protect us from our own mistakes has paid off dramatically.
Our streets are much safer; we have seen a significant reduction of the number of accidents, thanks to these safety technologies. Today, lane-departure warnings and emergency braking systems are part of the daily safe-driving experience of millions of people around the world.
With the advance of autonomous drive technology, cars are now able to make decisions for us.
In Japan, Nissan recently introduced an advanced autonomous-drive technology called “ProPILOT,” which gives drivers the option of hands-free driving in a single lane of highway traffic.
We plan further rollouts and enhancements of this technology in the coming years. By 2020, we plan to introduce technology that will allow for autonomous driving in city traffic.
To realize this vision, we will bring together the technological building blocks that already exist today on our research prototypes.
Using cameras, sensors, radars and lidars, coupled with image-recognition software and our current safety technologies, we are able to let the car “see” its environment and follow a route set on the navigation system.
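To make the idea concrete, here is a minimal sketch of that perception step. The types and function names are hypothetical illustrations, not Nissan's actual software: the point is only that reports from several sensors are merged into one consistent view of the surroundings before the car plans its route.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", or "lidar"
    label: str         # e.g. "pedestrian", "car"
    distance_m: float  # range to the detected object

def fuse_detections(detections, max_range_m=50.0):
    """Toy fusion step: keep the nearest report per object label,
    ignoring anything beyond sensor range. Real systems combine
    measurements probabilistically; this only shows the shape of the task."""
    nearest = {}
    for d in detections:
        if d.distance_m <= max_range_m:
            if d.label not in nearest or d.distance_m < nearest[d.label].distance_m:
                nearest[d.label] = d
    return nearest

# The same pedestrian seen by two sensors collapses to one object.
readings = [
    Detection("camera", "pedestrian", 12.0),
    Detection("lidar", "pedestrian", 11.5),
    Detection("radar", "car", 30.0),
]
view = fuse_detections(readings)
```

The fused `view` is what downstream planning code would treat as the car's picture of its environment.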
But the biggest challenge is the software. It’s the software’s intelligence that will decide what the car should do in any given traffic situation.
The new frontier is to replicate human behavior.
Artificial Intelligence meets anthropology
To create a fully autonomous vehicle that can navigate the complexities of our modern cities, I work with a team of data scientists, human-machine interface specialists, behavioural experts, sociologists and even anthropologists in Tokyo, Paris, Silicon Valley and elsewhere around the world.
Our challenge is to analyse human driving interactions and translate them into working data models that we can load into our cars, preparing them to face any driving situation.
Today, through our global research network, we are studying how drivers make decisions on the road, to acquire the data to create this model.
This is a whole new level of learning. For example, a traffic light seems relatively straightforward, even for a machine. Red, you stop; green, you go. But intersections are often a kind of controlled chaos, filled with trucks, buses, motorcycles, bicycles and pedestrians in addition to cars.
And how we drive through them can be affected by construction projects, the weather and other factors.
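The gap between the simple rule and the real intersection can be sketched in a few lines. This is an illustrative toy, not production logic; the parameter names are assumptions made up for the example:

```python
def should_proceed(light, crossing_agents, construction_ahead):
    """Toy intersection decision: the traffic light alone is easy
    ("red, you stop; green, you go"), but the car must also account
    for other road users and conditions like construction."""
    if light != "green":
        return False          # the easy, machine-friendly rule
    if crossing_agents:       # pedestrians, cyclists, trucks still in the box
        return False
    if construction_ahead:    # conditions the light says nothing about
        return False
    return True
```

A green light is necessary but not sufficient: `should_proceed("green", ["cyclist"], False)` still says wait, which is exactly the kind of judgment human drivers make without thinking.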
Developing autonomous drive technology requires us to be able to create the conditions for our cars to resolve these everyday conflicts, in real time. This is where Artificial Intelligence comes in.
Our ultimate goal is to have the car behave like a human driver. The car of tomorrow should be able to drive smoothly and should be socially accepted. We call it “human-like autonomous driving.”
Being able to predict and react in the right way is critical for our future vehicles. When briefing Mr. Ghosn about this particular test drive, I explained that a key factor for us was to evaluate how closely our car's behavior would match a human driver's.
Today, our prototype car properly followed all the traffic rules on our test drive. But that ability alone is not enough to handle all the driving complexity of the real world, where split-second decisions to initiate a lane change in heavy traffic or to enter a roundabout are not always dictated by rules alone.
Our cars will need to predict the behaviors of other drivers, cyclists and pedestrians and be able to interact with them, just as humans do. We are not quite there yet. What we humans know how to deal with, and what machines will have to learn, is to handle the extreme complexity of the real world.
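One simple baseline for that kind of anticipation is constant-velocity extrapolation: assume another road user keeps moving as they are now, and check whether your planned maneuver stays clear of their predicted position. The sketch below is a hypothetical illustration of the idea, not the method our prototypes use:

```python
import math

def predict_position(pos, vel, horizon_s):
    """Constant-velocity extrapolation: a common baseline for
    anticipating where another road user will be in a few seconds."""
    return (pos[0] + vel[0] * horizon_s, pos[1] + vel[1] * horizon_s)

def safe_to_merge(ego_future, other_pos, other_vel,
                  horizon_s=2.0, clearance_m=5.0):
    """Allow the lane change only if the predicted gap to the
    other vehicle stays above a clearance threshold."""
    other_future = predict_position(other_pos, other_vel, horizon_s)
    return math.dist(ego_future, other_future) >= clearance_m
```

With our target spot at the origin, a car 20 m back closing at 5 m/s is predicted 10 m away after 2 s, so the merge is allowed; the same car starting 8 m back would end up 2 m away, so the merge is rejected.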
It’s a complex challenge, but one that promises a major payoff to society by reducing driving fatalities and injuries on an unprecedented scale.