Earlier this year, the HumanDrive project sent a driverless car 370 kilometres along country lanes and busy motorways. The consortium running the project included Nissan, the universities of Cranfield and Leeds, Atkins, Highways England, Aimsun, Horiba Mira, SBD Automotive and Connected Places Catapult – as well as Hitachi Europe, which developed the software that perceives the external environment and plans a safe path, accounting for obstacles, restrictions and other road users by harnessing AI and machine learning.
The aim of HumanDrive was to build a driverless experience that felt smooth and natural, and not “jerky and robotic, as many of the autonomous driving capabilities have been,” says Nick Blake, chief innovation strategist at Hitachi Europe. “We want to improve the comfort for passengers and the acceptability.”
To do that, the team blended artificial and human intelligence, using road data as well as professional drivers to teach the system. “The data was collected from professional test drivers,” says Ioannis Souflas, senior researcher at Hitachi Europe, adding that it was used to train the cars to drive in a manner more comfortable for passengers.
That came with challenges, however. It was difficult to get enough data about how good drivers react in dangerous situations – after all, that would require asking people to put themselves in danger. “If you’re using human driving data to train an AI model, we have a lot of data around good driving in normal, safe environments, but not a lot of data that tells you how to get out of a problem,” says Blake. “There’s very little data that we can train models on to get it right.”
As Souflas notes: “You can’t train your AI to make left and right turns by only driving straight.”
To fill in the gaps, there’s simulation. “You might not be able to capture all the edge cases with professional drivers, but you can do tricks in machine learning and augment your data, creating artificial edge cases,” Souflas explains.
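To make that idea concrete, here is a minimal sketch of the kind of augmentation Souflas describes: taking real recorded runs and injecting artificial hazards, such as a vehicle suddenly cutting in, that would be too dangerous to stage with human drivers. The function names, data layout and perturbation parameters are illustrative assumptions, not the HumanDrive pipeline.

```python
# Illustrative sketch only: augmenting recorded driving data with
# synthetic edge cases. Names and parameters are assumptions, not
# the actual HumanDrive pipeline.
import numpy as np

rng = np.random.default_rng(seed=42)

def augment_with_edge_cases(runs, n_variants=5):
    """For each recorded run (array of [speed, headway] samples),
    create variants where the gap to the lead vehicle suddenly
    shrinks -- an event too dangerous to stage with real drivers."""
    augmented = []
    for run in runs:
        augmented.append(run)                  # keep the real data
        for _ in range(n_variants):
            variant = run.copy()
            t = rng.integers(1, len(variant))  # random onset time
            # Simulate a cut-in: headway (column 1) drops sharply
            variant[t:, 1] *= rng.uniform(0.2, 0.5)
            augmented.append(variant)
    return augmented

# Example: one 10-second run sampled at 1 Hz (speed in m/s, headway in m)
real_run = np.column_stack([np.full(10, 20.0), np.full(10, 40.0)])
dataset = augment_with_edge_cases([real_run])
print(f"{len(dataset)} runs after augmentation")
```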
But humans learn to drive differently from machines – for a start, most of us learn to drive as teenagers, after 16 or 18 years of vision development. “We’ve learned already to have good perception,” Souflas says. “All we learn then is how to control the vehicle.”
One solution was to split the AI into four modules. The first manages perception, pulling in data from different sensors and inputs; the second combines that data with localisation data to understand the scene. The third module plans the route, while the last controls the car. Some systems combine all or some of these functions into one, but splitting the AI into separate modules enables Hitachi to add features and functions to cover varying driving environments and styles, as well as to insert rigorous validation and safety checks throughout the software process.
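A minimal sketch of that four-module split is below. The class and method names are assumptions made for illustration, not Hitachi's actual interfaces; the point is that each stage has a narrow contract, so it can be validated, swapped or updated on its own.

```python
# Illustrative sketch of the four-module split described above.
# Class and method names are assumptions, not Hitachi's actual API.
from dataclasses import dataclass

@dataclass
class Scene:
    obstacles: list      # what perception found
    ego_position: tuple  # where localisation says we are

class Perception:
    def detect(self, sensor_frames):
        # Fuse camera/lidar/radar frames into a list of detected objects
        return [f for f in sensor_frames if f.get("object")]

class SceneUnderstanding:
    def interpret(self, obstacles, localisation):
        # Combine detections with map position into one scene model
        return Scene(obstacles=obstacles, ego_position=localisation)

class PathPlanner:
    def plan(self, scene):
        # Choose a trajectory around obstacles; this is the module
        # where human driving data shapes behaviour
        return ["keep_lane" if not scene.obstacles else "change_lane"]

class Controller:
    def act(self, trajectory):
        # Convert the planned trajectory into steering/throttle commands
        return {"steering": 0.0, "throttle": 0.3, "plan": trajectory}

pipeline = (Perception(), SceneUnderstanding(), PathPlanner(), Controller())

def drive_step(sensor_frames, localisation):
    perception, understanding, planner, controller = pipeline
    obstacles = perception.detect(sensor_frames)
    scene = understanding.interpret(obstacles, localisation)
    trajectory = planner.plan(scene)
    return controller.act(trajectory)

print(drive_step([{"object": "cyclist"}], localisation=(51.5, -0.12)))
```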
The HumanDrive project equipped a car with the technology – and AI smarts – to learn to drive itself like a human, only much more safely
The human input was largely incorporated into the path-planning module, says Souflas. We all see things in roughly the same way, and a map can tell us where we’re located – those are “truths” rather than value judgements, notes Blake. “But path planning is the point where, as drivers, we have different behaviours,” adds Souflas. “This is the part where we have some variability from person to person.”
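One common way to encode that human preference for smooth, non-jerky driving in a planner is to score candidate trajectories by how harshly they accelerate and how much jerk passengers would feel. The sketch below shows the idea; the weights are made-up illustrative values, not HumanDrive's tuning.

```python
# Illustrative comfort cost for candidate speed profiles. Penalising
# acceleration and jerk is a standard trick for human-like smoothness;
# the weights are assumptions, not HumanDrive's tuned values.
import numpy as np

def comfort_cost(speeds, dt=0.1, w_accel=1.0, w_jerk=4.0):
    """Score a candidate speed profile: lower is more comfortable."""
    accel = np.diff(speeds) / dt  # m/s^2 between samples
    jerk = np.diff(accel) / dt    # m/s^3 -- what passengers feel as "jerky"
    return w_accel * np.sum(accel**2) + w_jerk * np.sum(jerk**2)

smooth = np.linspace(10, 15, 50)  # gentle ramp from 10 to 15 m/s
abrupt = np.concatenate([np.full(25, 10.0), np.full(25, 15.0)])  # step change

# The planner would prefer the lower-cost (smoother) candidate
print(f"smooth: {comfort_cost(smooth):.1f}, abrupt: {comfort_cost(abrupt):.1f}")
```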
The modular AI idea was also welcomed by car manufacturers, as the project revealed they aren’t keen on the opaque decision-making inherent in generalist, black-box AI systems. If there’s a fault on a test drive, manufacturers want to know where the error occurred: did the car’s sensors fail to see something in the road, fail to understand what it was, or fail to navigate around it? “Manufacturers are very conservative with the use of technologies that are not transparent,” says Souflas.
By splitting each core function into its own system, it’s easier to spot where a fault happened, and to fix it. “If there is a failure in perception, they can fix that problem and not worry about the path planning,” Souflas explains. Plus, knowing where a fault lies tells the team how serious it is – the closer a problem is to the control of the car, the more pressing it is. “If there is a failure in planning or control it is much more important compared to perception, because it’s closer to the action,” he adds.
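Souflas's ordering suggests a simple triage rule for module-level fault reports: rank them by how close the failing module sits to the car's actions. The scheme below is a sketch under that assumption, not a description of Hitachi's diagnostics.

```python
# Illustrative fault triage across the four modules. The severity
# ordering follows Souflas's point -- faults closer to the action are
# more pressing -- but the scheme itself is an assumption.
SEVERITY = {"perception": 1, "scene": 2, "planning": 3, "control": 4}

def triage(fault_reports):
    """Sort reported faults so the most safety-critical come first."""
    return sorted(fault_reports, key=lambda r: SEVERITY[r["module"]], reverse=True)

reports = [
    {"module": "perception", "detail": "lidar dropout on frame 112"},
    {"module": "control", "detail": "steering command out of range"},
]
for report in triage(reports):
    print(report["module"], "->", report["detail"])
```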
And, by building the AI in a modular format, it can be reused elsewhere. For Hitachi, this development work goes much wider than driverless cars. Alongside vehicles such as trams and other rolling stock, systems that sense, interpret and take action could also be used to automate production lines, for example. Naturally, a factory has different control requirements from a car, though the sensing system could potentially stay the same. “The modularity of the system allows multiple functionalities or applications of these basic technologies,” says Massimiliano Lenardi, head of the Automotive and Industry Laboratory at Hitachi Europe.
Hitachi is also using some of this technology in its Smart Spaces work, with smart sensors able to track people through public areas. “A vehicle being able to see objects along the road uses the same techniques as our video intelligence, where you might want to separate people going through a train station so you can track them,” Blake says. “We use pretty much the same techniques.” That said, there’s one main difference, notes Souflas: in cars, the processing must happen much faster.
Having modular AI also means systems can be updated separately, which brings the idea back to the core of HumanDrive: reintroducing human thinking to solve problems in driverless cars. One of the next challenges Hitachi’s developers are addressing is navigating urban areas, as GPS can be unreliable in densely built-up places, making cities difficult to navigate.
One solution being considered is teaching the car’s location system to look around as a human would. Most of us don’t need to dig out our smartphones and pull up a map to know where we are – we simply look around and recognise our surroundings. “We are creating functionality that can localise from visual features,” explains Souflas. “For example, when you are in Kings Cross, you don’t need GPS to tell you you’re there. This builds redundancy in the system, fusing AI with traditional approaches to provide improvements.”
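Visual place recognition of this kind is often done by matching local image features from the camera against a database of previously mapped views. The OpenCV-based sketch below shows one standard approach to this, not necessarily Hitachi's implementation; the image paths and the match threshold are hypothetical.

```python
# Illustrative visual localisation: match ORB features from the
# current camera frame against stored views of known places. A
# standard technique, not necessarily Hitachi's implementation.
import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(image_path):
    """Extract binary ORB descriptors from an image file."""
    image = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if image is None:
        raise FileNotFoundError(image_path)
    _, descriptors = orb.detectAndCompute(image, None)
    return descriptors

def localise(frame_path, place_db):
    """Return the mapped place whose stored view best matches the frame."""
    frame_desc = describe(frame_path)
    best_place, best_score = None, 0
    for place, stored_desc in place_db.items():
        matches = matcher.match(frame_desc, stored_desc)
        # Count strong matches (low Hamming distance) as the score
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_place, best_score = place, score
    return best_place

# Hypothetical database of previously mapped locations
place_db = {"kings_cross": describe("kings_cross_ref.jpg")}
print(localise("current_frame.jpg", place_db))
```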
Souflas adds: “In order to develop this AI and this intelligent software, you need human intelligence.”
—
Modern life is saturated with data, and technologies are emerging nearly every day – but how can we use these innovations to make a real difference to the world? Hitachi believes that social innovation should underpin everything it does, so it can find ways to tackle the biggest issues we face today. Visit Social-Innovation.Hitachi to learn how Hitachi Social Innovation is Powering Good and helping drive change across the globe.