This article originally appeared on Autodesk’s Redshift, a site dedicated to inspiring designers, engineers, builders, and makers.

Tesla CEO Elon Musk once said self-driving cars are “solved,” comparing autonomous vehicles with “elevators that used to require operators but are now self-service.” A little more than a year later, 40-year-old Joshua Brown died while driving a semiautonomous Tesla Model S.

A truck made a legal left in front of Brown’s Tesla, but instead of stopping, the car sped under the truck’s trailer. According to Mike Demler, a senior technology analyst at The Linley Group, the Model S’s computer-vision detection system of sensors, radar, GPS, and image-processing software was not intended to be used in hands-free mode. “The sensors weren’t designed to account for cross traffic, only highway driving,” Demler says. “They could identify the backside of a car, not the side of a tractor-trailer.”

Side view of a Tesla Model S.
Tesla Model S. Courtesy Tesla.

Many industry experts agree that autonomous-vehicle technology has made great leaps during the past decade. Google, Volvo, and Uber are testing cars on public roads, and nuTonomy operates a fleet of six automated taxis for public use in a limited district of Singapore, a city-state with a supportive government and predictable weather. Demler expects the first deployment of fully autonomous cars to be on predesignated routes, such as on college and industrial campuses. He adds that Ford, BMW, and GM plan to have driverless cars available for sale in five years—likely for limited commercial use along constrained routes.

Going the Distance. Still, the Tesla incident is a major setback for the industry. It underscores an important point made by John Leonard, a Massachusetts Institute of Technology professor who works on robot navigation, in a 2015 talk: For all the promising safety and environmental benefits, autonomous vehicles need to address fundamental perception and semantic questions—how to make a left turn against traffic, how to interpret the hand signals of a crossing guard or police officer, how to handle snow-covered roads—before widespread deployment on public roads.

Front view of a car mounted with a Civil Maps sensor.
A car with mounted sensor. Courtesy Civil Maps.

To “see” the road, many autonomous cars rely on LiDAR (light detection and ranging), a spinning laser scanner that usually sits on the car’s roof. By bouncing a laser pulse off an object and measuring the time of flight, LiDAR can judge distances and “see” in 360 degrees. But if high-resolution laser scanners, cameras, radar, and other sensing tools are the eyes of self-driving cars, their cognitive abilities can come from intelligent algorithms and artificial intelligence that interpret this raw data against information from reference maps, says Sravan Puttagunta, CEO of California-based Civil Maps.
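The time-of-flight principle behind LiDAR ranging is simple enough to sketch: the pulse travels to the object and back, so distance is half the round-trip path at the speed of light. This is an illustrative calculation only, not any vendor’s actual firmware.

```python
# Speed of light in a vacuum, metres per second.
C = 299_792_458.0

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to an object from a laser pulse's round-trip time.

    The pulse travels out to the object and back, so the one-way
    distance is half the total path the light covered.
    """
    return C * round_trip_seconds / 2.0

# A pulse returning after 200 nanoseconds indicates an object
# roughly 30 metres away.
print(round(tof_distance(200e-9), 2))
```

A spinning unit repeats this measurement thousands of times per rotation at varying angles, which is how the scanner builds its 360-degree point cloud.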

As Puttagunta explains it, the Civil Maps platform (like those of competitors Mobileye, Delphi, and Bright Box) gives an autonomous vehicle contextual awareness, not merely a 2D navigation map. It uses artificial intelligence to process raw data coming from high-resolution laser imaging so that cars can localize precisely and make better tactical decisions—like what to do at a four-way stop or a roundabout.

Once the car is localized, Civil Maps’ software is able to project semantic map data into the field of view of the car’s sensors. This helps the vehicle’s decision engine contextualize the environment and selectively focus on relevant road features such as traffic signs, lane markings, signal lights, and so forth. It creates a machine-readable augmented reality (AR) map. That experience informs the car’s computer about what it should be doing and how it should interact with the road infrastructure.
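The projection step can be sketched with a simple pinhole camera model: once the car knows its pose, a 3D map feature expressed in the camera’s frame maps to a pixel location, telling the decision engine where in the image to expect a sign or signal. This is a minimal illustration with made-up camera parameters; Civil Maps’ actual pipeline is proprietary.

```python
def project_to_image(point_camera_frame,
                     focal_px=800.0, cx=640.0, cy=360.0):
    """Project a 3D map feature onto the image plane of a pinhole camera.

    The point is given in the camera's frame (x right, y down,
    z forward, in metres); focal_px, cx, cy are illustrative
    intrinsics for a 1280x720 image.
    """
    x, y, z = point_camera_frame
    if z <= 0:  # behind the camera: not in the field of view
        return None
    u = focal_px * x / z + cx
    v = focal_px * y / z + cy
    return (u, v)

# A stop sign 20 m ahead and 4 m to the right of the localized car
# projects to a specific pixel, so the perception system can look
# for it there even before it is clearly visible.
print(project_to_image((4.0, 0.0, 20.0)))
```

This is what makes the map “augmented reality” in a machine-readable sense: known features are overlaid onto live sensor data, narrowing where the car has to search.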

“One of the benefits of the augmented-reality perspective is the ability to localize a car and superimpose information on top of a signal, so if a car can’t see the signal, it can expect it,” Puttagunta says. “This is also important in harsh weather conditions when it might be hard to see lane markings.”

The company’s AR maps also offer a visual display so the passenger can understand the vehicle’s intentions and what the vehicle perceives. Over time, this will help build passenger trust and confidence, Puttagunta says.

With highly efficient map-data compression technology, Civil Maps is able to update and share that data in real time with other cars across 4G cellular networks. The company uses signature-based localization; it’s similar to how Shazam uses an acoustic signature to verify a song with a few notes, Puttagunta says, but based on road-level cues that can be aggregated and refined to improve safety.
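The Shazam analogy can be made concrete with a toy example: fingerprint a short, ordered run of observed road cues and look the fingerprint up in a precomputed map index. Everything here is illustrative; real signatures would be built from LiDAR and camera data, not strings, and the place names are invented.

```python
import hashlib

def signature(cues):
    """Compact fingerprint of an ordered run of road features,
    analogous to an acoustic fingerprint of a few notes of a song."""
    return hashlib.sha1("|".join(cues).encode("utf-8")).hexdigest()[:12]

# Map index built offline: each road segment's feature run -> location.
map_index = {
    signature(["stop_sign", "lane_merge", "overpass"]): "highway mile 42",
    signature(["signal", "crosswalk", "signal"]): "5th & Main",
}

# Cues the car observes as it drives; a signature match localizes it
# without comparing raw sensor data against the whole map.
observed = ["signal", "crosswalk", "signal"]
print(map_index.get(signature(observed)))  # -> 5th & Main
```

Because the signatures are small, they are cheap to upload and compare over a 4G link, which is what lets many cars aggregate and refine the same map in near real time.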

Rather than sending out a fleet of cars to map a particular city, as Uber is doing in Pittsburgh, Civil Maps plans to crowdsource the data and partner with auto manufacturers, Puttagunta says. With more than $6.6 million in seed funding, including an investment from Ford Motor Co., the startup is working with partners and major automotive OEMs across three continents.

Moscow street-view images captured with Remoto Pilot’s stereo-vision system. Courtesy Bright Box.

The Race Is On. Civil Maps is up against stiff competition, including Swiss-based Bright Box. The company’s aftermarket artificial-intelligence car platform, Remoto, has already been deployed in prototype tests by several automakers, including Infiniti, KIA, Hyundai, and Nissan. “The winner will be the company who achieves a working system with a minimum number of sensors and provides tech for volume OEMs,” says Bright Box CTO Alexander Dimchenko. “The key factor is who will gather the biggest volume of data.”

Dimchenko points out that competition among suppliers, software companies, and auto manufacturers has brought self-driving-vehicle costs down significantly. At the first Driverless Car Summit in Detroit, in 2012, Google disclosed that its driverless test cars contained about $150,000 in equipment, including a $70,000 LiDAR system. By comparison, Dimchenko estimates the 2018 cost of a fully autonomous Honda CR-V sport utility vehicle at $29,000–$30,000.

But even if the costs become feasible for ride-hailing services or individual riders, significant challenges remain before self-driving cars are deployed on public roads en masse, Demler says. Among those challenges are compliance with state and federal regulations; the need for standardized driving tests and insurance-company buy-in; the expansion of smart infrastructure; and, perhaps most important, the uphill task of earning the public’s trust.

For bullish self-driving-car proponents such as Grayson Brulte, co-chair of the City of Beverly Hills Autonomous Vehicle Task Force, boosting public confidence through exposure is crucial. Driverless cars have the potential to eliminate distracted-driving deaths and solve urban congestion and parking problems, he says, but only when people are ready to embrace them. This spring, an undisclosed manufacturer will come to Beverly Hills to test-drive autonomous vehicles on public roads under real-world conditions—a moment Brulte eagerly awaits. “A child born today will never drive,” he says. “When you really start to understand that fact, things get interesting.”

This is part of a content-sharing series with Redshift to introduce global technology trends, information, and innovation to a Southeast Asian audience.

For more articles like this, check out the Voices section.