San Diego, CA, July 20, 2004 -- More than 43,000 people were killed on U.S. highways last year, and over three million were injured. The National Highway Traffic Safety Administration (NHTSA) estimates that approximately 30% of the fatal crashes - and 10,000 deaths - were due to driver distraction. Those alarming statistics are driving university researchers, automakers, and transportation agencies such as NHTSA and the Federal Highway Administration to invest in new approaches to reduce the toll from highway accidents.
Enter LISA: the Laboratory for Intelligent and Safe Automobiles. Located within UCSD's Jacobs School of Engineering but including faculty from Psychology, Electrical Engineering, and Cognitive Science, the lab is developing "smart" car systems based on on-board cameras and other sensors, computer-vision processing, and new tools to help motorists drive more carefully. "Our research team is looking at technologies that will improve the safety of drivers and occupants as well as the overall traffic," says Mohan Trivedi, professor of electrical and computer engineering in the Jacobs School. "We are really interested in looking at the entire ecology of driving, including the driver and the driver's interactions with other things inside and surrounding the car."
Trivedi directs UCSD's Computer Vision and Robotics Research lab, and with funding from the California Department of Transportation (Caltrans), his initial work focused on networked camera systems for highways to improve traffic flow and accident management. Trivedi's team also developed a mobile interactive avatar (MIA) - a robot-like device with sensors and two-way audio/video communications over high-speed wireless networks that could be deployed quickly to an accident site, giving emergency first responders a look at the scene even before they arrive. It became clear that some of the same systems could be installed on-board cars themselves, to capture the context inside and around the vehicle. Automakers were intrigued, including Daimler Chrysler's Mercedes division, Volkswagen, and Nissan.
The initial LISA research began with a simple hypothesis: that conversations between a driver and passenger in a car are safer than a driver talking on a cell phone, because the passenger knows traffic conditions and the state of the driver, so he or she can modulate the conversation, or even stop talking if conditions are risky. Funded by Daimler Chrysler and a UC Discovery Grant in 2001, the research aimed to see if it would be possible to monitor the driver's state and convey it to outside callers with whom they were speaking on a cell phone. Inside the lab, researchers equipped a Mercedes test frame with a variety of cameras and other sensors to monitor the driver's facial expressions and other parameters. "We created a system for recognizing and monitoring the facial affect of the driver," says CVRR researcher and graduate student Joel McCall, "and it works in real time."
McCall and other researchers developed the system to monitor the state of the driver continuously, analyze it using thin-plate spline morphing, and convey the information instantly to the outside caller. Sending the actual video would take up too much bandwidth, so instead, the researchers developed 'emoticons' that could change instantly but require almost no bandwidth.
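The bandwidth argument can be made concrete: once the facial-affect system has recognized a state, that state can be transmitted as a single byte per update instead of a video stream. A minimal sketch of the idea - the state names and byte codes here are illustrative, not the lab's actual protocol:

```python
# Hypothetical mapping of recognized driver-affect states to one-byte codes.
# Sending one code per update replaces a video stream with a few bytes per second.
AFFECT_CODES = {
    "neutral": 0,
    "focused": 1,
    "stressed": 2,
    "distracted": 3,
}

def encode_affect(state: str) -> bytes:
    """Encode a recognized affect state as a single byte for transmission."""
    return bytes([AFFECT_CODES[state]])

def decode_affect(payload: bytes) -> str:
    """Decode a received byte back to an affect label for the caller's display."""
    reverse = {code: name for name, code in AFFECT_CODES.items()}
    return reverse[payload[0]]
```

The receiving phone or handset would map the decoded label to an on-screen 'emoticon', so the outside caller sees the driver's state change in real time with negligible network load.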
The project allowed Trivedi's team to explore the types, positioning and numbers of cameras needed to capture what happens inside a car. "But once we did that, we realized that we would have to make our systems very robust," notes Trivedi. "The software needs to recognize different types of backgrounds outside the vehicle, different lighting conditions and shadows depending on the time of day and position of the sun. Given all these variations, we realized that we had to start instrumenting actual vehicles on which we could do systematic, experimental trials."
Trivedi and his team took their research on the road - literally. They outfitted real vehicles with sensors, processing power, and computer storage. The first was based on a VW Passat, and was dubbed the LISA-P. The second became the LISA-Q, because it was based on Nissan's Infiniti Q45.
With funding from VW and the UC Discovery Grant program, the researchers used the LISA-P to address an important safety problem: Although airbags have saved many lives, more than 200 people have died on U.S. roads since 1996 (and many more were injured) because of the impact from an airbag. Most of the victims were children or short women who may have been leaning forward at the instant of the airbag's deployment.
To determine posture, the UCSD researchers used stereo cameras and software to build a 3-D volumetric description of the occupant, and for nighttime driving, they used thermal infrared imaging. "The cameras allowed us to examine the posture of the occupants in real time, which is necessary if you want to create a system where the car itself can 'tell' the airbag to deploy, deploy with less force, or not deploy at all depending on the posture of the occupant," explains Trivedi. "We have to do the analysis very accurately, and the major research challenge is that the decision must be extremely fast."
Extremely fast, Trivedi adds, means under 20 milliseconds, a bit less than the time it takes for an airbag to deploy after impact.
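The control flow this implies can be sketched simply: classify the occupant's posture, then choose a deployment mode, all inside the roughly 20-millisecond window. The posture rule below (a head-to-dashboard distance threshold) is a stand-in assumption for illustration; the actual system reasons over a 3-D volumetric model built from stereo and thermal imagery.

```python
import time

DECISION_BUDGET_S = 0.020  # the ~20 ms decision window cited in the article

def deployment_mode(head_to_dash_m: float) -> str:
    """Pick an airbag mode from occupant posture (trivial illustrative rule)."""
    if head_to_dash_m < 0.25:
        return "suppress"   # occupant too close: do not deploy
    if head_to_dash_m < 0.45:
        return "depower"    # deploy with reduced force
    return "full"

def decide_with_deadline(head_to_dash_m: float) -> str:
    """Make the call and verify it fit in the real-time budget."""
    start = time.perf_counter()
    mode = deployment_mode(head_to_dash_m)
    elapsed = time.perf_counter() - start
    if elapsed > DECISION_BUDGET_S:
        # A real system would need a defined safe fallback for a missed
        # deadline; "full" here is an assumption, not the lab's design.
        return "full"
    return mode
```

The hard part, as Trivedi notes, is not the final branch but producing an accurate posture estimate quickly enough that this decision is still meaningful.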
Nissan approached Trivedi and his team with a much bigger task. The Japanese automaker asked LISA researchers to develop a driver assistance system that would draw the driver's attention to possible problems while minimizing the impact of growing distractions that compete for a driver's attention. "Drivers are finding more and more ways to be distracted, including cell phones, DVD players, and Internet access," says Erwin Boer, an outside consultant to both Nissan and LISA. "Say you want to book a movie and you call up a theater and they give you seven movies and times. For drivers to keep that in memory, they are internally rehearsing it, and in the process they are cognitively so overloaded that perceptually they may be blinded to some degree."
To capture and analyze the data needed to monitor the driver, car interior and nearby cars and roadways, researchers installed more than two dozen sensors on the LISA-Q. "The goal is to have a vehicle which is maintaining awareness of the state of the driver, of the traffic around the car, the goal of that particular drive, and the state of the vehicle itself (the vehicle dynamics)," says Trivedi. "With that awareness, if the driver makes a maneuver that could lead to an accident, the car would give the driver some notification in time to make a correction. We have developed several new types of robust algorithms for capturing the full 360-degree surround as the vehicle is moving. We have developed new types of omni-directional, vision-based motion analysis and ego-motion detection, as well as new algorithms that allow us to do lane detection and lane tracking."
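At the core of lane detection is estimating where a lane marking lies from the pixels a camera sees. The LISA algorithms are far more robust than this (curved lanes, tracking over time, omnidirectional views), but the basic fitting step can be sketched as an ordinary least-squares line fit over detected marker points - everything below is an illustrative simplification:

```python
def fit_lane(points):
    """Fit y = m*x + b by least squares to (x, y) points detected on a
    lane marking; returns (m, b). Assumes at least two distinct x values."""
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b
```

Once the lane boundary is modeled, the vehicle's lateral offset within the lane can be tracked frame to frame, which is what makes drift warnings possible.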
The project goes well beyond computer vision, into the areas of cognitive science, signal processing, and even psychology. For that expertise, Trivedi turned to the California Institute for Telecommunications and Information Technology (Calit2), within which he leads an interdisciplinary team of researchers working on Intelligent Transportation and Telematics at UCSD.
One of the first challenges for the psychologists was to assess the safety of driving while doing various other tasks. "It is known that distractions do degrade driving performance somewhat, but no one has yet resolved whether that occurs because of switching back and forth, or if it is because specific mental resources are shared between the two tasks," explains Hal Pashler, a professor of psychology at UCSD who specializes in human attention and multi-task modeling. "In this project we are doing the kind of fine-grain studies that will help us understand what's causing the interference and maybe eventually help to reduce it somewhat."
Over the past two years, more than 100 subjects have taken test drives on a simulator program commissioned specifically for the project. In one test, the program assesses how quickly the subject hits the brake pedal when the brake lights of the car in front go on. Other tests require doing a variety of tasks that require a quick response. "Our first line of research was to understand whether braking is affected by these other tasks, and it seems to be pretty clear that it is," says Jonathan Levy, a postdoctoral researcher in Pashler's lab. "Our current research aims to augment the information to the driver, so that the braking task is either less affected or not affected at all."
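The analysis behind those simulator runs reduces to a simple comparison: for each trial, record the latency between the lead car's brake lights and the subject's brake press, then compare mean latencies across conditions. A minimal sketch (the data values are hypothetical):

```python
def mean_reaction_time(trials_s):
    """Mean brake reaction time, in seconds, over a list of trials."""
    return sum(trials_s) / len(trials_s)

def slowdown_ms(baseline_s, distracted_s):
    """Extra braking latency attributable to the secondary task, in ms."""
    return 1000.0 * (mean_reaction_time(distracted_s)
                     - mean_reaction_time(baseline_s))
```

Even a few hundred milliseconds of extra latency matters: at freeway speed, 200 ms corresponds to several additional car-lengths of travel before braking begins.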
"We find that there are some critical bottlenecks in certain kinds of mental operations that are always done one at a time, even if intuitively they seem trivial and easy, and they tend to involve the planning of actions," explains Pashler. "When it comes to planning two different actions, they are often done sequentially, one at a time. We are now examining whether these bottlenecks occur in driving just as they do in laboratory tasks, and the goal is to use this information and bring it to the driver's attention to improve his performance - without distracting him further with the warning itself."
The task of figuring out which interfaces work best to assure driver attention and safety fell to Jim Hollan, a professor of cognitive science at UCSD and an expert on human-centered design of interfaces. With Trivedi's group, Hollan's Distributed Cognition and Human-Computer Interaction Laboratory designed detailed experiments to develop ethnographic models of driving behavior. So far, over two dozen test subjects have each driven the LISA-Q on experimental runs on San Diego freeways. The LISA-Q's sensor system collected and recorded audio and visual records of the driver's behavior, including what they were looking at; where their hands and feet were at all times; what was happening around the vehicle; its precise location; and over 18 parameters of vehicle dynamics. "All told," notes Trivedi, "they collected about a quarter of a terabyte of information from each 90-minute experimental run."
"There are rich possibilities for new kinds of interfaces, and having rich, detailed visual records of real-world behavior of drivers is exciting because it helps us understand the phenomenon well enough to really build interfaces that support rather than hinder people in their activities," says Hollan. "For example, with active cruise control, the car adjusts its own speed. But we can take that same information from the radar and provide it to the driver in a different way, for example by force feedback on the accelerator pedal. So you could feel things out in front of you, and as you got closer, you would have to push down on the accelerator more. We are at the preliminary stage of understanding the phenomenon enough, and people enough, so we can explore alternative types of interface technologies."
The LISA team also includes an expert in signal processing: Jacobs School electrical and computer engineering professor Bhaskar Rao. Signal processing is vital when there may be dozens of sensors on board, and the data must be accurate and instantaneous to ensure the right safety decision. Rao has also developed novel algorithms for analyzing the intent of the driver's lane-change operation. Currently his group is working with Trivedi to extend and evaluate those algorithms on-board the LISA-Q, especially their robustness. "Robustness is important, because the tolerance for error is much lower in a driving situation," explains Rao, who runs UCSD's Digital Signal Processing Laboratory. "By interacting with people who are not typically electrical engineers, I get to learn about issues that probably I would not otherwise be thinking of, and that certainly factors into my research."
Rao is not alone in lauding the multidisciplinary approach to LISA projects. "We have to perceive complex activities from many different perspectives," agrees cognitive scientist Hollan. "We need to understand driving in a way that will let us design systems that make the car a safer place for drivers." Adds psychologist Pashler: "We just need different concepts than those we are familiar with, so it is exciting to get exposed to a variety of different disciplines' perspectives."
With nearly 35 faculty and graduate students involved, the Laboratory for Intelligent and Safe Automobiles and the LISA vehicles deployed so far have created a capability that exists on no other campus. "These were built as general-purpose testbeds, and in some sense, we opted to over-sensorize and over-interface these vehicles," says Trivedi. "We are now in a good position to tackle many other research projects that will require some or all of the capabilities that exist within each testbed." One case in point, Trivedi adds, is studying how to prevent accidents due to drowsiness: "We already have everything we need to do posture analysis and the blink-rate analysis which might be indicative of someone getting sleepy."
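A common way to turn eye observations into a drowsiness cue is to track the fraction of recent time the eyes are closed over a sliding window (a PERCLOS-style measure). The sketch below assumes a per-frame eyes-closed flag from the cabin camera; the window length and threshold are illustrative choices, not parameters from the LISA system:

```python
from collections import deque

class DrowsinessMonitor:
    """Flags possible drowsiness when the eyes-closed fraction over a
    recent window of frames exceeds a threshold (illustrative values)."""

    def __init__(self, window_frames=900, threshold=0.15):
        self.window = deque(maxlen=window_frames)  # e.g. 30 s at 30 fps
        self.threshold = threshold

    def update(self, eyes_closed: bool) -> bool:
        """Record one frame's observation; return True if the driver
        currently looks drowsy."""
        self.window.append(1 if eyes_closed else 0)
        closed_fraction = sum(self.window) / len(self.window)
        return closed_fraction > self.threshold
```

Because the testbeds already capture head pose and eye state, a measure like this can be layered on the existing instrumentation without new hardware.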
LISA researchers also hope to tackle new research challenges, including systems customized for drivers on city streets rather than highways; technology tools to offset slower reflexes as seniors age; and allowing smart cars to 'converse' with 'intelligent' infrastructure. "We are getting to the stage where we can have cars talking to other cars, and to the infrastructure around them such as bridges, intersections, and roadways," says Trivedi. "That means we can utilize this data acquired over wireless channels to help develop smoother and safer traffic flows. That is one of the key issues we want to investigate in the future."
Such research could have global implications: A recent report from the World Health Organization estimated that roughly 1.2 million people die each year on roads worldwide.
Related links:
Laboratory for Intelligent and Safe Automobiles
Computer Vision and Robotics Research Lab
UC Discovery Grant
Hal Pashler Group
Distributed Cognition and Human-Computer Interaction Laboratory
Digital Signal Processing Laboratory
Jacobs School of Engineering
Electrical and Computer Engineering Department