Cameras mounted above traffic lights at busy city intersections were probably put there to monitor traffic conditions and provide visuals in case of collisions. But researchers in Texas are exploring additional uses, such as helping planners optimize traffic flow and identify the sites where accidents are most likely. They are also looking at how to make use of the collected data without requiring individuals to slog through hours of footage.
At the IEEE International Conference on Big Data this past week, the researchers presented a new deep learning tool that can recognize objects -- people, cars, buses, trucks, bicycles, motorcycles and traffic lights -- within raw footage from traffic cameras in the city of Austin. It can also characterize how those objects move and interact. Traffic engineers and officials can then analyze and query that information.
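The article does not describe the tool's internals. As a hedged illustration only, per-frame output from a generic object detector (modeled here as hypothetical `(label, confidence, box)` tuples) might be narrowed to the seven object classes the tool recognizes like this:

```python
# Hypothetical per-frame detector output: (class label, confidence, bounding box).
# The class list comes from the article; everything else is an assumption.
CLASSES_OF_INTEREST = {"person", "car", "bus", "truck",
                       "bicycle", "motorcycle", "traffic light"}

def filter_detections(raw_detections, min_confidence=0.5):
    """Keep only confident detections of the object classes of interest."""
    return [(label, conf, box)
            for label, conf, box in raw_detections
            if label in CLASSES_OF_INTEREST and conf >= min_confidence]
```

The confidence threshold of 0.5 is an arbitrary placeholder; a real system would tune it per class against labeled footage.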
The traffic analysis algorithm the researchers developed automatically labels all potential objects in each frame of the raw footage, compares them to previously recognized objects, and analyzes the outputs to uncover relationships among the objects.
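The researchers' matching method is not specified in the article. One common way to compare newly labeled objects against previously recognized ones is greedy frame-to-frame association by bounding-box overlap (intersection over union). A minimal sketch, with assumed box and track structures:

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(prev_tracks, detections, threshold=0.3):
    """Greedily match each new detection to the previous-frame track with
    the highest box overlap; detections with no match start new tracks.
    `prev_tracks` maps track id -> last known box; `detections` is a list
    of boxes from the current frame."""
    matches, unmatched, used = {}, [], set()
    for d_idx, det in enumerate(detections):
        best_id, best_iou = None, threshold
        for t_id, box in prev_tracks.items():
            if t_id in used:
                continue
            score = iou(box, det)
            if score > best_iou:
                best_id, best_iou = t_id, score
        if best_id is None:
            unmatched.append(d_idx)
        else:
            matches[d_idx] = best_id
            used.add(best_id)
    return matches, unmatched
```

Production trackers typically replace the greedy loop with optimal assignment (e.g. the Hungarian algorithm) and add motion prediction, but the overlap test above is the core comparison step.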
The system has already been tested on two practical tasks: counting how many moving vehicles traveled down a road, and identifying close encounters between vehicles and pedestrians. Preliminary results showed 95 percent overall accuracy in automatically counting vehicles in a 10-minute video clip, and the researchers were also able to automatically identify a number of cases where vehicles and pedestrians came into close proximity.
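As an illustrative sketch only (not the researchers' actual code), both tasks can be run over tracked object positions: counting vehicles whose paths cross a virtual line, and flagging frames in which a vehicle and a pedestrian come within some distance of each other. The track structures and the distance threshold are assumptions:

```python
import math

def count_line_crossings(vehicle_tracks, line_y):
    """Count vehicle tracks whose centroid path crosses a horizontal
    virtual line. `vehicle_tracks` maps a track id to a time-ordered
    list of (x, y) centroids."""
    count = 0
    for positions in vehicle_tracks.values():
        ys = [y for _, y in positions]
        if ys and min(ys) < line_y <= max(ys):
            count += 1
    return count

def close_encounters(vehicles, pedestrians, radius=2.0):
    """Return (vehicle id, pedestrian id, frame) triples whose centroids
    come within `radius` of each other in the same frame. Each track maps
    frame number -> (x, y) centroid; units depend on camera calibration."""
    events = []
    for v_id, v_pos in vehicles.items():
        for p_id, p_pos in pedestrians.items():
            for frame in sorted(set(v_pos) & set(p_pos)):
                vx, vy = v_pos[frame]
                px, py = p_pos[frame]
                if math.hypot(vx - px, vy - py) <= radius:
                    events.append((v_id, p_id, frame))
    return events
```

The 2.0 proximity radius is a placeholder; a deployed system would calibrate pixel distances to real-world distances before choosing a threshold.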
"We are hoping to develop a flexible and efficient system to aid traffic researchers and decision-makers for dynamic, real-life analysis needs," said Weijia Xu, a research scientist at the Texas Advanced Computing Center (TACC). "We don't want to build a turn-key solution for a single, specific problem. We want to explore means that may be helpful for a number of analytical needs, even those that may pop up in the future."
"Understanding traffic volumes and their distribution over time is critical to validating transportation models and evaluating the performance of the transportation network," said Natalia Ruiz Juri, a research associate who directs the Network Modeling Center at the University of Texas’ Center for Transportation Research. "The highly anticipated introduction of self-driving and connected cars may lead to significant changes in the behavior of vehicles and pedestrians and on the performance of roadways," she said. "Video data will play a key role in understanding such changes, and artificial intelligence may be central to enabling comprehensive large-scale studies."
“Video analytics will be a powerful tool to help us pinpoint potentially dangerous locations," added Jen Duthie, a project collaborator and consulting engineer for the city of Austin. "We can direct our resources toward fixing problem locations before an injury or fatality occurs."
The researchers plan to explore how automation can facilitate other safety-related analyses -- identifying locations where pedestrians cross busy streets outside of designated walkways, understanding how drivers react to different types of pedestrian-yield signage and quantifying how far pedestrians are willing to walk in order to use a walkway.